Open MPI

{{short description|Message Passing Interface software library}}

{{Distinguish|OpenMP}}

{{Infobox Software

|name = Open MPI

|logo = Open MPI logo.png

|screenshot =

|caption =

|collapsible =

|author =

|developer =

|released =

|discontinued =

| latest release version = {{wikidata|property|preferred|references|edit|P348|P548=Q2804309}} | latest release date = {{Start date and age|{{wikidata|qualifier|preferred|single|P348|P548=Q2804309|P577}}|df=yes}}

|latest preview date = {{start date and age|2023|09|29}}

|programming language =

|operating system = Unix, Linux, macOS, FreeBSD<ref>{{Cite web|url=https://www.freshports.org/net/openmpi2|title=FreshPorts -- net/Openmpi2: High Performance Message Passing Library}}</ref>

|platform = Cross-platform

|size =

|language =

|genre = Library

|license = New BSD License

|website = {{url|http://www.open-mpi.org/}}

}}

'''Open MPI''' is a Message Passing Interface (MPI) library project combining technologies and resources from several other projects (FT-MPI, LA-MPI, LAM/MPI, and PACX-MPI). It is used by many TOP500 supercomputers, including Roadrunner, which was the world's fastest supercomputer from June 2008 to November 2009,<ref>{{cite web |url=http://www.open-mpi.org/papers/sc-2008/jsquyres-cisco-booth-talk-1up.pdf |author=Jeff Squyres |publisher=Open MPI Project |title=Open MPI: 10^15 Flops Can't Be Wrong |access-date=2011-09-27}}</ref> and K computer, the fastest supercomputer from June 2011 to June 2012.<ref>{{cite web |url=http://www.fujitsu.com/downloads/TC/sc10/programming-on-k-computer.pdf |publisher=Fujitsu |title=Programming on K computer |access-date=2012-01-17}}</ref><ref>{{cite web |url=http://blogs.cisco.com/performance/open-mpi-powers-8-petaflops/ |publisher=Cisco Systems |title=Open MPI powers 8 petaflops |access-date=2011-09-27 |archive-url=https://web.archive.org/web/20110628064742/http://blogs.cisco.com/performance/open-mpi-powers-8-petaflops/ |archive-date=2011-06-28 |url-status=dead}}</ref>

== Overview ==

Open MPI represents the merger of three well-known MPI implementations:

  • FT-MPI from the University of Tennessee
  • LA-MPI from Los Alamos National Laboratory
  • LAM/MPI from Indiana University

with contributions from the PACX-MPI team at the University of Stuttgart. These four institutions comprise the founding members of the Open MPI development team.

The Open MPI developers selected these MPI implementations because each excelled in one or more areas. Open MPI aims to combine the best ideas and technologies of the individual projects into one world-class open-source MPI implementation that excels in all areas. The Open MPI project specifies several top-level goals:

  • to create a free, open-source, peer-reviewed, production-quality, complete MPI-3.0 implementation
  • to provide extremely high, competitive performance (low latency or high bandwidth)
  • to involve the high-performance computing community directly with external development and feedback (vendors, third-party researchers, users, etc.)
  • to provide a stable platform for third-party research and commercial development
  • to help prevent the "forking problem" common to other MPI projects<ref>[https://www.open-mpi.org/faq/?category=general#preventing-forking Preventing forking is a goal; how will you enforce that?]</ref>
  • to support a wide variety of high-performance computing platforms and environments

== Code modules ==

The Open MPI code base has three major modules:

  • OMPI - MPI code
  • ORTE - the Open Run-Time Environment
  • OPAL - the Open Portable Access Layer
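
Applications interact with Open MPI through the standard MPI API implemented by the OMPI layer, with ORTE and OPAL providing run-time and portability services underneath. As a minimal, implementation-agnostic sketch, the following "hello world" program uses only standard MPI calls and could be compiled and launched with the <code>mpicc</code> wrapper compiler and <code>mpirun</code> launcher shipped with a typical Open MPI installation.

<syntaxhighlight lang="c">
/* hello_mpi.c - a standard MPI program; works with any MPI implementation,
   including Open MPI.
   Build:  mpicc hello_mpi.c -o hello_mpi
   Run  :  mpirun -np 4 ./hello_mpi   (e.g. four processes) */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, size, name_len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                  /* start the MPI run-time */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes */
    MPI_Get_processor_name(name, &name_len); /* host this rank runs on */

    printf("Hello from rank %d of %d on %s\n", rank, size, name);

    MPI_Finalize();                          /* shut down the MPI run-time */
    return 0;
}
</syntaxhighlight>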

== Commercial implementations ==

  • Sun HPC ClusterTools - beginning with version 7, Sun switched to Open MPI
  • Bullx MPI - in 2010, Bull announced the release of bullx MPI, based on Open MPI<ref>{{cite web |url=http://www.wcm.bull.com/internet/pr/new_rend.jsp?DocId=612919&lang=en |author=Aurélie Negro |publisher=Bull SAS |title=Bull launches bullx supercomputer suite |url-status=dead |access-date=2013-09-27 |archive-url=https://web.archive.org/web/20140421173258/http://www.wcm.bull.com/internet/pr/new_rend.jsp?DocId=612919&lang=en |archive-date=2014-04-21}}</ref>

== Consortium ==

Open MPI development is performed within a consortium of many industrial and academic partners. The consortium also covers several other software projects, such as the hwloc (Hardware Locality) library, which discovers and models the hardware topology of parallel platforms.
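
As a rough illustrative sketch of what hwloc provides (assuming the library and its <code>hwloc.h</code> header are installed), a small C program can use hwloc's public API to load the machine topology and count cores and processing units:

<syntaxhighlight lang="c">
/* topology_count.c - discover the local hardware topology with hwloc.
   Build (typical installation):  cc topology_count.c -o topology_count -lhwloc */
#include <hwloc.h>
#include <stdio.h>

int main(void) {
    hwloc_topology_t topology;

    hwloc_topology_init(&topology);   /* allocate a topology context */
    hwloc_topology_load(topology);    /* probe the current machine */

    int cores = hwloc_get_nbobjs_by_type(topology, HWLOC_OBJ_CORE);
    int pus   = hwloc_get_nbobjs_by_type(topology, HWLOC_OBJ_PU);
    printf("%d cores, %d processing units\n", cores, pus);

    hwloc_topology_destroy(topology); /* release the topology */
    return 0;
}
</syntaxhighlight>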

== See also ==

== References ==

{{reflist}}