OpenMP#Clauses

{{Short description|Open standard for parallelizing}}

{{Distinguish|Open MPI}}

{{Infobox software

| name = OpenMP

| logo = File:OpenMP logo.png

| author = OpenMP Architecture Review Board{{cite web | url= http://openmp.org/wp/about-openmp/ |title=About the OpenMP ARB and OpenMP.org |publisher= OpenMP.org |date=2013-07-11 |access-date=2013-08-14 |url-status= dead | archive-url= https://web.archive.org/web/20130809153922/http://openmp.org/wp/about-openmp/ | archive-date= 2013-08-09 }}

| developer = OpenMP Architecture Review Board

| latest_release_version = 6.0

| latest_release_date = {{start date and age|2024|11}}

| operating_system = Cross-platform

| platform = Cross-platform

| genre = Extension to C, C++, and Fortran; API

| license = Various{{cite web|url=https://www.openmp.org/resources/openmp-compilers-tools/ |title=OpenMP Compilers & Tools |publisher=OpenMP.org |date=November 2019 |access-date=2020-03-05}}

| website = {{URL|openmp.org}}

}}

OpenMP is an application programming interface (API) that supports multi-platform shared-memory multiprocessing programming in C, C++, and Fortran,{{cite book|last=Gagne|first=Abraham Silberschatz, Peter Baer Galvin, Greg|title=Operating system concepts|publisher=Wiley|location=Hoboken, N.J.|isbn=978-1-118-06333-0|pages=181–182|edition=9th|date=2012-12-17}} on many platforms, instruction-set architectures and operating systems, including Solaris, AIX, FreeBSD, HP-UX, Linux, macOS, Windows and OpenHarmony. It consists of a set of compiler directives, library routines, and environment variables that influence run-time behavior.[http://openmp.org/wp/2008/10/openmp-tutorial-at-supercomputing-2008/ OpenMP Tutorial at Supercomputing 2008][http://openmp.org/wp/2009/04/download-book-examples-and-discuss/ Using OpenMP – Portable Shared Memory Parallel Programming – Download Book Examples and Discuss]{{Cite web |title=OpenAtom OpenHarmony |url=https://docs.openharmony.cn/pages/v5.0/en/application-dev/napi/openmp-overview.md |access-date=2025-03-02 |website=docs.openharmony.cn}}

OpenMP is managed by the nonprofit technology consortium OpenMP Architecture Review Board (or OpenMP ARB), jointly defined by a broad swath of leading computer hardware and software vendors, including Arm, AMD, IBM, Intel, Cray, HP, Fujitsu, Nvidia, NEC, Red Hat, Texas Instruments, and Oracle Corporation.

OpenMP uses a portable, scalable model that gives programmers a simple and flexible interface for developing parallel applications for platforms ranging from the standard desktop computer to the supercomputer.

An application built with the hybrid model of parallel programming can run on a computer cluster using both OpenMP and Message Passing Interface (MPI), such that OpenMP is used for parallelism within a (multi-core) node while MPI is used for parallelism between nodes. There have also been efforts to run OpenMP on software distributed shared memory systems,{{cite journal |last=Costa |first=J.J.|display-authors=etal|date=May 2006 |title=Running OpenMP applications efficiently on an everything-shared SDSM |journal=Journal of Parallel and Distributed Computing |volume=66 |issue=5 |pages=647–658 |doi=10.1016/j.jpdc.2005.06.018 |hdl=2117/370260 |hdl-access=free }} to translate OpenMP into MPI{{cite book |last1=Basumallik |first1=Ayon |last2=Min |first2=Seung-Jai |last3=Eigenmann |first3=Rudolf |title=2007 IEEE International Parallel and Distributed Processing Symposium |chapter=Programming Distributed Memory Systems Using OpenMP |pages=1–8 |location=New York |publisher=IEEE Press |year=2007 |doi=10.1109/IPDPS.2007.370397|isbn=978-1-4244-0909-9 |citeseerx=10.1.1.421.8570 |s2cid=14237507 }} (a [https://www.cs.rochester.edu/~cding/Announcements/HIPS07/openmp.pdf preprint is available on Chen Ding's home page]; see especially Section 3 on Translation of OpenMP to MPI){{cite journal |last1=Wang |first1=Jue |last2=Hu |first2=ChangJun |last3=Zhang |first3=JiLin |last4=Li |first4=JianJiang |date=May 2010 |title=OpenMP compiler for distributed memory architectures |journal=Science China Information Sciences |volume=53 |issue=5 |pages=932–944 |doi=10.1007/s11432-010-0074-0 |doi-access=free }} ({{as of| 2016}} the KLCoMP software described in this paper does not appear to be publicly available), and to extend OpenMP for non-shared memory systems, for example via [https://software.intel.com/en-us/articles/cluster-openmp-for-intel-compilers Cluster OpenMP] (a product that used to be available for Intel C++ Compiler versions 9.1 to 11.1 but was dropped in 13.0).

Design

File:Fork join.svg: An illustration of multithreading where the primary thread forks off a number of threads which execute blocks of code in parallel

{{See also|Fork–join model}}

OpenMP is an implementation of multithreading, a method of parallelizing whereby a primary thread (a series of instructions executed consecutively) forks a specified number of sub-threads and the system divides a task among them. The threads then run concurrently, with the runtime environment allocating threads to different processors.

The section of code that is meant to run in parallel is marked accordingly, with a compiler directive that will cause the threads to form before the section is executed. Each thread has an ID attached to it which can be obtained using a function (called omp_get_thread_num()). The thread ID is an integer, and the primary thread has an ID of 0. After the execution of the parallelized code, the threads join back into the primary thread, which continues onward to the end of the program.

By default, each thread executes the parallelized section of code independently. Work-sharing constructs can be used to divide a task among the threads so that each thread executes its allocated part of the code. Both task parallelism and data parallelism can be achieved using OpenMP in this way.

The runtime environment allocates threads to processors depending on usage, machine load and other factors. The runtime environment can assign the number of threads based on environment variables, or the code can do so using functions. The OpenMP functions are included in a header file labelled {{mono|omp.h}} in C/C++.
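
A minimal sketch (not from the specification, and assuming the code is compiled with an OpenMP flag such as GCC's -fopenmp) of how a thread obtains its ID and how a program can request a thread count with the runtime routines described above:

    #include <stdio.h>
    #include <omp.h>   // declares omp_get_thread_num(), omp_set_num_threads(), etc.

    int main(void)
    {
        omp_set_num_threads(4);                // request four threads; the runtime may provide fewer
        #pragma omp parallel
        {
            int id = omp_get_thread_num();     // 0 for the primary thread
            printf("Thread %d of %d\n", id, omp_get_num_threads());
        }
        return 0;
    }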

History

The OpenMP Architecture Review Board (ARB) published its first API specifications, OpenMP for Fortran 1.0, in October 1997. In October the following year they released the C/C++ standard. 2000 saw version 2.0 of the Fortran specifications with version 2.0 of the C/C++ specifications being released in 2002. Version 2.5 is a combined C/C++/Fortran specification that was released in 2005.{{Citation needed|date=September 2022}}

Up to version 2.0, OpenMP primarily specified ways to parallelize highly regular loops, as they occur in matrix-oriented numerical programming, where the number of iterations of the loop is known at entry time. This was recognized as a limitation, and various task parallel extensions were added to implementations. In 2005, an effort to standardize task parallelism was formed, which published a proposal in 2007, taking inspiration from task parallelism features in Cilk, X10 and Chapel.{{cite conference |first1=Eduard |last1=Ayguade |first2=Nawal |last2=Copty |first3=Alejandro |last3=Duran |first4=Jay |last4=Hoeflinger |first5=Yuan |last5=Lin |first6=Federico |last6=Massaioli |first7=Ernesto |last7=Su |first8=Priya |last8=Unnikrishnan |first9=Guansong |last9=Zhang |title=A proposal for task parallelism in OpenMP |conference=Proc. Int'l Workshop on OpenMP |year=2007 |url=http://people.ac.upc.edu/aduran/papers/2007/tasks_iwomp07.pdf}}

Version 3.0 was released in May 2008. Included in the new features in 3.0 is the concept of tasks and the task construct,{{cite web|url=http://www.openmp.org/mp-documents/spec30.pdf |title=OpenMP Application Program Interface, Version 3.0 |date=May 2008 |access-date=2014-02-06 |publisher=openmp.org}} significantly broadening the scope of OpenMP beyond the parallel loop constructs that made up most of OpenMP 2.0.{{cite conference |title=A Runtime Implementation of OpenMP Tasks |first1=James |last1=LaGrone |first2=Ayodunni |last2=Aribuki |first3=Cody |last3=Addison |first4=Barbara |last4=Chapman|author4-link=Barbara Chapman |conference=Proc. Int'l Workshop on OpenMP |year=2011 |pages=165–178 |doi=10.1007/978-3-642-21487-5_13 |citeseerx=10.1.1.221.2775}}

Version 4.0 of the specification was released in July 2013.{{cite web |url=http://openmp.org/wp/openmp-40-api-released/ |title=OpenMP 4.0 API Released |publisher=OpenMP.org |date=2013-07-26 |access-date=2013-08-14 |url-status=dead |archive-url=https://web.archive.org/web/20131109175921/http://openmp.org/wp/openmp-40-api-released/ |archive-date=2013-11-09 }} It adds or improves the following features: support for accelerators; atomics; error handling; thread affinity; tasking extensions; user defined reduction; SIMD support; Fortran 2003 support.{{cite web|url=http://www.openmp.org/mp-documents/OpenMP4.0.0.pdf |title=OpenMP Application Program Interface, Version 4.0 |date=July 2013 |access-date=2014-02-06 |publisher=openmp.org}}{{full citation needed|date=March 2015}}

Version 5.2 of OpenMP was released in November 2021.{{cite web |url=https://www.openmp.org/specifications/|title=OpenMP 5.2 Specification}}

Version 6.0 was released in November 2024.{{cite web|url=https://www.openmp.org/home-news/openmp-arb-releases-openmp-6-0-for-easier-programming/|title=OpenMP ARB Releases OpenMP 6.0 for Easier Programming}}

Note that not all compilers (and operating systems) support the full set of features of the latest version(s).

Core elements

File:OpenMP language extensions.svg

The core elements of OpenMP are the constructs for thread creation, workload distribution (work sharing), data-environment management, thread synchronization, user-level runtime routines and environment variables.

In C/C++, OpenMP uses #pragmas. The OpenMP-specific pragmas are listed below.

= Thread creation =

The pragma omp parallel is used to fork additional threads to carry out the work enclosed in the construct in parallel. The original thread is denoted as the master thread, with thread ID 0.

Example (C program): Display "Hello, world." using multiple threads.

#include <stdio.h>
#include <omp.h>   // not strictly required for this example

int main(void)
{
    #pragma omp parallel
    printf("Hello, world.\n");
    return 0;
}

Use flag -fopenmp to compile using GCC:

$ gcc -fopenmp hello.c -o hello -ldl

Output on a computer with two cores, and thus two threads:

Hello, world.

Hello, world.

However, the output may also be garbled because of the race condition caused by the two threads sharing the standard output.

Hello, wHello, woorld.

rld.

Whether printf is atomic depends on the underlying implementation,{{Cite web|url=https://stackoverflow.com/a/40186101|title=C - How to use printf() in multiple threads}} unlike C++11's std::cout, which is thread-safe by default.{{Cite web|url=https://en.cppreference.com/w/cpp/io/cout|title=std::cout, std::wcout - cppreference.com}}

= Work-sharing constructs =

Used to specify how to assign independent work to one or all of the threads.

  • omp for or omp do: used to split up loop iterations among the threads; also called loop constructs.
  • sections: assigning consecutive but independent code blocks to different threads (see the sketch after the loop example below)
  • single: specifying a code block that is executed by only one thread; a barrier is implied at the end
  • master: similar to single, but the code block will be executed by the master thread only, with no barrier implied at the end.

Example: initialize the value of a large array in parallel, using each thread to do part of the work

int main(int argc, char **argv)
{
    int a[100000];

    #pragma omp parallel for
    for (int i = 0; i < 100000; i++) {
        a[i] = 2 * i;
    }

    return 0;
}

This example is embarrassingly parallel, and depends only on the value of {{mono|i}}. The OpenMP {{mono|parallel for}} flag tells the OpenMP system to split this task among its working threads. The threads will each receive a unique and private version of the variable.{{Cite web | url=http://supercomputingblog.com/openmp/tutorial-parallel-for-loops-with-openmp/ | title=Tutorial – Parallel for Loops with OpenMP| date=2009-07-14}} For instance, with two worker threads, one thread might be handed a version of {{mono|i}} that runs from 0 to 49999 while the second gets a version running from 50000 to 99999.
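
The sections and single constructs listed above can be sketched in the same style; the helper functions init_a and init_b are illustrative placeholders, not part of OpenMP:

    #include <stdio.h>

    static void init_a(void) { puts("A initialized"); }   // placeholder independent work
    static void init_b(void) { puts("B initialized"); }   // placeholder independent work

    void setup(void)
    {
        #pragma omp parallel
        {
            #pragma omp sections
            {
                #pragma omp section
                init_a();             // one thread executes this block
                #pragma omp section
                init_b();             // another thread may execute this block concurrently
            }                         // implicit barrier at the end of sections

            #pragma omp single
            printf("setup done\n");   // executed by exactly one thread; the others wait at the implied barrier
        }
    }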

=Variant directives=

Variant directives are one of the major features introduced in the OpenMP 5.0 specification to help programmers improve performance portability. They enable adaptation of OpenMP pragmas and user code at compile time. The specification defines traits to describe active OpenMP constructs, execution devices, and functionality provided by an implementation, context selectors based on the traits and user-defined conditions, and the metadirective and declare variant directives for users to program the same code region with variant directives.

  • The metadirective is an executable directive that conditionally resolves to another directive at compile time by selecting from multiple directive variants based on traits that define an OpenMP condition or context.
  • The declare variant directive has functionality similar to metadirective but selects a function variant at the call-site based on context or user-defined conditions.

The mechanism provided by the two variant directives for selecting variants is more convenient to use than the C/C++ preprocessing since it directly supports variant selection in OpenMP and allows an OpenMP compiler to analyze and determine the final directive from variants and context.

// code adaptation using preprocessing directives

int v1[N], v2[N], v3[N];

#if defined(nvptx)
#pragma omp target teams distribute parallel for map(to:v1,v2) map(from:v3)
    for (int i = 0; i < N; i++)
        v3[i] = v1[i] * v2[i];
#else
#pragma omp target parallel for map(to:v1,v2) map(from:v3)
    for (int i = 0; i < N; i++)
        v3[i] = v1[i] * v2[i];
#endif

// code adaptation using metadirective in OpenMP 5.0

int v1[N], v2[N], v3[N];

#pragma omp target map(to:v1,v2) map(from:v3)
#pragma omp metadirective \
    when(device={arch(nvptx)}: target teams distribute parallel for) \
    default(target parallel for)
    for (int i = 0; i < N; i++)
        v3[i] = v1[i] * v2[i];

= Clauses =

Since OpenMP is a shared memory programming model, most variables in OpenMP code are visible to all threads by default. However, private variables are sometimes necessary to avoid race conditions, and values sometimes need to be passed between the sequential part and the parallel region (the code block executed in parallel). Data environment management is therefore provided through data sharing attribute clauses, which are appended to the OpenMP directive. The different types of clauses are:

; Data sharing attribute clauses:

  • shared: the data declared outside a parallel region is shared, which means visible and accessible by all threads simultaneously. By default, all variables in the work sharing region are shared except the loop iteration counter.
  • private: the data declared within a parallel region is private to each thread, which means each thread will have a local copy and use it as a temporary variable. A private variable is not initialized and the value is not maintained for use outside the parallel region. By default, the loop iteration counters in the OpenMP loop constructs are private.
  • default: allows the programmer to state that the default data scoping within a parallel region will be either shared or none for C/C++, or shared, firstprivate, private, or none for Fortran. The none option forces the programmer to declare each variable in the parallel region using the data sharing attribute clauses.
  • firstprivate: the data is private to each thread, but initialized using the value of the variable of the same name in the master thread.
  • lastprivate: the data is private to each thread. The value of this private copy is copied to the variable of the same name outside the parallel region if the current iteration is the last iteration of the parallelized loop. A variable can be both firstprivate and lastprivate.
  • threadprivate: the data is global, but it is private in each parallel region at run time. The difference between threadprivate and private is the global scope associated with threadprivate and the value preserved across parallel regions. (A short sketch of these clauses follows this list.)
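
A minimal sketch of the data-sharing clauses above; the variable names are illustrative only:

    #include <stdio.h>

    int main(void)
    {
        int n = 10, offset = 5, last = 0;

        // n is shared, each thread gets an initialized private copy of offset (firstprivate),
        // and the value of last from the final iteration is copied back out (lastprivate).
        #pragma omp parallel for default(none) shared(n) firstprivate(offset) lastprivate(last)
        for (int i = 0; i < n; i++) {
            last = i + offset;        // each thread writes its own copy of last
        }

        printf("last = %d\n", last);  // 14: the value from the last iteration (i == 9)
        return 0;
    }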

; Synchronization clauses:

  • critical: the enclosed code block will be executed by only one thread at a time, and not simultaneously executed by multiple threads. It is often used to protect shared data from race conditions.
  • atomic: the memory update (write, or read-modify-write) in the next instruction will be performed atomically. It does not make the entire statement atomic; only the memory update is atomic. A compiler might use special hardware instructions for better performance than when using critical. (A short sketch contrasting critical and atomic follows this list.)
  • ordered: the structured block is executed in the order in which iterations would be executed in a sequential loop.
  • barrier: each thread waits until all of the other threads of a team have reached this point. A work-sharing construct has an implicit barrier synchronization at the end.
  • nowait: specifies that threads completing assigned work can proceed without waiting for all threads in the team to finish. In the absence of this clause, threads encounter a barrier synchronization at the end of the work sharing construct.
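
A short sketch contrasting the critical and atomic clauses on shared counters:

    #include <stdio.h>

    int main(void)
    {
        int hits = 0, updates = 0;

        #pragma omp parallel for
        for (int i = 0; i < 1000; i++) {
            #pragma omp atomic
            hits++;                  // only this memory update is made atomic

            #pragma omp critical
            {
                updates++;           // the whole block is executed by one thread at a time
            }
        }

        printf("%d %d\n", hits, updates);   // both counters reach 1000
        return 0;
    }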

; Scheduling clauses:

  • schedule (type, chunk): This is useful if the work sharing construct is a do-loop or for-loop. The iterations in the work sharing construct are assigned to threads according to the scheduling method defined by this clause (a short sketch follows this list). The three types of scheduling are:
  • static: Here, all the threads are allocated iterations before they execute the loop iterations. The iterations are divided among threads equally by default. However, specifying an integer for the parameter chunk will allocate chunk number of contiguous iterations to a particular thread.
  • dynamic: Here, some of the iterations are allocated to a smaller number of threads. Once a particular thread finishes its allocated iterations, it returns to get more from the iterations that are left. The parameter chunk defines the number of contiguous iterations that are allocated to a thread at a time.
  • guided: A large chunk of contiguous iterations is allocated to each thread dynamically (as above). The chunk size decreases exponentially with each successive allocation, down to a minimum size specified in the parameter chunk.
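
A minimal sketch of the schedule clause; the chunk size 4 and the workload function work() are illustrative choices, not prescribed values:

    #define N 64

    // Placeholder workload whose cost varies per iteration (illustrative only).
    static double work(int i)
    {
        double s = 0.0;
        for (int k = 0; k < i; k++)
            s += k;
        return s;
    }

    void run(double *out)
    {
        // static scheduling: iterations are pre-assigned in contiguous chunks of 4 per thread.
        // Writing schedule(dynamic, 4) or schedule(guided, 4) instead changes only how the
        // chunks are handed out at run time.
        #pragma omp parallel for schedule(static, 4)
        for (int i = 0; i < N; i++)
            out[i] = work(i);
    }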

; IF control:

  • if: This will cause the threads to parallelize the task only if a condition is met. Otherwise the code block executes serially.
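
A brief sketch of the if clause; the threshold 10000 is an arbitrary illustrative value:

    void scale(double *a, int n, double factor)
    {
        // The loop runs in parallel only when n is large enough to justify the threading overhead.
        #pragma omp parallel for if(n > 10000)
        for (int i = 0; i < n; i++)
            a[i] *= factor;          // executed serially when n <= 10000
    }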

; Data copying:

  • copyin: similar to firstprivate, but for threadprivate variables, which are not initialized unless copyin is used to pass the value from the master thread's copy. No copyout is needed because the value of a threadprivate variable is maintained throughout the execution of the whole program.
  • copyprivate: used with single to support the copying of data values from private objects on one thread (the single thread) to the corresponding objects on other threads in the team.

; Reduction:

  • reduction (operator | intrinsic : list): the variable has a local copy in each thread, but the values of the local copies will be combined (reduced) into a global shared variable. This is very useful if a particular operation (specified in operator for this particular clause) on a variable runs iteratively, so that its value at a particular iteration depends on its value at a prior iteration. The steps that lead up to the operational increment are parallelized, but each thread updates the global variable in a thread-safe manner. This would be required, for example, when parallelizing the numerical integration of functions and differential equations.
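
A minimal sketch of the reduction clause, a parallel sum of the kind mentioned above:

    #include <stdio.h>

    int main(void)
    {
        double sum = 0.0;

        // Each thread accumulates into a private copy of sum; the copies are
        // combined with + into the shared sum when the loop finishes.
        #pragma omp parallel for reduction(+:sum)
        for (int i = 1; i <= 1000; i++)
            sum += 1.0 / i;          // partial sums of the harmonic series

        printf("%f\n", sum);
        return 0;
    }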

; Others:

  • flush: the value of this variable is written back from the register to memory so that it can be used outside of the parallel part.
  • master: executed only by the master thread (the thread which forked off all the others during the execution of the OpenMP directive). No implicit barrier is implied; other team members (threads) are not required to reach it.

= User-level runtime routines =

Used to modify/check the number of threads, detect whether the execution context is in a parallel region, determine how many processors there are in the current system, set/unset locks, use timing functions, etc.
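
A short sketch (not from the specification) exercising a few of these routines, all declared in omp.h:

    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
        printf("processors available: %d\n", omp_get_num_procs());
        printf("inside a parallel region? %d\n", omp_in_parallel());   // 0 here

        double t0 = omp_get_wtime();   // wall-clock timer
        omp_set_num_threads(2);        // request a team size for subsequent parallel regions
        #pragma omp parallel
        {
            #pragma omp single
            printf("team size: %d\n", omp_get_num_threads());
        }
        printf("elapsed: %f s\n", omp_get_wtime() - t0);
        return 0;
    }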

= Environment variables =

A method to alter the execution features of OpenMP applications. Used to control the scheduling of loop iterations, the default number of threads, etc. For example, OMP_NUM_THREADS is used to specify the number of threads for an application.

Implementations

OpenMP has been implemented in many commercial compilers. For instance, Visual C++ 2005, 2008, 2010, 2012 and 2013 support it (OpenMP 2.0, in Professional, Team System, Premium and Ultimate editions[http://msdn2.microsoft.com/en-us/library/hs24szh9(vs.80).aspx Visual C++ Editions, Visual Studio 2005][http://msdn2.microsoft.com/en-us/library/hs24szh9(vs.90).aspx Visual C++ Editions, Visual Studio 2008][http://msdn2.microsoft.com/en-us/library/hs24szh9(vs.100).aspx Visual C++ Editions, Visual Studio 2010]), as well as Intel Parallel Studio for various processors.David Worthington, [http://www.sdtimes.com/intel_addresses_development_life_cycle_with_parallel_studio/about_intel_and_multicore/33497 "Intel addresses development life cycle with Parallel Studio"] {{Webarchive|url=https://web.archive.org/web/20120215032407/http://www.sdtimes.com/INTEL_ADDRESSES_DEVELOPMENT_LIFE_CYCLE_WITH_PARALLEL_STUDIO/About_INTEL_and_MULTICORE/33497 |date=2012-02-15 }}, SDTimes, 26 May 2009 (accessed 28 May 2009) Oracle Solaris Studio compilers and tools support the latest OpenMP specifications with productivity enhancements for Solaris OS (UltraSPARC and x86/x64) and Linux platforms. The Fortran, C and C++ compilers from The Portland Group also support OpenMP 2.5. GCC has also supported OpenMP since version 4.2.

Compilers with an implementation of OpenMP 3.0:

  • GCC 4.3.1
  • Mercurium compiler
  • Intel Fortran and C/C++ versions 11.0 and 11.1 compilers, Intel C/C++ and Fortran Composer XE 2011 and Intel Parallel Studio.
  • IBM XL compiler[http://www-01.ibm.com/software/awdtools/xlcpp/linux/features/?S_CMP=rnav "XL C/C++ for Linux Features"], (accessed 9 June 2009)
  • Sun Studio 12 update 1 has a full implementation of OpenMP 3.0{{cite web|url=http://developers.sun.com/sunstudio/features/ |title=Oracle Technology Network for Java Developers | Oracle Technology Network | Oracle |publisher=Developers.sun.com |access-date=2013-08-14}}
  • Multi-Processor Computing

Several compilers support OpenMP 3.1:

  • GCC 4.7{{cite web|url=https://gcc.gnu.org/wiki/openmp |title=openmp – GCC Wiki |publisher=Gcc.gnu.org |date=2013-07-30 |access-date=2013-08-14}}
  • Intel Fortran and C/C++ compilers 12.1{{cite web |author=Kennedy |first=Patrick |date=2011-09-06 |title=Intel® C++ and Fortran Compilers now support the OpenMP* 3.1 Specification | Intel® Developer Zone |url=http://software.intel.com/en-us/articles/intel-c-and-fortran-compilers-now-support-the-openmp-31-specification/ |access-date=2013-08-14 |publisher=Software.intel.com}}
  • IBM XL C/C++ compilers for AIX and Linux, V13.1{{Cite web|url=https://www.ibm.com/support/docview.wss?uid=swg27007322&aid=1|title = IBM XL C/C++ compilers features| website=IBM |date = 13 December 2018}} & IBM XL Fortran compilers for AIX and Linux, V14.1{{Cite web|url=http://www-01.ibm.com/support/docview.wss?uid=swg27007323&aid=1|title=IBM XL Fortran compilers features|date=13 December 2018}}
  • LLVM/Clang 3.7{{cite web|url=http://llvm.org/releases/3.7.0/tools/clang/docs/ReleaseNotes.html#openmp-support |title=Clang 3.7 Release Notes |publisher=llvm.org |access-date=2015-10-10}}
  • Absoft Fortran Compilers v. 19 for Windows, Mac OS X and Linux{{cite web|url=https://www.absoft.com/ |title=Absoft Home Page |access-date=2019-02-12}}

Compilers supporting OpenMP 4.0:

  • GCC 4.9.0 for C/C++, GCC 4.9.1 for Fortran{{cite web|url=https://www.gnu.org/software/gcc/gcc-4.9/changes.html |title=GCC 4.9 Release Series – Changes |publisher=www.gnu.org }}
  • Intel Fortran and C/C++ compilers 15.0{{cite web |url=https://software.intel.com/en-us/articles/openmp-40-features-in-intel-compiler-150 |title=OpenMP* 4.0 Features in Intel Compiler 15.0 |publisher=Software.intel.com |date=2014-08-13 |access-date=2014-11-10 |archive-date=2018-11-16 |archive-url=https://web.archive.org/web/20181116055150/https://software.intel.com/en-us/articles/openmp-40-features-in-intel-compiler-150 |url-status=dead }}
  • IBM XL C/C++ for Linux, V13.1 (partial) & XL Fortran for Linux, V15.1 (partial)
  • LLVM/Clang 3.7 (partial)

Several Compilers supporting OpenMP 4.5:

  • GCC 6 for C/C++ {{cite web|url=https://www.gnu.org/software/gcc/gcc-6/changes.html |title=GCC 6 Release Series - Changes |publisher=www.gnu.org }}
  • Intel Fortran and C/C++ compilers 17.0, 18.0, 19.0 {{cite web |title=OpenMP Compilers & Tools |url=https://www.openmp.org/resources/openmp-compilers-tools/ |website=openmp.org |publisher=www.openmp.org |access-date=29 October 2019 |ref=openmp-tools}}
  • LLVM/Clang 12 {{Cite web|title=OpenMP Support — Clang 12 documentation|url=https://clang.llvm.org/docs/OpenMPSupport.html#openmp-5-0-implementation-details|access-date=2020-10-23|website=clang.llvm.org}}

Partial support for OpenMP 5.0:

  • GCC 9 for C/C++ {{Cite web|title=GOMP — An OpenMP implementation for GCC - GNU Project - Free Software Foundation (FSF)|url=https://gcc.gnu.org/projects/gomp/|access-date=2020-10-23|website=gcc.gnu.org|archive-date=2021-02-27|archive-url=https://web.archive.org/web/20210227083702/http://gcc.gnu.org/projects/gomp/|url-status=dead}}
  • Intel Fortran and C/C++ compilers 19.1 {{Cite web|title=OpenMP* Support|url=https://www.intel.com/content/www/us/en/develop/documentation/cpp-compiler-developer-guide-and-reference/top/optimization-and-programming-guide/openmp-support.html|access-date=2020-10-23|website=Intel|language=en}}
  • LLVM/Clang 12

Auto-parallelizing compilers that generate source code annotated with OpenMP directives:

  • iPat/OMP
  • Parallware
  • PLUTO
  • ROSE (compiler framework)
  • S2P by KPIT Cummins Infosystems Ltd.
  • [https://link.springer.com/chapter/10.1007/978-3-030-58144-2_16 ComPar]
  • [https://arxiv.org/pdf/2204.12835.pdf PragFormer]

Several profilers and debuggers expressly support OpenMP:

  • Intel VTune Profiler - a profiler for the x86 CPU and Xe GPU architectures
  • Intel Advisor - a design assistance and analysis tool for OpenMP and MPI codes
  • Allinea Distributed Debugging Tool (DDT) – debugger for OpenMP and MPI codes
  • Allinea MAP – profiler for OpenMP and MPI codes
  • TotalView - debugger from Rogue Wave Software for OpenMP, MPI and serial codes
  • ompP – profiler for OpenMP
  • VAMPIR – profiler for OpenMP and MPI code

Pros and cons

{{More citations needed section|date=February 2017}}

Pros:

  • Portable multithreading code (in C/C++ and other languages, one typically has to call platform-specific primitives in order to get multithreading).
  • Simple: need not deal with message passing as MPI does.
  • Data layout and decomposition are handled automatically by directives.
  • Scalability comparable to MPI on shared-memory systems.{{cite journal|doi=10.1016/j.parco.2012.05.005|title=OpenMP parallelism for fluid and fluid-particulate systems|year=2012|last1=Amritkar|first1=Amit|last2=Tafti|first2=Danesh|last3=Liu|first3=Rui|last4=Kufrin|first4=Rick|last5=Chapman|first5=Barbara|author5-link=Barbara Chapman|journal=Parallel Computing|volume=38|issue=9|page=501}}
  • Incremental parallelism: one part of the program can be worked on at a time; no dramatic change to the code is needed.
  • Unified code for both serial and parallel applications: OpenMP constructs are treated as comments when sequential compilers are used.
  • Original (serial) code statements need not, in general, be modified when parallelized with OpenMP. This reduces the chance of inadvertently introducing bugs.
  • Both coarse-grained and fine-grained parallelism are possible.
  • In irregular multi-physics applications which do not adhere solely to the SPMD mode of computation, as encountered in tightly coupled fluid-particulate systems, the flexibility of OpenMP can have a big performance advantage over MPI.{{cite journal|doi=10.1016/j.jcp.2013.09.007|title=Efficient parallel CFD-DEM simulations using OpenMP|year=2014|last1=Amritkar|first1=Amit|last2=Deb|first2=Surya|last3=Tafti|first3=Danesh|journal=Journal of Computational Physics|volume=256|page=501|bibcode=2014JCoPh.256..501A|title-link=CFD-DEM}}
  • Can be used on various accelerators such as GPGPU[https://www.openmp.org/updates/openmp-accelerator-support-gpus/ OpenMP Accelerator Support for GPUs] and FPGAs.

Cons:

  • Risk of introducing difficult-to-debug synchronization bugs and race conditions.[http://developers.sun.com/solaris/articles/cpp_race.html Detecting and Avoiding OpenMP Race Conditions in C++]{{Cite web |url=http://software.intel.com/en-us/articles/32-openmp-traps-for-c-developers |title=Alexey Kolosov, Evgeniy Ryzhkov, Andrey Karpov 32 OpenMP traps for C++ developers |access-date=2009-04-15 |archive-date=2017-07-07 |archive-url=https://web.archive.org/web/20170707064110/https://software.intel.com/en-us/articles/32-openmp-traps-for-c-developers |url-status=dead }}
  • {{As of|2017}} only runs efficiently in shared-memory multiprocessor platforms (see however Intel's [http://software.intel.com/en-us/articles/cluster-openmp-for-intel-compilers Cluster OpenMP] {{Webarchive|url=https://web.archive.org/web/20181116055322/https://software.intel.com/en-us/articles/cluster-openmp-for-intel-compilers |date=2018-11-16 }} and other distributed shared memory platforms).
  • Requires a compiler that supports OpenMP.
  • Scalability is limited by memory architecture.
  • No support for compare-and-swap. Stephen Blair-Chappell, Intel Corporation, Becoming a Parallel Programming Expert in Nine Minutes, presentation at the ACCU 2010 conference
  • Reliable error handling is missing.
  • Lacks fine-grained mechanisms to control thread-processor mapping.
  • High chance of accidentally writing false sharing code.

Performance expectations

One might expect to get an N times speedup when running a program parallelized using OpenMP on an N processor platform. However, this seldom occurs, for these reasons:

  • When a dependency exists, a process must wait until the data it depends on is computed.
  • When multiple threads share a resource that cannot be used in parallel (like a file to write to), their requests are executed sequentially. Therefore, each thread must wait until another thread releases the resource.
  • A large part of the program may not be parallelized by OpenMP, which means that the theoretical upper limit of speedup is limited according to Amdahl's law (see the illustration after this list).
  • N processors in a symmetric multiprocessing (SMP) may have N times the computation power, but the memory bandwidth usually does not scale up N times. Quite often, the original memory path is shared by multiple processors and performance degradation may be observed when they compete for the shared memory bandwidth.
  • Many other common problems affecting the final speedup in parallel computing also apply to OpenMP, like load balancing and synchronization overhead.
  • Compiler optimisation may not be as effective when invoking OpenMP. This can commonly lead to a single-threaded OpenMP program running slower than the same code compiled without an OpenMP flag (which will be fully serial).
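
As a rough illustration of the Amdahl's law point above: if a fraction p of the run time is parallelized across N threads, the speedup is bounded by 1 / ((1 - p) + p/N); with, say, p = 0.9 and N = 8, the limit is about 4.7 rather than 8.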

Thread affinity

Some vendors recommend setting the processor affinity on OpenMP threads to associate them with particular processor cores.{{cite journal|doi=10.1535/itj.1104.08|title= Multi-Core Software|date=2007-11-15|last1=Chen|first1=Yurong|journal=Intel Technology Journal|volume=11|issue=4}}{{cite web|url=http://www.spec.org/omp/results/res2008q1/omp2001-20080128-00288.html|title=OMPM2001 Result|date=2008-01-28|publisher=SPEC}}{{cite web|url=http://www.spec.org/omp/results/res2003q2/omp2001-20030401-00079.html|title=OMPM2001 Result|date=2003-04-01|publisher=SPEC|access-date=2008-03-28|archive-date=2021-02-25|archive-url=https://web.archive.org/web/20210225023254/http://www.spec.org/omp/results/res2003q2/omp2001-20030401-00079.html|url-status=dead}}

This minimizes thread migration and context-switching cost among cores. It also improves the data locality and reduces the cache-coherency traffic among the cores (or processors).

Benchmarks

A variety of benchmarks has been developed to demonstrate the use of OpenMP, test its performance and evaluate correctness.

Simple examples

  • OmpSCR: OpenMP Source Code Repository

Performance benchmarks include:

  • NAS Parallel Benchmark
  • Barcelona OpenMP Task Suite, a collection of applications that allow testing of OpenMP tasking implementations.
  • SPEC series
  • SPEC OMP 2012
  • The SPEC ACCEL benchmark suite testing the OpenMP 4 target offloading API
  • The SPEChpc 2002 benchmark
  • CORAL benchmarks
  • Exascale Proxy Applications
  • Rodinia, focusing on accelerators.
  • Problem Based Benchmark Suite

Correctness benchmarks include:

  • OpenMP Validation Suite
  • OpenMP Validation and Verification Testsuite
  • DataRaceBench is a benchmark suite designed to systematically and quantitatively evaluate the effectiveness of OpenMP data race detection tools.
  • AutoParBench is a benchmark suite to evaluate compilers and tools which can automatically insert OpenMP directives.

See also

References

{{Reflist|30em}}

Further reading

{{refbegin}}

  • Quinn, Michael J., Parallel Programming in C with MPI and OpenMP. McGraw-Hill Inc., 2004. {{ISBN|0-07-058201-7}}
  • R. Chandra, R. Menon, L. Dagum, D. Kohr, D. Maydan, J. McDonald, Parallel Programming in OpenMP. Morgan Kaufmann, 2000. {{ISBN|1-55860-671-8}}
  • R. Eigenmann (Editor), M. Voss (Editor), OpenMP Shared Memory Parallel Programming: International Workshop on OpenMP Applications and Tools, WOMPAT 2001, West Lafayette, IN, USA, July 30–31, 2001. (Lecture Notes in Computer Science). Springer 2001. {{ISBN|3-540-42346-X}}
  • B. Chapman, G. Jost, R. van der Pas, D.J. Kuck (foreword), Using OpenMP: Portable Shared Memory Parallel Programming. The MIT Press (October 31, 2007). {{ISBN|0-262-53302-2}}
  • Tom Deakin and Timothy G. Mattson: Programming Your GPU with OpenMP: Performance Portability for GPUs. The MIT Press (November 7, 2023). {{ISBN|978-0-262547536}}
  • Parallel Processing via MPI & OpenMP, M. Firuziaan, O. Nommensen. Linux Enterprise, 10/2002
  • [https://web.archive.org/web/20080705180752/http://msdn.microsoft.com/msdnmag/issues/05/10/OpenMP/default.aspx MSDN Magazine article on OpenMP]
  • [http://openmp.org/mp-documents/omp-hands-on-SC08.pdf SC08 OpenMP Tutorial] {{Webarchive|url=https://web.archive.org/web/20130319133253/http://openmp.org/mp-documents/omp-hands-on-SC08.pdf |date=2013-03-19 }} (PDF) – Hands-On Introduction to OpenMP, Mattson and Meadows, from SC08 (Austin)
  • [https://www.openmp.org/specifications/ OpenMP Specifications] {{Webarchive|url=https://web.archive.org/web/20210302235321/https://www.openmp.org/specifications/ |date=2021-03-02 }}
  • [http://www.openmp.org/wp-content/uploads/F95_OpenMPv1_v2.pdf Miguel Hermanns: Parallel Programming in Fortran 95 using OpenMP (April 19, 2002)] (PDF) (OpenMP ver.1 and ver.2)

{{refend}}