Partitioned global address space
{{Short description|Parallel programming model paradigm in computer science}}
In computer science, partitioned global address space (PGAS) is a parallel programming model paradigm. PGAS is typified by communication operations involving a global memory address space abstraction that is logically partitioned, where a portion is local to each process, thread, or processing element (Almasi, George, [https://www.infosun.fim.uni-passau.de/cl/lehre/sem-ws1516/PGAS.pdf "PGAS (Partitioned Global Address Space) Languages"], Encyclopedia of Parallel Computing, Springer (2011): 1539–1545, https://doi.org/10.1007/978-0-387-09766-4_210; Cristian Coarfă, Yuri Dotsenko, John Mellor-Crummey, [http://caf.rice.edu/publications/caf-upc-ppopp05.pdf "An Evaluation of Global Address Space Languages: Co-Array Fortran and Unified Parallel C"]). The novelty of PGAS is that portions of the shared memory space may have an affinity for a particular process, thereby exploiting locality of reference to improve performance. A PGAS memory model is featured in various parallel programming languages and libraries, including Coarray Fortran, Unified Parallel C, [https://web.archive.org/web/20060615004330/http://www.eecs.berkeley.edu/Research/Projects/CS/parallel/castle/split-c/ Split-C], Fortress, Chapel, X10, [http://upcxx.lbl.gov UPC++], [http://www.pgas2013.org.uk/sites/default/files/finalpapers/Day2/R4/2_paper6.pdf Coarray C++], Global Arrays, [http://www.dash-project.org/ DASH], and SHMEM. The PGAS paradigm is now an integrated part of the Fortran language, as of Fortran 2008, which standardized coarrays.
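The partitioning described above can be sketched in a few lines: a global index space is divided into blocks, each block has affinity to one "place" (a process, thread, or processing element), and every global index translates to an (owner, local offset) pair. This is an illustrative model only, not the API of any real PGAS language; the names `owner_of`, `global_read`, and `global_write` are invented for the sketch.

```python
# Illustrative sketch of a partitioned global address space (not a real PGAS API).
# A global index space [0, N) is block-partitioned across P "places"; every
# global index maps to exactly one owning place plus a local offset.

N = 16          # size of the global index space
P = 4           # number of places (processes/threads/processing elements)
BLOCK = N // P  # size of the contiguous block owned by each place

def owner_of(i):
    """Translate a global index to its (owning place, local offset) pair."""
    return i // BLOCK, i % BLOCK

# Each place physically holds only its local partition,
# but together the partitions form one global address space:
partitions = [[0] * BLOCK for _ in range(P)]

def global_read(i):
    """Read global index i, wherever it lives (local or 'remote')."""
    place, off = owner_of(i)
    return partitions[place][off]

def global_write(i, value):
    place, off = owner_of(i)
    partitions[place][off] = value

global_write(9, 42)   # index 9 has affinity to place 2 (9 // 4)
print(owner_of(9))    # (2, 1)
print(global_read(9)) # 42
```

A place accessing an index it owns touches only local memory, which is exactly the locality that PGAS languages expose and exploit.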
The various languages and libraries offering a PGAS memory model differ widely in other details, such as the base programming language and the mechanisms used to express parallelism. Many PGAS systems combine the advantages of an SPMD programming style for distributed memory systems (as employed by MPI) with the data referencing semantics of shared memory systems. In contrast to message passing, PGAS programming models frequently offer one-sided communication operations such as Remote Memory Access (RMA), whereby one processing element may directly access memory with affinity to a different (potentially remote) process, without explicit semantic involvement by the passive target process. PGAS can offer more efficiency and scalability than traditional shared-memory approaches with a flat address space, because hardware-specific data locality can be explicitly exposed through the semantic partitioning of the address space.
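The meaning of "one-sided" can be made concrete with a small sketch: the initiator writes directly into memory with affinity to another processing element, while the target executes no matching receive. Here Python threads stand in for processing elements and a plain function named `rput` merely echoes the style of remote-put operations in PGAS libraries; real PGAS runtimes implement this over a network, often with RDMA hardware.

```python
# One-sided communication sketch: only the initiator is semantically involved;
# the target posts no receive. Threads stand in for processing elements (PEs).
import threading

P = 2
partitions = [[0] * 4 for _ in range(P)]   # each PE's local partition
done = threading.Event()

def rput(target_pe, offset, value):
    """One-sided 'remote put' into another PE's partition (illustrative name)."""
    partitions[target_pe][offset] = value

def pe0():
    rput(1, 0, 7)   # PE 0 deposits data directly into PE 1's memory
    done.set()      # then signals completion

def pe1():
    done.wait()     # PE 1 never posts a receive; it only synchronizes
    # ...and the data has simply appeared in its local partition.

t0 = threading.Thread(target=pe0)
t1 = threading.Thread(target=pe1)
t0.start(); t1.start(); t0.join(); t1.join()
print(partitions[1][0])   # 7
```

Contrast this with two-sided message passing, where PE 1 would have to execute an explicit receive operation matching PE 0's send.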
A variant of the PGAS paradigm, asynchronous partitioned global address space (APGAS), augments the programming model with facilities for both local and remote asynchronous task creation (Tim Stitt, [http://cnx.org/content/m20649/latest/ "An Introduction to the Partitioned Global Address Space (PGAS) Programming Model"]). Two programming languages that use this model are Chapel and X10.
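The APGAS idea of spawning a task "at" a local or remote place can be sketched with Python's `concurrent.futures`, using one single-worker executor per place. The helper `async_at` is invented for this sketch and only echoes the flavor of X10's `at (p) async` and Chapel's `on ... begin`; it is not the API of either language.

```python
# APGAS-style sketch: asynchronously launch tasks at a local or remote "place".
# One single-worker executor stands in for each place; async_at is an invented
# helper echoing X10's "at (p) async" and Chapel's "on ... begin".
from concurrent.futures import ThreadPoolExecutor

NUM_PLACES = 3
places = [ThreadPoolExecutor(max_workers=1) for _ in range(NUM_PLACES)]

def async_at(place, fn, *args):
    """Spawn an asynchronous task with affinity to the given place."""
    return places[place].submit(fn, *args)

def square(x):
    return x * x

# A local spawn (place 0) and remote spawns (places 1 and 2) run concurrently:
futures = [async_at(p, square, p + 1) for p in range(NUM_PLACES)]
results = [f.result() for f in futures]   # wait for all tasks to complete
print(results)   # [1, 4, 9]

for ex in places:
    ex.shutdown()
```

In a real APGAS runtime the task body would execute in the address space of the target place, with access to that place's partition of the global memory.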
Examples
- Coarray Fortran: now an integrated part of the language as of Fortran 2008 (Numrich, R. W., Reid, J., [https://doi.org/10.1145/289918.289920 Co-array Fortran for parallel programming], ACM SIGPLAN Fortran Forum 17(2), 1–31 (1998); J. Reid, [https://doi.org/10.1145/1837137.1837138 Coarrays in the Next Fortran Standard], SIGPLAN Fortran Forum 29(2), 10–27 (July 2010); GCC wiki, [https://gcc.gnu.org/wiki/Coarray Coarray support in gfortran as specified in the Fortran 2008 standard])
- Unified Parallel C: an explicitly parallel SPMD dialect of the ISO C programming language (W. Chen, D. Bonachea, J. Duell, P. Husbands, C. Iancu, K. Yelick, [https://doi.org/10.1145/782814.782825 A Performance Analysis of the Berkeley UPC Compiler], 17th Annual International Conference on Supercomputing (ICS), 2003; Tarek El-Ghazawi, William Carlson, Thomas Sterling, and Katherine Yelick, [https://books.google.com/books?id=n4pknjxmh7EC UPC: Distributed Shared Memory Programming], John Wiley & Sons, 2005; UPC Consortium, [https://doi.org/10.2172/1134233 UPC Language and Library Specifications, v1.3], Lawrence Berkeley National Lab Tech Report LBNL-6623E, Nov 2013)
- Chapel: a parallel language originally developed by Cray under the DARPA HPCS project (Bradford L. Chamberlain, [https://chapel-lang.org/publications/PMfPC-Chapel.pdf Chapel], in [https://mitpress.mit.edu/programming-models-parallel-computing Programming Models for Parallel Computing], edited by Pavan Balaji, MIT Press, November 2015)
- [http://upcxx.lbl.gov UPC++]: a C++ template library that provides PGAS communication operations, including Remote Memory Access (RMA) and Remote Procedure Call (RPC), designed to support high-performance computing on exascale supercomputers (John Bachan, Scott B. Baden, Steven Hofmeyr, Mathias Jacquelin, Amir Kamil, Dan Bonachea, Paul H. Hargrove, Hadia Ahmed, "[https://doi.org/10.25344/S4V88H UPC++: A High-Performance Communication Framework for Asynchronous Computation]", 33rd IEEE International Parallel & Distributed Processing Symposium (IPDPS'19), May 20–24, 2019)
- [http://www.pgas2013.org.uk/sites/default/files/finalpapers/Day2/R4/2_paper6.pdf Coarray C++]: a C++ library developed by Cray, providing a close analog to Fortran coarray functionality (T. A. Johnson, [https://www.research.ed.ac.uk/portal/files/19680805/pgas2013proceedings.pdf Coarray C++], Proceedings of the 7th International Conference on PGAS Programming Models, pp. 54–66, PGAS'13 (2013))
- Global Arrays: a library supporting parallel scientific computing on distributed arrays (Nieplocha, Jaroslaw; Harrison, Robert J.; Littlefield, Richard J. (1996), [https://doi.org/10.1007/BF00130708 Global arrays: A nonuniform memory access programming model for high-performance computers], The Journal of Supercomputing 10(2): 169–189)
- [http://www.dash-project.org/ DASH]: a C++ template library for distributed data structures with support for hierarchical locality (K. Fürlinger, C. Glass, A. Knüpfer, J. Tao, D. Hünich, et al., [https://doi.org/10.1007/978-3-319-14313-2_46 DASH: Data Structures and Algorithms with Support for Hierarchical Locality], Euro-Par Parallel Processing Workshops (2014))
- SHMEM: a family of libraries (including OpenSHMEM) providing one-sided put/get access to distributed data objects for parallel scientific computing
- X10: a parallel language developed by IBM under the DARPA HPCS project (P. Charles, C. Grothoff, V. Saraswat, C. Donawa, A. Kielstra, et al., [https://doi.org/10.1145/1103845.1094852 X10: an object-oriented approach to nonuniform cluster computing], Proceedings of the 20th Annual ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA'05) (2005))
- Fortress: a parallel language developed by Sun Microsystems under the DARPA HPCS project
- [http://titanium.cs.berkeley.edu/ Titanium]: an explicitly parallel dialect of Java developed at UC Berkeley to support scientific high-performance computing on large-scale multiprocessors (Katherine Yelick, Paul Hilfinger, Susan Graham, Dan Bonachea, Jimmy Su, Amir Kamil, Kaushik Datta, Phillip Colella, and Tong Wen, [https://dx.doi.org/10.1177/1094342007078449 "Parallel Languages and Compilers: Perspective from the Titanium Experience"], The International Journal of High Performance Computing Applications 21(3): 266–290, August 1, 2007; Katherine Yelick et al., [https://escholarship.org/uc/item/60v4m2ph "Titanium"], [https://dx.doi.org/10.1007/978-0-387-09766-4_516 Encyclopedia of Parallel Computing], edited by David Padua, Springer (2011): 2049–2055)
- [https://web.archive.org/web/20060615004330/http://www.eecs.berkeley.edu/Research/Projects/CS/parallel/castle/split-c/ Split-C]: a parallel extension of the C programming language that supports efficient access to a global address space (Culler, D. E., Dusseau, A., Goldstein, S. C., Krishnamurthy, A., Lumetta, S., von Eicken, T., & Yelick, K., [https://doi.org/10.1109/SUPERC.1993.1263470 Parallel programming in Split-C], Supercomputing '93: Proceedings of the 1993 ACM/IEEE Conference on Supercomputing, pp. 262–273, IEEE)
- The Adapteva Epiphany architecture: a manycore network-on-chip processor with scratchpad memory addressable between cores
External links
- [http://cnx.org/content/m20649/latest/ An Introduction to the Partitioned Global Address Space Model]
- [http://upc.gwu.edu/tutorials/tutorials_sc2003.pdf Programming in the Partitioned Global Address Space Model] {{Webarchive|url=https://web.archive.org/web/20100612064551/http://upc.gwu.edu/tutorials/tutorials_sc2003.pdf |date=2010-06-12 }} (2003)
- [https://gasnet.lbl.gov GASNet Communication System]: provides a software infrastructure for PGAS languages over high-performance networks (Bonachea, D., Hargrove, P., [https://doi.org/10.25344/S4QP4W GASNet-EX: A High-Performance, Portable Communication Library for Exascale], Proceedings of Languages and Compilers for Parallel Computing (LCPC'18), Oct 2018)