Petascale computing

{{short description|Computer systems capable of one petaFLOPS}}


Petascale computing refers to computing systems capable of performing at least one quadrillion (10<sup>15</sup>) floating-point operations per second (FLOPS). Such systems, often called petaflops systems, represented a major leap over the earlier terascale supercomputers, enabling them to handle far larger datasets and more complex computations.

== Definition ==

{{see|FLOPS|TOP500|computer performance}}

Floating-point operations per second (FLOPS) are one measure of computer performance. FLOPS can be recorded at different levels of numerical precision; the standard measure, used by the TOP500 supercomputer list, counts 64-bit (double-precision floating-point format) operations per second on the High Performance LINPACK (HPLinpack) benchmark.

The metric typically refers to a single computing system, although it can also be applied to distributed computing systems for comparison. Alternative precision measures exist within the LINPACK benchmarks, but they are not part of the standard metric. HPLinpack is widely acknowledged to be an imperfect measure of a supercomputer's usefulness in real-world applications, but it remains the common standard for performance measurement.{{cite journal |last1=Bourzac |first1=Katherine |title=Supercomputing poised for a massive speed boost |journal=Nature |date=November 2017 |volume=551 |issue=7682 |pages=554–556 |doi=10.1038/d41586-017-07523-y |url=https://doi.org/10.1038/d41586-017-07523-y |access-date=3 June 2022|doi-access=free }}{{cite web |last1=Reed |first1=Daniel |last2=Dongarra |first2=Jack |title=Exascale Computing and Big Data: The Next Frontier |url=http://www.netlib.org/utk/people/JackDongarra/PAPERS/Exascale-Reed-Dongarra.pdf |access-date=3 June 2022}}
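The metric prefixes used throughout this article are plain powers of ten, so figures reported in different units can be compared directly. A minimal sketch, using the Rmax values cited later in the article (the script only performs unit conversion; it does not measure anything):

```python
# Unit conversions between the FLOPS prefixes used in this article.
PETA = 10**15  # petaFLOPS
EXA = 10**18   # exaFLOPS

roadrunner_pflops = 1.026   # Roadrunner, 2008 (petaFLOPS)
frontier_eflops = 1.102     # Frontier, June 2022 (exaFLOPS)

# Express Frontier's Rmax in petaFLOPS for comparison.
frontier_pflops = frontier_eflops * EXA / PETA
print(f"Frontier: {frontier_pflops:.0f} petaFLOPS")
print(f"Frontier vs. Roadrunner: {frontier_pflops / roadrunner_pflops:.0f}x")
```

This shows the roughly thousandfold gap between the first petascale system and the first exascale system on the same benchmark.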

== History ==

The petaFLOPS barrier was first broken on 16 September 2007 by the distributed computing project Folding@home.{{cite journal |author= Michael Gross |title= Folding research recruits unconventional help |journal= Current Biology |year= 2012 |volume= 22 |issue= 2 |pages= R35–R38 |doi= 10.1016/j.cub.2012.01.008 |pmid= 22389910|doi-access= free }} The first single petascale system, Roadrunner, entered operation in 2008.{{cite book|title=The potential impact of high-end capability computing on four illustrative fields of science and engineering|author=National Research Council (U.S.)|publisher=The National Academies|year=2008|url=https://books.google.com/books?id=XHvp6eTSypIC&pg=PA11 |page=11 | isbn=978-0-309-12485-0}} Roadrunner, built by IBM, had a sustained performance of 1.026 petaFLOPS. Jaguar became the second computer to break the petaFLOPS milestone later in 2008, and reached a performance of 1.759 petaFLOPS after a 2009 update.{{cite web|author=National Center for Computational Sciences (NCCS)|url=http://www.nccs.gov/jaguar/|title=World's Most Powerful Supercomputer for Science!|publisher=NCCS|year=2010|accessdate=2010-06-26|url-status=dead|archiveurl=https://web.archive.org/web/20091127092438/http://www.nccs.gov/jaguar/|archivedate=2009-11-27}}

In June 2020, Fugaku became the fastest supercomputer in the world at 415 petaFLOPS, and it achieved an Rmax of 442 petaFLOPS in November of the same year.

By 2022, exascale computing had been reached with the development of Frontier, surpassing Fugaku with an Rmax of 1.102 exaFLOPS in June 2022.{{Cite web |title=June 2022 {{!}} TOP500 |url=https://top500.org/lists/top500/2022/06 |access-date=2024-11-21 |website=www.top500.org}}

=== Artificial intelligence ===

Modern artificial intelligence (AI) systems require large amounts of computational power to train their models. OpenAI employed 25,000 Nvidia A100 GPUs to train GPT-4, using a total of 133 septillion (1.33×10<sup>26</sup>) floating-point operations.{{Cite web |last=Minde |first=Tor Björn |date=2023-10-08 |title=Generative AI does not run on thin air |url=https://www.ri.se/en/news/blog/generative-ai-does-not-run-on-thin-air |access-date=2024-03-29 |website=RISE}}


== References ==

{{reflist}}