Intel iPSC
{{Short description|Line of computers in the 1980s and 1990s}}
The Intel Personal SuperComputer (Intel iPSC) was a product line of parallel computers in the 1980s and 1990s.
The iPSC/1 was superseded by the Intel iPSC/2, and then the Intel iPSC/860.
== iPSC/1 ==
In 1984, Justin Rattner became manager of the Intel Scientific Computers group in Beaverton, Oregon. He hired a team that included mathematician Cleve Moler.
Internally, the iPSC connected its processors in a hypercube network topology, an approach inspired by the Caltech Cosmic Cube research project.
For that reason, it was configured with a power-of-two number of nodes, corresponding to the corners of hypercubes of increasing dimension.{{Cite web |title= The Personal SuperComputer |publisher= Computer History Museum |url= http://www.computerhistory.org/revolution/supercomputers/10/74/286 |access-date= November 4, 2013 }}
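The connection rule can be stated compactly: label each of the 2<sup>''d''</sup> nodes with a ''d''-bit number, and link two nodes exactly when their labels differ in a single bit, so each node has exactly ''d'' neighbours and the node count doubles with each added dimension. The short C sketch below (an illustration for this article, not original iPSC software) prints the neighbours of every node in a five-dimensional cube such as the iPSC/d5:

<syntaxhighlight lang="c">
#include <stdio.h>

/* Illustrative sketch: list the neighbours of each node of a
 * d-dimensional hypercube.  Two nodes are directly connected
 * exactly when their binary labels differ in one bit, so each
 * node of an iPSC/d5 (d = 5, 32 nodes) has five neighbours. */
int main(void)
{
    const int d = 5;              /* hypercube dimension          */
    const int nodes = 1 << d;     /* 2^d nodes: 32 for the iPSC/d5 */

    for (int node = 0; node < nodes; node++) {
        printf("node %2d:", node);
        for (int bit = 0; bit < d; bit++)
            printf(" %2d", node ^ (1 << bit));  /* flip one bit */
        printf("\n");
    }
    return 0;
}
</syntaxhighlight>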
[[Image:Intel iPSC-1 (1985) - Computer History Museum (2007-11-10 22.58.31 by Carlo Nardone) edit1.jpg|thumb|An Intel iPSC-1 (1985) at the Computer History Museum]]
Intel announced the iPSC/1 in 1985, with 32 to 128 nodes connected with Ethernet into a hypercube. The system was managed by a personal computer of the PC/AT era running Xenix, the "cube manager".{{Cite web |title= Intel iPSC/1 |first= Paul R. |last=Pierce |url= http://www.piercefuller.com/library/10166.html |url-status=dead |archive-url= https://web.archive.org/web/20130603024007/http://www.piercefuller.com/library/10166.html |archive-date= June 3, 2013 |access-date= November 4, 2013 }} Each node had an 80286 CPU with an 80287 math coprocessor, 512 KB of RAM, and eight Ethernet ports (seven for the hypercube interconnect, and one to talk to the cube manager).{{Cite web |title= The Intel Hypercube, part 1 |date= October 28, 2013 |first=Cleve |last=Moler |url= http://blogs.mathworks.com/cleve/2013/10/28/the-intel-hypercube-part-1/ |access-date= November 4, 2013 |author-link= Cleve Moler }}
A message passing interface called NX, developed by Paul Pierce, evolved throughout the life of the iPSC line.{{Cite journal |title=The NX message passing interface |first=Paul |last=Pierce |journal= Parallel Computing |volume= 20 |number= 4 |pages= 1285–1302 |date= April 1994 |doi= 10.1016/0167-8191(94)90023-X }}
Because only the cube manager had connections to the outside world, developing and debugging applications was difficult.{{Cite book |chapter= An I/O management system for the iPSC/1 hypercube |chapter-url=https://dl.acm.org/doi/abs/10.1145/75427.1030220 |first1= Martin J. |last1=Schedlbauer |title= Proceedings of the 17th Conference on ACM Annual Computer Science Conference |isbn=978-0-89791-299-0 |doi=10.1145/75427 |page= 400 |year= 1989 }}
The basic models were the iPSC/d5 (a five-dimensional hypercube with 32 nodes), iPSC/d6 (six dimensions, 64 nodes), and iPSC/d7 (seven dimensions, 128 nodes).
Each cabinet had 32 nodes, and prices ranged up to about half a million dollars for the four-cabinet iPSC/d7 model.
Extra-memory (iPSC-MX) and vector-processor (iPSC-VX) models were also available in all three sizes, as was a four-dimensional model (iPSC/d4) with 16 nodes.http://delivery.acm.org/10.1145/70000/63074/p1207-orcutt.pdf{{Dead link |date= November 2013 }}
The iPSC/1 was called the first parallel computer built from commercial off-the-shelf parts.{{Cite web |title= Other Artifacts in the Collection |first= Paul R. |last=Pierce |url= http://www.piercefuller.com/collect/other.html |url-status=dead |archive-url= https://web.archive.org/web/20130603024131/http://www.piercefuller.com/collect/other.html |archive-date= June 3, 2013 |access-date= November 4, 2013 }} This allowed it to reach the market at about the same time as its competitor from nCUBE, even though the nCUBE project had started earlier.
Each iPSC cabinet measured 127 cm × 41 cm × 43 cm overall. Total computer performance was estimated at 2 MFLOPS, and the memory width was 16 bits.
Serial #1 iPSC/1 with 32 nodes was delivered to Oak Ridge National Laboratory in 1985.{{Cite web |title= ORNL HPCC history (timeline details) |first= Betsy A. |last=Riley |url= http://www.csm.ornl.gov/SC98/timetab.html}}{{Cite web |title= History of Supercomputing |url= http://www.csm.ornl.gov/ssi-expo/histext.html}}
== iPSC/2 ==
[[File:Intel iPSC 2 16-node parallel computer.gif|thumb|An Intel iPSC/2 16-node parallel computer]]
The Intel iPSC/2 was announced in 1987.
It was available in several configurations, the base setup being one cabinet with 16 Intel 80386 processors at 16 MHz, each with 4 MB of memory and an 80387 coprocessor on the same module.{{Cite web |title= Intel iPSC/2 (Rubik) |work= Computer Museum |publisher= Katholieke Universiteit Leuven |url= http://www.cs.kuleuven.be/museum/multiproc/rubik-E.html |access-date= November 4, 2013 }} The operating system and user programs were loaded from a management PC, typically an Intel 301 with a special interface card. Instead of Ethernet, a custom Direct-Connect Module with eight channels of about 2.8 Mbyte/s data rate each was used for hypercube interconnection.
The custom interconnect hardware resulted in higher cost but reduced communication delays.{{Cite book |title= Data-parallel Programming on MIMD Computers |first1= Philip J. |last1=Hatcher |first2=Michael Jay |last2=Quinn |publisher= MIT Press |year= 1991 |page= 7 |url= https://books.google.com/books?id=1sEQ97wq7KgC&pg=PA7 |isbn= 9780262082051 }}
The software in the management processor was called the System Resource Manager instead of "cube manager".
The system allowed expansion up to 128 nodes, each with a processor and coprocessor.{{Cite book |title=Computer Organization and Design |first= P. Pal |last=Chauddhuri |publisher= PHI Learning |year= 2008 |page= 826 |url= https://books.google.com/books?id=5LNwVRpfkRgC&pg=PA826 |isbn= 9788120335110 }}
The base modules could be upgraded to the SX (Scalar eXtension) version by adding a Weitek 1167 floating point unit.{{Cite book |title= Parallel Methods for VLSI Layout Design |first= Si. Pi |last=Ravikumār |publisher= Greenwood Publishing Group |year= 1996 |page= 183 |url= https://books.google.com/books?id=VPXAxkTKxXIC&pg=PA183 |isbn= 9780893918286 }}
Another configuration allowed each processor module to be paired with a VX (Vector eXtension) module containing dedicated multiplication and addition units. This had the downside of halving the number of available interface card slots, so multiple cabinets were required to run the maximum number of nodes and still connect them to VX modules.{{Cite book |title= Supercomputing in Engineering Analysis |chapter= Advanced Architecture Computers |author1-link=Jack Dongarra |first1=Jack |last1=Dongarra |author2-link=Iain S. Duff |first2=Iain S. |last2=Duff |editor-first= Hojjat |editor-last=Adeli |publisher= CRC Press |year= 1991 |pages= 51–54 |chapter-url= https://books.google.com/books?id=3GqB6DknJl0C&pg=PA51 |isbn= 9780893918286 }}
The nodes of iPSC/2 ran the proprietary NX/2 operating system, while the host machine ran System V or Xenix.{{Cite book |chapter= The NX/2 operating system |volume= 1 |publisher= ACM |first= Paul |last=Pierce |title= Proceedings of the third conference on Hypercube concurrent computers and applications Architecture, software, computer systems, and general issues - |year= 1988 |pages= 384–390 |isbn= 978-0-89791-278-5 |doi= 10.1145/62297.62341|url=http://portal.acm.org/citation.cfm?id=62341|series= C3P |s2cid= 45688408 }}
Nodes could be configured, like the iPSC/1, without any local disk storage, or one of the Direct-Connect Module connections could be used to attach a clustered file system (called a concurrent file system at the time).{{Cite book |chapter= Performance measurement of a parallel Input/Output system for the Intel iPSC/2 Hypercube |publisher= ACM |first1= James C. |last1=French |first2=Terrence W. |last2=Pratt |first3=Mriganka |last3=Das |title= Proceedings of the 1991 ACM SIGMETRICS conference on Measurement and modeling of computer systems - SIGMETRICS '91 |date= May 1991 |pages= 178–187 |isbn= 978-0-89791-392-8 |doi= 10.1145/107971.107990 |s2cid= 13899933 }}
The faster node computing elements and the improved interconnect together raised application performance over the iPSC/1.{{Cite book |chapter= Application performance improvement on the iPSC/2 computer |volume= 1 |publisher= ACM |first1=S. |last1=Arshi |first2=R. |last2=Asbury |first3=J. |last3=Brandenburg |first4=D. |last4=Scott |title= Proceedings of the third conference on Hypercube concurrent computers and applications Architecture, software, computer systems, and general issues - |year= 1988 |pages= 149–154 |isbn= 978-0-89791-278-5 |doi= 10.1145/62297.62316 |s2cid= 46148117 }}{{Cite journal |title=Benchmarking the iPSC/2 hypercube multiprocessor |first1=Luc |last1=Bomans |first2=Dirk |last2=Roose |journal= Concurrency: Practice and Experience |volume= 1 |number= 1 |pages= 3–18 |date= September 1989 |doi= 10.1002/cpe.4330010103 }}
An estimated 140 iPSC/2 systems were built.{{Cite book |title= Massively Parallel, Optical, and Neural Computing in the United States |chapter= Commercially Available Systems |editor1= Gilbert Kalb |editor2=Robert Moxley |publisher= IOS Press |year= 1992 |pages= 17–18 |chapter-url= https://books.google.com/books?id=7Ao4TBf4uv4C&pg=PA17 |isbn= 9781611971507 }}
== iPSC/860 ==
[[File:Intel iPSC 860 32-node parallel computer front panel.gif|thumb|Front panel of an Intel iPSC/860 32-node parallel computer, August 22, 1995]]
Intel announced the iPSC/860 in 1990. The iPSC/860 consisted of up to 128 processing elements connected in a hypercube, each consisting of an Intel i860 at 40–50 MHz or an Intel 80386 microprocessor.{{Cite web |title= iPSC/860 Guide |first= Siddharthan |last=Ramachandramurthi |year= 1996 |publisher= Computational Science Education Project at Oak Ridge National Laboratory |url= http://www.phy.ornl.gov/csep/CSEP/IP/IP.html |access-date= November 4, 2013 }}
Memory per node was increased to 8 MB and a similar Direct-Connect Module was used, which limited the size to 128 nodes.{{Cite book |title= Domain-based Parallelism and Problem Decomposition Methods in Computational Science and Engineering |chapter= Parallel Implicit Methods for Aerodynamic Applications on Unstructured Grids |first= V. |last=Venkatakrishnan |editor1-first= David E. |editor1-last=Keyes |editor2-first=Y. |editor2-last=Saad |editor3-first=Donald G. |editor3-last=Truhlar |publisher= SIAM |year= 1991 |page= 66 |chapter-url= https://books.google.com/books?id=_Ls0E-ITVpgC&pg=PA66 |isbn= 9781611971507 }}
[[File:Intel iPSC 860 32-node parallel computer (door open).gif|thumb|An Intel iPSC/860 32-node parallel computer with the door open]]
One customer was the Oak Ridge National Laboratory. The performance of the iPSC/860 was analyzed in several research projects.{{Cite journal |title= Evaluating the basic performance of the Intel iPSC/860 parallel computer |first1= Rudolf |last1=Berrendorf |first2=Jukka |last2=Helin |journal= Concurrency: Practice and Experience |volume= 4 |number= 3 |pages= 223–240 |date= May 1992 |doi= 10.1002/cpe.4330040303 }}{{Cite journal |title= Performance of the Intel iPSC/860 and Ncube 6400 hypercubes |first=T.H. |last=Dunigan |journal= Parallel Computing |volume= 17 |number= 10–11 |pages= 1285–1302 |date= December 1991 |doi= 10.1016/S0167-8191(05)80039-0 |url= https://zenodo.org/record/1259901 }} The iPSC/860 was also the original development platform for the Tachyon parallel ray tracing engine{{Cite book|date = 1996-07-01|pages = 138–141|doi = 10.1109/MPIDC.1996.534105|first1 = J.|last1 = Stone|first2 = M.|last2 = Underwood| title=Proceedings. Second MPI Developer's Conference | chapter=Rendering of numerical flow simulations using MPI |isbn = 978-0-8186-7533-1|citeseerx = 10.1.1.27.4822|s2cid = 16846313}}{{Cite thesis|url = http://scholarsmine.mst.edu/masters_theses/1747/|title = An Efficient Library for Parallel Ray Tracing and Animation |type = Masters |date = January 1998|publisher = Computer Science Department, University of Missouri-Rolla, April 1998|last = Stone|first = John E.}} that became part of the SPEC MPI 2007 benchmark, and is still widely used today.{{Cite book|date = 2013-08-01|pages = 43–50|doi = 10.1109/XSW.2013.10|first1 = J.E.|last1 = Stone|first2 = B.|last2 = Isralewitz|first3 = K.|last3 = Schulten| title=2013 Extreme Scaling Workshop (XSW 2013) | chapter=Early experiences scaling VMD molecular visualization and analysis jobs on blue waters |isbn = 978-1-4799-3691-5|citeseerx = 10.1.1.396.3545|s2cid = 16329833}}
The iPSC line was superseded by a research project called the Touchstone Delta at the California Institute of Technology, which evolved into the Intel Paragon.
{{clear}}