
3 Brief History of Supercomputing
Pages 28-66



From page 28...
... government funding for research on cryptanalysis, nuclear weapons, and other defense applications in its first several decades.2 Arguably, the first working, modern, electronic, digital computer was the Colossus machine, put into operation at Bletchley Park,
1. An expanded version of much of the analysis in this chapter will be found in "An Economic History of the Supercomputer Industry," by Kenneth Flamm, 2004.
2. In Chapter 3, "Military Roots," of Creating the Computer: Government, Industry, and High Technology (Brookings Institution Press, 1988)
From page 29...
... engineers at the Naval Computing Machinery Laboratory (a National Cash Register plant in Dayton, Ohio, deputized into the war effort) were building copies or improved versions of Bletchley Park electronic cryptanalysis machines, as well as computers of their own design.
From page 30...
... The link between both cryptanalytical and nuclear design applications and high-performance computing goes back to the very first computers. ENIAC's designers, Eckert and Mauchly, built the first working stored-program electronic computer in the United States in 1949 (the BINAC)
From page 31...
... The Atomic Energy Commission (AEC) set up a formal computer research program in 1956 and contracted with IBM for the Stretch system and with Sperry Rand (which acquired both the Eckert-Mauchly computer group and ERA in the 1950s)
From page 32...
... Washington, D.C.: Brookings Institution Press.
...est available commercial machines, were the IBM 7030 Stretch and Sperry Rand UNIVAC LARC, delivered in the early 1960s.7 These two machines established a pattern often observed in subsequent decades: The government-funded supercomputers were produced in very limited numbers and delivered primarily to government users.
From page 33...
... These techniques are now used in most advanced microprocessors, such as the Intel Pentium and the Motorola/IBM PowerPC."8 Similarly, LARC technologies were used in Sperry Rand's UNIVAC III.9 Yet another feature of the supercomputer marketplace also became established over this period: a high mortality rate for the companies involved. IBM exited the supercomputer market in the mid-1970s.
From page 34...
... Because the specific applications and codes they ran for defense applications were often secret, frequently were tied to special-purpose custom hardware and peripherals built in small numbers, and changed quickly over time, the avail
From page 35...
... Figure 3.2 shows that the cost of sustained computing power on the Cray-1 was roughly comparable to that of the cost/performance champion of the day, the Apple II microcomputer. During this period, IBM retreated from the supercomputer market, instead focusing on its fast-growing and highly profitable commercial computer systems businesses.
From page 36...
... Historically, the rationale for Japanese government support in semiconductors had been to serve as a stepping-stone for creating a globally competitive computer industry, since the semiconductor divisions of the large Japanese electronics companies had also produced computers sold in a protected Japanese market. Aided by their new capabilities in semiconductors and a successful campaign to acquire key bits of IBM's mainframe technology, by the mid-1980s Japanese computer companies were ship
From page 37...
... The other was the High Speed Computing System for Scientific and Technological Uses project, also called the SuperSpeed project, which focused on supercomputing technology.16 At roughly the same time, the three large Japanese electronics companies manufacturing mainframe computers began to sell supercomputers at home and abroad. The Japanese vendors provided good vectorizing compilers with their vector supercomputers.
From page 38...
... Furthermore, many technologists believed that continued advances in computer capability based on merely increasing the clock rates of traditional computer processor designs were doomed to slow down as inherent physical limits to the size of semiconductor electronic components were approached. In addition, Amdahl's law was expected to restrict increases in performance due to an increase in the number of processors used in parallel.18 The approach to stimulating innovation was to fund an intense effort to do what had not previously been done -- to create a viable new architecture for massively parallel computers, some of them built around commodity processors, and to demonstrate that important applications could benefit from massive parallelism.
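The passage above invokes Amdahl's law without stating it. As a minimal sketch in conventional textbook notation (the symbols f, p, and S are standard usage, not drawn from this chapter): if a fraction f of a computation is inherently serial, the best achievable speedup on p processors is

\[
  S(p) \;=\; \frac{1}{f + \dfrac{1-f}{p}},
  \qquad
  \lim_{p \to \infty} S(p) \;=\; \frac{1}{f}.
\]

For example, if even 5 percent of the work is serial (f = 0.05), no number of processors can deliver more than a 20-fold speedup, which is why simply adding processors was not expected to increase performance indefinitely.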
From page 39...
... 20. A list of failed industrial ventures in this area, many inspired by SCI, includes Alliant, American Supercomputer, Ametek, AMT, Astronautics, BBN Supercomputer, Biin, CDC/ETA Systems, Chen Systems, Columbia Homogeneous Parallel Processor, Cogent, Cray Computer, Culler, Cydrome, Denelcor, Elxsi, Encore, E&S Supercomputers, Flexible, Goodyear, Gould/SEL, Intel Supercomputer Division, IPM, iP-Systems, Kendall Square Research, Key, Multiflow, Myrias, Pixar, Prevec, Prisma, Saxpy, SCS, SDSA, Stardent (Stellar and Ardent), Supercomputer Systems Inc., Suprenum, Synapse, Thinking Machines, Trilogy, VItec, Vitesse, Wavetracer (E.
From page 40...
... Kincaid, and F.T. Krogh, 1979, "Basic Linear Algebra Subprograms for Fortran Usage," ACM Transactions on Mathematical Software 5:308-325; J.J.
From page 41...
... If a richer and more portable software base became available for these systems, the cost of their adoption would be reduced. If so, the difference in price trends between custom and commodity processors would eventually make a parallel supercomputer built using commodity components a vastly more economically attractive proposition than the traditional approach using custom processors.
From page 42...
... Thus, although it is true that there was an extraordinarily high mortality rate among the companies that developed parallel computer architectures in the 1980s and early 1990s, much was learned from the technical failures as well as the successes. Important architectural and conceptual problems were confronted, parallel systems were made to work at a much larger scale than in the past, and the lessons learned were
23. The term "killer micro" was popularized by Eugene Brooks in his presentation to the Teraflop Computing Panel, "Attack of the Killer Micros," at Supercomputing 1989 in Reno, Nev.
From page 43...
... of the HPC marketplace. Though dreams of effortless parallelism seem as distant as ever, the fact is that the supercomputer marketplace today is dominated by a new class of useful, commodity-processor-based parallel systems that -- while not necessarily the most powerful high-performance systems available -- are the most widely used.
From page 44...
... , available data on the supercomputer marketplace (based on the TOP500 list of June 2004) show it is dominated by U.S.
From page 45...
... Thus, if Rmax is used as a proxy for market share, then the TOP500 list greatly exaggerates the dollar value of the market share of commodity systems. The TOP500 data nonetheless merit analysis, because the changes and evolution trends identified in the analysis are real.
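A hypothetical worked example of why the proxy overstates the commodity share (the 4-to-1 price/performance ratio is invented purely for illustration and does not come from the report): suppose commodity systems deliver four times as much Rmax per dollar as custom systems, and the two classes split dollar spending evenly. The commodity share of total Rmax is then

\[
  \frac{4 \times 0.5}{4 \times 0.5 + 1 \times 0.5} \;=\; 0.8,
\]

so an even 50/50 split of spending shows up as an 80/20 split when market share is weighted by Rmax.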
From page 46...
... has shown much greater unevenness over this period but on average seems roughly comparable. Interestingly, the performance of the least capable machines on the list has been improving more rapidly than
26. ASCI White and ASCI Red are two supercomputers installed at DOE sites as part of the ASC strategy.
From page 47...
... FIGURE 3.4 Rmax dispersion in TOP500 (Rmax/mean and standard deviation/mean, June 1993 through June 2004; ASCI Red, ASCI White, and the Earth Simulator are marked).
From page 48...
... There is one qualification to this picture of a thriving industrial market for high-end systems, however: a growing qualitative gap between the scale and types of systems used by industry and those used by cutting-edge government users, with industry making less use of the most highly capable systems than it used to. There have been no industrial users among the top 20 systems for the last 3 years, in contrast
FIGURE 3.5 TOP500 by installation type (vendor, academic, research and government, and industrial shares, June 1993 through June 2004).
From page 49...
... . Measuring market share by share of total computing capability sold (total Rmax)
From page 50...
... FIGURE 3.8 Rmax share of TOP500 machines by maker (June 1993 through June 2004).
From page 51...
... FIGURE 3.10 U.S. [caption truncated; the chart plots shares for the United States, Japan, Europe, and other countries, June 1993 through June 2004]
From page 52...
... The Japanese Earth Simulator was far and away the top machine from 2002 through mid-2004, but most of the computers arrayed behind it were American-made, unlike the situation in 1994. A similar conclusion holds if we consider access by U.S.
From page 53...
... As described earlier, capable Japanese supercomputer vendors for the first time began to win significant sales in international markets. The Japanese vendors saw their share of vector computer installations double, from over 20 percent to over 40 percent, over the 6 years from 1986 to 1992.28 The second development was the entry of new types of products -- for example, non-vector supercomputers, typically massively parallel ma
28. These data are taken from H.W.
From page 54...
... One impetus for the development of these systems was DARPA's Strategic Computing Initiative in the 1980s (in part a reaction to the data depicted in Figure 3.12, discussed earlier), along with other U.S. government initiatives that coordinated with and followed this initial effort.
From page 55...
... These are labeled as full custom systems. All traditional vector supercomputers fall into this category, as do massively parallel systems using custom processors and interconnects.
From page 56...
... annual growth rates in performance; hybrid systems showed the least growth in Linpack performance. Trend lines fitted to Figure 3.13 have slopes yielding annual growth rates in Rmax of 111 percent for commodity systems, 94 percent for custom systems, and 73 percent for hybrid systems.31 This is considerably faster than annual growth rates in single-processor floating-point performance shown on other benchmarks, suggesting that increases in the number of processors and improvements in the interconnect performance yielded supercomputer performance gains significantly greater than those due to component processor improvement alone for both commodity and custom systems.
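The report does not spell out the fitting arithmetic; a standard reading, assumed here rather than quoted from the text, is an exponential trend, under which an annual growth rate g implies a doubling time of ln 2 / ln(1 + g) years:

\[
  R_{\max}(t) \;\approx\; R_0\,(1+g)^{t},
  \qquad
  t_{\text{double}} \;=\; \frac{\ln 2}{\ln(1+g)}.
\]

On that reading, growth rates of 111, 94, and 73 percent correspond to Rmax doubling roughly every 11, 13, and 15 months for commodity, custom, and hybrid systems, respectively.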
From page 57...
... FIGURE 3.14 Share of TOP500 by system type (June 1993 through June 2004).
From page 58...
... . The three Japanese vector supercomputer makers accounted for another 22 percent of TOP500 performance (see Figure 3.17)
From page 59...
... Of the five U.S. companies with significant market share on this chart, two (Intel and Thinking Machines, second only to Cray)
From page 60...
... FIGURE 3.18 TOP500 market share (Rmax) by company, June 2004 (IBM, at 51 percent, and H-P, at 19 percent, held the largest shares).
From page 61...
... are now larger than two of the three traditional Japanese supercomputer vendors. The most successful Japanese producer, NEC, has about half of the TOP500 market share it had in 1993.
From page 62...
... . Aerodynamic design using a supercomputer produced an airfoil with 40 percent less drag than one designed using previous experimental techniques (p.
From page 63...
... 2002. Report on High Performance Computing for the National Security Community.
From page 64...
... Although Sandia's analysis capabilities had been developed in support of DOE's stockpile stewardship program, they contained physical models appropriate to the accident environment. These models were used where they were unique within the partnership and where Sandia's massively parallel computers and ASC code infrastructure were needed to accommodate very large and computationally intense simulations.
From page 65...
... Spillover Effects Advanced computer research programs have had major payoffs in terms of technologies that enriched the computer and communication industries. As an example, the DARPA VLSI program in the 1970s had major payoffs in developing timesharing, computer networking, workstations, computer graphics, windows and mouse user interface technology, very large scale integrated circuit design, reduced instruction set computers, redundant arrays of inexpensive disks, parallel computing, and digital libraries.42 Today's personal computers, e-mail, networking, and data storage all reflect these advances.
From page 66...
... were initially developed for supercomputers. These technologies emerged from a complex interaction among researchers at universities, the national laboratories, and companies.

