6 Supercomputing Infrastructures and Institutions
Pages 157-179

From page 157...
... The Encarta dictionary defines an ecosystem as "a localized group of interdependent organisms together with the environment that they inhabit and depend on." A supercomputer ecosystem is a continuum of computing platforms, system software, and the people who know how to exploit them to solve supercomputing applications such as those discussed in Chapter 4. In supercomputing ecosystems, the "organisms" are the technologies that mutually reinforce one another and are mutually interdependent.
From page 158...
... Some widely used chemistry programs use this recompute strategy. Another common example of the impact of system performance characteristics on programming is that a message-passing programming style is most often used when shared-memory performance falls below some threshold, even when shared-memory programming tools are provided.
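To make the contrast concrete, the following minimal sketch (not drawn from the report) illustrates the message-passing style in C using MPI, the interface most such applications rely on: each process owns its own data and obtains a neighbor's value only through an explicit exchange, rather than by loading it from a shared address space. The ring-exchange pattern and variable names here are illustrative assumptions, not a specific application from the report.

    /* Message-passing sketch: explicit exchange replaces a shared-memory load.
     * Build with an MPI compiler wrapper, e.g.:  mpicc ring.c -o ring
     * Run with, e.g.:                            mpirun -np 4 ./ring         */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double local = (double)rank;   /* value this process owns            */
        double from_left = 0.0;        /* will hold the left neighbor's copy */
        int left  = (rank - 1 + size) % size;
        int right = (rank + 1) % size;

        /* Send our value to the right neighbor and receive the left
         * neighbor's value; no process ever reads another's memory directly. */
        MPI_Sendrecv(&local, 1, MPI_DOUBLE, right, 0,
                     &from_left, 1, MPI_DOUBLE, left, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        printf("rank %d received %.1f from rank %d\n", rank, from_left, left);
        MPI_Finalize();
        return 0;
    }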
From page 159...
... Additional examples of software technology that may be required for a supercomputing ecosystem to be effective are global parallel file systems and fault tolerance. Libraries are also part of the ecosystem.
From page 160...
... of the ecosystem associated with that supercomputer.1 Many industrial users depend on commercial software packages such as MSC NASTRAN or Gaussian. If those packages run poorly or not at all on a given supercomputer, the industrial users will be missing from the ecosystem, reducing the financial viability of that supercomputer.
From page 161...
... From this process point of view, there is very little difference between supercomputing systems and generic computing systems except that, since the architectural platform differences are so radical, it can be much more expensive to port applications in the supercomputing ecosystem than in the generic ecosystem. That expense, coupled with the very small number of supercomputers sold, greatly inhibits the development and porting of commercial software packages to supercomputer platforms.
From page 162...
... How Ecosystems Get Established Traditionally, supercomputing ecosystems have grown up around a particular computer vendor's family of products, e.g., the Cray Research family of vector computers, starting with the Cray-1 and culminating in the T-90, and the IBM SP family of parallel computers. While a given model's lifetime is but a few years, the similarity of the architecture of various generations of hardware provides an opportunity for systems and application software to be developed and to mature.
From page 163...
... Reinforcing the trend toward clusters are factors such as these:
· The low entry cost, which enables even small university groups to acquire them;
· Their proliferation, which provides a training ground for many people, some of whom will use them as a development platform for software tools, libraries, and application programs, thus adding technologies to the ecosystem;
· The local control that a group has over its cluster, which simplifies management and accounting;
· The relative ease of upgrades to new processor and interconnection technologies; and
· Their cost effectiveness for many classes of applications.
Software challenges remain for the nascent cluster ecosystem.
From page 164...
... Horizontal integration can provide a less arduous migration path from one supercomputer platform to another and thus a longer-lived, though less tightly coupled, ecosystem. Those advantages are gained through the use of portable software environments and less reliance on the highly specific characteristics of the hardware or proprietary vendor software.
From page 165...
... Furthermore, an integrator may not have the scope to provide the kind of ongoing customer support that was available from vertically integrated companies. An example of a vertically integrated ecosystem that did not survive for very long is the Thinking Machines CM-5, a product of Thinking Machines Corporation (TMC)
From page 166...
... This is an obstacle to the establishment of new supercomputing ecosystems. Horizontal integration is another obstacle to the establishment of new ecosystems.
From page 167...
... Software-controlled prefetching did not catch on because one could not coordinate multiple microprocessor designers and multiple compiler providers. There are clear historical precedents for vertically integrated firms successfully introducing a new design (for instance, the introduction of the Cray)
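For readers unfamiliar with the technique, the sketch below (not from the report) shows what software-controlled prefetching looks like at the source level, using the GCC/Clang __builtin_prefetch intrinsic. The prefetch distance of 16 elements is an assumed tuning value; choosing it well requires knowing the memory latency of a specific processor, which is precisely the kind of cross-vendor coordination between chip designers and compiler providers described above.

    /* Software-prefetching sketch (GCC/Clang-specific intrinsic; distance of
     * 16 elements is an assumed, processor-dependent tuning parameter).      */
    #include <stddef.h>
    #include <stdio.h>

    /* Sum an array, hinting the cache 16 elements ahead of use. */
    static double sum_with_prefetch(const double *a, size_t n) {
        double sum = 0.0;
        for (size_t i = 0; i < n; i++) {
            if (i + 16 < n)
                __builtin_prefetch(&a[i + 16], 0, 1); /* read, low temporal locality */
            sum += a[i];
        }
        return sum;
    }

    int main(void) {
        double data[1000];
        for (size_t i = 0; i < 1000; i++)
            data[i] = (double)i;
        printf("sum = %.0f\n", sum_with_prefetch(data, 1000));
        return 0;
    }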
From page 168...
... This can be true even for new models from established vendors. The effort required to adapt most supercomputer application programs to new environments is substantial.
From page 169...
... Similarly, one can envision strategies for application programs that would lower the barriers for new supercomputing ecosystems to evolve. An example is the relatively new type of application programs known as community codes.
From page 170...
... 9. See .
10. Google has been aggressively recruiting computer science graduates with advanced degrees and advertising openings at top conferences, such as the International Symposium on Computer Architecture, the top computer architecture conference.
From page 171...
... As senior professionals move out of supercomputing, it becomes harder to maintain the knowledge and skill levels that come from years of experience. At the other end of the people pipeline are the graduate students who will eventually become the next generation of senior supercomputing researchers and practitioners.
From page 172...
... The key institutions in academia have been the NSF centers and partnerships (currently with leading-edge sites at Illinois, San Diego, and Pittsburgh and with partners at many universities) that together provide a national, high-end computational infrastructure for academic supercomputing.
13. More information on computational science and engineering graduate programs can be found in SIAM's Working Group on CSE Education, at .
From page 173...
... The centers have brought together computer scientists, computational scientists, and scientists from a broad array of disciplines that use computer simulations, together with their research students, promoting fertile interdisciplinary interaction. However, NSF funding for the PACI program stayed flat despite major increases in NSF's budget.
From page 174...
... A center normally employs professional staff to help run the installation as well as to help users run and improve their application codes to best effect. Supercomputing centers are typically housed in special-purpose facilities that provide the needed physical plant, notably floor space, structural support, cooling, and power.
From page 175...
... In most instances, a supercomputing center is part of a larger organization that includes researchers who use the computational facilities, computational science software developers, and education and training groups. Having local users provides continuing dialogue for improving the center's offerings and provides the justification for the host institution to house the facility.
From page 176...
... Commercial supercomputers allow relatively inexpensive simulations to replace costly experiments, saving both time and money. An example is the crash testing of automobiles.
From page 177...
... Figure 6.1 shows the relative share of the various sectors of the technical computing market in 1998-2003, the importance of the scientific research and classified defense sectors, the relative growth of new sectors such as biosciences, and the relative stability of sectors such as mechanical engineering. It also shows that no market is so large as to dominate all ...
FIGURE 6.1 Revenue share of industry/applications segments, 1998-2003. [Figure: stacked revenue shares, 1998-2003, for segments including biosciences, chemical engineering, classified defense, digital content creation and distribution, economics/financial, electrical design/engineering analysis, geoscience and geoengineering, imaging, mechanical design and drafting, mechanical design/engineering analysis, scientific research and R&D, simulation, technical management and support, and other.]
From page 178...
... This maximizes the potential return on investment when developing a product but has the unfortunate effect of delivering suboptimal performance to the different end users. Figures 6.2 and 6.3 show the evolution of the worldwide technical computing market from 1998 to 2003.
From page 179...
... to assemble PC clusters from existing office equipment means that today, managers of large commercial enterprises are often unaware of the supercomputers within their own companies. The overall decline in the technical computing market indicated by these charts may be due to this effect.

