
10 The Future of Supercomputing--Conclusions and Recommendations
Pages 225-246

Each entry below is the single passage algorithmically identified as the most significant chunk of text on the corresponding page of the chapter.


From page 225...
... vendors has come from Japanese vendors. While Japan has enhanced vector-based supercomputing, culminating in the Earth Simulator, the United States has made major innovations in parallel supercomputing through the use of commodity components.
From page 226...
... Supercomputing has been of great importance throughout its history because it has enabled important advances in crucial aspects of national defense, in scientific discovery, and in addressing problems of societal importance. At present, supercomputing is used to tackle challenging problems in stockpile stewardship, in defense intelligence, in climate prediction and earthquake modeling, in transportation, in manufacturing, in societal health and safety, and in virtually every area of basic scientific understanding.
From page 227...
... Conclusion: Commodity clusters satisfy the needs of many supercomputer users. However, some important applications need the better main memory bandwidth and latency hiding that are available only in custom supercomputers; many need the better global bandwidth and latency interconnects that are available only in custom or hybrid supercomputers; and most would benefit from the simpler programming model that can be supported well on custom systems.
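To make the bandwidth point concrete, here is a minimal illustrative sketch, not drawn from the report; the kernel and array sizes are assumptions chosen for illustration. A STREAM-style triad moves roughly 24 bytes of data for every two floating-point operations, so on arrays far larger than cache its sustained speed is set by main memory bandwidth rather than by peak arithmetic rate. Applications dominated by such kernels are the ones that favor custom high-bandwidth processors.

/* Illustrative sketch (not from the report): a STREAM-style triad.
 * Each iteration moves ~24 bytes (two loads, one store) per 2 flops,
 * so with arrays far beyond cache size, sustained performance is
 * limited by main memory bandwidth, not by the arithmetic units. */
#include <stdio.h>
#include <stdlib.h>

#define N (1 << 25)  /* ~33M doubles per array: far larger than any cache */

int main(void) {
    double *a = malloc((size_t)N * sizeof *a);
    double *b = malloc((size_t)N * sizeof *b);
    double *c = malloc((size_t)N * sizeof *c);
    if (!a || !b || !c) return 1;
    for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    /* Triad: one store and two loads per element; bandwidth-bound. */
    for (long i = 0; i < N; i++)
        a[i] = b[i] + 3.0 * c[i];

    printf("a[0] = %f\n", a[0]);  /* keep the work from being optimized away */
    free(a); free(b); free(c);
    return 0;
}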
From page 228...
... Educated and skilled people are an important part of the supercomputing ecosystem. Supercomputing experts need a mix of specialized knowledge in the applications with which they work and in the various supercomputing technologies.
From page 229...
... Similarly, to ensure its access to specialized custom supercomputers that would not be produced without government involvement, DoD needs the same kind of analysis of capabilities and investment strategy. The strategy should aim at leveraging trends in the commercial computing marketplace as much as possible, but in the end, responsibility for an effective R&D and procurement strategy rests with the government agencies that need the custom supercomputers.
From page 230...
... Conclusion: The government has lost opportunities for important advances in applications using supercomputing, in supercomputing technology, and in ensuring an adequate supply of supercomputing ecosystems in the future. Instability of long-term funding and uncertainty in policies have been the main contributors to this loss.
From page 231...
... For instance, many of the technologies, in particular the software, need to be broadly available across all platforms. If the agencies are not jointly responsible and jointly accountable, the resources spent on supercomputing technologies are likely to be wasted as efforts are duplicated in some areas and underfunded in others.
From page 232...
... on which other agencies depend. Similarly, House and Senate appropriations committees would ensure (1)
From page 233...
... Until such a structure is in place, the agencies whose missions rely on supercomputing must take responsibility for the future availability of leading supercomputing capabilities. That responsibility extends to the basic research on which future supercomputing depends.
From page 234...
... U.S. leadership in unique supercomputing technologies, such as custom architectures, is endangered by inadequate funding, inadequate long-term plans, and the lack of coordination among the agencies that are the major funders of supercomputing R&D.
From page 235...
... No investment that would match the time scale and magnitude of the Japanese investment in the Earth Simulator has been made in the United States. The agencies responsible for supercomputing can ensure that key supercomputing technologies, such as custom high-bandwidth processors, will be available to satisfy their needs only by maintaining our nation's world leadership in these technologies.
From page 236...
... Another unique supercomputing technology identified in this report is that of custom switches and custom, memory-connected switch interfaces. Companies such as Cray, IBM, and SGI have developed such technologies and have used them exclusively for their own products -- the Cray Red Storm interconnect is a recent example.
From page 237...
... That software includes operating systems, libraries, compilers, software development and data analysis tools, application codes, and databases. The committee believes that the current low-level, uncoordinated investment in supercomputing software significantly constrains the effectiveness of supercomputing.
From page 238...
... Such integration was traditionally done by vertically integrated vendors, but new models are needed in the current, less integrated world of supercomputing. As it invests in supercomputing software, the government must carefully balance its need to ensure the availability of software against the possibility of driving its commercial suppliers out of business by subsidizing their competitors, be they in government laboratories or in other companies.
From page 239...
... Government agencies responsible for supercomputing should increase their levels of stable, robust, sustained multiagency investment in basic research. More research is needed in all the key technologies required for the design and use of supercomputers (architecture, software, algorithms, and applications)
From page 240...
... Thus, continued improvement in supercomputer performance at current rates will require a massive increase in parallelism, which in turn demands significant research progress in algorithms and software. As the relative latencies of memory accesses and global communications increase, the performance of many scientific codes will shrink relative to the performance of more cache-friendly, more loosely coupled commercial codes.
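A back-of-envelope calculation, using assumed illustrative numbers rather than figures from the report, shows why the required increase in parallelism is massive: if aggregate performance must keep growing roughly 1,000-fold per decade while per-processor speed improves only about 20 percent per year, parallelism must supply the remaining factor.

/* Illustrative arithmetic (assumed numbers, not from the report):
 * how much parallelism must grow per decade if per-processor speed
 * improvement slows while aggregate growth targets stay constant. */
#include <math.h>
#include <stdio.h>

int main(void) {
    double decade_target  = 1000.0; /* assumed aggregate growth per decade */
    double per_proc_year  = 1.20;   /* assumed per-processor gain per year */
    double per_proc_decade = pow(per_proc_year, 10.0);  /* ~6.2x */

    printf("per-processor gain over a decade: %.1fx\n", per_proc_decade);
    printf("parallelism must grow by:         %.0fx\n",
           decade_target / per_proc_decade);            /* ~160x */
    return 0;
}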
From page 241...
... Many of the roadblocks faced today by supercomputing are roadblocks that affect all computing, but they affect supercomputing earlier and to a more significant extent. One such roadblock is the memory wall, which arises because memory speeds improve more slowly than processor speeds.
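The memory wall can be observed directly with a simple microbenchmark. The sketch below is illustrative and not from the report: it chases a chain of dependent pointers through an array much larger than cache, so each load must complete before the next address is known, and the measured time per load reflects main memory latency rather than processor speed.

/* Illustrative sketch (not from the report): a dependent pointer chase.
 * Every load depends on the previous one, so the loop runs at main
 * memory latency -- a direct view of the memory wall. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 24)  /* 16M entries (128 MB): far larger than any cache */

/* Small portable 64-bit LCG, so the shuffle below is reproducible. */
static unsigned long long rng_state = 42;
static size_t xrand(void) {
    rng_state = rng_state * 6364136223846793005ULL + 1442695040888963407ULL;
    return (size_t)(rng_state >> 33);
}

int main(void) {
    size_t *next = malloc((size_t)N * sizeof *next);
    if (!next) return 1;

    /* Sattolo's algorithm: a random single-cycle permutation, so the
       chase visits every entry and hardware prefetchers cannot help. */
    for (size_t i = 0; i < N; i++) next[i] = i;
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = xrand() % i;  /* j in [0, i-1] */
        size_t t = next[i]; next[i] = next[j]; next[j] = t;
    }

    /* Time the chain of dependent loads. */
    clock_t t0 = clock();
    size_t p = 0;
    for (size_t i = 0; i < N; i++) p = next[p];
    double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

    printf("end %zu, ~%.1f ns per dependent load\n",
           p, 1e9 * secs / (double)N);
    free(next);
    return 0;
}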
From page 242...
... In light of the relatively small community of supercomputing researchers, international collaborations are particularly beneficial. The climate modeling community, for one, has long embraced that view.
From page 243...
... The benefit of denying potential adversaries or proliferators access to key supercomputing technology has to be carefully weighed against the damage that export controls do to research within the United States, to the supercomputing industry, and to international collaborations. Recommendation 8.
From page 244...
... They have been responsive to their scientific users in installing and supporting software packages and providing help to both novice and experienced users. However, some of the centers in the PACI program have increased the scope of their activities, even in the face of a flat budget, to include research in networking and grid computing and to expand their education mission.
From page 245...
... Finally, the mechanism used for allocating supercomputing resources must ensure that almost all of the computer time on capability systems is allocated to jobs for which that capability is essential. The Earth Simulator usage policies are illustrative.

