

9 Stewardship and Funding of Supercomputing
Pages 206-224



From page 206...
... Supercomputing plays a major role in stockpile stewardship, in intelligence collection and analysis, and in many areas of national defense. For those applications, the government cannot rely on external sources of technology and expertise.
From page 207...
... 2004. Federal Plan for High-End Computing: Report of the High-End Computing Revitalization Task Force (HECRTF)
From page 208...
... The recent report by the JASONs noted the need for increased capacity computing for the DOE/NNSA Stockpile Stewardship Program. (As pointed out previously, users of capability computing are also users of capacity computing.)
From page 209...
... However, participants in many such companies say that there is no longer a successful profit-making business model, in part because highly skilled software professionals are so attractive to larger companies. For example, many companies that were developing compilers, libraries, and tools for high-performance computing went out of business, were bought, or no
6. The recent development of the X1 was largely vertically integrated, but the development of other Cray products such as Red Storm is not.
From page 210...
... The Need for Stability
The committee heard repeatedly, from the people with whom its members spoke, about the difficulties and disincentives caused by the lack of long-term planning and stability in government programs. In order to undertake ambitious projects, retain highly skilled people, achieve challenging goals, and create and maintain complex ecosystems, organizations of all kinds need to be able to depend on predictable government commitments -- both to programs and to the ongoing funding for those programs. If that stability is absent, companies will go out of business or move in other directions, researchers will shift to other topics, new professionals will specialize in other skills, corporate memory will be lost, and progress on hard problems will slow or stop.
From page 211...
... The dislocations caused by increasing local and remote memory latencies will require fundamental changes in supercomputer architecture; the challenge of running computations with many millions of independent operations will require fundamental changes in programming models; the size of the machines and the potential increase in error rates will require new approaches to fault-tolerance; and the increased complexity of supercomputing platforms and the increased complexity of supercomputing applications will require new approaches to the process of mapping an application to a platform and new paradigms for programming languages, compilers, run-time systems, and operating systems. Restoring a vigorous, effective research program is imperative to address these challenges.
From page 212...
... The model is not a simple pipeline or funnel model, where many ideas flourish at the basic research level, to be downselected into a few prototypes and one or two winning products. Rather, it is a spiral evolution with complex interactions whereby projects inspire one another; whereby ideas can sometimes migrate quickly from basic research to products and may sometimes require multiple iterations of applied research; and whereby failures are as important as successes in motivating new basic research and new products.
From page 213...
... Although there has been basic research in general-purpose computing technologies with broad markets, and there has been significant expenditure in advanced development efforts such as the ASC program and the TeraGrid, there has been relatively little investment in basic research in supercomputing architecture and software over the past
From page 214...
... Research in supercomputer architecture, systems software, programming models, algorithms, tools, mathematical methods, and so forth is not the same as research in using supercomputing to address challenging applications. Both kinds of research are important, but they require different kinds of expertise; they are, in general, done by different people, and it is a mistake to confuse them and to fail to support both.
From page 215...
... Successful partnerships are those from which both the technology researchers and the applications researchers benefit: the technology researchers by getting feedback about the quality and utility of their results, and the applications researchers by advancing their application solutions. As part of the transfer of research to production, prototyping activities should normally include industrial partners and partners from government national laboratories.
From page 216...
... The current Blue Book has a category called High-End Computing Research and Development. (This annual publication is a supplement to the President's budget submitted to Congress that tracks coordinated IT research and development, including HPC, across the federal government.)
From page 217...
... That estimate does not include support for applications research that uses supercomputing -- it includes only support for research that directly enables advances in supercomputers themselves. Also, it does not include advanced development, testbeds, and prototyping activities that are closer to product creation (such as DARPA's HPCS program)
From page 218...
... This estimate does not include the cost of meeting capacity computing needs.
The Need for People
Chapter 6 presented some results from the most recent Taulbee Survey, which showed that only 35 people earned Ph.D.s in scientific computing in 2002.
From page 219...
... There has also been legislation for that purpose. For instance, Finding 5 of the High-Performance Computing Act of 1991 stated as follows: "Several Federal agencies have ongoing high performance computing programs, but improved long-term interagency coordination, cooperation, and planning would enhance the effectiveness of these programs." Among its provisions, the Act directed the President to "imple
18. The House-Senate compromise version of S
From page 220...
... A roadmap starts with a set of quantitative goals, such as the target time to solution for certain weapons simulations or the target cost per solution for certain climate simulations. It identifies the components required to achieve these goals, along with their quantitative properties, and describes how they will enable achievement of the final quantitative goals.
From page 221...
... So a supercomputing roadmap will necessarily be somewhat different from the semiconductor industry roadmap. In particular, some components of the roadmap will be inputs from the computer industry, basically a set of different technology curves (such as for commercial processors and for custom interconnects)
From page 222...
... Here are some possible outcomes of this roadmap process: · Performance models will show that some applications scale on commodity-cluster technology curves to achieve their goals. For these applications, no special government intervention is needed.
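The first outcome above can be illustrated with a toy performance model. The sketch below is not from the report; the doubling period, parallel fraction, and target numbers are all hypothetical. It projects an application's time to solution along a commodity technology curve, with a crude Amdahl-style correction for work that does not speed up, and checks the projection against a roadmap goal.

```python
# Toy roadmap check: does an application reach its target time to
# solution by riding a commodity technology curve alone?
# All parameter values below are hypothetical illustrations.

def projected_time_to_solution(t_now, years_ahead, doubling_period=2.0,
                               parallel_fraction=0.95):
    """Project time to solution assuming sustained per-system performance
    doubles every `doubling_period` years, but only the parallelizable
    fraction of the work benefits (a crude Amdahl-style correction)."""
    speedup = 2.0 ** (years_ahead / doubling_period)
    serial = t_now * (1.0 - parallel_fraction)      # does not improve
    parallel = t_now * parallel_fraction / speedup  # rides the curve
    return serial + parallel

# Hypothetical example: a 100-hour simulation, with a roadmap goal of
# 8 hours six years from now.
t_future = projected_time_to_solution(t_now=100.0, years_ahead=6)
print(f"projected: {t_future:.1f} h, meets 8 h goal: {t_future <= 8.0}")
```

Applications whose projection meets the goal fall into the "no special government intervention" category; those dominated by the serial term (or by effects such as memory latency, which a single technology curve does not capture) motivate the custom-technology options the roadmap process would also have to consider.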
From page 223...
... In computational sciences, reduced NSF support for long-term basic research is not compensated for by an increase in DOE support through the SciDAC program, because the latter's 5-year project goals are relatively near term. The significant DARPA investment in the HPCS program has not extended to the support of basic research.
From page 224...
... It is important that research programs be perceived as addressing grand challenges: the grand engineering challenge of building systems of incredible complexity at the forefront of computer technology, and the grand scientific challenges addressed by these supercomputers. It is also important that government agencies, supercomputing centers, and the broad supercomputing community not neglect cultivating an image they may take too much for granted.

