4 The Demand for Supercomputing
Pages 67-103

The Chapter Skim interface presents what we've algorithmically identified as the most significant single chunk of text within every page in the chapter.
From page 67...
... Approximations are made when scientists use partial differential equations to model a physical phenomenon. To make the solution feasible, compromises must be made in the resolution of the grids used to discretize
From page 68...
... As computational power increases, the fidelity of the models can be increased, compromises in the methods can be eliminated, and the accuracy of the computed answers improves. An exact solution is never expected, but as the fidelity increases, the error decreases and results become increasingly useful.
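The trade-off described on these pages can be seen in even the smallest discretization exercise. The following is a minimal sketch (not drawn from the report itself) that solves a one-dimensional Poisson problem by finite differences and shows the computed error shrinking as the grid is refined, at the cost of more unknowns to store and solve:

```python
# Minimal illustration of grid discretization of a PDE (assumed example,
# not from the report): solve -u''(x) = pi^2 sin(pi x) on [0, 1] with
# u(0) = u(1) = 0, whose exact solution is u(x) = sin(pi x).
import numpy as np

def solve_poisson(n):
    """Return the max error of a centered-difference solution on n interior points."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    # Standard second-order centered-difference matrix for -u''.
    A = (np.diag(2.0 * np.ones(n))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    f = np.pi**2 * np.sin(np.pi * x)
    u = np.linalg.solve(A, f)
    return np.max(np.abs(u - np.sin(np.pi * x)))

for n in (16, 32, 64, 128):
    # Each doubling of resolution roughly quarters the error (second-order method),
    # mirroring the report's point that higher fidelity yields more useful answers.
    print(f"grid points = {n:4d}   max error = {solve_poisson(n):.2e}")
```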
From page 69...
... These codes became available to industrial users in the 1980s. Through the 1980s and into the 1990s, automotive companies ran
4. Testimony of Dimitri Kusnezov, Director, Office of Advanced Simulation and Computing, NNSA, U.S.
From page 70...
... Subcommittees visited DOE weapons laboratories, DOE science laboratories, the National Security Agency, and the Japanese Earth Simulator. In addition, the committee held a 2-day applications workshop in Santa Fe, New Mexico, in September 2003, during which approximately 20 experts discussed their applications and their computing requirements.
From page 71...
... Many U.S. high-end computational resources and a large part of the Japanese Earth Simulator are devoted to predicting climate variations and anthropogenic climate change, so as to anticipate and be able to mitigate harmful impacts on humanity.
From page 72...
... Supercomputing in solid-earth geophysics involves a large amount of data handling and simulation for a range of problems in petroleum exploration, with potentially huge economic benefits. Scientific studies of plate tectonics and Earth as a geodynamo require immense supercomputing power.
From page 73...
... Computational modeling used in applications that seek fundamental understanding enhances applications that solve real-world needs. Thus, basic understanding of plasma physics and materials facilitates stockpile stewardship, while basic results in weather prediction can facilitate climate modeling.
From page 74...
... McMillan et al., LLNL, "Computational Challenges in Nuclear Weapons Simulation," and by Robert Weaver, LANL, "Computational Challenges to Supercomputing from the Los Alamos Crestone Project: A Personal Perspective." Both papers were prepared for the committee's applications workshop at Santa Fe, N.M., in September 2003.
From page 75...
... These practices have allowed the ASC community to begin taking advantage of new processor technology as it becomes available.
13. Testimony of Dimitri Kusnezov, Director, Office of Advanced Simulation and Computing, U.S.
From page 76...
... 15. This subsection is based on excerpts from the white paper "Computational Challenges in Signals Intelligence," prepared by Gary Hughes, NSA, and William Carlson and Francis Sullivan, Institute for Defense Analyses, Center for Computational Science, for the committee's Santa Fe, N.M., applications workshop, September 2003.
From page 77...
... There are two main uses of supercomputing driven by the Signals Intelligence mission: intelligence processing (IP) and intelligence analysis (IA)
From page 78...
... Many of these defense applications require computational fluid dynamics (CFD), computational structural mechanics (CSM)
From page 79...
... Washington, NCAR, "Computer Architectures and Climate Modeling," and by Richard D. Loft, NCAR, "Supercomputing Challenges for Geoscience Applications," both prepared for the committee's applications workshop in Santa Fe, N.M., in September 2003.
From page 80...
... Climate modeling requires multi-thousand-year simulations to produce equilibrium climate and its signals of natural variability, multi-hundred-year simulations to evaluate climate change beyond equilibrium (including possible abrupt climatic change), many tens of runs to determine the envelope of possible climate changes for a given emission scenario, and a multitude of scenarios for future emissions of greenhouse gases and human responses to climate change.
From page 81...
... All of the above considerations point to a massive need for increased computational resources, since current climate models typically have grid sizes of hundreds of kilometers, have few components and oversimplified parameterizations, have rarely reached equilibrium, and have rarely simulated future climate changes beyond a century. Moreover, they are seldom run in ensembles or for multiple-emission scenarios.
From page 82...
... Continuing progress in climate prediction can come from further increases in computing power beyond a factor of 1,000. One detailed study of computational increases needed for various facets of climate modeling has shown the need for an ultimate overall increase in computer power of at least a billion-fold.23 (Such a large increase could also be used for complex systems in plasma physics and astrophysics.)
From page 83...
... This increases the total amount of computation by a factor of 1,000.
· Increase the completeness of the coupled model by adding to each component model important interactive physical, chemical, and biological processes that heretofore have been omitted owing to their computational complexity.
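The demands enumerated on pages 80-83 compound multiplicatively. A rough back-of-envelope sketch follows; the individual numbers are illustrative assumptions, not the report's own accounting, and serve only to show how modest individual factors multiply toward the very large increases quoted above:

```python
# Illustrative-only factors showing how independent demands on a climate
# model multiply; the specific values are assumptions, not from the report.
factors = [
    ("10x finer grid in each horizontal direction", 10 * 10),
    ("correspondingly shorter time step",           10),   # ~1,000x, as on page 83
    ("ensembles of many tens of runs per scenario", 30),
    ("multiple emission scenarios",                 10),
    ("added interactive physical/chemical/biological processes", 10),
]

total = 1
for reason, factor in factors:
    total *= factor
    print(f"x{factor:>4}  {reason}  (cumulative: {total:,}x)")

# Further factors (longer equilibrium runs, finer vertical resolution, more
# complete parameterizations) push the product toward the billion-fold
# increase cited in the detailed study referenced on page 82.
```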
From page 84...
... Associated challenges include advancing computer technology, developing algorithms, and improving theoretical formulation -- all of which will contribute to better overall time-to-solution capabilities.
25. This subsection is based on excerpts from the white paper "Plasma Science," prepared by W.M.
From page 85...
... This changed in the late 1990s, when government-funded scientists and engineers began migrating to distributed memory systems. The main CAE applications used in the automotive industry contain millions of lines of code and have proven very
26. Based on excerpts from the white paper "High Performance Computing in the Auto Industry," by Vincent Scarafino, Ford Motors, prepared for the committee's Santa Fe, N.M., applications workshop, September 2003.
From page 86...
... The use of computing in the automotive industry has come about in response to (1)
From page 87...
... In the past 10 years there has been considerable evolution in the use of supercomputing in the automotive industry. Ten years ago, CAE was used to simulate a design.
From page 88...
... Engineers in the future will expect CAE tools to automatically explore variations in design parameters in order to optimize their designs. John Hallquist of Livermore Software Technology Corporation believes that fully exploiting these advances in automotive CAE will require a seven-order-of-magnitude increase beyond the computing power brought to bear today.30 This would allow, among other things, much greater attention to occupant safety requirements, including aspects of offset frontal crash, side impact, out-of-position occupants, and more humanlike crash dummies.
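The automated design-space exploration anticipated on page 88 can be sketched as a simple parameter sweep. In the toy example below, a hypothetical stand-in function plays the role of a full crash simulation; every name and number is illustrative and not taken from any actual CAE code:

```python
# Hypothetical sketch of automated design-parameter exploration: each call to
# crash_metric() stands in for an expensive crash simulation, and the sweep
# picks the best design from a small candidate grid.
import itertools

def crash_metric(panel_thickness_mm, rail_strength):
    """Stand-in for a full crash simulation returning an injury-risk score (assumed)."""
    return (panel_thickness_mm - 2.1) ** 2 + 0.5 * (rail_strength - 3.0) ** 2

thicknesses = [1.5, 1.8, 2.1, 2.4]   # candidate design values (illustrative)
strengths   = [2.0, 2.5, 3.0, 3.5]

best = min(itertools.product(thicknesses, strengths),
           key=lambda params: crash_metric(*params))
print("best design (panel_thickness_mm, rail_strength):", best)

# In practice each evaluation is a supercomputer-scale simulation, which is why
# exhaustive exploration of design variations drives the multi-order-of-magnitude
# capability increases cited above.
```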
From page 89...
... To do this for the entire jet engine would require sustained computing power of 50 Tflops for the same period. This is to be compared with many millions of dollars, several years, and many designs and redesigns for physical prototyping.32 In summary, transportation companies currently save hundreds of millions of dollars using supercomputing in their new vehicle design and development processes.
From page 90...
... Some of the grand challenges posed by this paradigm are outlined below, along with the associated computational complexity:
34. This subsection is based in part on excerpts from the white papers "Quantum Mechanical Simulations of Biochemical Processes," by Michael Colvin, LLNL, and "Supercomputing in Computational Molecular Biology," by Gene Myers, UC Berkeley, both prepared for the committee's Santa Fe, N.M., applications workshop, September 2003.
From page 91...
... Computational modeling and prediction of protein structures remain the only hope. This problem, called the protein-folding problem, is regarded as the holy grail of biochemistry.
From page 92...
... Societal Health and Safety Computational simulation is a critical tool of scientific investigation and engineering design in many areas related to societal health and safety, including aerodynamics; geophysics; structures; manufacturing processes with phase change; and energy conversion processes. Insofar as these mechanical systems can be described by conservation laws expressed as partial differential equations, they may be amenable to analysis using supercomputers.
From page 93...
... This effect is caused by the focusing or deflection of seismic waves by underground rock structures. If the underground rock struc-
35. Based on excerpts from the white paper "Supercomputing for PDE-based Simulations in Mechanics," by David Keyes, Columbia University, prepared for the committee's Santa Fe, N.M., applications workshop, September 2003.
From page 94...
... 38. Ibid.
39. This subsection is based on excerpts from the white paper "High Performance Computing and Petroleum Reservoir Simulation," by John Killough, Landmark Graphics Corporation, prepared for the committee's Santa Fe, N.M., applications workshop, September 2003.
From page 95...
... To do this simulation properly requires incorporating the correct multirheological behavior of rocks (elastic, brittle, viscous, plastic, history-dependent, and so forth), which results in a wide range of length scales and time scales, into a three-dimensional, spherical model of the entire Earth, another grand challenge that will require substantially more computing power to address.40
40. For more information, see .
From page 96...
... Alternatively, one might choose to simulate the same size system, using supercomputing power to treat structures on a much wider range of
41. This subsection is based on excerpts from the white paper "Future Supercomputing Needs and Opportunities in Astrophysics," by Paul Woodward, University of Minnesota, prepared for the committee's Santa Fe, N.M., applications workshop, September 2003.
From page 97...
... As the Committee on the Future of Supercomputing heard in numerous presentations during its site visits, computational materials science is now poised to explore a number of areas of practical importance. Algorithms are well tested that will exploit 100 to 1,000 times the computing power available today.
From page 98...
... 2004. "Ab-initio Monte Carlo for Nanomagnetism." ORNL White Paper.
From page 99...
... 2002. "Impact of Earth-Simulator-Class Computers on Computational Nanoscience and Materials Science." DOE Ultrascale Simulation White Paper.
From page 100...
... 49. Based on excerpts from the white paper "The Future of Supercomputing for Sociotechnical Simulation," by Stephen Eubank, LANL, prepared for the committee's Santa Fe, N.M., applications workshop, September 2003.
From page 101...
... They were described in expert briefings to the committee as computing-limited at present and very much in need of 100 to 1,000 times more computing power over the next 5 to 10 years. Increased computing power would be used in a variety of ways:
· To cover larger domains, more space scales, and longer time scales;
· To solve time-critical problems (e.g., national security ones)
From page 102...
... Some of this increase can be expected on the basis of Moore's law and greater numbers of processors per machine. Any increase in computing power in terms of raw flops will have to be accompanied by larger memories to accommodate larger problems, and internal bandwidth will have to increase dramatically.
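The coupling of flops, memory, and bandwidth noted here is often summarized with a roofline-style balance argument. The sketch below uses assumed, illustrative machine numbers (not measurements from any real system) to show why raising peak flops without raising bandwidth leaves low-intensity codes no faster:

```python
# Roofline-style sketch with assumed numbers: attainable performance is capped
# by either peak flops or by memory bandwidth times arithmetic intensity.
def attainable_gflops(peak_gflops, bandwidth_gbs, flops_per_byte):
    return min(peak_gflops, bandwidth_gbs * flops_per_byte)

code_intensity = 0.25   # flops per byte moved; assumed, typical of stencil/PDE kernels

for peak, bw in [(100, 50), (1000, 50), (1000, 500)]:
    # Tenfold more peak flops with unchanged bandwidth yields no speedup for
    # this kernel; only raising bandwidth moves the attainable rate.
    print(f"peak={peak:5} Gflops  bandwidth={bw:4} GB/s  ->  "
          f"{attainable_gflops(peak, bw, code_intensity):7.1f} Gflops attainable")
```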
From page 103...
... The use of High-Performance Fortran (HPF) on the Earth Simulator is one of only a few examples of using higher level programming languages with better support for parallelism.
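As a loose analogy to the higher-level parallel expression that HPF provided for Fortran arrays (the sketch below is in Python/NumPy, not HPF, and is illustrative only): a whole-array expression states what to compute over a grid and leaves the runtime or compiler free to parallelize it, whereas an explicit element-by-element loop fixes an order of execution.

```python
# Contrast of low-level element loops with a high-level whole-array expression
# for the same four-point averaging stencil (illustrative analogy only).
import numpy as np

grid = np.random.rand(1000, 1000)

# Low-level style: an explicit doubly nested loop over interior elements.
smoothed_loop = np.zeros_like(grid)
for i in range(1, 999):
    for j in range(1, 999):
        smoothed_loop[i, j] = 0.25 * (grid[i - 1, j] + grid[i + 1, j]
                                      + grid[i, j - 1] + grid[i, j + 1])

# High-level style: one array expression describing the same stencil over the
# whole interior, with no prescribed traversal order.
smoothed_array = 0.25 * (grid[:-2, 1:-1] + grid[2:, 1:-1]
                         + grid[1:-1, :-2] + grid[1:-1, 2:])

assert np.allclose(smoothed_loop[1:-1, 1:-1], smoothed_array)
```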

