

2 Explanation of Supercomputing
Pages 20-27



From page 20...
... any of a category of extremely powerful, large-capacity mainframe computers that are capable of manipulating massive amounts of data in an extremely short time.
From page 21...
... equal to or exceeding 195 million theoretical operations per second (MTOPS) to a CTP equal to or exceeding 1,500 MTOPS.2 Current examples of supercomputers are contained in the TOP500 list of the 500 most powerful computer systems as measured by best performance on the Linpack benchmarks.3 Supercomputers provide significantly greater sustained performance than is available from the vast majority of installed contemporary mainstream computer systems.
From page 22...
... While the solution of such problems can be accelerated through the use of parallelism, dependencies among the parallel subproblems necessitate frequent exchanges of data and partial results, thus requiring significantly better communication (both higher bandwidth and lower latency) between processors and data storage than can be provided by a computational grid.
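To make the communication pattern concrete, the following is a minimal sketch of a one-dimensional halo exchange, the kind of frequent neighbor-to-neighbor traffic described above. It assumes MPI as the message-passing interface; the excerpt names no particular programming model, and the decomposition, array size, and update rule are illustrative assumptions only.

    /* Minimal halo-exchange sketch (assumptions: MPI, 1-D decomposition,
     * N local points per process, Jacobi-style update). */
    #include <mpi.h>

    #define N 1024

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double u[N + 2] = {0.0}, unew[N + 2];   /* interior points plus two ghost cells */
        int left  = (rank == 0)        ? MPI_PROC_NULL : rank - 1;
        int right = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;

        for (int step = 0; step < 100; step++) {
            /* Every iteration exchanges boundary values with both neighbors. */
            MPI_Sendrecv(&u[1], 1, MPI_DOUBLE, left, 0,
                         &u[N + 1], 1, MPI_DOUBLE, right, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Sendrecv(&u[N], 1, MPI_DOUBLE, right, 1,
                         &u[0], 1, MPI_DOUBLE, left, 1,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            /* Local update: each point depends on its neighbors' latest values. */
            for (int i = 1; i <= N; i++)
                unew[i] = 0.5 * (u[i - 1] + u[i + 1]);
            for (int i = 1; i <= N; i++)
                u[i] = unew[i];
        }

        MPI_Finalize();
        return 0;
    }

Because every step waits on both exchanges before it can proceed, interconnect latency and bandwidth bound the time per iteration, which is why a loosely coupled computational grid is a poor substitute for a tightly coupled machine on such problems.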
From page 23...
... Harrison, and R.J. Littlefield, 1996, "Global Arrays: A Nonuniform Memory Access Programming Model for High-Performance Computers," Journal of Supercomputing 10, 197-220; Katherine Yelick, Luigi Semenzato, Geoff Pike, Carleton Miyamoto, Ben Liblit, Arvind Krishnamurthy, Paul Hilfinger, Susan Graham, David Gay, Philip Colella, and Alexander Aiken, 1998, "Titanium: A High-Performance Java Dialect," Concurrency: Practice and Experience 10, 825-836.
From page 24...
... Smaller or cheaper systems are used for capacity computing, where smaller problems are solved. Capacity computing can be used to enable parametric studies or to explore design alternatives; it is often needed to prepare for more expensive runs on capability systems.
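As an illustration of the capacity pattern, the sketch below sweeps two hypothetical design parameters through many small, fully independent runs; the parameter names, ranges, and the placeholder simulate() function are assumptions made for the example, not anything taken from the report.

    /* Parametric-study sketch: every parameter point is an independent job. */
    #include <stdio.h>

    /* Hypothetical stand-in for a full simulation at one design point. */
    static double simulate(double reynolds, double angle_of_attack) {
        return 1.0e-6 * reynolds + angle_of_attack;   /* placeholder result */
    }

    int main(void) {
        for (int i = 0; i < 10; i++) {                /* sweep first parameter */
            for (int j = 0; j < 10; j++) {            /* sweep second parameter */
                double re  = 1.0e6 + 1.0e5 * i;
                double aoa = 0.5 * j;
                /* The cases share no data, so each can run on a small machine. */
                printf("re=%.0f aoa=%.1f result=%.3f\n", re, aoa, simulate(re, aoa));
            }
        }
        return 0;
    }

Because the cases are independent, they can be farmed out across many modest systems; a capability run is instead a single tightly coupled computation that cannot be decomposed this way.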
From page 25...
... It can be contrasted, for example, with the Hubble Space Telescope, which has immense potential for enhancing human discovery in astronomy but little potential for designing automobiles. Astronomy also relies heavily on supercomputing to simulate the life cycle of stars and galaxies, after which results from simulations are used in concert with Hubble's snapshots of stars and galaxies at various evolutionary stages to form consistent theoretical views of the cosmos.
From page 26...
... A typical commodity processor chip includes the level 1 and 2 caches on the chip and an external memory interface. This external interface limits sustained local memory bandwidth and requires local memory accesses to be performed in units of cache lines (typically 64 to 128 bytes in length).3
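The consequence of cache-line-granularity access can be sketched as follows, assuming a 64-byte line (eight doubles), the low end of the range quoted above; the array size and stride are illustrative assumptions.

    /* Sequential versus strided access at cache-line granularity (assumed
     * 64-byte lines; array much larger than cache). */
    #include <stdlib.h>

    #define N (1 << 24)      /* 16M doubles, assumed to exceed cache capacity */
    #define STRIDE 8         /* 8 doubles = one 64-byte cache line */

    int main(void) {
        double *a = calloc(N, sizeof *a);
        if (a == NULL)
            return 1;
        double sum = 0.0;

        /* Sequential pass: every byte of each fetched cache line is used. */
        for (long i = 0; i < N; i++)
            sum += a[i];

        /* Strided passes: consecutive accesses land on different cache lines,
           so each pass pulls the whole array across the memory interface. */
        for (int s = 0; s < STRIDE; s++)
            for (long i = s; i < N; i += STRIDE)
                sum += a[i];

        free(a);
        return sum > 0.0;    /* keep the loops from being optimized away */
    }

Both loops perform the same arithmetic, but when the array exceeds cache the strided version moves roughly eight times as much data through the external memory interface, so its sustained bandwidth demand is correspondingly higher.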
From page 27...
... However, because this application class is small, the market for custom processors is quite small.4 In summary, commodity processors optimized for commercial applications meet the needs of most of the scientific computing market. For the majority of scientific applications that exhibit significant spatial and temporal locality, commodity processors are more cost effective than custom processors, making them better capability machines.
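Temporal locality of the kind the excerpt refers to is what cache blocking exploits; the tiled matrix multiply below is a standard illustration, with matrix and tile sizes chosen arbitrarily for the sketch.

    /* Blocked (tiled) matrix multiply: a standard way to expose temporal locality. */
    #define N 512
    #define BS 64            /* tile edge; three BS x BS tiles fit in cache (assumed) */

    static double A[N][N], B[N][N], C[N][N];   /* globals start zero-initialized */

    int main(void) {
        for (int ii = 0; ii < N; ii += BS)
            for (int kk = 0; kk < N; kk += BS)
                for (int jj = 0; jj < N; jj += BS)
                    /* Within one tile triple, every element of the three tiles
                       is reused BS times while it remains in cache. */
                    for (int i = ii; i < ii + BS; i++)
                        for (int k = kk; k < kk + BS; k++)
                            for (int j = jj; j < jj + BS; j++)
                                C[i][j] += A[i][k] * B[k][j];
        return 0;
    }

Because each tile is loaded once and then reused many times from cache, most operands come from on-chip memory rather than across the external memory interface, which is exactly the access behavior that lets commodity processors perform well on such codes.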

