5 Research, Practice, and Education to Meet Tomorrow's Performance Needs
Pages 132-152

From page 132...
... The shift to explicitly parallel hardware will fail unless there is a concomitant shift to useful programming models for parallel hardware. There has been progress in that direction: extremely skilled and savvy programmers can exploit vast parallelism (for example, in what has traditionally been referred to as high-performance computing)
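To make concrete what exploiting that parallelism demands of programmers today, here is a minimal sketch of the kind of explicit decomposition common in high-performance computing: the work is hand-partitioned across processes and the partial results are combined by hand. The problem size, worker count, and work function are illustrative assumptions, not taken from the report.

    from multiprocessing import Process, Queue

    def partial_sum(lo: int, hi: int, out: Queue) -> None:
        # Each worker reduces its own hand-assigned slice of the domain.
        out.put(sum(i * i for i in range(lo, hi)))

    if __name__ == "__main__":
        N, WORKERS = 10_000_000, 8            # illustrative sizes
        step = N // WORKERS
        out = Queue()
        procs = [Process(target=partial_sum, args=(w * step, (w + 1) * step, out))
                 for w in range(WORKERS)]
        for p in procs:
            p.start()
        total = sum(out.get() for _ in procs)  # combine partial results by hand
        for p in procs:
            p.join()
        print(total)

The partitioning, process management, and result combination are all the programmer's burden here; absorbing that burden is precisely what a better programming model would do.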
From page 133...
... competitiveness: a slowdown in the growth of computing performance will have global economic and political repercussions. The committee has developed a set of recommended actions aimed at addressing the challenges, but the fundamental power and energy constraints mean that even our best efforts may not offer a complete solution.
From page 134...
... , it is tempting to ignore sequential core performance and to deploy many simple cores. That approach may prevail, but history and Amdahl's law suggest caution.
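Amdahl's law makes the caution quantitative: if a fraction f of a program is inherently sequential, no number of cores can push the speedup past 1/f. A short sketch, using an assumed 5 percent sequential fraction (our illustrative number, not one from the report):

    # Amdahl's law: speedup on n cores when fraction f of the work
    # is inherently sequential: speedup = 1 / (f + (1 - f) / n).

    def amdahl_speedup(n_cores: int, f: float) -> float:
        return 1.0 / (f + (1.0 - f) / n_cores)

    # With only 5% sequential work, even 256 simple cores deliver
    # under a 19x speedup; sequential core performance still matters.
    for n in (4, 16, 64, 256):
        print(f"{n:4d} cores -> {amdahl_speedup(n, 0.05):5.1f}x")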
From page 135...
... They find, for example, that as Moore's law provides more transistors, many CMP designs benefit from increasing the sequential core performance and considering asymmetric (heterogeneous) designs where some cores provide more performance (statically or dynamically)
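The trade-off behind such findings can be sketched with the style of cost model used in this literature, in the spirit of Hill and Marty's multicore extension of Amdahl's law: a chip has a budget of n base-core equivalents (BCEs), and a core built from r BCEs runs sequential code about sqrt(r) times faster than a base core. The budget, parallel fraction, and perf(r) = sqrt(r) relation below are all illustrative assumptions.

    import math

    def perf(r):
        # Modeling assumption: sequential performance grows roughly as
        # the square root of the resources (BCEs) spent on one core.
        return math.sqrt(r)

    def symmetric(f, n, r):
        # n/r identical cores of r BCEs each (treated continuously);
        # f is the parallel fraction of the workload.
        return 1.0 / ((1 - f) / perf(r) + f * r / (perf(r) * n))

    def asymmetric(f, n, r):
        # One big core of r BCEs plus (n - r) single-BCE cores.
        return 1.0 / ((1 - f) / perf(r) + f / (perf(r) + n - r))

    n, f = 256, 0.95  # illustrative budget and parallel fraction
    r_sym = max(range(1, n + 1), key=lambda r: symmetric(f, n, r))
    r_asym = max(range(1, n + 1), key=lambda r: asymmetric(f, n, r))
    print(f"symmetric : r={r_sym:3d}, speedup={symmetric(f, n, r_sym):.1f}x")
    print(f"asymmetric: r={r_asym:3d}, speedup={asymmetric(f, n, r_asym):.1f}x")

Under these numbers the best asymmetric design more than doubles the speedup of the best symmetric one, which is why such studies favor spending some of Moore's-law transistor growth on sequential performance.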
From page 136...
... This has at least two important algorithmic implications: the problem becomes more regular and hence more amenable to parallelism, and better training and hence better classification accuracies make additional parallel formulations usable in practice. Examples include scene completion in photographs and language-neutral translation systems. For many of today's applications, the underlying algorithms in use do not assume or exploit parallel processing explicitly, except as in the cases described above.
From page 137...
... The intellectual keystone of this endeavor is rethinking programming models. Programmers must have appropriate models of computation that express application parallelism in such a way that diverse and evolving computer hardware systems and software can balance computation and minimize communication among multiple computational units.
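A modest illustration of such a model, as a sketch: the programmer states only that per-record work is independent, and the runtime maps it onto however many cores exist; a chunk-size knob trades load balance against communication. The work function and sizes are illustrative assumptions.

    from multiprocessing import Pool

    def score(record: int) -> int:
        # Stand-in for a pure, independent per-record computation.
        return sum(i * i for i in range(record % 1000))

    if __name__ == "__main__":
        records = range(100_000)
        with Pool() as pool:  # the runtime picks one worker per core
            # Batching 1,000 records per message keeps workers busy
            # while minimizing inter-process communication.
            results = pool.map(score, records, chunksize=1_000)
        print(len(results), results[:3])

The same source could be retargeted to more cores, or to different ones, without rewriting the application; that portability is the point of a good programming model.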
From page 138...
... For more on Chapel, see the website The Chapel parallel programming language, at http://chapel.cray.com. For more on X10, see the website The X10 programming language, at http://x10.codehaus.org/.
From page 139...
... Examples include evolving GPUs for more general-purpose programming, game processors, or computational accelerators used as coprocessors; and exploiting special-purpose, energy-efficient engines at some level of granularity for computations, such as fast Fourier transforms, codecs, or encryption. Other tasks to which increased computational capability could be applied include architectural support for machine learning; communication compression, decompression, encryption, and decryption; and dedicated engines for GPS, networking, human interface, search, and video analytics.
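The usage pattern such engines encourage can be sketched simply: the application requests, say, an FFT through a stable interface, and the call is routed to a dedicated engine when one is present, falling back to the general-purpose cores otherwise. The accelerator module named below is hypothetical, an assumption of this sketch; NumPy provides the real software path.

    import numpy as np

    def fft(signal: np.ndarray) -> np.ndarray:
        # Compute an FFT on whatever engine is available.
        try:
            import fft_engine  # hypothetical vendor library for a dedicated engine
            return fft_engine.fft(signal)
        except ImportError:
            return np.fft.fft(signal)  # software fallback on general-purpose cores

    print(fft(np.ones(8)))  # constant input -> all energy in bin 0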
From page 140...
... The slowing of growth in single-core performance provides the best opportunity to rethink computer hardware since the von Neumann model was developed in the 1940s. While a focus on the new research challenges is critical, continuing investments are needed in new computation substrates whose underlying power efficiency promises to be fundamentally better than that of silicon-based CMOS.
From page 141...
... Computer scientists and engineers manage complexity by separating interface from implementation. In conventional computer systems, the separation is recursive and forms the traditional computing stack: applications, programming language, compiler, runtime and virtual machine environments, operating system, hypervisor, and architecture.
From page 142...
... The press release for the center quotes center Director Eli Yablonovitch: "There has been great progress in making transistor circuits more efficient, but further scientific breakthroughs will be needed to achieve the six-orders-of-magnitude further improvement that remains before we approach the theoretical limits of energy consumption." See Sarah Yang, 2010, NSF awards $24.5 million for center to stem increase of electronics power draw, UC Berkeley News, February 23, 2010, available online at http://berkeley.edu/news/media/releases/2010/02/23_nsf_award.shtml. For more on data centers, their design, energy efficiency, and so on, see Luiz Barroso and Urs Holzle, 2009, The Datacenter as a Computer: An Introduction to the Design of Warehouse-Scale Machines, San Rafael, Cal.: Morgan & Claypool, available online at http://www.morganclaypool.com/doi/abs/10.2200/S00193ED1V01Y200905CAC006.
From page 143...
... average industrial cost of electricity for 2008, US$0.0699/kWh. The typical energy efficiency of data-center facilities can multiply IT power consumption by 1.8-2.0, which would result in an actual electricity cost of running the server of up to about US$1,300. According to that rough model, electricity costs for the server could correspond to about one-fourth of its hardware costs.
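The committee's rough model is easy to reproduce. The electricity rate and the 1.8-2.0 facility multiplier come from the text above; the server's average power draw and service life are our illustrative assumptions, not figures from the excerpt.

    RATE = 0.0699    # US$/kWh, 2008 average industrial electricity rate
    PUE = 2.0        # data-center facility multiplier (text gives 1.8-2.0)
    POWER_W = 270    # assumed average server power draw (illustrative)
    YEARS = 4        # assumed service life (illustrative)

    it_kwh = POWER_W / 1000 * 24 * 365 * YEARS  # energy drawn by the server itself
    cost = it_kwh * PUE * RATE                  # facility-level electricity cost
    print(f"IT energy {it_kwh:,.0f} kWh -> electricity cost ${cost:,.0f}")
    # -> about US$1,300, roughly one-fourth of a ~US$5,000 server purchase.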
From page 144...
... Environmental Protection Agency (EPA) on the Energy Star Program (EPA, 2007, Report to Congress on Server and Data Center Energy Efficiency, Public Law 109-431, Washington, D.C.: EPA, available online at http://www.
From page 145...
... A joint report by The Climate Group and the Global e-Sustainability Initiative (GeSI) states that although the worldwide carbon footprint of the computing and telecommunication sectors might triple from 2002 to 2020, the same sectors could deliver over 5 times their footprint in emission savings in other industries (including transportation and energy generation and transmission)
From page 146...
... , protected in incubation from devolving into many incompatible variants, and yet made public enough to facilitate use and adoption by many cooperating and competing entities. Recommendation: To promote cooperation and innovation by sharing, encourage development of open interface standards for parallel programming rather than proliferating proprietary programming environments.
From page 147...
... It is possible that fewer of those people will be able to program well in the future, because of the difficulty of parallel programming. However, if the CS community develops good abstractions and programming languages that make it easy to program in parallel, even more of those types of developers will be productive.
From page 148...
... With respect to the topic of the present report, the CS curriculum is not training undergraduate and graduate students in either effective parallel programming or parallel computational thinking. But that knowledge is now necessary for effective programming of current commodity-parallel hardware, which is increasingly common in the form of CMPs and graphics processors, not to mention possible changes in systems of the future.
From page 149...
... If computational models are to be targeted to parallel hardware, as we argue in this report, parallel approaches to reasoning and thinking will be essential. Jeannette Wing has argued for the importance of computational thinking, broadly, and a current National Research Council study is exploring that notion.
From page 150...
... Nevertheless, possible models for reform include making parallelism an intrinsic part of every course (algorithms, architecture, programming, operating systems, compilers, and so on) as a fundamental way of solving problems; adding specialized courses, such as parallel computational reasoning, parallel algorithms, parallel architecture, and parallel programming; and creating an honors section for advanced training in parallelism (this option is much less desirable in that it reinforces the notion that parallel programming is outside mainstream approaches)
From page 151...
... The next generation of discoveries will require advances at both the hardware and the software levels. There is no guarantee that we can make future parallel computing ubiquitous and as easy to use as yesterday's sequential computer, but unless we aggressively pursue efforts suggested by the recommendations above, it will be game over for future growth in computing performance.

