
6 Range of Operational Models
Pages 102-126



From page 102...
... First, unlike advanced telescopes or particle accelerators, where there is no competing commercial market, a vibrant computing industry develops new technologies and products and responds to market needs and opportunities that dwarf computing expenditures in academia and by federal research sponsors. Second, computing market shifts and the well-documented, rapid evolution of computing technology mean that researcher expectations and economically viable computing technologies change every few years.
From page 103...
... The following basic principles will help ensure the sustainability of NSF's advanced computing strategy:
• Realistic business assessment that exposes the true costs and subsidies of cyberinfrastructure deployment and operation at all scales;
• Identification and tracking of technology trends and economics, along with the research opportunities they create;
• Long-term planning and an articulated strategy (a roadmap) that allow the broad research community and service providers to plan accordingly;
• Balanced support for computing hardware, storage systems, and networks, along with professional staff, software and tools, and operating budgets; and
• NSF-wide commitment to cyberinfrastructure investment, strategic directions, and operational processes.
From page 104...
... Orthogonally, cyberinfrastructure spans the capabilities and needs of individual investigator laboratories, campus sites, regional and national research facilities, and commercial cloud service providers. Any comprehensive cyberinfrastructure strategy must include the entire spectrum of services and span the entire range of organizational ...
From page 105...
... Big data will require big infrastructure, just as leading-edge computational science does, and will likely involve a mix of both centralized facilities and decentralized repositories at universities. The Australian eResearch initiative and its Australian National Data Service is a relevant example.
From page 106...
... Examples include training programs for users offered by XSEDE and Blue Waters and the Argonne Training Program in Extreme Scale Computing. Such programs could benefit from a more formal approach and, in particular, long-term support for training materials and resources.
From page 107...
... Some of these problems are rooted in history, some are embedded in the NSF culture, and some are consequences of NSF's organizational structure. 6.2.1  Competitive Challenges From its origins, NSF's advanced computing programs -- the original 1980s supercomputer centers program, the 1990s Partnership for Advanced Computational Infrastructure (PACI)
From page 108...
... 6.2.2  Structural Challenges Since the beginning of the NSF supercomputing centers program in the 1980s, NSF ACI and its predecessor organizations have supported computational science research across NSF and provided services to a user base that spans all federal research agencies. Despite the clear recognition that computational science and data analytics are true peers with theory and experiment in the scientific process, NSF-wide coordination and support remain somewhat informal and ad hoc, with directorate participation often a secondary responsibility of the designees.
From page 109...
... Big data requires strongly coordinated big infrastructure, just as leading-edge computational science requires advanced computing systems. The lessons of commercial cloud computing are clear: centralization and scale create unprecedented opportunities for innovation and discovery.
From page 110...
... Although there are some aspects of MREFC projects that match the needs of advanced computing infrastructure, the current MREFC mechanisms may need to be modified and adapted to the unique needs of advanced computing infrastructure, including the general nature of computing and the need for regular refresh of computing equipment. To establish a regular cadence of infrastructure investments, NSF would plan and budget an upgrade every 3 to 5 years, with planning and construction of each generation overlapping the operation of the previous generation.
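The overlapping-generation cadence described above can be sketched with a small schedule calculation. This is an illustrative sketch only: the start year, planning lead time, and operating lifetime below are assumptions chosen to fit the 3-to-5-year cadence the text describes, not figures from the report.

```python
def refresh_schedule(first_deploy, cadence, planning, lifetime, n_gens):
    """Return (plan_start, deploy, retire) years for each system generation."""
    return [(first_deploy + g * cadence - planning,   # planning/construction begins
             first_deploy + g * cadence,              # system enters operation
             first_deploy + g * cadence + lifetime)   # system is retired
            for g in range(n_gens)]

# Assumed parameters: 4-year cadence (inside the 3-to-5-year range),
# 2 years of planning/construction, 5-year operating life per system.
gens = refresh_schedule(first_deploy=2020, cadence=4, planning=2,
                        lifetime=5, n_gens=3)
for i, (plan, deploy, retire) in enumerate(gens, 1):
    print(f"Gen {i}: plan {plan}-{deploy}, operate {deploy}-{retire}")
```

With these numbers, Generation 2's planning window (2022-2024) falls entirely within Generation 1's operating life (2020-2025), producing exactly the overlap of construction and operation that the text calls for.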
From page 111...
... 6.3.3  Commercial Cloud Service Purchases The explosive growth of commercial cloud services and their widespread adoption by both large corporations and small start-ups offer another alternative for provisioning advanced computing but are not a panacea (Boxes 6.1 and 6.2)
From page 112...
... A natural question is whether cloud computing can meet the advanced computing needs of segments of the science community. This box considers some of the advantages and disadvantages of commercial cloud services today.
From page 113...
... This network connectivity makes it much easier to provide the resource to anyone on the planet, rather than only to those with access to the facility. NSF's advanced computing facilities are also conveniently accessible, but less so than commercial cloud services, which require only a credit card for access.
From page 114...
... For example, servers used by leading vendors include custom accelerators. Both commercial cloud operators and government-funded HPC centers ex...
From page 115...
... In short, as a past study2 has shown and as the discussion above further suggests, supercomputing centers already exploit many of the cost advantages of clouds and can be significantly cheaper than commercial cloud providers for some science applications. Software and Expertise Researchers will need more than access to the services themselves if they are to make effective and efficient use of the cloud.
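The utilization argument behind that cost comparison can be made concrete with a back-of-the-envelope calculation. This is a hedged sketch: the capital cost, operating cost, cloud price, and utilization figures are illustrative assumptions, not data from the study cited above.

```python
HOURS_PER_YEAR = 8760

def owned_cost_per_core_hour(capex_per_core, lifetime_years,
                             opex_per_core_year, utilization):
    """Amortized cost of one delivered core-hour on an owned system."""
    annual_cost = capex_per_core / lifetime_years + opex_per_core_year
    return annual_cost / (HOURS_PER_YEAR * utilization)

# Illustrative assumptions: $400/core of hardware amortized over 4 years,
# $60/core/year for power, space, and staff, 90% utilization (plausible
# for a heavily subscribed center), vs. an assumed $0.05/core-hour
# on-demand cloud price.
owned = owned_cost_per_core_hour(400.0, 4, 60.0, 0.90)
cloud_on_demand = 0.05
print(f"owned: ${owned:.3f}/core-hour, cloud: ${cloud_on_demand:.3f}/core-hour")
```

Under these assumptions the heavily utilized owned system costs about $0.020 per delivered core-hour, well below the assumed on-demand price; at 20% utilization the same formula gives roughly $0.091 per core-hour, and the cloud becomes cheaper, consistent with the point that neither model dominates for all workloads.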
From page 116...
... All would likely involve NSF negotiating a bulk purchase agreement for data analytics and computing services. • Individual investigators could request cloud services as part of a standard NSF proposal.
From page 117...
... • The current computing allocation review process could be expanded to include award of cloud services. Approved users would receive a budget to be spent with their chosen cloud provider.
From page 118...
... , and the cloud vendor would provide computing and storage services. NSF could leverage the Internet2 organization's NET+ initiative, which has selected commercial cloud services for its members and negotiated pricing and other terms.
From page 119...
... 6.3.5  Federally Funded Research and Development Centers As noted above, continuity is crucial to strategic planning, staff retention, and cross-domain partnerships. Cooperative agreements, whether for MREFC projects or other initiatives, provide one mechanism for collaborative planning and management.
From page 120...
... Currently, computational scientists request time on a variety of resources, taking advantage of DOE, NSF, and other providers of advanced computing infrastructure to the science community. But there is no formal interagency coordination of the systems they acquire, and trade-offs are made independently.
From page 121...
... Superficially, this may seem paradoxical, given the dramatic increases in computing and storage capability regularly delivered by the computing industry. However, those same computing advances have birthed new sensors and scientific instruments and a torrent of new digital data, as well as new simulation models and expectations for ever-larger computing capability.4 Rising demands for computing and storage (end-to-end capabilities, not just hardware) ...
From page 122...
... In a more extensive realization of this model, however, individual researchers or research teams would be allowed to spend awarded cyberinfrastructure dollars at their discretion. This cyberinfrastructure marketplace might include the following options:
• Purchasing local computing infrastructure, services, or staff support for use within the individual researcher's laboratory;
• Contributing dollars to a university pool that operates a campus facility under a "campus condominium" model;5
• Pooling research dollars to purchase and operate shared regional or national facilities; and
• Purchasing commercial cloud services, exploiting the properties of elasticity and on-demand access.
From page 123...
... However, the same economic and technological forces driving decisions on national computing infrastructure are eroding campuses' ability to purchase and operate their own cyberinfrastructure; the cost and complexity of managing research data are especially challenging. Thus, smaller institutions are now choosing to invest in infrastructure operated by larger neighbors or at national centers, which can offer cost and other advantages compared to attempting to use the commercial cloud.
From page 124...
... and Petascale Computing Resource Allocation Committees [PRAC]
From page 125...
... Appendixes