It is the opinion of this Panel that, except for very small applications (in which the cost of even a small minicomputer is not justified) or very large applications (in which the jobs are so long that the speed difference between a large vector machine and a minicomputer with array processor is important), the minicomputer, with an array processor if needed, is the most cost-effective way to perform the vast majority of scientific computations. Of course, this Panel is not the first to recognize the advantages of minicomputers. Their proliferation has already started. A large number of astronomers have now had experience with minicomputers, and they are finding wide acceptance in the astronomical community. One cautionary remark is necessary, however. Today there is little experience in the astronomical community with the minicomputer-array processor combination. Although no major problems are anticipated, additional experience should be obtained before it can be definitely stated that this is a viable mode of operation.
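To make the shape of this cost argument concrete, the sketch below compares the annual cost of a workload on three classes of systems. Every constant in it is a hypothetical illustrative value, not a figure from the Panel's report; the point is only that shared centers win for tiny workloads, the minicomputer-array processor combination wins across the broad middle, and only the largest machines can absorb the very longest jobs.

```python
# Back-of-the-envelope model of the three cost regimes described above.
# Every constant is a hypothetical, illustrative value (not a figure
# from the Panel's report); only the shape of the comparison matters.

CENTER_RATE = 300.0        # $/hour billed at a shared university center
CENTER_SLOWDOWN = 20.0     # center machine, ~20x slower than a vector machine
MINI_COST = 200_000.0      # $/year to own a minicomputer + array processor
MINI_SLOWDOWN = 10.0       # mini + array processor, ~10x slower than vector
VECTOR_COST = 3_000_000.0  # $/year for time on a large vector machine
USABLE_HOURS = 6_000.0     # usable wall-clock hours per year on one system

def annual_costs(workload):
    """Yearly cost of a workload measured in vector-machine-equivalent
    hours; None means the system cannot finish the work in a year."""
    return {
        "shared center": CENTER_RATE * CENTER_SLOWDOWN * workload,
        "mini + array processor":
            MINI_COST if workload * MINI_SLOWDOWN <= USABLE_HOURS else None,
        "large vector machine":
            VECTOR_COST if workload <= USABLE_HOURS else None,
    }

for w in (10.0, 100.0, 2000.0):   # very small, typical, very large workloads
    print(w, annual_costs(w))
```

With these assumed numbers, the 10-hour workload is cheapest at the shared center, the 100-hour workload is cheapest on the dedicated minicomputer-array processor system, and the 2000-hour workload exceeds the dedicated minicomputer's capacity and is far cheaper on the large vector machine -- the three regimes described above.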
IV. THEORETICAL COMPUTING

The Panel has been aided in its investigations of theoretical computing needs by a joint meeting with the Panel on Theoretical and Laboratory Astrophysics (see Chapter 4), by the participation of a representative from that Panel, and by a Workshop on Computational Astrophysics held at the NASA/Ames Research Center on the two days preceding the third meeting of the present Panel (which was also held at NASA/Ames). The material that follows is drawn, in part, from all three sources.

Many important insights and breakthroughs in modern astronomy have been obtained through large-scale computation. Astronomical phenomena typically combine complex interplays of several physical processes with strongly nonlinear effects, and hostile or unattainable environments preclude laboratory studies. Large-scale computation provides the only hope for sorting out and understanding such interacting processes. Conversely, astronomical situations sometimes represent a setting in which certain kinds of physical processes manifest themselves without hopeless entanglement with other effects. The astronomical context often provides the best setting in which to study the physics of these processes; it is, in effect, our laboratory.

The complexities of astronomical phenomena, together with greatly improved observational data, conspire to broaden the scope of problems that demand attention and to sharpen the detail sought in interpreting observations. Qualitatively new kinds of data (from Space Telescope, high-efficiency imaging detectors, the Very Large Array, and satellite data from previously unattainable wavelength regions, for example), as well as significantly improved accuracy and greatly increased data rates on more traditional observations, yield a flood of new data and introduce new kinds of problems that demand interpretation or solution. All of this makes access to more computational capability imperative if theorists are to keep abreast of observational data, let alone investigate new problems. Lacking sufficient capability, interpretations offered to explain observational data from the Space Telescope, the Very Large Array, and other sources will necessarily be based on guesses or other shortcuts; only with adequate facilities will we be able to take the full range of physical effects into account and develop theoretical interpretations whose quality matches the quality of the observations.

The last decade has witnessed a tremendous growth in the use of computers in constructing and testing theoretical models of astrophysical systems. Computers were used in the 1960's mainly to construct one-dimensional models, for which the demands on computation time are modest by today's standards. In that decade, the greatest advances were made in the field of stellar evolution. Computer experiments played a key role in connecting our understanding of nuclear physics in stellar cores to the observational data, which necessarily refer to only a thin layer at the stellar surface. Computer simulations have been our only means of testing theories of phenomena in stellar interiors, such as the events leading to a supernova explosion. During the 1970's, increased computer speed and sophisticated computer programs permitted much more detailed analysis of supernova models. Such computer calculations have made a major contribution to our understanding of nucleosynthesis.

Computers have played a major role in the study of radiative transfer and the calculation of emission spectra. Computational techniques have been used to study radiative transfer in stellar atmospheres, the spectra of protostars, and the appearance of dense interstellar clouds in molecular lines. These computer models have played an essential role in relating the observational data to physical models of stellar pulsation (e.g., Cepheid variables), stellar mass loss, star formation, and gravitational collapse of dense clouds. The computation of emission spectra in situations where the radiative transfer is not coupled to a hydrodynamical calculation is less demanding of computer capability, but here the computer is no less essential. Such computations have helped us to understand the physical conditions in a wide variety of astrophysical objects--x-ray sources, quasars, planetary nebulae, and coronas.
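To give a sense of what the simplest such calculation involves, the sketch below numerically evaluates the standard formal solution of the time-independent transfer equation along a single ray. It is a generic textbook illustration, not a reconstruction of any code mentioned by the Panel; the source function, grid, and boundary intensity are arbitrary illustrative choices.

```python
import numpy as np

# Minimal sketch: formal solution of the 1-D radiative-transfer equation
#     dI/dtau = S(tau) - I,
# with optical depth tau increasing along the ray, whose solution is
#     I(tau) = I(0)*exp(-tau) + integral_0^tau S(t)*exp(-(tau - t)) dt.
# The source function below is an arbitrary illustrative choice.

def emergent_intensity(source, tau_max, i_incident=0.0, n=2000):
    """Specific intensity after traversing an optical depth tau_max."""
    t = np.linspace(0.0, tau_max, n)
    f = source(t) * np.exp(-(tau_max - t))
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t))  # trapezoid rule
    return i_incident * np.exp(-tau_max) + integral

# Illustrative linear source function S(tau) = 1 + 2*tau.
S = lambda tau: 1.0 + 2.0 * tau
for tau_max in (0.1, 1.0, 10.0):
    print(tau_max, emergent_intensity(S, tau_max))
```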
Beginning in 1969, computer simulations were applied to models of star formation, and over the last decade these have increased dramatically in sophistication. Two-dimensional hydrodynamical calculations have been used to study the early stages of star formation. These calculations have focused on the initial compression and the onset of collapse and also on the effects of cloud rotation and magnetic fields on the subsequent evolution of the collapsing clouds. Only one-dimensional calculations have been performed for the later stages of protostellar core formation, but these have become very detailed. Apart from simple arguments based on the virial theorem and similarity solutions of limited applicability, computer modeling has provided our only solid means of interpreting the wealth of observational data obtained in this field over the last decade.

Computer calculations have played an important role in the investigation of the structure and dynamics of galaxies. In the 1960's, many N-body calculations were carried out with small numbers of stars interacting through gravitational forces. These calculations yielded excellent models of star clusters, but it was only at the end of the decade, with the development of particle-following numerical methods, that galactic systems could be simulated with models capable of producing the complicated structures characteristic of real disk galaxies. These stellar-dynamical models have been refined during the 1970's and extended to treat three-dimensional systems. In addition, gas-dynamical simulations have greatly aided the interpretation of the 21-cm radio observations of galaxies.
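As a concrete (and deliberately minimal) illustration of the direct N-body approach just described, the sketch below advances a small cluster of gravitating particles with a leapfrog integrator. The Plummer softening, time step, unit system, and random initial conditions are illustrative assumptions, not details of the calculations the Panel cites.

```python
import numpy as np

# Minimal direct-summation N-body sketch: N stars interacting through
# mutual gravity, advanced with a leapfrog (kick-drift-kick) integrator.
# Units with G = 1; the softening length eps and time step dt are
# illustrative choices, not parameters from any code cited in the text.

def accelerations(pos, mass, eps=0.05):
    """Pairwise gravitational accelerations with Plummer softening."""
    d = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]   # d[i, j] = r_j - r_i
    r2 = np.sum(d * d, axis=-1) + eps**2
    inv_r3 = r2 ** (-1.5)
    np.fill_diagonal(inv_r3, 0.0)                       # drop self-terms
    return np.sum(d * (mass[np.newaxis, :] * inv_r3)[:, :, np.newaxis], axis=1)

def leapfrog(pos, vel, mass, dt=0.01, steps=1000):
    """Advance positions and velocities by `steps` leapfrog steps."""
    acc = accelerations(pos, mass)
    for _ in range(steps):
        vel += 0.5 * dt * acc       # half kick
        pos += dt * vel             # drift
        acc = accelerations(pos, mass)
        vel += 0.5 * dt * acc       # half kick
    return pos, vel

rng = np.random.default_rng(0)
n = 100
pos = rng.normal(size=(n, 3))       # illustrative initial "cluster"
vel = np.zeros((n, 3))
mass = np.full(n, 1.0 / n)          # equal masses, total mass 1
pos, vel = leapfrog(pos, vel, mass)
```

The direct sum costs O(N^2) per step, which is why such calculations were limited to "small numbers of stars" on 1960's machines.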

A new application of computational methods in astronomy has been the simulation of general relativistic systems. This work is now in its infancy; not even simple flows are fully understood. Because of the immense difficulty in obtaining analytic solutions, this is a field in which numerical computations are likely to have a tremendous impact. The computations of even very simple situations require an enormous amount of computer time, and we may expect more interesting problems to be attacked only as computers become more powerful and more easily available in the future.

Impressive as this list of accomplishments may appear, progress in all these areas has been severely limited by the availability of computational facilities. Many of the projects that have been undertaken only very recently could have been done a decade ago had there been sufficient access to the computers existing at that time. The limitation has rarely been the availability of willing manpower or sufficiently powerful computational techniques. The slow progress over the decade in galaxy modeling is a case in point. The early spiral-galaxy models have barely been surpassed, although much more realistic simulations are possible. When so much can be learned from this experimental approach, it is mystifying that the computational facilities necessary for vigorous pursuit of this research program have not been provided. A significant fraction of the spiral-galaxy simulations during the last decade was performed in England, where computational resources were made available through the controlled-fusion program. Three-dimensional simulations of galaxies, like the work on disk galaxies, have progressed at a rate determined by the availability of computer time rather than the availability of manpower or computational techniques.

Another example of unnecessarily slow growth in the computer simulation of astrophysical systems is in the area of hydrodynamics. Hydrodynamic computer codes capable of modeling a variety of astrophysical systems in two dimensions have been available for at least a decade. Nevertheless, hydrodynamic computations performed on reasonably fine grids are a rarity even today. Computations that involve the much more complicated and time-consuming algorithms for multifluid or implicit hydrodynamics, or that also involve radiative transfer, are even rarer. In fact, a major portion of this kind of work is now performed in Germany, where easy access to a powerful vector machine has been arranged through the Max Planck Institute in Munich (a third of the machine time is available for astrophysical calculations). Many important problems, such as the calculation of the nonlinear development of Parker's instability in two dimensions, have gone without solution because the computer facilities are unavailable. Such calculations could easily have been performed a decade ago; the computers, codes, and experts were there, and only the computation budgets were lacking.
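For readers unfamiliar with what even the simplest such code looks like, the fragment below advances the standard model problem of explicit grid-based hydrodynamics, one-dimensional linear advection, with the Lax-Friedrichs scheme. It is a deliberately minimal stand-in for the two-dimensional, multifluid, and implicit schemes discussed above; the grid size, speed, and CFL number are arbitrary illustrative choices.

```python
import numpy as np

# Deliberately minimal stand-in for an explicit grid-based hydrodynamics
# code: 1-D linear advection, u_t + a*u_x = 0, advanced with the
# Lax-Friedrichs scheme on a periodic grid.  Grid size, advection speed,
# and CFL number are arbitrary illustrative choices.

nx, a, cfl = 200, 1.0, 0.8
dx = 1.0 / nx
dt = cfl * dx / abs(a)                # explicit stability requires cfl <= 1

x = (np.arange(nx) + 0.5) * dx
u = np.exp(-200.0 * (x - 0.3) ** 2)   # initial Gaussian pulse

for _ in range(int(0.4 / dt)):        # advect the pulse a distance ~0.4
    up = np.roll(u, -1)               # u[i+1] with periodic boundaries
    um = np.roll(u, 1)                # u[i-1]
    u = 0.5 * (up + um) - a * dt / (2.0 * dx) * (up - um)

print(x[np.argmax(u)])                # pulse peak, now near x = 0.7
```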

The availability of facilities for theoretical computations has been inadequate in recent years and must be improved if the astronomy program in the United States is to have the proper interplay and balance between theory and observation.

Computational capability is made available through three main sources. For small problems, university computer centers are adequate and are cost-effective when the computer capability required does not justify the purchase of a dedicated minicomputer-array processor system. These minicomputer-based systems are the second main source of computational capability for theoretical computations and are probably powerful enough to meet the needs of the large majority (perhaps 90 percent) of theoretical computations. Finally, there are problems requiring access to the largest and fastest machines available. These problems have traditionally been attacked through cooperative arrangements between astronomers and large laboratories such as the Lawrence Livermore Laboratory, NASA/Ames, NASA/Langley, Los Alamos National Laboratory, and the National Center for Atmospheric Research.

At present, the first and third methods of performing theoretical calculations are dominant, with minicomputer-array processor systems just beginning to play a role. The Panel believes that these three methods will continue to be important in the 1980's, but their relative importance will show a dramatic shift. Because of its cost-effectiveness, the minicomputer-array processor configuration should be performing most of the theoretical astronomical computations by the end of the 1980's. This will occur mostly at the expense of computations performed at university computer centers, which, toward the end of the decade, will be used primarily to support astronomical computations at universities where only small amounts of computation are performed. In addition, some of the problems that are now studied with the biggest machines are amenable to solution with minicomputer-array processor systems. However, there will remain problems--black-hole dynamics, star formation, radio sources and jets, supernovae, galactic chemical evolution, magnetic fields and plasmas, and solar phenomena, for example--that are at the cutting edge of theoretical research and merit attention beyond the fraction of astronomical computing they represent. Approximations must be made to fit these problems into the largest and fastest machines available today; our confidence in the results is weakened because of these compromises. Larger and faster machines expected in the coming decade may allow improved treatment of these problems and, very probably, attacks on additional problems that cannot be fit into machines available today.

The Panel makes the following recommendations concerning theoretical astronomical computations in the 1980's:

1. The primary recommendation is that the funding agencies [the National Science Foundation (NSF) and the National Aeronautics and Space Administration (NASA)] make available funds to purchase approximately 10 minicomputer-array processor systems for the purpose of theoretical calculations in astronomy. The "canonical" system and its associated costs are described in Appendix A. These funds should be supplied at a steady level of funding in real dollars. This will allow the purchase of 1.7 systems per year for 6 years (a typical useful life for a computer system before it becomes obsolete), after which the oldest systems would be replaced. The funds made available for this purpose should be primarily new funds if theoretical astronomy is to have the increased support that it requires. In addition, funding, perhaps on a cost-sharing basis, is needed to support the maintenance, operations, and software expenses for ten such systems after a steady state is reached. These computers need not be distinct from those used to perform image processing and analysis (see the next section), but an equivalent of 10 such systems should be dedicated to theoretical computations.

The exact number of such systems required is difficult to quantify. The number 10 represents the Panel's best guess at the number that is required and feasible; however, the proposed steady-state funding plan is flexible. If it turns out that twelve systems are required, they can be purchased with the same level of funding, provided they are replaced at just over 7-year intervals rather than 6-year intervals.

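The arithmetic behind this flexibility is worth making explicit: a fixed annual budget fixes the purchase rate, and the replacement interval is simply the fleet size divided by that rate. The check below restates the figures quoted above (10 systems over a 6-year life, or 12 systems at the same budget).

```python
# Steady-state replacement arithmetic from recommendation 1: a fixed
# budget fixes the purchase rate, and the replacement interval is the
# fleet size divided by that rate.

rate = 10 / 6                      # systems bought per year: ~1.7
interval_12 = 12 / rate            # same budget, 12-system fleet
print(round(rate, 2))              # 1.67 systems per year
print(round(interval_12, 1))       # 7.2 years: "just over 7-year intervals"
```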
2. The funding that supports computing at university computer centers should be maintained in those cases where the level of astronomical computation at a given university does not warrant a switch to a dedicated system. However, the funding agencies should be alert to those cases where one or two medium-scale users (about $30,000/year) and/or several small users (about $10,000/year) are
