Achievements in Physical Oceanography
Scripps Institution of Oceanography, University of California, San Diego
The last 50 years have seen a revolution in our understanding of ocean processes. I consider the major developments in three eras: (1) ending roughly 1970, when observations were generally interpreted in terms of large-scale, steady circulation models, with variability regarded as "noise"; (2) following 1970, when emphasis was on mesoscale variability (which was found to contain 99 percent of the oceanic kinetic energy), internal waves, edge waves, mixing events, and other time-dependent processes; (3) the "now" era, which returns to some of the large-scale problems of the first era, but with allowance for the decisive role played by time-dependent processes and a growing appreciation that the large-scale features are themselves subject to slow, climate-connected changes. Technological developments generally led (rather than followed) new ideas. Underlying all these developments is a half-century transition from grossly inadequate sampling to an appreciation of a rational sampling strategy.
I am to speak on "Landmark Achievements in Physical Oceanography." Why not call it "Seamarks"? I was in Lisbon in August at a meeting on satellite oceanography. Following the welcome by Mr. M. Gago, Minister of Science and Technology, I was assigned a generous 5 minutes to cover the subject of "Oceanography Before Satellites." For illustration I used Plate 5, showing how Moses was saved by a tsunami with high nonlinear distortion, what we now call a soliton of depression. Going back to the early 1950s is like going back to Exodus.
What are the seamarks that led us from the Exodus stage to our present theology? The following choice is highly subjective. I am an ocean adventurer, not an historian of science.1 I have paid little attention to the extent of National Science Foundation (NSF) support as compared to the Office of Naval Research (ONR) and other support; Michael Reeve, Richard Lambert, and others discuss this in other papers in this volume.
THEOLOGY OF A "STEADY" OCEAN
I will remind you that the field of oceanography immediately before NSF was founded was just coming to terms with Sverdrup's 1947 solution for the mid-ocean circulation in response to wind torquing and with Stommel's 1948 explanation of the intensification of currents along western boundaries (e.g., Gulf Stream).
In 1952, NSF awarded $6,100 for each of two years to Ray Montgomery for "Analysis of Serial Data." What Ray was really working on was the Equatorial Undercurrent, the last major current system missing from the lexicon of oceanography.2 T. Cromwell and J. Knauss were the major actors.
The basic elements of the deep (thermohaline) ocean circulation were known in Sverdrup's time. (He mapped global volume fluxes in units of million cubic meters per second, now known as sverdrups.) At the time it was believed that deep water known as Montgomery's "common water"3 was formed in a few concentrated areas south of Greenland and along the Antarctic Shelf by top-to-bottom convection. But there is no top-to-bottom convection. The work of V. Worthington, J. Reid, A. Gordon, and D. Roemmich has since shown that the formation of Montgomery's common water requires a complex interplay of water masses. Starting in 1960, Stommel and Arons provided a dynamical (though highly idealized) framework, with deep water transported to lower latitudes along western boundaries and communication between the ocean basins accomplished via the Antarctic Circumpolar Current. A subsequent visualization called "the great global conveyor belt" has enjoyed popular support because of its vividness, and support by chemists because of its simplicity, but it is important to keep in mind that this subject is still under active development.
The fashion at the time was to map the measured scalar fields of temperature and salinity and to infer the current velocities by a joint application of the hydrostatic and geostrophic equations. Since the scalar fields were relatively smooth and steady, the inferred currents were relatively smooth and steady. We had so much confidence in the method that we issued current charts on pocket handkerchiefs to our World War II pilots in the Pacific so that they could navigate the "known" surface currents toward the nearest islands.
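The hydrographic method can be summarized in one line of algebra: hydrostatic balance supplies the dynamic height from the measured density field, and geostrophic balance relates its horizontal slope to the current, v = g Δη / (f Δx). A minimal sketch, with hypothetical station values of my own choosing:

```python
import math

# Geostrophic velocity inferred from a dynamic-height difference
# between two hydrographic stations.  All numbers below are assumed
# for illustration, not taken from the text.

g = 9.81                    # gravitational acceleration, m/s^2
omega = 7.292e-5            # Earth's rotation rate, rad/s
lat = 30.0                  # station latitude, degrees (assumed)
f = 2 * omega * math.sin(math.radians(lat))   # Coriolis parameter, 1/s

delta_eta = 0.10            # dynamic-height difference, m (assumed)
delta_x = 100e3             # station separation, m (assumed)

v = g * delta_eta / (f * delta_x)   # geostrophic surface velocity, m/s
print(f"inferred geostrophic velocity ~ {v:.2f} m/s")
```

With these illustrative values the inferred current is of order 10 cm/s, the scale of the smooth, "steady" currents of the handkerchief charts.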
There are two shortcomings to the hydrographic method. First, smooth scalar distributions do not necessarily call for smooth, steady current systems, the scalar fields being space and time integrals of the motion field. Smooth scalar fields have been found in the presence of extremely complex float trajectories. The downed flyers would have found the current charts useful only if they had been willing to integrate their drifting experience over a year or two.
The second shortcoming is that the hydrographic method gives only relative currents, and much effort has been expended to find the so-called depth of no motion. The problem was treated in the 1970s by Stommel (with Schott, Behringer, and Armi) in the work on the β-spiral, and similarly by Wunsch in his application of inverse methods. It is ironic that progress on the problem of the depth of no motion came about just as it was becoming clear that ocean currents were seriously time dependent at all depths.
All students of oceanography learn about the Ekman spiral, an elegant early-century mathematical solution for the wind-driven current profile. But it had proved very difficult to extract a clear spiral signature from a noisy environment until the work of Price and Weller, and Niiler's recent statistical analysis of 50,000 float observations. In more general terms, "Ekman dynamics" has been observationally confirmed by Davis in the Mixed Layer Experiment (MILE), and by Rudnick in an acoustic Doppler current profiler (ADCP) transect across the Atlantic.
Fifty years ago physical oceanographers were deploying around the ocean in a few vessels taking Nansen casts and bathythermographs (BTs). The underlying theology was that of a steady ocean circulation: differences between stations were attributed to the difference in station position, not the difference in station time.4 We now know that more than 99 percent of the kinetic energy of ocean currents is associated with variable currents, the so-called mesoscale of roughly 100 km and 100 days. Incredible as it may seem, for one hundred years this dominant component of ocean circulation had slipped through the coarse grid of traditional sampling. Our concept of ocean currents has changed from something like 10 ± 1 cm/s to 1 ± 10 cm/s. This first century of oceanography, since the days of the Challenger expedition in the 1870s, came to an abrupt end in the 1970s.
The Mesoscale Revolution5
By 1950, the oceanographic community had become aware of the meandering of the Gulf Stream. If there was any doubt, the multiple-ship Operation Cabot (the first of its kind), under the leadership of Fritz Fuglister, dramatically demonstrated the shedding of a cold-core eddy. At first it was thought that transients were confined to the regions of the western boundary currents. But the acoustic tracking of neutrally buoyant floats by Swallow (who credits Stommel for suggesting this idea) soon demonstrated that variability in space and time was the rule, not the exception (though more intense near the boundary currents). There was an urgent need for a systematic exploration of the ocean variability. The development of deep-ocean mooring technology provided such an opportunity, and the Mid-Ocean Dynamics Experiment (MODE), starting in 1973 under the leadership of Stommel and Robinson, defined the parameters of variability. (Soviet oceanographer Brekhovskikh got there first, but failed to get definitive results because of a high failure rate of current meters.) We now think of this mesoscale variability as the ocean weather and the underlying circulation as the ocean climate (itself subject to slow variations that are discussed later). Climate came first, weather later, rather the opposite of what happened in meteorology.
The era coincides with a flowering of geophysical fluid dynamics (GFD). Nearly everyone in GFD had their initiation at the summer sessions in Walsh Cottage at Woods Hole first organized by W. Malkus. Mesoscale variability was incorporated into general circulation models (GCMs). We recall the excitement of seeing W. Holland's first spontaneously unsteady wind-driven circulation model.
I believe that numerical modeling reached a plateau in later decades as a result of a dependence on semiempirical, nonphysical parameterizations. Ironically, modeling came to the rescue, but in the new form of process-oriented modeling (as opposed to simulations of actual conditions), leading to appreciation of the ventilation of deep layers, of constant potential vorticity pools, and so forth. A resurgence of theoretical thinking has evolved into an indispensable complement to big numerical models.
On a smaller scale, internal waves (long recognized as a curiosity) became part of the oceanographic mainstream. At periods of less than a day, internal waves are the principal contributors to the velocity variance. This development owes a great deal to the application of power-spectral analysis, which in turn was made possible only by the computer revolution. Fifty years ago no oceanographer knew how to handle the wiggly records associated with random-phase wide-band processes. (Yet acousticians and opticians had done so for many years.) We could manage the discrete tidal line spectrum, and get away with the analysis of narrow-band processes such as distant swell, but we failed miserably in the analysis of storm waves or internal waves. Most ocean processes are wide-band!
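Wide-band, random-phase records are exactly what power-spectral analysis was built for. The sketch below (all parameters are my illustrative choices) synthesizes a record from many Fourier components with random phases spread over a band, then recovers that band with a naive periodogram; real analyses would use FFTs and band averaging.

```python
import cmath
import math
import random

# Synthesize a random-phase, wide-band record and estimate its spectrum.
# Parameters (record length, occupied band) are illustrative assumptions.

random.seed(1)
N = 128                               # record length in samples
band = range(10, 30)                  # occupied frequency bins (assumed)
phases = {k: random.uniform(0, 2 * math.pi) for k in band}

# Record = sum of equal-amplitude cosines with random phases in the band
record = [sum(math.cos(2 * math.pi * k * n / N + phases[k]) for k in band)
          for n in range(N)]

def periodogram(x):
    """Naive DFT power estimate |X_k|^2 / N for k = 0 .. N/2."""
    n_pts = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / n_pts)
                    for n in range(n_pts))) ** 2 / n_pts
            for k in range(n_pts // 2 + 1)]

P = periodogram(record)
in_band = sum(P[k] for k in band)
print(f"fraction of variance inside the band: {in_band / sum(P):.3f}")
```

The wiggly time series looks hopeless to the eye, yet the spectrum cleanly localizes the variance in the occupied band; that is the reproducible description the computer revolution made routine.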
In 1931, Ekman took some current measurements with a string of Ekman meters suspended on a vertical mooring. When I met Ekman in Oslo in 1949, he expressed disbelief that currents separated vertically by as little as 100 m could bear so little resemblance to one another, and he delayed publishing an analysis until shortly before his death. But there is nothing mysterious in the result: processes with a vertical wavenumber bandwidth Δk are incoherent at separations exceeding Δz = Δk⁻¹!
Among the seamark achievements are the recognition of an astounding spectral universality (within a factor of 2) under a wide variety of conditions (still not understood) and of the role played by internal waves in ocean mixing processes. The transformation to internal solitons (solibores) in near-shore regions (first recognized on satellite images) is becoming an important component in coastal studies.
There exists a class of wave motion that is coastally trapped. Wave crests and troughs extend perpendicular to shore and diminish exponentially with distance from shore. Propagation is in a direction parallel to shore. There are two varieties: the rotationally trapped Kelvin edge waves and the gravitationally trapped Stokes edge waves. Both were discovered in the nineteenth century and considered curiosities. Referring to the latter, Lamb writes: "it does not appear that the type of motion here referred to is very important." In fact, these curiosities are the very centerpiece of a rapidly developing coastal dynamics, one that is amazingly different and almost isolated from the deep ocean dynamics. It has turned out that the linear edge waves provide a linear core to the highly nonlinear coastal and littoral dynamics.
Gravitational edge waves are excited by incoming surface waves, depending in a complex (but predictable) way on the character of the wave system. The edge waves, in turn, determine the littoral dynamics, the bar formation and cusps in the beach profile, and the spacing of rip currents. For a given median sand-grain size and representative values of wave height, period, and direction, it is now possible to predict an equilibrium beach profile. Crucial elements in this development were the radioactive and fluorescent tagging of sand grains and the spectral representation of the incoming wave system. In a larger sense the underlying parameter space depends on the type of coast as determined by plate tectonics, and a mass balance determined by river discharge, cliff erosion, and the presence of submarine canyons.
This is another old subject that was revived by modern spectral analysis. In 1957, Miles and Phillips in two seamark papers6,7 discussed the generation of waves by wind, and a year later Phillips8 introduced the famous k⁻⁴ equilibrium spectrum. In 1963, Hasselmann first pointed out the crucial role played by the nonlinear energy transfer from the short and long components to the energetic central spectrum. The subject has now advanced to a point where wave prediction based on a given (past and future) wind field is routinely used in a wide range of human activities. Now that the wave field can be measured by synthetic aperture radar (SAR) satellites, I predict that the deconvolution of the wave field to provide wind data will become an important future application.
Tides are the earliest application of oceanography to human activities9 and were a favorite subject of Victorian mathematicians. This field, too, has been revived by the computer revolution. In 1969, Pekeris and Accad solved the Laplace tide equation over a world ocean with realistic topography, using the new GOLEM computer built at the Weizmann Institute.10 There was a need to compare the global computations with measurements in the open sea. Coastal tide gauges have been around for centuries, but the ability to measure deep-sea tides did not come until the early 1960s, when pressure gauges could be dropped freely to the deep seafloor and subsequently recalled acoustically; about 350 pelagic stations were occupied (mostly by Cartwright) in the 30-year window before satellite altimetry provided the means of truly global measurements. Tidal dissipation from the principal lunar tide is 2.50 ± 0.05 terawatts (TW), very accurately derived from the measured rate of 3.82 cm per year at which the Moon moves away from the Earth. Tidal dissipation may have important implications for ocean mixing (as discussed below).
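How the lunar recession constrains dissipation can be sketched in a back-of-envelope calculation (my own, not from the text): conservation of angular momentum converts the measured recession rate into a torque on the lunar orbit, and hence into the rate at which energy drains from the Earth-Moon system. This gives the total lunar tidal dissipation, of order 3 TW, of which the M2 constituent quoted above is the dominant part. The astronomical constants below are standard values I am assuming.

```python
import math

# Total lunar tidal dissipation inferred from the lunar recession rate.

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24           # Earth mass, kg
m = 7.35e22            # Moon mass, kg
a = 3.844e8            # mean Earth-Moon distance, m
Omega = 7.292e-5       # Earth spin rate, rad/s
n = math.sqrt(G * M / a**3)        # lunar mean motion (Kepler), rad/s
da_dt = 3.82e-2 / 3.156e7          # recession: 3.82 cm/yr, in m/s

# Orbital angular momentum L = m * sqrt(G M a), so the tidal torque is
# dL/dt = (m/2) * sqrt(G M / a) * da/dt
torque = 0.5 * m * math.sqrt(G * M / a) * da_dt

# Energy is lost at the rate (Omega - n) * torque as spin energy is
# transferred to the orbit and dissipated along the way.
D = (Omega - n) * torque
print(f"total lunar tidal dissipation ~ {D / 1e12:.1f} TW")
```

The result, about 3 TW for all lunar constituents combined, is consistent with the 2.5 TW attributed to M2 alone.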
There are other achievements. We have learned the importance in tidal modeling of allowing for the elastic yield of the solid Earth. A seamark achievement is G. Platzman's expansion into global ocean normal modes. Tidal studies have not been in the oceanographic mainstream; I am one of the very few people who think that lunar studies will become fashionable once more (there is a name for such people).
The Microscale Revolution
At the opposite end of the general circulation scale is the micro-(or dissipation) scale where energy is irreversibly converted into heat. We are talking about millimeters to centimeters, but just because the process scales are small does not mean their importance is small.
It was not always clear that the deep ocean was cold. In the seventeenth century, Boyle argued that the temperature must increase with pressure according to the gas law PV = nRT (as it does in the Mindanao Deep, from 1.7°C at 5 km to 2.5°C at 10 km). While passing through the tropics on a voyage to the East Indies, Boyle noticed that the cook was lowering some bottles of white wine over the side. "And why should you be doing this?" he asked, to which the cook replied, "Every gentleman knows that white wine must be chilled before serving." Surely this was one of the most decisive oceanographic experiments of all time.11
At the rate of 25 sverdrups of bottom-water formation, the oceans would fill up with ice-cold water in 3,000 years, forming a 1-m-thick thermal surface boundary layer controlled by molecular conductivity. Why is it that you do not freeze your toes every time you go swimming?
The answer is that turbulent mixing brings warm water downward. A scale depth of 1000 m (roughly as observed) requires 1000 times the molecular diffusivity, or about 10⁻⁴ m²/s. Is this in accordance with fact? It has taken 30 years to find out that it is not. Cox, Gregg, and Osborn, among others, have developed the instrumentation with the required vertical resolution and found typical pelagic values of 10⁻⁵ m²/s. Ledwell confirmed these values by in situ measurements of the diffusion of a dye patch. Although a discrepancy by a factor of 10 is not large in this context, it appears to be real. A possible interpretation is that most of the ocean mixing takes place in a few regions of rough topography and very high turbulence. Far higher diffusivities have in fact been measured by Schmitt, Toole, and Polzin near rough bottom topography in the South Atlantic Ocean. An ambitious experiment along the Hawaiian ridge is being planned.
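The scaling argument can be written out explicitly. Balancing upwelling against downward turbulent diffusion, w dT/dz = κ d²T/dz², gives an exponential temperature profile with scale depth d = κ/w, hence κ = w·d. The 25 Sv and 1000 m figures are from the text; the ocean area is my assumed value.

```python
# Abyssal upwelling/diffusion balance: what diffusivity does a 1000-m
# thermal scale depth require?

Sv = 1e6                     # 1 sverdrup = 10^6 m^3/s
Q = 25 * Sv                  # bottom-water formation rate (from text)
A = 3.6e14                   # ocean surface area, m^2 (assumed value)

w = Q / A                    # mean upwelling velocity, m/s
d = 1000.0                   # observed thermal scale depth, m (from text)
kappa = w * d                # diffusivity required by the balance, m^2/s

print(f"upwelling w ~ {w:.1e} m/s, required kappa ~ {kappa:.1e} m^2/s")
```

The upwelling comes out near 7 × 10⁻⁸ m/s (a few centimeters per day) and the required diffusivity near 10⁻⁴ m²/s, the value whose tenfold mismatch with pelagic measurements is the puzzle discussed above.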
Mixing associated with 10⁻⁴ m²/s requires 2 TW; the pelagic mixing rate of 10⁻⁵ m²/s requires 0.2 TW globally. Where does the energy for the mixing come from? Wind is an obvious candidate; tidal dissipation is another (2.5 TW are dissipated by the M2 tide alone, but nearly all of this has been claimed for dissipation in marginal seas).
Getting the mixing right is vital to any realistic modeling of ocean circulation and heat transport. In this connection we need to mention two other important developments. In 1956, Stommel (with Arons and Blanchard) published a paper: "An Oceanographical Curiosity: The Perpetual Salt Fountain."12 In a temperature-stable and salt-unstable stratification, a vertical hose, once primed, will pump up cold, salty (and nutrient-rich) deep water forever. Stern realized that this was associated with a fundamental instability (hose or no hose), and Turner developed this into the discipline of double-diffusive mixing.
The MEDOC (Mediterranean Deep Ocean Convection) experiment in 1969 (another Stommel brainchild) provided direct measurements of convective overturning. Prior to MEDOC there had been very little direct observational evidence for deep water formation.
THE CLIMATE REVOLUTION
In 1960, NSF awarded J. Bjerknes a grant of $30,000 per year for three years to study "Sea Surface Temperature and Atmospheric Circulation." This was the beginning of ENSO, a combined ocean (El Niño) and atmosphere (Southern Oscillation) phenomenon.
Milankovitch long ago computed long-term variations in the orbital parameters of the Earth-Sun-Moon system with periods from 20,000 to 100,000 years. In a remarkable development pioneered by Imbrie, the terms have now been detected in the ocean sediment record, and they provide important information concerning the atmosphere-ocean response to harmonic forcing.
Hasselmann pioneered an approach that in some sense is opposite to that of Milankovitch. He suggested a "random walk" of the climate state in response to random pulses associated with short-term "weather." The character of such random walks is that they lead to large long-time departures from the mean. It has been demonstrated that the random-walk excitation accounts for the dominant part of the observed climate variance.
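Hasselmann's stochastic climate model can be caricatured in a few lines: integrate white-noise "weather" forcing with weak damping (an AR(1) process) and watch slow, large excursions accumulate. This is only a toy sketch; the damping coefficient and record length are my illustrative choices.

```python
import random
import statistics

# Toy stochastic climate model: the "climate" state x integrates
# white-noise "weather" pulses with weak damping.

random.seed(0)
r = 0.99                      # damping per step (long memory, assumed)
nsteps = 20000

x = 0.0
states, forcing = [], []
for _ in range(nsteps):
    w = random.gauss(0.0, 1.0)    # unit-variance weather pulse
    x = r * x + w                 # climate state integrates the noise
    states.append(x)
    forcing.append(w)

# The integrated state carries far more variance than the forcing
# (theoretically 1 / (1 - r^2), about 50 for r = 0.99):
ratio = statistics.pvariance(states) / statistics.pvariance(forcing)
print(f"climate/weather variance ratio ~ {ratio:.0f}")
```

The point of the sketch is Hasselmann's: without any long-period forcing at all, short-term weather noise generates large low-frequency climate variance.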
The coupled ocean-atmosphere system is capable of complex feedbacks. A number of these have been identified: ENSO, the Pacific "decadal variation," and the North Atlantic Oscillation. It would appear that the three phenomena can account for a significant fraction of the ambient variance. El Niño has a recognizable linear component in a highly nonlinear equatorial dynamics: an equatorially trapped wave moving eastward at a rate of order 0.1 m/s (playing a role somewhat similar to the edge waves in highly nonlinear coastal and littoral dynamics). There has been significant progress in ENSO prediction.
Greenhouse warming has occupied center stage, largely because mankind can do something about this component of climate variability. Model predictions now have error bars of the same order as the predicted mean change. There is urgent need for observational testing. The inevitable result will be an improved modeling and an increased understanding of ocean processes.
In all of the climate problems, a first-order consideration is the oceanic and atmospheric equator-to-pole heat flux (3.7 × 10¹⁵ W across 24°N) required to maintain the global heat balance. In 1955, Sverdrup estimated that the ocean contributed 1.4 × 10¹⁵ W, and this was mostly in the wind-driven circulation. We now estimate that the ocean carries more than half the total load, with comparable contributions from the wind-driven and thermohaline circulations. Quite a change!
THE TECHNOLOGY REVOLUTION
We all agree that there has been a technology revolution in ocean sciences; Larry Clark's paper, later in the volume, presents highlights of this revolution. It probably would have made more sense if I had organized this review along those lines; more often than not, new ideas have come out of new technology, rather than the other way around.
High-speed computers led to an explosion in the 1950s in every branch of physical oceanography (I have already listed a few examples). Readily available analysis of noisy records led at last to a sensible and reproducible description of surface waves and internal waves. It opened the door to objective analyses of extensive and diverse data sets, matched-field processing of ocean acoustic transmissions, and the application of inverse theory to ocean measurements for an objective approach to estimating the validity of a given set of assumptions. Sadly, oceanographers had long found support for their favorite theories without such objective assessment. In reviewing some past experiments designed to answer certain questions, one finds that the proposed measurements could not possibly have decided the issue with any reasonable degree of probability even if all the measurements had worked (which was not always the case).
We have already referred to the revolution associated with the development of a deep-sea mooring technology. A similar case can be made for drifters, particularly those with a programmed depth strategy z(t), which have spearheaded a Lagrangian renaissance led by T. Rossby, D. Webb, and R. Davis. The oceans are a remarkably good propagator of sound (but not of electromagnetic energy), and this has played a profound role in ocean exploration starting with the acoustically navigated Swallow floats. The application of inverse methods has made possible the interpretation of the entire recorded field of an acoustic transmission in terms of the properties of the intervening water.
We must not overlook low-tech developments. A U.S. patent for the O-ring was awarded to Niels Christensen in 1939 (so Rita cannot claim credit for this seamark). Until the mid-1960s we used to load our gear into numerous boxes and carry them aboard the vessels, only to find that a crucial item had been left ashore. I think Frank Snodgrass was the first to build portable laboratories with the equipment assembled and pretested. The portable laboratory (Figure 1) is then brought aboard, ready for action. Decks of all oceanographic vessels now provide bolt-downs 2 feet on center for securing the portable laboratories. In about the same period we learned how to drop unattached instruments to the relatively benign environment of the deep seafloor, later to be recalled acoustically. There was a psychological block to overcome; it is not easy to let go of a line from which you have a year's budget of equipment hanging.
Satellites constitute the most important technology innovation in modern times. Oceanographers are a conservative lot; they did not welcome satellites with open arms. Apel came to Scripps and Woods Hole in 1970 to look for advice and support in planning SEASAT (an Earth satellite dedicated to oceanographic applications). He got neither. When he mentioned that satellite altimeters would measure dynamic height, a well-known oceanographer replied: "If you gave it to me, I would not know what to do with it." With regard to climate: given the reluctance to employ new technologies, given that some of the underlying processes are not yet understood, given the slow rate (as demonstrated over the last 50 years) at which new concepts are adopted, given the requirement of long time series for testing models, and given that long time series take long times, we cannot expect to "solve" the climate problem in the next decade.
One final attempt at generalization. The key change between the century of the Challenger and the last 50 years is adequate sampling. The key product of the Technology Revolution is sampling. The key contribution of the conductivity-temperature-depth (CTD) profiler to vertical profiling was not more precision than the Nansen bottle (in fact it was less), but continuity in vertical sampling. The most important satellite contribution, I think, is not the instrument packages (remarkable as they are), but the ability to sample the global ocean and sample adequately. A key contribution of computers and the associated move from analog to digital discrete recording was that a new generation of oceanographers understood what the previous generation had not: the requirements of the sampling theorem. Beware of ignoring the theorem; it is unforgiving. Even the uncanny intuition of a Fritz Fuglister for the behavior of the Gulf Stream was not able to overcome the inadequate sampling of his time.
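The unforgiving arithmetic of the sampling theorem can be shown in miniature. In this sketch (all numbers are my illustrative choices, loosely mimicking sparse station revisits), a 10-day oscillation sampled once every 9 days masquerades as a 90-day oscillation:

```python
# Aliasing: a signal above the Nyquist frequency of the sampling folds
# back to a spurious low frequency.

f_true = 1 / 10.0              # true frequency, cycles per day
dt = 9.0                       # sampling interval, days (undersampled)
f_nyq = 1 / (2 * dt)           # Nyquist frequency of this sampling

# Fold f_true back into [0, f_nyq]: subtract the nearest multiple of
# the sampling rate 1/dt.
f_alias = abs(f_true - round(f_true * dt) / dt)

print(f"Nyquist = {f_nyq:.4f} cycles/day; a {1/f_true:.0f}-day signal "
      f"aliases to a {1/f_alias:.0f}-day signal")
```

An observer trusting such a record would report a spurious seasonal-scale oscillation where none exists, which is one way a century of coarse sampling could miss the 100-day mesoscale entirely.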
One person easily stands out in this brief account: Henry Stommel (Figure 2). Stommel joined Woods Hole in 1944 and died there in 1992. In his later years, NSF provided him substantial and continuous support. Stommel was the first to develop an intuition about the conservation of potential vorticity, with far-reaching consequences. In 1954, he privately printed a pamphlet entitled "Why Do Our Ideas About Ocean Circulation Have Such a Dream-Like Quality?" Dream-like, indeed. An ocean with currents of 10 ± 1 cm/s (as we then thought) is an ocean far different from one with 1 ± 10 cm/s. My teacher Harald Sverdrup considered it one of the chief functions of physical oceanographers to provide biologists with the background information for studying life in the sea. I am afraid that our concepts were too dream-like to provide useful guidance. Today we can provide information that is useful. Surely this is a revolutionary change! Stommel led this 50-year transition from a dream-like to an (almost) realistic ocean.
John Knauss discusses the transition from ONR to NSF dominance earlier in this volume. The changes are profound, nothing short of another revolution (Plate 6). (I must confess to a certain nostalgia for the old ONR days.) This is an inevitable result of going from adventure to public service. We enter the prediction arena at a high price; our failures (and there have been many) will now be publicly vented. My plea to NSF for the next 50 years is to support a few "curiosities" and other high-risk ventures and to retain a tolerance for failure.
I thank L. Armi, M. Hendershott, R. Knox, J. Layton, P. Niiler, J. Reid, and C. Wunsch for discussions, but the (very limited) selection of achievements reported here is entirely mine.