
3 Study Design

The National Research Council's Committee on an Assessment of Research Doctorate Programs directed its research at fulfilling the following task:

    An assessment of the quality and characteristics of research-doctorate programs in the United States will be conducted. The study will consist of (1) the collection of quantitative data through questionnaires administered to institutions, programs, faculty, and admitted-to-candidacy students (in selected fields); (2) the collection of program data on publications, citations, and dissertation keywords; and (3) the design and construction of program ratings using the collected data, including quantitatively based estimates of program quality. These data will be released through a web-based, periodically updatable database and accompanied by an analytic summary report. Following this release, further analyses will be conducted by the committee and other researchers and discussed at a workshop focusing on doctoral education in the United States. The methodology for the study will be a refinement of that described by the Committee to Examine the Methodology for the Assessment of Research-Doctorate Programs, which recommended that a new assessment be conducted.

This chapter describes how the study was organized for that purpose.

PH.D. PROGRAMS AS THE UNIT OF ANALYSIS

Like all large organizations, research universities in the United States consist of many related parts. These parts include the central administration, which oversees and coordinates the others; the school or division, which has a faculty, admits students, and focuses on a large academic area such as engineering or the arts and sciences; and the department, which tends to represent a discipline, that is, a field of teaching and learning within that larger area. The faculty of a department specialize in the discipline and offer a curriculum that organizes and transmits disciplinary knowledge.

For doctoral education another administrative unit is of central importance: the graduate program. In most graduate schools the program admits doctoral students, works with the graduate school to fund them, designs their course of study and advisement, establishes the partnerships between mentoring faculty members and students that are the

bedrock of doctoral education, and recommends a successful student for a degree. The program best represents the site at which students do their studies and associate with other students and faculty. As a result, most of the data in this study relate to doctoral programs and their faculty.

The committee's decision was logical, but it also presents some complex problems for the most accurate possible representation of doctoral education. Perhaps the most vexing issue the committee faced was how to reconcile the various ways that universities structure their graduate educational experiences. Universities do not follow one standard method of organizing graduate education. As a result, in many fields there is substantial variability in the names of programs and in their content.

The years since 1993 have been characterized by increasing interdisciplinarity in doctoral programs and a blurring of the boundaries across fields, which has been manifested in a variety of ways. An example would be a neuroscience Ph.D. program that involves faculty from several departments and literally "cuts across" departmental lines. Even when a Ph.D. program is offered by a single department, however, it may include faculty from other departments, called "associated faculty" here, and thus it will have an interdepartmental or interdisciplinary character. A major challenge faced by this study was to find measures that do justice to the growth of interdisciplinarity in doctoral education. In the end, the questions asked and the measures constructed to gauge interdisciplinarity met with limited success. One measure tried was the proportion of faculty from outside the program who helped to supervise dissertations. This measure, however, underestimates interdisciplinarity that is internal to the program. The committee also asked programs whether they were interdisciplinary. A large proportion answered yes, suggesting more extensive interdisciplinarity than that measured by the share of associated faculty.

In contrast to classifying graduate programs, classifying academic disciplines is comparatively straightforward because of the reasonably high level of consensus within a field about its general boundaries and its major subspecialties and subcategories. Some fields have relatively few subspecialties, and the basic predissertation years of doctoral education are similar for all students in the program. However, disciplines and specialties that have grown out of other disciplines (such as biochemistry) or that have emerged from earlier interdisciplinary work present knotty problems of program classification, given the variety of ways in which different universities organize doctoral education.

The biological and health sciences, a broad field that proved difficult to address in prior assessments, again proved the most problematic in this assessment. The swift growth of knowledge in the biological and health sciences, with revolutionary changes in only a few decades or less, has produced rapidly evolving and highly differentiated ways of organizing graduate education in this field. The increasingly interdisciplinary character of the biological and health sciences is both a cause and a consequence of these academic and institutional changes.
Interdisciplinarity means that faculty members from several disciplines and programs have multiple responsibilities for training graduate students and identify with several of the programs offered at the university. As a result, obtaining agreement on the classification of core programs within disciplines in this field proved a difficult task. The committee recognized that it had to disaggregate the unit of analysis beyond the general disciplinary name: it could not lump all biological and health science programs together and get an accurate representation of the experiences of students in various parts of the biological and health sciences at a university. Unfortunately, there was no consensus about the nomenclature for programs within the biological and health

sciences, because different universities classify their biological programs differently. The committee thus worked closely with leaders in the disciplines before arriving at broadly acceptable names for the various programs that would be assessed.

In asking about the student experience within these programs, the committee had to remember that students at some universities are admitted to biological and health sciences programs without having to choose an area of specialization until the second or third year of study. In principle, such an approach allows students to "find" their interest before choosing the special area in which they will do their doctoral research. These programs often call themselves "Biological Sciences" or "Integrated Biological Science." If the Ph.D. was offered in a program with such a name, it was reported as such. If the Ph.D. was offered in a more specialized area, then the program was given the name of that area.

Even as the committee sought to find reasonable patterns in the names of programs, it realized that increasingly "the laboratory" might be becoming the meaningful unit of analysis in some disciplines. Although this development is more often true at the postdoctoral level than at the Ph.D. level, the committee found evidence of graduate students identifying their own intellectual roots or heritage with the laboratory of their thesis adviser or of the professor who organizes a laboratory. In short, even the "program" as the unit of analysis may not fully capture the source of the research training received by graduate students. And it seems increasingly true that the faculty sponsors of doctoral students have a greater influence on the next steps in their careers than the program faculty as a whole. Yet on balance the committee believes that the core educational experience of doctoral students takes place within a program that embraces both the course work they experience with multiple members of the faculty and the concentrated research experience within the laboratory of one or several faculty members, in seminars, or in individual discussion with faculty.

Outside of the biological and health sciences, the program as the unit of analysis also does some injustice to new, interdisciplinary programs that have not yet settled into a standard curriculum and a standard method of organizing the educational experience. These programs transcend traditional boundaries and include experts from several existing disciplines. The names of these interdisciplinary programs often vary, and it is not altogether evident that what is being taught in them is in fact comparable. For example, at the graduate level what are the academic relations between women's studies and gender studies? Thus, admittedly, interdisciplinary programs, even though they are becoming increasingly important at universities, are shortchanged in an evaluation built around more standard scholarly and scientific programs. Moreover, they may not be sufficiently numerous on the national scene to make comparative ratings possible.

Interdisciplinary studies and collaborations may give rise to new programs of study that evolve over time into fields distinct from their origins. Examples of recently emerged, but now established and recognized, fields fully surveyed here are biomedical engineering and American studies. These fields reflect the maturation of research areas that originally were interdisciplinary.
Other fields may emerge from a single discipline, just as aerospace engineering has arisen from mechanical engineering. Some of the currently emerging fields identified by the committee are nanoscience, systems biology, urban studies and planning, and film studies. For these emerging fields the committee collected data only on the number of faculty (core, new, and associated) and the number of students overall and in candidacy.

This information should be useful for future benchmarking studies and may assist prospective students in identifying these programs.

A final challenge inherent in making the program the unit of analysis was how to measure the workload of faculty members, whose appointment generally lies in a single department but who participate in more than one graduate program. Programs draw on faculty from within the discipline and, to some extent, on colleagues in related disciplines. For example, a nanoscientist may offer instruction and research guidance in the fields of physics, applied physics, and chemistry. A neuroscientist may work with students in programs ranging from biochemistry to cognitive psychology. A history professor may work in history, African American studies, and American studies. How, then, were professors to be assigned to programs?

The committee discussed this question at length and largely agreed on one principle: it should not allow double counting, and it should try to prevent universities from assigning their most prolific and distinguished faculty to multiple programs unless they actually expended "effort" within them. Faculty members would demonstrate the effort they put into each program primarily through the number of doctoral students whose dissertations they advised or on whose doctoral committees they served. The total amount of time spent by a faculty member across all the programs in which he or she is involved could not exceed 100 percent.

The committee was aware that allocations of faculty time are sometimes not easily determined. Moreover, some faculty members have large responsibilities in multiple programs (many graduate students and many sponsored dissertations), while others do far less training and mentoring of students. In actual time and energy spent, 50 percent effort by some faculty members in a program may in fact be greater than 100 percent effort by others. Of course, this observation also applies to human activity outside of research universities.

Faced with the practical question of whether the allocations of faculty time were realistic, the committee counted the dissertations that faculty members were directing and allocated their time among the programs in which they served. It then asked institutional coordinators to consult with the programs to judge whether this numerical allocation adequately reflected how a faculty member's time should be allocated across several programs. In a few cases it did not, and the committee accepted the allocation provided by the institution. This decision was important because the total publications of faculty in a program were adjusted by the allocation of the faculty member to the program.

Despite these problems of classification and assignment, the committee believes that the program continues to represent the unit that most accurately defines the range of experiences of the graduate student once admitted to a specific department or program. In this assessment, quantitative data on 4,839 programs have been assembled (see Table 3-1). These programs correspond to six broad fields and 59 different academic disciplines. Each program was subjected to an overall, primary assessment represented by a range of rankings.
In addition, assessments were conducted of three separate dimensions of doctoral education: (1) research activity; (2) student support and outcomes, a measure that reflects program characteristics that are specifically relevant to the student experience; and (3) diversity of the academic environment, a measure that includes the gender, racial, and ethnic diversity of the faculty and of the student body, as well as a measure of the percentage of international students.[1] Taken together, these individual assessments represent a comprehensive assessment of Ph.D. education in the United States.

[1] The components of these measures are shown in Tables 5-2A–C.

TABLE 3-1 Numbers of Programs and Institutions in Each Broad Field

  Broad Field                          Programs in the Broad Field    Institutions with Programs in the Field
  Agricultural sciences                   312                          70
  Biological and health sciences        1,168                         191
  Physical and mathematical sciences      911                         182
  Engineering                             759                         151
  Social and behavioral sciences          924                         180
  Humanities                              764                         146
  Total                                 4,838                         221

FIELD COVERAGE

The studies by the NRC in 1982 and 1995 focused primarily on fields in the arts and sciences and engineering. However, the committee recognizes that research-doctorate programs have grown and diversified since 1993 and that research doctorates are not limited to the arts and sciences. Therefore, the taxonomy for this study has been expanded from the 41 fields in 1993 to the current 62 fields, of which 59 have program rankings. In addition, the study has placed more emphasis on work that extends beyond a single field, and so 14 emerging fields are included to recognize the growth of multi-, cross-, and interdisciplinary study. It is anticipated that many of these fields could become established areas of scholarship and eligible for inclusion in future studies.

Finally, when the committee developed the taxonomy it expected that each field would have enough programs to be ranked, but after it administered the program questionnaires it found that three fields could not be ranked: languages, societies, and cultures (LSC); engineering science and materials (not elsewhere classified); and computer engineering. LSC could not be ranked because the subfields were too heterogeneous for raters to provide informed rankings across them, and no subfield was large enough that rankings could be calculated for it alone. Computer engineering was put forward as a field separate from electrical engineering, but the universities in the study reported only 20 computer engineering programs. Similarly, engineering science and materials (not elsewhere classified) did not have enough programs to be included in the rankings. Although rankings are not provided for these fields, full data are provided in the online data that accompany this study.

DEVELOPMENT OF THE TAXONOMY

Immediately after the release of the 1995 study, some institutions and users expressed concerns about the scope of fields covered and the taxonomy. During the period leading up to the current study, some fields, such as communications, kinesiology, and theater research, matured and established themselves. Other areas, such as doctoral education in nursing, public health, and public administration, convinced the committee that they had emerged from predominantly master's fields into established areas of doctoral research. The 1995 study report specifically mentioned the difficulty encountered in defining fields in the biological and health sciences. Furthermore, the taxonomy did not cover fields in the agricultural sciences, and coverage of Ph.D. programs in the basic biomedical sciences that were housed in medical schools was spotty.

In establishing the taxonomy of fields to be included in the current study, the committee used as a starting point Assessing Research-Doctorate Programs: A Methodology Study, the 2003 report of the Committee to Examine the Methodology for the Assessment of Research-Doctorate Programs.[2] On the one hand, it recognized that the taxonomy should build on previous taxonomies in order to maintain continuity with earlier studies, that it should correspond as much as possible to the actual programmatic organization of doctoral studies, and that it should capture the development of new and diversifying activities. On the other hand, it recognized that there is no "right" way of organizing academic fields. The organization used by one university as opposed to another is often an outcome of historical circumstances rather than of some universal organizing principle. Faced with this variability, the committee generally adopted whatever seemed to be the most commonly used current taxonomic divisions.

[2] National Research Council, Assessing Research-Doctorate Programs: A Methodology Study (Washington, D.C.: National Academies Press, 2003).

To return to the example of biology, the changes in biology that were evident in the 1995 study have transformed the discipline. Biology is now a complex field that appears under the umbrella of the biological and health sciences, a grouping with 19 fields and 3 emerging fields. The impact of new technology, the digital revolution, and the explosion of knowledge at the molecular level have moved the biological sciences from fields defined by levels of organization to problem-based, interdisciplinary fields. The inclusion of immunology and infectious disease as a field exemplifies this change, as does the modification of pharmacology to include toxicology and environmental health. The growing importance of computation to biology is evident in the subfields of genetics and genomics and of neuroscience and neurobiology, as well as in the presence of bioinformatics as an emerging field.

All the biological science fields represented in the 1995 study are retained in this study, with several noteworthy changes. Biochemistry now appears as biochemistry, biophysics, and structural biology rather than biochemistry and molecular biology. Biophysics and structural biology are new to this study, and the committee relegated

molecular biology to subfield status after discussion of whether molecular biology has become more of a technique integrated into many areas than a separate field.

Commonalities in research methodology, along with research problems that increasingly merge traditional disciplines, have resulted in greater integration across the life sciences. In the biomedical sciences in particular, newly developed programs offer students a common port of entry to a wide range of disciplines or, alternatively, degrees are offered in integrated programs without further differentiation. Programs draw on faculty from across the campus, making the assignment of faculty to programs more complex. These changes to traditional disciplinary structures have blurred the boundaries of research fields and departments and challenged the committee to define what was being rated. The inclusion of biology/integrated biomedical science accommodated these programs. The fields covered in the two studies are shown in Table 3-2.

TABLE 3-2 Fields in 1993 and 2006 Data Collection (1993 field → 2006 field)

Agricultural sciences (all new in 2006)
  (new) Animal sciences
  (new) Entomology
  (new) Food science
  (new) Forestry and forest sciences
  (new) Nutrition
  (new) Plant sciences

Biological and health sciences
  Biochemistry and molecular biology → Biochemistry, biophysics, and structural biology
  Cell and developmental biology → Cell and developmental biology
  Ecology and evolutionary biology → Ecology and evolutionary biology
  Molecular and general genetics → Genetics and genomics
  Neurosciences → Neuroscience and neurobiology
  Pharmacology → Pharmacology, toxicology, and environmental health
  Physiology → Physiology
  (new) Biology/integrated biomedical sciences (used only if the degree field is not specialized)
  (new) Immunology and infectious disease
  (new) Kinesiology
  (new) Microbiology
  (new) Nursing
  (new) Public health

Engineering
  Aerospace engineering → Aerospace engineering
  Biomedical engineering → Biomedical engineering and bioengineering
  Chemical engineering → Chemical engineering
  Civil engineering → Civil and environmental engineering
  Electrical engineering → Electrical and computer engineering
  Materials science → Materials science and engineering
  Mechanical engineering → Mechanical engineering
  Industrial engineering → Operations research, systems engineering, and industrial engineering
  (new) Computer engineering [a]
  (new) Engineering science and materials (not elsewhere classified) [b]

Physical and mathematical sciences
  Astrophysics and astronomy → Astrophysics and astronomy
  Chemistry → Chemistry
  Computer sciences → Computer sciences
  Geosciences → Earth sciences
  Mathematics → Mathematics
  Oceanography → Oceanography, atmospheric sciences, and meteorology
  Physics → Physics
  Statistics and biostatistics → Statistics and probability
  (new) Applied mathematics

Social and behavioral sciences
  Anthropology → Anthropology
  Economics → Economics
  Geography → Geography
  Political science → Political science
  Psychology → Psychology
  Sociology → Sociology
  (new) Agricultural and resource economics
  (new) Communication
  (new) Linguistics (moved from humanities)
  (new) Public affairs, public policy, and public administration
  1993 only: History (moved to humanities)

Humanities
  Classics → Classics
  Comparative literature → Comparative literature
  English language and literature → English language and literature
  French and Francophone language and literature → French and Francophone language and literature
  German language and literature → German language and literature
  Art history → History of art, architecture, and archeology
  Music → Music
  Philosophy → Philosophy
  Religion → Religion
  Spanish and Portuguese language and literature → Spanish and Portuguese language and literature
  (new) American studies
  (new) History (moved from social sciences)
  (new) Languages, societies, and culture (no rankings)
  (new) Theater and performance studies
  1993 only: Linguistics (moved to social sciences)

Total fields: 41 in 1993; 62 in 2006 (3 unranked)

Emerging fields (2006): Bioinformatics; Biotechnology; Computational engineering; Criminology and criminal justice; Feminist, gender, and sexuality studies; Film studies; Information science; Nanoscience and nanotechnology; Nuclear engineering; Race, ethnicity, and post-colonial studies; Rhetoric and composition; Science and technology studies; Systems biology; Urban studies and planning

Note: In the printed table, italics indicate a new or reclassified field; here such fields are marked "(new)" or carry a "moved" annotation.
[a] Computer engineering was not ranked because relatively few universities provided data about computer engineering as a field distinct from electrical and computer engineering.
[b] Engineering science and materials (not elsewhere classified) was not ranked because relatively few universities provided data.

As early as 1996 a planning meeting was held to consider a separate study of the agricultural sciences, because they were not included in the 1995 study. That study did not go forward, however, because of funding considerations, and the decision was made to wait until a more comprehensive study could include these fields. Thus this study includes six agricultural fields in the agricultural sciences category and one agricultural field (agricultural and resource economics) in the social and behavioral sciences category. Most of these programs are located in colleges or schools of agriculture in the land-grant universities or other public universities. Some of these fields include groupings of programs that may be separate entities at some institutions. For example, the plant sciences include programs that may be named agronomy, horticulture, plant pathology, or crop sciences at different institutions; the animal sciences include programs that might be named dairy science, animal science, or poultry science at different institutions.

Many excellent research-doctorate programs in the basic biomedical sciences are located in colleges or schools of medicine. The biological and health sciences taxonomy recognizes this fact and provides for the inclusion of such programs among the basic biological research-doctorate programs. It also recognizes the maturation of several interdisciplinary programs, such as neuroscience, into established independent fields.

The treatment of psychology as a field has changed from its treatment in the 1995 study, which included a number of programs in clinical psychology. During the late 1990s, some universities with established programs in clinical psychology awarded a Psy.D. degree rather than a Ph.D. In its data collection, the committee asked universities to exclude from the study their clinical programs and their faculty, even if those programs awarded a Ph.D., but this request was not heeded in all cases.[3]

[3] Four out of 146 psychology programs were called "clinical psychology." It is not clear how many other programs were primarily clinical in their focus.

ELIGIBILITY CRITERIA FOR FIELDS AND PROGRAMS

The committee chose to preserve the criteria from the 1995 study for the selection of fields to be included in the current study. To be included, a field as a whole had to have (1) granted at least 500 doctorates in the last five years (2001–2002 to 2005–2006) and (2) been represented in at least 25 institutions. Taken together, these criteria ensure that the field is a significant presence in doctoral education and that there are enough programs nationwide to make comparison meaningful. Fifty-nine fields met these criteria.

The unit of observation in this study is the doctoral program. A program is a unit of graduate study that is defined by its performance of at least three of the following four activities:

1. Enrolls students in doctoral study
2. Designates its own faculty
3. Develops its own curriculum
4. Recommends students for doctoral degrees

To be included in the study, a doctoral program meeting these criteria must also have produced at least five doctorates between 2001–2002 and 2005–2006. This quantitative criterion is designed to ensure that doctoral education and research are a central part of the mission of any included program. Given these ground rules, institutions were asked to name the programs they wished to see included in the study. They named 4,839, for which the committee calculated illustrative ranges of rankings.
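To make the two screens concrete, here is a minimal sketch of how the field and program criteria could be applied. It is ours, not the committee's code, and the record layout (the dictionary keys) is invented for illustration:

```python
# Illustrative sketch only; record layout and names are invented, not the NRC's.
# Field screens: >= 500 doctorates granted in 2001-02 through 2005-06, across
# >= 25 institutions. Program screen: >= 5 doctorates over the same window.

FIELD_MIN_DOCTORATES = 500
FIELD_MIN_INSTITUTIONS = 25
PROGRAM_MIN_DOCTORATES = 5

def eligible_fields(programs):
    """programs: iterable of dicts with keys 'field', 'institution',
    and 'doctorates_2001_2006'. Returns the set of eligible fields."""
    doctorates = {}
    institutions = {}
    for p in programs:
        f = p["field"]
        doctorates[f] = doctorates.get(f, 0) + p["doctorates_2001_2006"]
        institutions.setdefault(f, set()).add(p["institution"])
    return {f for f in doctorates
            if doctorates[f] >= FIELD_MIN_DOCTORATES
            and len(institutions[f]) >= FIELD_MIN_INSTITUTIONS}

def eligible_programs(programs):
    """Programs that sit in an eligible field and meet the 5-doctorate floor."""
    fields = eligible_fields(programs)
    return [p for p in programs
            if p["field"] in fields
            and p["doctorates_2001_2006"] >= PROGRAM_MIN_DOCTORATES]
```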

PARTICIPATION IN THE STUDY

In September 2005 Ralph J. Cicerone, chair of the National Research Council, wrote to the presidents of all universities offering doctoral programs to invite them to participate in the study. The invitation explained the purpose, organization, and time line of the study and encouraged the institutions to contribute funding to it. Contribution guidelines were determined by the number of Ph.D.'s granted in the fields in the NRC taxonomy over the period 2001–2002 to 2003–2004. Copies of the letter were sent to the provost and graduate dean. Although universities were asked to contribute to the study, and most did, a financial contribution was not a requirement for participation. Indeed, the financial contributions of U.S. institutions of higher education, while vital to the study, were small compared with the value of the very significant efforts by senior staff at the participating institutions to gather, check, collate, and communicate the data requested from their schools. In many cases such data had not been collected in the past, and the efforts initiated in response to the questionnaires were far from trivial.

QUESTIONNAIRE DEVELOPMENT AND DATA COLLECTION

A Panel on Data Collection composed of graduate deans and institutional researchers was tasked with drafting questionnaires for this study. Starting from survey instruments drafted originally by the 2003 study committee that developed a methodology for the assessment, the panel drafted questionnaires for four groups of respondents: institutions, programs, faculty, and students. After approval by the committee, the questionnaires were posted on the project web site, and participating institutions were asked to comment on them. The e-mail list created for this purpose was open to anyone from a participating institution who was working on the study. Through the list, the NRC received hundreds of comments and suggestions. Answers were posted by both NRC staff and the survey contractor, Mathematica Policy Research. The comment and response processes were open and iterative and, as such, resulted in decisions that were acceptable to most institutions and programs but did not fit all (see Appendix D for copies of the questionnaires). Each of the four questionnaires was also reviewed by the National Research Council's Institutional Review Board (IRB), and many institutions required that the questionnaires be reviewed by their own IRBs. The introductory section of the questionnaires was revised to comply with their recommendations when needed.

In designing the questionnaires the committee had to make many choices. In some cases the choices were obvious; in others they were less so and therefore engendered considerable debate among the committee members. These issues included definitions and choices of what information to collect. The program was chosen as the primary unit for the study because programs admit students, offer degrees, and are the obvious target of student interest.

The treatment of faculty presented a more difficult problem. In many institutions emeritus faculty play an important role in teaching and research, as do adjunct faculty. For this study the committee chose to define faculty as those who had directed doctoral dissertations within the last five years. It recognizes that many individuals whom it is not "counting" as faculty make valuable contributions, but for uniformity and consistency it chose this definition.

There is also inconsistency across programs in the definition of a doctoral student. In most programs students apply directly for admission to the doctoral program without having first obtained a master's degree. Other programs, however, do not admit students to the doctoral program until they have satisfactorily completed a master's degree and shown they are capable of carrying out work in a doctoral program. The committee asked programs which definition they used and, under that definition, how many students they enrolled.
The proliferation of multidisciplinary or cross-disciplinary programs presents a problem in how to "allocate" faculty. To ensure that the total number of faculty members across programs equaled the total number of faculty in the study, the committee had to allocate faculty to programs. A self-allocation procedure, in which faculty assigned themselves or were assigned by their institutions, was deemed unacceptable by most of the committee, because that procedure allowed allocations that did not accurately reflect the strength of programs. The formula eventually developed related allocation to the number of dissertations chaired by individual faculty members.[4] The resulting allocations were, however, reviewed by the institutions, which in a small number of cases revised them if they felt they were unreasonable or not representative of a faculty member's scholarly efforts.

Finally, the committee had to decide which kinds of data to collect. Two factors were important. First, the data had to be useful to the readers of the report, especially to potential students. Second, the data had to be consistent and available in an accessible form. Some data, such as publications in the scholarly literature and citation indices, can be obtained from commercial databases, and information about federal grants is available as well. By contrast, institutional data such as time to degree, levels of student support, and infrastructure investment are not uniform and not always available or easily compared.

[4] Faculty productivity (citations and publications) was allocated by the following formulas. For faculty members who are core in one or more programs that fall within the NRC taxonomy (regardless of the number of programs with which they may be associated),

$$A_i = \frac{5P_i + n_i + 5\left(\dfrac{d_i}{m}\right)}{\sum_i \left(5P_i + n_i + 5\left(\dfrac{d_i}{m}\right)\right)},$$

where $A_i$ is the share of publications and citations allocated to the faculty member in program $i$; $P_i$ is the number of committees in program $i$ for which the faculty member serves as chair or principal adviser; $n_i$ is the number of committees in program $i$ on which the faculty member serves in a capacity other than chair or principal adviser; $d_i$ is a variable that takes the value 1 if the faculty member is a core faculty member in program $i$ and 0 otherwise; and $m$ is the total number of programs in which the faculty member is a core faculty member. For faculty members who are core in a program in a nonincluded field but are listed as associate faculty in an included one,

$$A_i = \frac{5P_i + n_i}{2\sum_i \left(5P_i + n_i\right)},$$

where $P_i$ and $n_i$ are defined as above. The factor of 2 in the denominator was included to reduce the overallocation of associate faculty members when information is not available on their core programs: without it, the $5(d_i/m)$ term present in the first formula would become proportionally smaller as these faculty sat on more and more committees outside their core program, pushing the allocation toward 100 percent. Multiplying the denominator by 2 caps the allocation for associate faculty members (who are core in a nonincluded field or program) at 50 percent. For new faculty members, all publications and citations were allocated to their core program(s), because they will not yet have a record of dissertation committee service; for new faculty listed in more than one program (such as a joint appointment), the allocation was split evenly among their programs.

These allocations were calculated directly by MPR from the faculty lists.
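For concreteness, footnote 4's two formulas can be written out in code. This is our sketch under our own naming (the function names, arguments, and program keys are invented; MPR's implementation is not published):

```python
# Sketch of the footnote 4 allocation formulas; data layout is ours for illustration.

def core_allocations(chaired, served, core_programs):
    """Allocation shares A_i for a faculty member who is core in at least one
    included program.  chaired[i] = committees chaired in program i (P_i);
    served[i] = other committee service in program i (n_i);
    core_programs = programs where the member is core (d_i = 1)."""
    m = len(core_programs)                       # total number of core programs
    programs = set(chaired) | set(served) | set(core_programs)
    weight = {i: 5 * chaired.get(i, 0) + served.get(i, 0)
                 + (5 / m if i in core_programs else 0.0)
              for i in programs}
    total = sum(weight.values())
    # For a brand-new member with no committee record, the 5/m terms alone
    # split the allocation evenly across core programs, as footnote 4 specifies.
    return {i: w / total for i, w in weight.items()}

def associate_allocations(chaired, served):
    """Shares for a member who is core only in a nonincluded field; the factor
    of 2 caps the total allocated share at 50 percent."""
    programs = set(chaired) | set(served)
    weight = {i: 5 * chaired.get(i, 0) + served.get(i, 0) for i in programs}
    denom = 2 * sum(weight.values())
    if denom == 0:
        return {i: 0.0 for i in programs}        # no committee service recorded
    return {i: w / denom for i, w in weight.items()}

# Example: a member core only in "physics" who chairs 2 committees there and
# serves on 3 in "applied physics" gets shares 15/18 (~0.83) and 3/18 (~0.17).
```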

The characteristics for which data appear for each program in the online data tables, and how they are measured, are shown in Table 3-3.

TABLE 3-3 Characteristics Listed in the Online Data Table

General Information
  A: Program ID
  B: Broad Field
  C: Field
  D: Institution Name
  E: Program Name
  F: Program Website
  G: Control: Public or private institution.
  H: Regional Code: 1 = Northeast; 2 = Midwest; 3 = South Atlantic; 4 = South Central; 5 = West.
  I: Program Size: Quartile 1 is smallest; 4 is largest. Quartiles are based on Number of Students Enrolled, Fall 2005 (see Column AT).

R Rankings
  J: R Rankings, 5th Percentile: 5th percentile value of the program's R ranking.
  K: R Rankings, 95th Percentile: 95th percentile value of the program's R ranking.

S Rankings
  L: S Rankings, 5th Percentile: 5th percentile value of the program's S ranking.
  M: S Rankings, 95th Percentile: 95th percentile value of the program's S ranking.

Dimensional Rankings
  N: Research Activity, 5th Percentile: 5th percentile value of the program's ranking for faculty research activity in 2006.
  O: Research Activity, 95th Percentile: 95th percentile value of the program's ranking for faculty research activity in 2006.
  P: Student Support & Outcomes, 5th Percentile: 5th percentile value of the program's ranking for student support and outcomes in 2006.
  Q: Student Support & Outcomes, 95th Percentile: 95th percentile value of the program's ranking for student support and outcomes in 2006.
  R: Diversity, 5th Percentile: 5th percentile value of the program's ranking for diversity in 2006.
  S: Diversity, 95th Percentile: 95th percentile value of the program's ranking for diversity in 2006.

Data: Research Activity
  T: Average Number of Publications (2000-2006) per Allocated Faculty, 2006: The total over seven years, 2000-2006, of the number of articles for each faculty member, divided by the total number of faculty allocated to the program. Data were obtained by matching faculty lists supplied by the programs to the Thomson Reuters list of publications. The list of journals included in the ISI database can be found at http://science.thomsonreuters.com/mjl/; for journal coverage in 2005-2006, contact Thomson Reuters. Books were not counted for the non-humanities.
  U: Average Citations per Publication (Non-Humanities): The annual average of the number of allocated citations in the years 2000-2006 to papers published during the period 1981-2006 by program faculty, divided by the allocated publications that could contribute to the citations. For example, the number of allocated citations for a faculty member in 2003 is found by taking the 2003 citations to that faculty member's publications between 1981 and 2003. These counts are summed over the total faculty in the program and divided by the sum of the publications allocated to the program in 2003. Citations were not calculated for the humanities.
  V: Percent of Faculty with Grants, 2006: The faculty questionnaire asks whether a faculty member's work is currently supported by an extramural grant or contract. The number of faculty who answered affirmatively was divided by the total respondents in the program to calculate the percentage.
  W: Awards per Allocated Faculty Member, 2006: Based on a review of 1,393 awards and honors from various scholarly organizations. The committee identified awards as "Highly Prestigious" or "Prestigious," with the former given a weight five times that of the latter. Award recipients were matched to the faculty in all programs, and the total awards for a faculty member in a program is the sum of the weighted awards times the faculty member's allocation to that program. These awards were summed across the faculty in a program and divided by the total allocation of the faculty in the program.

Data: Student Support & Outcomes
  X: Percent of First-Year Students with Full Financial Support, Fall 2005: For each program, question E5 reported the number of full-time, first-year graduate students who received full financial support during the fall 2005 term. This number was divided by the total number of full-time, first-year doctoral students enrolled in fall 2005. When zero first-year students were enrolled, this value was imputed (the average over the field).
  Y: Average Completion Ratio, 6 Years or Less: Questions C16 and C17 reported, for males and females separately, the number of graduate students who entered in each cohort from 1996-1997 to 2005-2006 and the number in each cohort who completed in 3 years or less; in their 4th, 5th, 6th, 7th, 8th, or 9th year; and in 10 or more years. To compute the completion rate for a given entering cohort, the number of doctoral students who completed in 3 years or less or in their 4th, 5th, or 6th year was totaled and divided by the number of entering students in that cohort. This computation was made for each cohort that entered from 1996-1997 to 1998-1999 for the humanities and from 1996-1997 to 2000-2001 for the other fields. Later cohorts were not considered, since their students could complete after 2005-2006, the final year for which data were collected. The average completion rate is the average over 3 cohorts for the humanities and over 5 cohorts for the other fields.
  Z: Median Time to Degree (Full- and Part-Time Graduates), 2006: Question C2 reported the median time to degree for full-time and part-time students; that reported number was used for this variable. The median was calculated from graduates who received doctoral degrees in the period 2003-2004 through 2005-2006.
  AA: Percent with Academic Plans: A crosswalk was generated between the NSF Doctorate Record File (DRF) specialty fields of study and the fields in the study taxonomy. Data from the DRF for 5 years (2001-2005) were matched by field and institution to the programs in the research-doctorate study. The percentage was computed by dividing the number of individuals who had signed or were negotiating a contract for a position at an educational institution by the number of survey responses. Positions included employment and postdoctoral fellowships.
  AB: Collects Data About Post-graduation Employment (1 = Yes; 0 = No): Takes the value 1 if the program collects data about the post-graduation employment of its graduates, and 0 otherwise.

Data: Diversity
  AC: Non-Asian Minority Faculty as a Percent of Total Core and New Faculty, 2006: For each program, the data reported for question B7 (the race/ethnicity of core and new faculty) were used to compute the ratio of non-Hispanic Blacks, Hispanics, and American Indians or Alaska Natives to all faculty with known race/ethnicity. "Core" faculty are those whose primary appointment is in the doctoral program; "new" faculty are those with tenure-track appointments who were appointed in 2003-2006.
  AD: Female Faculty as a Percent of Total Core and New Faculty, 2006: For each program, the data reported for question B5 (the gender of core and new faculty) were used to compute the ratio of female core and new faculty to all core and new faculty. Allocations were not used in constructing this variable.
  AE: Non-Asian Minority Students as a Percent of Total Students, Fall 2005: Question C9c reported the race/ethnicity of graduate students in the program. These data were used to compute the ratio of non-Hispanic Blacks, Hispanics, and American Indians or Alaska Natives to all students with known race/ethnicity.
  AF: Female Students as a Percent of Total Students, Fall 2005: Question C9 reported the gender of graduate students in the program. The percentage is the number of female graduate students divided by the total number of graduate students.
  AG: International Students as a Percent of Total Students, Fall 2005: Question C9b reported the citizenship of graduate students in the program. The percentage of international students is the number with temporary visas divided by the number of graduate students with known citizenship status.

Data: Other Overall Ranking Measures
  AH: Average Number of Ph.D.s Graduated, 2002-2006: Question C1 reported the number of doctoral degrees awarded each academic year from 2001-2002 to 2005-2006; the average of these numbers was used. If no data were provided for a particular year, the average was taken over the years for which there were data.
  AI: Percent of Interdisciplinary Faculty, 2006: Faculty were identified as core, new, or associated. Percent interdisciplinary is the ratio of associated faculty to the sum of core, new, and associated faculty. Allocations were not used in constructing this variable.
  AJ: Average GRE Scores, 2004-2006: For each program, question D4 reported the average GRE verbal and quantitative scores for the 2003-2004, 2004-2005, and 2005-2006 academic years and the number of individuals who reported their scores. A weighted average was computed by multiplying the number of individuals reporting scores by the reported average GRE score for each year, adding these three quantities, and dividing by the total number of individuals reporting scores.
  AK: Percent of First-Year Students with External Fellowships, 2005: For each program, question E8 reported the type of support full-time graduate students received during the fall term of each year of enrollment. For this variable, first-year support from externally funded fellowships and from combinations of external fellowships and other internal support was totaled and divided by the total number of students.
  AL: Is Student Work Space Provided to All Students? (1 = Yes; 0 = No): Question D12 reported the percentage of graduate students who have work space for their exclusive use.
  AM: Is Health Insurance Provided by the Institution? (1 = Yes; 0 = No): Question A1 reported whether or not the institution provided health care insurance for its graduate students. At some institutions, the program might provide coverage when the institution does not.
  AN: Number of Student Activities (Max = 18): Question D8 listed 18 different kinds of support activities for doctoral students or doctoral education. This variable counts the activities provided by the program or the institution.

Data Not Used in Ranking
  AO: Total Faculty, 2006: Questions B1, B2, and B3, total responses.
  AP: Number of Allocated Faculty, 2006: The number of program faculty corrected for association with multiple programs. For more detail on how these data were calculated, see footnote 4 of this chapter.
  AQ: Assistant Professors as a Percent of Total Faculty, 2006: Of those faculty who reported any rank, the percentage was calculated as the number of assistant professors divided by the total number of faculty.
  AR: Tenured Faculty as a Percent of Total Faculty, 2006: The number of tenured faculty divided by the total number of faculty.
  AS: Number of Core and New Faculty, 2006: Total number of core and new faculty.
  AT: Number of Students Enrolled, Fall 2005: Question C9 reported the total number of students enrolled in fall 2005.
  AU: Average Annual First-Year Enrollment, 2002-2006: Question C3 reported the number of first-time enrolled students for 2001-2002 through 2005-2006. An average was taken over the 5 years.
  AV: Percent of Students with Research Assistantships, Fall 2005: Question E8 reported the number of students supported as research assistants in fall 2005. A percentage was calculated over the total number of students.
  AW: Percent of Students with Teaching Assistantships, Fall 2005: Question E8 reported the number of students supported as teaching assistants in fall 2005. A percentage was calculated over the total number of students.

Student Activities
  AX through BP: Question D8 reports whether the institution and/or program provides support for doctoral students or doctoral education. Key: 1 = institutional support; 2 = program support only; 3 = both institutional and program support; 4 = neither institutional nor program support.

Note: Unless otherwise noted, all data refer to the 2005-2006 academic year. Further details are provided in Appendix E.

Data collection was administered by the project's survey contractor, Mathematica Policy Research (MPR). Responses at the institutional and program levels relied heavily on the institutional coordinator at each institution, who, depending on the administrative structure of the university, was either the graduate dean or the director of institutional research. This person knew how to find data about doctoral programs and made sure that the questionnaires were answered by knowledgeable respondents. Some institutions have highly centralized and automated data systems, and the institutional research office was able to provide many of the answers to the program and institutional questionnaires. Other universities relied on program and departmental administrators to provide the data. In addition, the universities provided MPR with faculty and student e-mail lists in order to administer the faculty and student questionnaires. To preserve confidentiality, replies were sent directly to MPR.

DATA VALIDATION AND CLEANING

Once the data were collected from the universities, they had to be checked. The first data cleaning and accuracy check was conducted in 2007, after institutions had submitted program data for the study. This step involved returning the data for all programs, with a request that the data be checked for accuracy and that missing data be supplied. To ensure that eligible programs that submitted data to the NRC could be included in the ratings, the NRC took several steps beyond this initial check to "clean" the data.

Second, in February 2008 the NRC contacted institutions to inquire about programs that were either missing too much data or for which it had identified some data as "outliers." This process involved 107 institutions and 387 programs. To determine which programs required cleaning, a 2-sigma (outlier) test was performed for 14 key variables (see Box 3-1).

BOX 3-1 Variables Used in Data Cleaning
  Percent Female Faculty (program questionnaire question B5)
  Percent Minority Faculty (B7)
  Average Number of Graduates, 2001-2006 (C1)
  Median Time to Degree, Full-Time and Part-Time Students (C2)
  Percent Female Doctoral Students in 2005 (C9)
  Percent Minority Students (C9c)
  6-Year Completion Rate, Males (C16)
  6-Year Completion Rate, Females (C17)
  8-Year Completion Rate, Males (C16)
  8-Year Completion Rate, Females (C17)
  Percent Students with Individual Workspace (D12)
  Percent Full-Time 1st-Year Students with Full Support (E5)
  1st-Year External Fellowship (E8)
  1st-Year External Traineeship (E8)
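As a rough illustration of such a screen, a 2-sigma flag for one variable across the programs in a field might look like the sketch below. This is our code, not the NRC's; how missing values were handled in the actual procedure is described only in prose, so the treatment here is an assumption:

```python
# Illustrative sketch of a two-sigma outlier screen; missing-value handling
# is our assumption, not the NRC's documented procedure.
import statistics

def flag_for_review(values, n_sigma=2.0):
    """Given one key variable's values across the programs in a field
    (None = missing), return the indices needing institutional follow-up:
    missing entries and values more than n_sigma standard deviations
    from the field mean."""
    present = [v for v in values if v is not None]
    if len(present) < 2:          # too few observations to estimate a spread
        return [i for i, v in enumerate(values) if v is None]
    mu = statistics.mean(present)
    sigma = statistics.stdev(present)
    return [i for i, v in enumerate(values)
            if v is None or (sigma > 0 and abs(v - mu) > n_sigma * sigma)]
```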

In an e-mail to the institutional coordinators, NRC staff explained that the check was necessary in order to calculate ratings for the programs. Institutional coordinators received spreadsheets and were asked to fill in blanks and to confirm, or correct, the outlier values flagged for their programs. During this process, 298 programs submitted new data or confirmed their existing data. Twenty-three programs requested to be removed from the ratings, the study, or both. Sixty-six programs did not respond to requests or did not have the data available. An additional 95 programs (not in the original 387) submitted cleaned data. Following this process, NRC staff identified 70 programs that had left the health insurance variable, the student outcomes variable, or both blank. Most of these programs, though not all, responded with data.

Third, in January 2009 members of the committee identified 27 programs that appeared to have been assigned to the wrong field. To check, NRC staff contacted 23 institutions to ask about one or more programs. Institutional coordinators responded, and as a result seven programs were moved to a different field. The other 20 programs were not moved, because the institutional coordinator explained why the school had placed the program in that field.

Aside from these external checks with the institutions, NRC staff and the committee performed repeated internal checks on the data, examining grants, awards and honors, GRE scores, completion rates, and the like. In most cases anomalies in the ratings did not appear to be the result of data errors; that is, careful review by the committee did not find the anomalous program's data to be very different from those of similar programs. In one case, however, there did appear to be an error: the calculated rating for one program was very low because of its GRE scores. After these data were questioned, the institutional coordinator submitted new data reflecting the scores of admitted students, as had been instructed, rather than those of applicants.

Publication and citation data were obtained from Thomson Reuters (formerly ISI, the Institute for Scientific Information) and matched to faculty lists. The matching was checked both by examining outliers and by identifying and eliminating mistaken attributions to faculty with similar names.

Finally, no matter how careful the committee was in collecting data and designing measures, sources of error remain. Some examples:

• Classification errors. The taxonomy of fields may not adequately reflect distinctions that a field itself considers important. For example, within anthropology, physical anthropology is a different scholarly undertaking from cultural anthropology, and each subfield has different patterns of publication. By lumping these subfields into one overall field, the committee is implying comparability; were they treated separately, different weights might be given to publications or citations. Anthropology is not alone in this problem; public health, communications, psychology, and integrated biological science raise the same issue. Although this study presents ranges of rankings across these fields, the committee encourages users to choose comparable programs and to use the data with their own weights, or to examine ranges of rankings only within a peer group.

• Data collection errors. The committee provided detailed definitions of important data elements used in the study, such as doctoral program faculty, but not every program that responded paid careful attention to these definitions. The committee carried out broad statistical tests, examined outliers, and followed up with institutions when it had questions, but that does not mean it caught every mistake. In fields outside the humanities it counted publications by matching faculty names to Thomson Reuters data and tried to limit mistaken attribution of publications to people with similar names (a sketch of such name disambiguation follows this list). Despite these efforts, some errors may remain.

• Omission of field-specific measures of scholarly productivity. The measures of scholarly productivity used were journal articles and, in the humanities, books and articles. Some fields have additional important measures of scholarly productivity, but these were included in only one field, the computer sciences, where peer-reviewed conference papers are very important. A discussion of the computer science data with the field's professional society led to further work on counting publications for the entire field.⁵ In the humanities the committee omitted curated exhibition volumes for art history. It also omitted books for the science fields, and edited volumes and articles in edited volumes for all fields, since these were not indexed by Thomson Reuters. All of these omissions result in an undercounting of scholarly productivity. The committee regrets them but was limited by the available sources. In the future it might be possible to obtain data on these kinds of publication from résumés, but that is expensive and time-consuming.

⁵ The computer sciences count as publications articles presented at refereed conferences, but until recently few of these papers were indexed by Thomson Reuters. To deal with this practice, the committee compiled a list of such non-indexed conferences and counted these publications from faculty résumés, as it did in the humanities.
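The report does not specify how the committee's name matching was implemented. As an illustration of the kind of check described above, the following sketch shows one simple way to attribute publications to faculty and to flag names that could belong to more than one person; the normalization rule, data shapes, and example names are hypothetical:

```python
from collections import defaultdict

def normalize(name: str) -> str:
    """Reduce a name to 'last, first-initial' so that variant spellings
    of the same person collide on the same key."""
    last, _, first = name.partition(",")
    initial = first.strip()[:1].lower()
    return f"{last.strip().lower()}, {initial}"

def match_publications(faculty_names, publication_authors):
    """Attribute publications to faculty by normalized name, and flag
    keys shared by more than one faculty member for manual review."""
    by_key = defaultdict(list)
    for name in faculty_names:
        by_key[normalize(name)].append(name)

    matches, ambiguous = defaultdict(list), set()
    for pub_id, author in publication_authors:
        candidates = by_key.get(normalize(author), [])
        if len(candidates) == 1:
            matches[candidates[0]].append(pub_id)
        elif len(candidates) > 1:
            # Two faculty share a normalized name: a human must decide.
            ambiguous.add(normalize(author))
    return matches, ambiguous

# With faculty ["Smith, Jane", "Smith, John"] and an author "Smith, J.",
# both faculty collide on the key "smith, j", so the publication is
# flagged as ambiguous instead of being attributed by guesswork.
```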
