3 Why Measurement of Higher Education Productivity Is Difficult
Pages 37-60

From page 37...
... ;
· Inputs and outputs of the productive process are heterogeneous, involve nonmarket variables, and are subject to quality variation and temporal change; and
· Measurement is impeded by gaps in needed data.

None of these complexities is completely unique to higher education, but their severity and number may be.1 In this chapter, we examine each of these complexities because it is essential to be aware of their existence, even while recognizing that practical first steps toward measurement of productivity cannot fully account for them.
From page 38...
... , and other goods and services from a vector of capital, labor, and other inputs. Community colleges produce remedial education, degree, and certificate programs designed for graduates entering directly into careers, academic degree programs that create opportunities for transfer to four-year institutions, and programs designed to meet the needs of the local labor market and specific employers.
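A compact way to picture this multiproduct structure is a production function mapping a vector of inputs into a vector of outputs. The notation below is a minimal sketch of our own, offered only as an illustration and not taken from the report:

    (y_1, y_2, \ldots, y_m) = F(K, L, M)

where y_1, ..., y_m stand for outputs such as remedial course completions, certificates, degrees, and employer-oriented training; K is capital services; L is labor (faculty and staff time); and M is other purchased inputs.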
From page 39...
... Specifically, a productivity measure of instruction can provide only a partial assessment of the sector's aggregate contributions to national and regional objectives. In particular, the omission of some kinds of research creates a truncated view not only of what colleges and universities do but also of their critical role in national research innovation and postbaccalaureate educational systems.
From page 40...
... If the performance of a less prepared student is raised by being surrounded by better prepared students, this enhances learning and is part of the value of the higher education experience.10 The composition of an institution's student body will influence how that institution scores on a performance metric. If the measure of interest is graduation rates, lower levels of student preparation will likely translate into lower productivity.
From page 41...
... In the current economic downturn, with universities facing budget cuts, the utilization of adjunct faculty has become increasingly prominent.12 This situation raises the need for analyses of the quality of instruction adjunct faculty provide. In the section on inputs, below, and again in Chapter 5, we return to the topic of variable student and instructor quality, and its implications for productivity measurement.
From page 42...
... While compiling data at the campus level introduces a significant degree of approximation, this is no worse than would likely occur in many if not most industries elsewhere in the economy. Individual institutions can and should analyze productivity at the level of degree and subject, just as manufacturers should analyze productivity at the level of individual production processes.
From page 43...
... These kinds of nonmarket quality dimensions are no doubt important parts of the production function, although they cannot yet be measured well. The policy implication is that the fullest possible accounting of higher education should be pursued if it is to be used for prioritizing public spending.15 That positive externalities are created by higher education is implicitly acknowledged as college tuition (public and private)
From page 44...
... Many aspects of measuring quality change have been explored for other difficult-to-measure service sectors and progress has been made. In its price ...

16. Within the framework of the national accounts, nonmarket activities such as education have been valued on the basis of the cost of their inputs.
From page 45...
... Students select into institutions with different missions, according to their objectives. Institutional mission and the character of the student body should be considered when interpreting graduation rates, cost statistics, or productivity measures as part of a policy analysis.
From page 46...
... , many more questions address areas of academic engagement such as writing, discussing ideas or doing research with faculty, integrative learning activities, and so forth. In looking at any measure of student characteristics, it must be remembered that between-institution variance is almost always smaller than within-institution variance ...
From page 47...
... This appears to be true at every level of education. Results from the NSSE reveal that for all but 1 of the 14 NSSE scales for both first-year and senior students, less than 10 percent of the total variance in student engagement is between institutions.
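The statistic underlying this comparison is the standard decomposition of total variance into between- and within-institution components. As an illustrative sketch in our own notation (not the committee's):

    Var(y_{ij}) = \sigma_B^2 + \sigma_W^2,    \rho = \sigma_B^2 / (\sigma_B^2 + \sigma_W^2)

where y_{ij} is the engagement score of student i at institution j, \sigma_B^2 is the between-institution component, \sigma_W^2 is the within-institution component, and \rho is the share of total variance lying between institutions. The NSSE finding cited above corresponds to \rho < 0.10 for 13 of the 14 scales.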
From page 48...
... of 500 colleges by The Chronicle of Higher Education revealed their view that the most effective cost-cutting or revenue-raising strategies are to raise teaching loads and increase tuition.21 Another favored strategy is to reallocate tenure and adjunct faculty positions and, as a result, universities and colleges are increasingly scrutinizing faculty productivity.
From page 49...
... Even though the NRC report addressed a complicated issue, it emphasized measuring faculty quality as it pertains to research-doctoral programs in four-year research universities. The absence of guidelines on measuring the quality of instructional faculty in four-year universities and community colleges was attributed to the trend of relying on wages earned as a proxy for faculty quality.
From page 50...
... We have made the point that higher education produces multiple outputs. Even for those concerned primarily with the instructional component, looking narrowly at the production of four-year degrees may be inadequate because degrees are far from homogeneous.25 Ideally, for valuing outputs, it would be possible to identify quality dimensions and make adjustments integrating relevant indicators of learning, preparation for subsequent course work, job readiness, and income effects.
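One illustrative way to express such an adjustment, sketched here under our own assumptions rather than as a method prescribed by the report, is a quality-weighted count of degrees:

    \tilde{D} = \sum_k w_k D_k

where D_k is the number of degrees of type k (by field, level, or institution type) and w_k is a weight constructed from indicators of learning, preparation for subsequent course work, job readiness, and income effects. The practical obstacle is constructing defensible weights w_k.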
From page 51...
... Student Learning

Beyond measures of credits and degrees produced, and their associated wage effects, is the goal of measuring the value added of student learning.28 The motivation to measure learning is that the number of degrees or credit hours completed is not, by itself, a complete indicator of what higher education produces. That is, earning a baccalaureate degree without acquiring the knowledge, skills, and competencies required to function effectively in the labor market and in society is a hollow accomplishment.
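In the assessment literature, value added is usually understood as the gain in measured learning relative to what would be expected given students' entering characteristics. The expression below is a hedged sketch in our notation, not a definition from the report:

    VA_i = L_i^{exit} - E[L_i^{exit} | L_i^{entry}, X_i]

where L_i^{entry} and L_i^{exit} are a student's assessed learning at entry and exit and X_i denotes background characteristics. Averaging VA_i across an institution's students yields a learning-based output indicator that is distinct from counts of degrees or credit hours completed.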
From page 52...
... Ignoring measures of learning outcomes or student engagement (while, perhaps, emphasizing graduation rates) may result in misleading conclusions about institutional performance and ill-informed policy prescriptions.
From page 53...
... present evidence that institutional selectivity is strongly correlated with completion rates, controlling for differences in the quality and demographics of enrolled students as well as factors such as per-student educational expenditures. The authors argue that students do best, in terms of completion rates, when they attend the most selective schools that will accept them, due in part to peer effects.
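A stylized version of the kind of specification implied by "controlling for" these factors, written in illustrative notation only (the authors' actual model may differ), is:

    Completion_i = \alpha + \beta Selectivity_i + \gamma' X_i + \epsilon_i

where X_i collects measures of student quality, demographics, and per-student educational expenditures; the reported pattern corresponds to the selectivity coefficient \beta remaining sizable after conditioning on X_i.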
From page 54...
... . Longitudinal data from the National Study of Student Learning and cross-sectional results from the NSSE show that institutional selectivity is a weak indicator of student exposure to good practices in undergraduate education -- practices such as whether faculty members clearly articulate course objectives, use relevant examples, identify key points, and provide class outlines (Kuh and Pascarella, 2004)
From page 55...
... 3.5.1. Course and Department Level

A course can be envisioned as the atomistic element of learning production, and the basic building block of productivity measurement at the micro level.
From page 56...
... Appendix B to this volume provides a description of how NCAT measures comparative quality and cost of competing course design models. For some purposes, an academic department or program is a more appropriate unit of analysis.33 This is because input costs, as well as the output valuations that markets, societies, and individuals place on various degrees, vary by major or academic field.34 Collecting physical input and output data that can be associated with specific departments or fields of study within an institution provides maximum flexibility as to how the production function will actually be organized, and also provides the data needed for productivity measurement.
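At that level, the quantity ultimately being computed is a ratio of an output index to an input index for each department or field. Schematically, in our own notation rather than a formula from the report:

    Productivity_d = (Output index)_d / (Input index)_d

with the output index built from credit hours, degrees, or other completions attributable to department d, and the input index built from the labor hours, capital services, and other purchased inputs associated with it.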
From page 57...
... Of course, campus-level productivity measurement invites inter-institution comparisons as well. We discussed earlier how heterogeneity of inputs and outputs requires segmentation by institutional type.
From page 58...
... Reporting data at the campus level that is potentially useful for productivity measurement does not require weighting departmental inputs and outputs. Estimating total labor hours for the campus as a whole is equivalent to summing the hours for the individual departments, but the data collection process is much simpler.
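In symbols, this is just an aggregation identity (stated here in our own notation, not as a formula from the report):

    H_{campus} = \sum_d H_d

where H_d is total labor hours in department d. Reporting the campus total is therefore arithmetically consistent with the departmental figures while requiring only a single aggregate number to be collected.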
From page 59...
... It will also collect a broad range of information from the adults taking the survey, including how their skills are used at work and in other contexts such as in the home and the community. Ideally, in addition to educational attainment, information on college major, previous work experience, and the dates and types of higher education institutions attended would be desired in order to estimate higher education productivity based on PIAAC-collected data.
From page 60...
... Joint production of multiple outputs, heterogeneous inputs and outputs, quality change over time, and quality variation across institutions and systems all conspire to add complexity to the task. In order to advance productivity measurement beyond its current nascent state, it is necessary to recognize that not all of the complexities we have catalogued can be adequately accounted for, at least at the present time.

