5 Recommendations for Creating and Extending the Measurement Framework
Pages 87-106

From page 87...
... This report attempts to advance the conversation about how best to proceed in response to these demands by identifying promising approaches to productivity measurement that would supplement the statistical information policy makers and administrators need to guide resource allocation decisions and to assess the value of higher education against other compelling demands on scarce resources. In the process, insights may also be generated that, at least indirectly, lead to improved performance of higher education over the long run. In sorting through the wide variety of potential applications and contexts -- differentiated by characteristics of student populations, institution types and missions, and relevant level of aggregation -- it is immediately clear that no single metric can suit all purposes.
From page 88...
... The panel does agree that policy makers should be concerned with social value, not just market value generated by higher education, and that, for many purposes, emphasis on the latter is a mistake. Earlier chapters include discussion of why current salary differentials (by degree and field)
From page 89...
... : The baseline productivity measure for the instructional component of higher education -- baseline because it does not capture important quality dimensions of all inputs and outputs -- should be estimated as the ratio of (a) the quantity of output, expressed to capture both degrees or completions and passed credit hours, to (b)
From page 90...
... The two positive outputs can be combined into a single quantity by weighting the student credit hours with the added value of the degree or certificate over and above the equivalent years of schooling (the "sheepskin effect")
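The weighting described above can be made concrete with a short sketch. This is an illustrative implementation only: the sheepskin premium here is expressed in credit-hour equivalents, and the input aggregate in the denominator is a placeholder, since the report's exact input measure is truncated in this excerpt.

```python
def adjusted_output(credit_hours, completions, sheepskin_premium_hours):
    """Combine passed credit hours and degrees/completions into one output
    quantity, weighting each completion by an assumed premium (the
    "sheepskin effect") expressed in credit-hour equivalents."""
    return credit_hours + sheepskin_premium_hours * completions


def baseline_productivity(credit_hours, completions,
                          sheepskin_premium_hours, input_quantity):
    """Baseline productivity: ratio of the weighted output quantity to an
    aggregate input quantity (placeholder here, e.g. FTE labor)."""
    return adjusted_output(credit_hours, completions,
                           sheepskin_premium_hours) / input_quantity
```

For example, 120,000 passed credit hours plus 1,000 completions weighted at a hypothetical premium of 30 credit-hour equivalents yields an adjusted output of 150,000; dividing by 5,000 input units gives a baseline productivity of 30.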
From page 91...
... That is, those with the highest earning potential may have a tendency either to enter the labor market directly with a bachelor's degree or to pursue a graduate degree.

6 A good example is the "Brain Gain" initiative of the Oklahoma Board of Regents, which employs a statistical methodology that estimates how far an institution deviates from model-predicted graduation rates, taking into account such variables as average admissions test scores, gender, race, and enrollment factors such as full- versus part-time status.
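The regression-adjustment idea behind such initiatives can be sketched as follows. This is a generic illustration, not the Oklahoma methodology itself: the covariates, the ordinary-least-squares fit, and the interpretation of residuals are all assumptions for demonstration.

```python
import numpy as np

def graduation_rate_deviations(X, y):
    """Deviation of each institution's observed graduation rate from its
    model-predicted rate, via a simple OLS regression (illustrative only).

    X: (n_institutions, n_features) covariates, e.g. average admissions
       test scores and full- vs. part-time enrollment shares.
    y: (n_institutions,) observed graduation rates.
    Returns residuals y - y_hat; a positive value means the institution
    graduates more students than its inputs would predict.
    """
    X1 = np.column_stack([np.ones(len(y)), X])     # add intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)  # OLS coefficients
    return y - X1 @ beta
```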
From page 92...
... to determine profiles of community college students whereby an outcome measure indicating a given number of credit hours earned counts as a success (see Bailey and Kienzl, 1999)
From page 93...
... 5.1.2. Instructional Inputs and Costs

Ideally, for many monitoring and assessment purposes, separate productivity measures should be developed for direct instructional activities and for the various noninstructional activities that take place at higher education institutions.
From page 94...
... The rental value of capital is estimated as the book value of capital stock

12 The cost allocation algorithm developed for the Delta Cost Project is an example of a logical and well-considered basis for allocating costs.
From page 95...
... For example, Webber and Ehrenberg (2010) show that increases in sponsored research expenditures per student were associated with lower graduation rates after holding instructional expenditures per student constant -- perhaps because regular faculty spend less time on optional tasks and rely more on adjuncts.
From page 96...
... Additionally, active researchers may be versed in new knowledge that will not reach journals for a year or two and textbooks for much longer. In earlier chapters, it was argued that instructional program data should exclude sponsored research and organized public service activities because these outputs are distinct from the undergraduate educational mission.
From page 97...
... Arguing on principle for inclusion of research costs in instructional cost is tantamount to arguing that the sponsored research itself be included -- which, in addition to being intrinsically illogical, would hugely distort the productivity measures.14

14 A study by Webber and Ehrenberg (2010) supports the idea that project-driven DR may lower graduation rates and lengthen time to degree (presumably because of its demands on faculty effort)
From page 98...
... research and scholarship for fields where sponsored research is not typically available -- for example, the humanities; (b) research and scholarship for faculty who are less successful in obtaining sponsored projects than their peers or who are at departments or institutions that typically do not attract sponsored research funds but who, for various reasons, are deserving of dedicated research time; (c)
From page 99...
... from the other motivators of low teaching loads (other than those associated with sponsored research projects), and there is no doubt that educational R&D should be included in the instructional cost base.
From page 100...
... This is also why we emphasize the need to segment and array institutions for purposes of comparison. At the national level, quality adjustment is a concern only if there are changes over time in the preparedness of students and the value of education, or in the percentages attending various kinds of institutions -- for example, a big shift into community colleges.
From page 101...
... In the spirit of monitoring quality (in this case, of the student input) in parallel with the proposed productivity statistic, student distributions could be reported at the quartile or quintile level rather than as simple averages; this preserves distributional information without making reporting excessively costly.
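A minimal sketch of such quartile-level reporting, using admissions test scores as a hypothetical student-input measure (the choice of measure and the dictionary output format are assumptions):

```python
import statistics

def quartile_summary(scores):
    """Report quartile cut points of a student-input measure (e.g.
    admissions test scores) instead of a single average, so the quality
    of the entering class can be monitored alongside the productivity
    statistic without publishing the full distribution."""
    q1, q2, q3 = statistics.quantiles(scores, n=4)  # 25th/50th/75th percentiles
    return {"q1": q1, "median": q2, "q3": q3}
```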
From page 102...
... Institutions may respond to rising salary costs by reducing faculty size or by shifting to lower-paid non-tenure-track teachers, which, as noted earlier in the chapter, may affect student learning and graduation outcomes. This suggests that productivity measures must control for changes in the distribution of faculty type and salaries over time.19 For these reasons, changes in the mix of majors must be accounted for when estimating the denominator of the productivity measure.
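One standard way to keep mix shifts from masquerading as productivity change is a fixed-weight comparison. The sketch below holds the distribution of credit hours across majors fixed at a base year's shares; the Laspeyres-style weighting and the example costs are illustrative assumptions, not the report's specific method.

```python
def fixed_mix_cost(cost_per_credit_by_major, base_year_shares):
    """Average instructional cost per credit hour, holding the share of
    credit hours in each major fixed at base-year values, so that shifts
    between inexpensive and expensive majors do not distort cost trends.

    Both arguments are dicts keyed by major; shares must sum to 1.
    """
    assert abs(sum(base_year_shares.values()) - 1.0) < 1e-9
    return sum(cost_per_credit_by_major[m] * share
               for m, share in base_year_shares.items())
```

Comparing this fixed-mix average across years isolates cost change within majors from change driven by enrollment shifting between majors.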
From page 103...
... : Where they already exist, externally validated assessment tools offer one basis for assessing student learning outcomes. For fields where external professional exams are taken, data should be systematically collected at the department level.
From page 104...
... Gains are computed by comparing student scores on the two exams and then categorizing students as making lower than expected progress, expected progress, or higher than expected progress."22 If learning assessments such as these are to be included as an adjustment factor in a productivity measure, the test selected needs to be one that has national norms. Accreditation is moving in this direction.
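The three-way categorization of gains quoted above can be sketched as follows. The expected-gain and tolerance parameters here are illustrative placeholders, not values from the report or from any particular nationally normed test.

```python
def categorize_progress(entry_score, exit_score, expected_gain, tolerance):
    """Categorize a student's gain between entry and exit exams relative
    to an expected gain (both expected_gain and tolerance are assumed
    illustrative parameters)."""
    gain = exit_score - entry_score
    if gain < expected_gain - tolerance:
        return "lower than expected progress"
    if gain > expected_gain + tolerance:
        return "higher than expected progress"
    return "expected progress"
```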
From page 105...
... : A neutral entity, with representation from but not dominated or controlled by the country's existing higher education quality assurance bodies, should be charged with reviewing the state of education quality assessment in the United States and recommending an approach to assure that quantitative productivity measurement does not result in quality erosion. This is an important recommendation: the time is right for an overarching impartial review.

