Much of the focus on “quality” in undergraduate education in recent years has been on a combination of input factors and outcome measures. Reputation, entrance examination scores and admissions selectivity, financial resources, graduation rates, graduates’ employment and earnings, and other attributes are imperfect measures of the overall quality of a college or university, but they do provide some metrics to help consumers assess the value of their investment in postsecondary education. Yet, educators, policymakers, employers, and other interested stakeholders continue to strive for more comprehensive indicators of a “quality undergraduate experience,” including those that measure student learning outcomes and graduates’ readiness for success in the workforce.
Students, parents, and government agencies—all of which invest heavily in postsecondary education in the country—need as much information as possible about the outcomes of the higher education experience and the extent to which they can expect a fair return on their investment in higher education. Parents and students want some assurance that their investments will result in, among other things, the capacity of the students to secure well-paying jobs and have a fulfilling career. Governments—especially the U.S. federal government, which invests $75 billion annually in higher education,1 much of it through student support programs such as Pell Grants—also want assurances that their investments will benefit students as well as the larger society. The College Scorecard,2 released by the U.S. Department of Education in summer 2015, is an example of a tool that focuses on a few quantitative indicators of the value of institutions.
1 See Schroeder, Ingrid, et al. (2015). Federal and State Funding of Higher Education: A Changing Landscape. The Pew Charitable Trusts, Figure 2. http://www.pewtrusts.org/en/research-and-analysis/issue-briefs/2015/06/federal-and-state-funding-of-higher-education.
2 The College Scorecard is an online interactive tool developed by the U.S. Department of Education to provide students and families with information to help inform a college search process—including location, size, campus setting, and degree and major programs.
A major remaining challenge, then, is to better understand the concept of quality in terms of the full range of student experiences at an undergraduate institution. This can be defined broadly as enabling students to acquire knowledge in a variety of disciplines and deep knowledge in at least one discipline, as well as to develop a range of skills and habits of mind that prepare them for career success, engaged citizenship, intercultural competence, social responsibility, and continued intellectual growth.3 Although these outcomes are difficult to measure in a standard way that allows for easy comparison across programs and institutions, they are educational outcomes that students, parents, and employers value.
OBJECTIVES FOR THE WORKSHOP
In response to this challenge, an ad hoc planning committee of the National Academies of Sciences, Engineering, and Medicine (the Academies) Board on Higher Education and Workforce (BHEW), with funding from the Lumina Foundation, organized a workshop in Washington, D.C., on December 14-15, 2015. As outlined in the Statement of Task, the workshop goals were
- To engage scholars and researchers—as well as leaders from higher education, business, civic organizations, and government—in focused discussions about quality in the undergraduate educational experience.
- To begin to understand how to define and measure those factors that contribute to a quality educational experience that are difficult to quantify but represent the core elements of a successful undergraduate experience for most students.
- To identify key questions and research themes for possible further study on the definition, measurement, and determination of a quality education.
- To stimulate further research and dialogue among education leaders and policymakers on the topic of quality, which could in turn influence both institutional policy and practice and public policies at the federal and state levels.
The planning committee intended for the discussions among college and university faculty and administrators; state and federal agency officials, legislators, and staff; accreditors; policy organizations; business leaders and industry associations; students; and other stakeholders to focus on improving our understanding, definition, and measurement of educational quality across the range of undergraduate institutions in the United States.
Each Scorecard also includes five pieces of data about a college: costs, graduation rate, loan default rate, average amount borrowed, and employment. More information is available at http://www.ed.gov/news/press-releases/education-department-releases-college-scorecard-help-students-choose-best-college-them.
3 Workshop participants were asked to provide their own definitions of quality as it pertains to undergraduate education. Those definitions are presented in Chapter 2.
GUIDANCE AND MATERIALS GIVEN TO WORKSHOP PARTICIPANTS PRIOR TO THE EVENT
Prior to the workshop, each participant received two background papers that set the stage for the presentations and panel discussions: “Quality in the Undergraduate Experience: A Discussion Document” (see Appendix B) and a commissioned paper authored by Jordan Matsudaira, assistant professor in the Department of Policy Analysis and Management at Cornell University, “Defining and Measuring Institutional Quality in Higher Education” (see Appendix C).
The planning committee’s discussion document focused on five themes: the measurement of student learning; qualitative factors often cited as important outcomes of undergraduate education; the importance and challenges of assessment; federal policy implications of assessing quality; and the importance of context with regard to institution type, learning environments, and student goals. It concluded with a set of questions intended to guide the workshop discussions:
- What actions are required in the next 2 years to move us from current models of measuring student learning (e.g., VALUE Rubrics,4 PULSE,5 and DQP6) that are implemented on an ad hoc basis to a system of quality measurement whereby a group of like institutions adopts a standard set of indicators and reports its results, keeping in mind the work of the Voluntary System of Accountability (VSA)7 and the related community college effort, the Voluntary Framework of Accountability?8 What are the next steps in the process of implementing such a system, even on a pilot basis?
- Now that the College Scorecard9 has been released, what further steps should the federal government (and, possibly, state governments) take to improve public information about the quality of undergraduate institutions? Are there improvements to the College Scorecard that are feasible and desirable in the near term? If so, who should be responsible for implementing them? What structures should be put in place to assure that the College Scorecard is well-curated and can improve over time?
4 Valid Assessment of Learning in Undergraduate Education, see https://www.aacu.org/value/rubrics.
5 Partnership for Undergraduate Life Sciences Education, see http://www.pulsecommunity.org/.
6 Degree Qualifications Profile, see http://degreeprofile.org/.
7 See http://www.voluntarysystem.org/.
8 See http://vfa.aacc.nche.edu/Pages/default.aspx.
- Can—and should—a group be assembled to create a core set of principles to guide the development of a general framework for measuring quality in undergraduate education—one that can be adopted by nearly any type of institution (e.g., 4-year university, 2-year college, online institution, “boot camp”)? If so, who should be involved in that process, who should lead it, and who should fund it? How could such an entity build on many of the existing rubrics and tools that have been recently developed?
- What might be the most appropriate role, if any, for the Academies? Could they, for example, serve an integration and synthesis role, bringing together and leveraging the good work that is under way (including DQP, VALUE, VSA, and perhaps other emerging programs)? Might they also seek to broaden the emphasis from defining competencies and outcomes to working out the quite thorny assessment and consumer information components?
The Matsudaira paper provided background on the topic and a substantial overview of the research that has already been conducted on defining and measuring institutional quality. Among the key points made in the paper were the following:
- The goal of developing quality indicators for higher education is to enable better decision-making on the part of prospective students, higher education officials, and policymakers, both to improve the quality of education offered by institutions and to guide students to institutions offering better quality. Institutional quality is multidimensional, and the various users of quality information might place different weight on each dimension of quality.
- Quality should be viewed as the extent to which an institution increases the likelihood of achieving various educational goals—that is, as the causal impact of attending an institution on some outcome of education. Defining and measuring the various desired educational outcomes of higher education are major challenges to creating better quality indicators.
- New information about the outcomes of students attending institutions, such as the cohort completion rates, debt repayment, and median earnings found on the College Scorecard, represents a large stride forward in developing institutional quality measures. But differences in these measures reflect both differences in quality and differences in the family income, career interests, and academic preparation of the students that institutions enroll. Isolating quality from these “selection effects” is an important challenge to resolve.
- Causal estimates of institutional effects on student outcomes are highly sensitive to variations in the statistical models used. Although progress has been made, the research literature has yet to reach consensus on the best methodology for measuring these causal effects.
- In addition to validating methods to estimate the causal impact of institutions, more work is needed to develop measures of student outcomes beyond their labor market success. The lack of broader quality measures, such as students’ learning and subjective well-being, has caused ongoing accountability efforts—such as state performance-based funding initiatives for state higher education institutions—to focus only on earnings and completion outcomes. This poses the risk of incentivizing institutions to allocate resources toward a narrow set of educational goals.
Workshop participants considered the ideas and themes in both papers throughout the 2 days. The workshop itself included panel sessions, expert presentations, small-group discussions of key topics and themes, and large-group “report-outs” and discussions about the topics and themes. The workshop agenda is included in Appendix A.
ORGANIZATION OF THE SUMMARY
This summary is organized into major themes that arose during the workshop: defining quality, improving quality, and measuring and communicating quality. These themes should not be construed as reflecting consensus or endorsement by the committee, the workshop participants as a whole, or the Academies.
This document has been prepared by the workshop rapporteurs as a factual summary of what occurred at the workshop. The statements made in this volume are those of the rapporteurs and do not necessarily represent positions of the workshop participants as a whole, the steering committee, the Board of Higher Education, or the Academies. The workshop did not attempt to establish any conclusions or recommendations about needs and future directions, focusing instead on issues identified by the speakers and workshop participants. In addition, the planning committee’s role was limited to planning the workshop.
Opening remarks by planning committee chair Paul Courant (University of Michigan) framed many of the topics and questions explored during the course of the workshop. Courant articulated several questions commonly posed today to higher education: What are you doing? Why is it so expensive? Does it really work? Is it worth it? He noted that higher education asserts vigorously—and accurately, in his opinion—that there are good reasons for the high price, but higher education is not quite as good at communicating its quality and value to the public, parents and students, and government agencies.
Other participants explored some of these ideas during the course of the workshop. Paul LeBlanc (Southern New Hampshire University), for example, noted that institutions often make clear claims about their students’ learning but are not able to back up those claims. In order to assess quality at the institutional level or across higher education, he said, institutions must reframe the question “What do students know?” as “What can students do with what they know?”
Several workshop participants discussed how new demands are being placed on higher education because its traditional institutions were established during an era with different expectations. Sally Johnstone (Western Governors University) noted how the current system was not purposefully designed, but rather evolved over generations. Institutions were created as places where a group of experts convened and shared their knowledge with students, functioning as students’ primary sources of information. Individual participants pointed to new technologies as one major influence on the evolution of universities’ roles and students’ experiences and needs. For example, students today have easy access to disciplinary content on the Internet through sources such as the Khan Academy and other online content providers of instruction.
Other reasons for increased concern about quality in undergraduate education that arose during the workshop included (1) a growing concern by the federal government about the quality of instruction and the return on investment—driven in part by its spending on financial aid, which has more than doubled in recent years, (2) expressions of dissatisfaction by some employers regarding the skills and proficiencies of new graduates, and (3) an increasing concern about whether underrepresented minorities and first-generation college students have adequate access to quality undergraduate education that is designed for the social and academic challenges many face. Participants elaborated on employers’ experiences and needs in particular—especially in light of the changing workforce, which seems to demand higher levels of numeracy, problem-solving, and critical thinking skills—as well as on the need for equity and inclusion as an integral part of any conversation about quality.
Individual participants cited a number of current indicators of and assumptions about quality, which they considered valuable or reasonable but insufficient. Among the sources of indicators mentioned were the College Scorecard, employer satisfaction surveys, results from the National Survey of Student Engagement, results of the Collegiate Learning Assessment (CLA), and the Gallup-Purdue Index on life satisfaction. Participants also mentioned several important initiatives to improve quality that are already completed or under way, including
- Degree Qualifications Profile (DQP), “a learning-centered framework for what college graduates should know and be able to do to earn the associate, bachelor’s or master’s degree”10
- Liberal Education and America’s Promise (LEAP), “a national public advocacy and campus action initiative of the Association of American Colleges & Universities (AAC&U)”11
- Workcred, an affiliate of the American National Standards Institute looking “to strengthen workforce quality by improving the credentialing system, ensuring its ongoing relevance, and preparing employers, workers, educators, and governments to use it effectively”12
- Credentials Transparency Initiative, a joint venture of George Washington University’s Institute of Public Policy, Workcred, and Southern Illinois University “to help align credentials with the needs of students, job seekers, workers and employers”13
Several participants also cited institutional attributes that are often treated as proxies for quality but whose causal connections to quality have not been proved. LeBlanc said that too often claims of quality have been based on an institution’s having a sufficient number of faculty from reputable schools, students admitted with high SAT scores, and substantial volumes in the library, to the neglect of student outcomes. Although some of those outcomes are now being measured, they need to be fleshed out further: “How do we know? Do we have the kind of hard-nosed regular assessment that allows us to test those claims?” he continued. Scott Ralls (Northern Virginia Community College) said that many assume the quality of an institution increases if it is more selective. Ellen Hazelkorn (Dublin Institute of Technology) highlighted the tendency to define outcomes using only the top 100 institutions in global rankings as a guide. However, she noted that currently there are 18,000 higher education institutions as defined by the Organisation for Economic Co-operation and Development (OECD) and United Nations Educational, Scientific and Cultural Organization (UNESCO), which means that the quality definitions determined by the top 100 institutions represent about 0.5 percent of the world’s institutions and about 0.4 percent of the world’s students.
These opening discussions set the stage for a series of panel discussions and small-group conversations focused on potential next steps for clarifying the definitions of quality, measuring the quality of student learning and mastery of skills, and developing an accountability system that communicates indicators of quality to the various stakeholders while protecting the academic freedom of postsecondary institutions.
11 See https://www.aacu.org/leap.
12 See http://www.workcred.org/About-Workcred/Default.aspx.
13 See http://www.credentialtransparencyinitiative.org/Default.aspx.