Measuring and Communicating Quality
QUESTIONS AND CHALLENGES RELATED TO MEASURING QUALITY
Quality measures may be useful for internal institutional improvement or for external benchmarking, and often the same quality measures are not useful for both purposes. Wide-ranging discussion occurred at the workshop around the challenges of measuring quality, including how to determine the appropriate level, time frame, and attribution; how to interface with the public good; and how to measure the roles of faculty and institutions.
At What Level Can Quality Be Measured?
Participants debated the appropriate level for best measuring quality—institutional, program or department, or classroom and faculty.
The institution as the focus. Alexander McCormick (Indiana University) noted that there are powerful cultural beliefs in the United States that the institution matters most, but that there is persuasive evidence that the quality of the educational experience and student learning varies more between particular programs or departments within institutions than between institutions. He asked the audience, “Do you think that you experienced quality uniformly at the institution throughout your experience? Did your peers experience quality at that institution in a uniform way?” The belief that quality is an attribute of an institution, he said, is reinforced not only by rankings, but also now by governments asking for evidence of quality and return on investment.
Because educational quality is often delivered program by program, Josh Wyner (Aspen Institute) suggested that the institution and its senior leaders are some of the essential actors who need to better understand quality across programs if the institution is to improve at scale. McCormick noted that although institutions are an actor, they are not the actor. He observed that institutions can provide and encourage certain conditions for educational
effectiveness, “but no president, provost, or dean can walk up to a dimmer on the wall and turn a switch and ratchet up the quality of education.” Paul LeBlanc (Southern New Hampshire University) and James Grossman suggested that an institution-level aspect of quality may emanate from diversity in an institution’s program offerings, as in the example of a music school benefiting students other than music majors. These network effects—interactions outside the classroom among students with widely different interests—may add quality to the educational experience. Grossman suggested that the benefit is not only personal, but also public: “How much does the public benefit from political science students and future lawyers and future corporate executives interacting on a daily basis with artists, musicians, and future clergymen? That’s a public good.”
The program as the focus. Several participants believe that quality can best be measured at the program level. Jessica Howell (College Board) described how quality in the health care environment is viewed as a program-specific issue rather than an overall hospital issue. LeBlanc suggested that the quality discussion could be situated at the program level given the variability between an institution’s programs. McCormick explained that the National Survey of Student Engagement has identified the program as the primary driver of the student experience. Scott Ralls (Northern Virginia Community College) also believes a focus on the program level to be most appropriate.
The classroom as the focus. Several participants connected quality to teaching methods guided by research on student learning, including hands-on learning, inquiry-based learning, and student connections to real-world problems. Quality, in this case, would be measured at the classroom level. McCormick noted institutions have reliable systems for tracking coursework and credit hours, but it is much more difficult to measure student learning. A number of participants discussed the contributions that adaptive learning assessments could make to efforts to determine quality.
When Can Quality Be Measured, and How Can It Be Attributed?
The group discussed the question of when quality can be measured, whether immediately upon graduation or several years thereafter. One participant noted that feedback will be very different if captured 5 years versus 10 years after graduation. Elsa Núñez (Eastern Connecticut State University) pointed out that as information on quality is gathered year after year, higher education’s questions, values, and concerns will evolve, as will its data-collection tools.
Individual participants discussed the limitations of assessing the experience of graduates and making inferences about an institution’s contribution to their success later in life. Cliff Adelman (Institute for Higher Education Policy) noted that more than 50 percent of students attend more than one institution and more than 30 percent attend more than two institutions. Ralls noted that students are not randomly assigned to institutions. Some institutions can select
their students, who tend to arrive with many of the experiences that contribute to their future success, making attribution of their knowledge and proficiencies to a particular institution difficult.
QUESTIONS AND CHALLENGES CONCERNING DATA
Workshop participants discussed the quality and usefulness of existing data and the ways to improve and coordinate data collection.
How Can Data Quality Be Improved or Made More Relevant to the Quality Discussion?
Several participants acknowledged dueling needs for quality data: the need for contextualization (related to an institution’s mission or to a particular type of student’s needs) and the need for comparability across institutions. For example, Jennifer Engle (Bill & Melinda Gates Foundation) noted, “Even as we want to contextualize, we also have to balance that with a need to provide students with the information that they can compare. Both of those impulses are valid.”
Engle described how the Gates Foundation has undertaken a number of initiatives to improve data quality, for example, collecting data through completion initiatives such as Complete College America1 and Achieving the Dream.2 She believes that scaling the data collection and analysis process is crucial for expanding innovation. She stressed that quality information is needed for all students (including “nontraditional” and remedial students) and all institutions. She recognized that “it doesn’t seem innovative to count all students, and yet that’s what’s underlying a lot of discussions about why the data are not sufficient” for quality improvement efforts. Referencing the lack of data on nontraditional students, Emily Slack (Education and Labor Committee, U.S. House of Representatives) observed, “It would revolutionize higher education data if we would just count the other 50 percent of students that are out there.” Engle noted that current data systems were not designed to capture the experiences of nontraditional students, which in her view, is one of the most prevalent problems, “but also the most easily fixed because we know the students are there. We’re already counting their outcomes, but we’re not making them part of how we publicly express the outcome of an institution.”
How Can Dissimilar Systems Work Together, and What Is the Infrastructure We Might Want?
Several participants highlighted the need to coordinate data-gathering systems. Engle noted, “We have a lot of disconnected data systems that were created for their own purposes but none of which were exclusively created for the purposes that we’re talking about here, in terms of understanding how students are moving through the college experience.” In particular, she believes the outcomes of nontraditional students should be communicated: “How do we start to change the publicly available data systems so that we can better capture those students?” This issue is also relevant to measuring the quality of education delivered to students enrolled in nontraditional postsecondary education and training programs.
1 See http://completecollege.org/.
Engle advocated for careful thought about the optimal data infrastructure, including communication between the various data systems, noting that “we need to decrease burden and increase utility.” She urged that state systems should communicate better with federal systems, federal systems should communicate better with one another, and private systems—such as the National Student Clearinghouse—should play a role as well. Engle described the Gates Foundation’s data infrastructure working group that is writing papers (released in early 2016 through the Institute for Higher Education Policy), one of which focuses on what institutions need to do to improve data quality and offers recommendations for action at the state and federal levels.3
How Can the Existing Data Be Put to Better Use and Be Coordinated?
Engle described how she believes existing data could be put to better use, including by more audiences such as students and institutions themselves. She asked how institutions might better use the data they are already collecting and connect those data to their campuses’ operational data—“How can institutions link performance metrics to what is happening in terms of individual students?” She explained that some institutions make this connection by using technology-enabled advising systems to examine why students are not making sufficient progress toward their degree.
Participants in one break-out group noted the obstacles created by regulations that restrict data-sharing across institutions (e.g., institutional review boards, or the Family Educational Rights and Privacy Act). Sharing these data is important to determining which interventions improve student outcomes and which do not. Increased data-sharing would strengthen trans-institutional conversations about quality.
3 Engle, Jennifer. (2016). Answering the Call: Institutions and States Lead the Way Toward Better Measures of Postsecondary Performance. Bill & Melinda Gates Foundation. http://postsecondary.gatesfoundation.org/wp-content/uploads/2016/02/AnsweringtheCall.pdf.