Suggested Citation:"2 Governing Principles of Good Metrics." National Academy of Engineering. 2009. Developing Metrics for Assessing Engineering Instruction: What Gets Measured Is What Gets Improved. Washington, DC: The National Academies Press. doi: 10.17226/12636.

2 Governing Principles of Good Metrics

An important first step in creating metrics for evaluating teaching in engineering schools is to develop principles that ensure that the metrics will be widely accepted and sustainable and that they will actually provide valid assessments of the educational impact of faculty on students. One of the main principles should be that what is valued is rewarded, and what is rewarded is valued. For far too long, many have bought into the notion that teaching effectiveness cannot be evaluated as objectively as research contributions (for which output quantity, frequency of citation, and confidential letters attesting to quality and impact are frequently used; England, 1996). Some have internalized this notion and made it part of the value system in engineering education, namely, that teaching is less important and less scholarly than research. Promoters of metrics for evaluating teaching must be sensitive to these long-held, very strong convictions and recognize that introducing metrics will represent a major cultural change.

The principles listed below are common to the development of any new system in an organization and can guide the creation of metrics for evaluating teaching:

• The evaluation system must be compatible with the overall mission, goals, and structure of the institution, because engineering colleges reside within universities, and the evaluation of engineering faculty for promotion and tenure will eventually be conducted by university committees. If metrics are created in isolation, engineering faculty might be judged by one set of criteria in the engineering context and a different set in the context of promotion at the university level. Thus, ideally, engineering schools should approach their respective institutions to initiate a university-wide discussion of improved metrics for evaluating teaching.
• The proper locus for developing an effective evaluation system is the deans and department chairs, or their equivalents. These administrative levels can provide the necessary connections between the institutional administration and individual faculty members. Deans and department heads can also assist in allocating resources for the design and implementation of an evaluation system that is in concert with the institutional mission, goals, and structure.

• To ensure the acceptance of the evaluation system, faculty members should be integrally involved in its creation; that is, faculty must believe in the fairness and utility of the evaluation process, and to ensure that buy-in, they must be involved in the discussions from the beginning. Moreover, the discussions themselves, by providing a forum where faculty from different departments can discuss the characteristics and methods of effective teaching, will begin to break down the perception of teaching as an isolated activity and reposition it as a collegial one, further legitimizing its value.

• The evaluation system should reflect the complexity of teaching, which includes course design; implementation and delivery of the course; assessment and mechanisms for continuous improvement; and recognition of different learning styles and levels of student ability. Teaching is both a science and an art, and doing it well requires a knowledge base and skills that are usually not well addressed in disciplinary doctoral programs.

• In the end, the discussion participants must reach consensus on the fundamental elements of effective teaching. Most important, learning1 should be a key component of any definition, because the outcome of effective teaching is always learning. Other elements include design (e.g., the alignment of clearly articulated objectives/outcomes,2 assessments,3 and instructional activities4) and implementation (e.g., clear explanations, frequent and constructive feedback, illustrative examples).

• An evaluation of teaching should include both formative feedback to assist individual improvement and summative evaluation to measure progress toward institutional goals.5 An evaluation system must identify areas for improvement and provide both opportunities and support for making those improvements. Although faculty evaluation and faculty development should not be programmatically linked (they should not be housed in the same entity or done by the same people), linking the two conceptually sends a clear message that the institution supports faculty growth, which happens only when faculty receive ongoing, constructive feedback.

• The evaluation system must be flexible enough to encompass various institutional missions, disciplines, audiences, goals, teaching methodologies, and so on.
In addition, it should accommodate people on different “tracks” (e.g., some universities have adopted teaching tracks, as some faculty gravitate toward expanded teaching roles at different points in their careers). Finally, the system should be flexible enough to acknowledge, encourage, and reward educational experimentation and attempts at educational innovation. A flexible system enables instructors to try new things without worrying that they will be penalized if the outcomes are not immediately positive.

1 In the context of this report, learning is defined as the knowledge, skills, and abilities, as well as the attitudes, that students have acquired by the end of a course or program of study.
2 Objectives/outcomes are descriptions of what students should be able to do at the end of the course (e.g., analyze, use, apply, critique, construct).
3 Assessments are tasks that provide feedback to the instructor and the student on the student’s level of knowledge and skills. Assessments should be varied, frequent, and relevant.
4 Instruction includes providing contexts and activities that encourage meaningful engagement by students in learning (e.g., targeted practice).
5 A formative assessment is typically defined as an ongoing assessment intended to improve performance, in this case faculty teaching (and hence student learning). A summative assessment, typically conducted at the end of instruction (e.g., of a semester or program), is used to determine overall success.

• Evaluations should be based on multiple sources of information, multiple methods of gathering data, and information from multiple points in time.6 The evidence collected should be reliable (i.e., consistent and accurate), valid (i.e., it should measure what it is intended to measure), and fair (i.e., it should reflect the complexity of the educator’s achievements and accomplishments).

• It is equally important to note that collecting and analyzing data of this sort often demands skills that may need to be developed further among faculty and administrators. A good way to learn these skills might be to enlist the help of colleagues on campus who have expertise in, for example, survey design, qualitative interviewing, and educational outcomes research.

• A sustainable evaluation system must not be burdensome for faculty or administrators to implement. However, it is important to guard against sacrificing the fairness, validity, accuracy, and reliability of the evaluation system in the effort to make it as easy to use as possible.

• The evaluation system itself should be evaluated periodically to determine whether it is effective. These periodic reviews should be part of the development plan to ensure that evaluations provide both formative feedback that leads to improvements in teaching and data adequate for judging the quality of teaching.

If the system is successful, all stakeholders will recognize that it provides accurate and valuable information that meets the needs of various groups and creates a culture of assessment that drives improvements in teaching and learning. They will also agree that assessment is not done to faculty but by faculty and for faculty, and that it supports continuous improvement in the quality of education. If stakeholders internalize the principles listed above for developing metrics, they will naturally support a culture of assessment.

6 Both direct and indirect measures should be used. Direct measures (e.g., exams, projects, assignments) provide evidence of students’ knowledge and skills. Indirect measures (e.g., teaching evaluations) reflect students’ perceptions of teaching effectiveness and employers’ and alumni’s perceptions of how well the program prepares students for their jobs.

Faculty in all disciplines must continually prioritize their time to meet the many demands of their faculty obligations, but they must also prioritize their efforts in ways that will improve their prospects for career advancement. The current perception is that research contributions are the most important measure in faculty promotion and tenure decisions and that teaching effectiveness is less valued, regardless of the stated weighting of research, teaching, and service. In addition, methods for assessing research accomplishments are well established, though imperfect, whereas metrics for assessing teaching, learning, and instructional effectiveness are not as well defined or established.

Developing Metrics for Assessing Engineering Instruction provides a concise description of a process for developing and instituting a valid and acceptable means of measuring teaching effectiveness, in order to foster greater acceptance of, and rewards for, faculty efforts to improve their performance of the teaching role that is part of their faculty responsibility. Although the focus of this book is engineering, the concepts and approaches are applicable to all fields in higher education.

