CHAPTER 2

Review of the Literature

2.1 Instructional Design

Shortly after World War II, the U.S. military services realized that the training of personnel to operate and maintain increasingly sophisticated weapon systems was key to the future of those services (Dick 1987). During a 20-year period of research, development, and deployment, a set of principles and steps was derived that changed both the design and conduct of instruction. Essentially, instruction shifted from an emphasis on the teaching process to an emphasis on student performance measurement (Brock 2006). During nearly two decades of research, development, and field studies, a system evolved that emphasized student performance based on specific task characteristics. Key studies by both behavioral theorists (e.g., Glaser 1963; Merrill and Boutwell 1973; Gagné 1965) and training practitioners (e.g., Mager 1962, 1977; Brock, McMichael, and DeLong 1975) led to a five-volume set of DOD guidelines for designing instruction (Branson, Rayner, Cox, Furman, King, and Hannum 1975). Although over time various researchers and practitioners developed variations on the basic model to meet specific training needs, the fundamentals remained constant: training will be based on the work (tasks) that the student must perform upon graduation, training objectives will be stated in terms of student performance, and it is student performance that will be measured at the end of the instructional process (Andrews and Goodson 1980).

The basic Instructional Systems Design (ISD) model is shown in Figure 1. The important point of Figure 1 is that learning objectives, content, and instructional methodologies must have their roots in real-world performance and must be evaluated against those real-world performance requirements.

Figure 1. Basic principles of ISD.

Figure 2 shows the cornerstone of any model of instructional design: the learning objective. In the simplest terms, the learning objective describes what the student in a training program must do before he or she graduates from the program. The central feature of a learning objective is the task, which is typically taken from an analysis of the job or jobs the graduate is being trained to do (Mager 1962).

Figure 2. Components of a learning objective.

The second feature of the learning objective is a description of the conditions under which the task will be performed in the training environment. For instance, if the student were to learn to change a tire, the conditions would state the kind of vehicle, what tools would be provided, and any other conditions imposed by the trainer.

The third element of the learning objective is the standards that the student must meet to successfully complete the instruction. In the tire-changing example, the standards could be expressed in terms of the maximum allowable time to complete the task and the amount of torque applied to each lug nut.

Glaser (1963) and Glaser and Klaus (1962) coined the term "criterion-referenced measures" to describe a testing process that measures individual student performance against a set standard. Before the early 1960s, most tests, called norm-referenced tests, were designed to spread out the performance of students across a normal distribution curve: some students were expected to do very well and some were expected to do very poorly. Measuring students against a set of standards rather than against other students reflected the influence of having behaviorally based learning objectives for the training process. Readers interested in a more detailed narrative of the history of instructional design can read Reiser (2001).
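To make the task-conditions-standards structure and the criterion-referenced check concrete, the following is a minimal sketch in Python. The class name, the pass rule, and the tire-changing numbers are illustrative assumptions built on the chapter's example, not values from any actual curriculum:

```python
from dataclasses import dataclass

@dataclass
class LearningObjective:
    """A learning objective: task, conditions, and standards (illustrative)."""
    task: str                 # the job task the graduate must perform
    conditions: list[str]     # conditions imposed by the trainer
    max_time_s: float         # standard: maximum allowable completion time
    min_torque_nm: float      # standard: lowest acceptable lug-nut torque
    max_torque_nm: float      # standard: highest acceptable lug-nut torque

def meets_criterion(obj: LearningObjective, time_s: float,
                    torques_nm: list[float]) -> bool:
    """Criterion-referenced scoring: pass/fail against the fixed standards,
    never against how other students performed."""
    return (time_s <= obj.max_time_s
            and all(obj.min_torque_nm <= t <= obj.max_torque_nm
                    for t in torques_nm))

# The chapter's tire-changing example, with invented numbers.
tire_change = LearningObjective(
    task="change a tire",
    conditions=["mid-size passenger car", "jack and torque wrench provided"],
    max_time_s=900.0, min_torque_nm=100.0, max_torque_nm=120.0)

print(meets_criterion(tire_change, time_s=780.0, torques_nm=[110.0] * 5))  # True
```

A norm-referenced test, by contrast, would rank this student against the rest of the class; the criterion-referenced check above never consults other students' scores.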
The important point is that the systematic design of instruction begins with a description of the tasks to be trained and ends with a set of tests that measure the ability of a student to perform those tasks.

2.2 Instructional Technology: CBI

Concurrently with the development of a systematic approach to instructional design, educators, computer scientists, and instructional technologists were developing new ways to instruct. Whereas the technologies of instruction have rapidly changed (and continue to do so), the basic principles of using technology to teach have remained relatively stable (Eberts and Brock 1987, Brock 1997).
Although instructional technologies can apply to anything from slide projectors to satellite-linked distance learning programs, this discussion will focus on the two general technology applications that have the highest probability of directly influencing the training of commercial vehicle operators: CBI and simulation. Each will be discussed generally, and then specific current and potential motor vehicle operator training applications will be described in the next chapter.

Computers provide ways to exploit human learning capabilities (Brock 1997). Human learning capabilities themselves have not significantly changed. Swezey and Llaneras (1997) describe various instructional and learning models. CBI provides a lever that can be applied to those models to improve human performance.

CBI is not intrinsically good. If instructional programs are not well designed, if student needs are not met, if incorrect or incomplete content is presented, and if student performance is not measured, then all that the computer does is provide an efficient means for bad instruction to be distributed (Brock 1997, 2003). CBI can be more interesting than conventional instruction; it can be more engaging, more entertaining, more individualized, and more exciting. Nevertheless, if the result of the instruction is not measurably improved human performance, it does not make any difference.

However, the power of computers to instruct is significant. Computers can provide graphics, video, and sound of the highest quality. They can adapt the pace, mode, and content of an instructional program to meet the learning needs of each student. A well-designed CBI program will test each student as he or she progresses through an instructional program and, based on those test results, provide the next appropriate unit of instruction, as the sketch below illustrates.
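The test-then-branch loop just described can be sketched in a few lines. Everything here, from the unit names to the 0.8 mastery cutoff and the stub delivery and test functions, is a hypothetical illustration of the idea rather than any particular CBI product:

```python
import random

MASTERY = 0.8  # invented criterion: score needed to advance

def deliver(unit: str, remediate: bool = False) -> None:
    """Stand-in for presenting a unit of instruction."""
    prefix = "remediating" if remediate else "presenting"
    print(f"{prefix}: {unit}")

def test(unit: str) -> float:
    """Stand-in for the embedded test given after each unit."""
    return random.random()

def run_course(units: list[str]) -> None:
    """Test the student after every unit and branch on the result:
    advance on mastery, re-teach and retest otherwise."""
    for unit in units:
        deliver(unit)
        while test(unit) < MASTERY:
            deliver(unit, remediate=True)

run_course(["mirrors and blind spots", "following distance", "night driving"])
```

A production system would vary the remediation content and pacing rather than simply repeating the unit, but the branch-on-measured-performance structure is the core of adaptive CBI.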
The most fundamental question about CBI is, Does it work? Recent research, which has applied meta-analytic techniques to answer that question, suggests that it does (Fletcher 2006, Kulik 1994, Kulik and Kulik 1991). Meta-analysis is a technique first proposed by Glass (1976). It applies statistical analysis to an accumulation of studies around the category of interest. Fletcher proposes a method by which these statistical findings can be converted to a percentile measurement to compare student performance (see the sketch at the end of this section). Recent studies, using the approach presented by Fletcher, have indicated that the appropriate application of CBI across a wide range of a large population of students can lead to a 33% increase in the amount of material learned or a 33% decrease in the time needed to reach previously established learning criteria (Dodds and Fletcher 2004). In the following chapter, the report discusses the implications of those findings for commercial vehicle operator training.

CBI programs for the training of commercial vehicle operators are presented in the next chapter. However, there has also been some development and study of applying CBI techniques to the training of young, novice drivers (e.g., Brock 1998; Hodell, Hersch, Brock, Lonero, Clinton, and Black 2001). In the cases where studies have been conducted, CBI has been shown to be effective in very specific instances (Fisher, Laurie, Glaser, Brock, Connerney, Pollatsek, and Duffy 2002). The program studied by Brock (1998) and Fisher et al. was developed by the AAA Foundation for Traffic Safety to train a very specifically defined set of driving skills (Lonero, Clinton, Brock, Wilde, Laurie, and Black 1995). Young drivers were trained to search for, recognize, and respond to risky situations. The program is now available in both CD and DVD formats and continues to have some commercial success. However, its overall effect on young driver accidents or moving violation incidents is undocumented.
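The percentile comparison attributed to Fletcher can be illustrated with the conversion commonly used in meta-analysis: treat the pooled effect size as a shift in a normal distribution and report where the average trained student would fall in the untrained group's distribution. The sketch below shows that generic technique (sometimes called Cohen's U3), not necessarily Fletcher's exact procedure, and the example effect size is invented:

```python
from statistics import NormalDist

def effect_size_to_percentile(d: float) -> float:
    """Percentile of the average treated student within the control
    group's score distribution, assuming normality."""
    return 100.0 * NormalDist().cdf(d)

# Invented effect size of half a standard deviation:
print(f"{effect_size_to_percentile(0.5):.0f}th percentile")  # about the 69th
```

In other words, an effect size of 0.5 would move the average student from the 50th to roughly the 69th percentile of the comparison group.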
2.3 Instructional Technology: Simulator-Based Instruction

Some simulators are training devices; others are R&D tools. Some training devices are simulators, but most are not. This distinction can be made with two examples:

1. All major airlines use flight simulators for pilot training. However, NASA and other research agencies use simulators not for training, but for engineering R&D on future cockpit technology.
2. A wooden airplane on the end of a stick can be an effective training device when used by a good instructor to teach basic flight control. It is a training device but not a human-in-the-loop simulator.

To be considered a true simulation, an instructional activity must be based on the reality of a specific real-world process or situation (Clariana and Smith 1988; Heinich, Molenda, and Russell 1985; Gagné 1965). Activities such as in-basket and decision-making instructional exercises and role-playing can be classified as simulations if the activity is based on a real situation or the students are required to apply a process that could be used in a real-life scenario. Note that in these examples, there are no devices. One can have simulation without a simulator device.

Simulation is an instructional method that requires students to interact with specific instructional events based on real-world scenarios. Students must see or experience the consequences of their interaction. All interactions should result in similar real-world outcomes or effects. The primary learning outcome of a simulation should be the demonstration of a real-world process, procedure, or specific behavioral change.

As Brock, Jacobs, and McCauley (2001) point out in TCRP Report 72, there is a long and rich body of scientific and technical literature on simulators and their use for training that goes back to at least the early 1950s. The literature can be broadly characterized as falling into four main domains: (1) descriptions of simulators (or simulator components), their characteristics, and how they are being used; (2) advice on what characteristics are required in a simulator; (3) results of research on the effects of simulator characteristics on performance; and (4) results of research on the effects of simulator characteristics on training.

The vast majority of this literature is in the context of flight simulators because that has been the predominant use of simulators for the past 60 years. Within this body of literature, the smallest segment has been the research findings on how certain simulator characteristics affect the rate of learning and proficiency.

Within the research on flight simulators, the smallest portion is on transfer of training (TOT) results, that is, how well someone performs with the actual equipment after having been trained in a simulator. However, over the accumulated history of using simulators for training, there is evidence that training simulators, particularly flight simulators, are effective training devices. That is, time spent in the simulator trades off for some amount of training time using the actual equipment (one common way to quantify this trade-off is sketched at the end of this section).

With the advantages of simulators and the changing cost of these devices in mind, it is easy to see why aviation training has been the dominant application of simulators. The cost of a full-flight simulator for a military aircraft is tens of millions of dollars. When the aircraft has a value equal to or greater than the simulator but has a much higher operating cost, the use of a simulator for training is attractive. It is even more attractive when loss of the aircraft during training is a real possibility. In addition, an expensive aircraft dedicated to training is not available for revenue-producing operations.
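One widely used metric from the flight-simulation TOT literature, though not named in this chapter, is the transfer effectiveness ratio (TER): the aircraft time saved per hour of simulator time invested. The sketch below computes it; the hour figures are invented for illustration:

```python
def transfer_effectiveness_ratio(control_hours: float,
                                 transfer_hours: float,
                                 simulator_hours: float) -> float:
    """TER = (equipment time needed by a group trained without the simulator
    minus equipment time needed by the simulator-trained group), divided by
    the simulator time invested."""
    return (control_hours - transfer_hours) / simulator_hours

# Invented example: 10 h in the aircraft alone vs. 6 h after 8 h of simulator time.
print(transfer_effectiveness_ratio(10.0, 6.0, 8.0))  # 0.5
```

A TER of 0.5 would mean each simulator hour replaced half an hour in the aircraft; whether that trade is economical depends on the relative operating costs discussed above.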
2.4 Training Effectiveness Measures

Perhaps the best known training effectiveness measurement methodology is Kirkpatrick's Four Level Evaluation Model (Kirkpatrick 1994) of reaction, learning, performance, and impact. Figure 3 schematically shows how the evaluation process works, and Table 1 contains descriptions of the levels.

Figure 3. Kirkpatrick's four level evaluation model.

Reaction is the evaluation that measures how the learners react to the training. This level is often measured with attitude questionnaires that are passed out after most training classes. This level measures one thing: the learner's perception (reaction) of the course. In many training programs, this may be the only measure of the instruction. It provides a kind of likeability score. Skeptical instructors sometimes refer to this measurement as the "smile factor." This level does not measure what new skills the learners have acquired or what they have learned that will transfer back to the working environment. This has caused some evaluators to downplay its value. However, the interest, attention, and motivation of the participants are critical to the success of any training program. People learn better when they react positively to the learning environment (Markus and Ruvulo 1990).

Learning is the extent to which participants change attitudes, improve knowledge, and increase skill as a result of attending the program. Measuring the learning that takes place in a training program is important to validate the learning objectives. To evaluate the learning that has taken place, the key question of any instructional program is, What knowledge and skills were acquired?

Most people are familiar with tests in conjunction with learning. The important aspect of tests used to measure instructional effectiveness is that they be based on the objectives of the program. Ideally, all students would be tested on entering an instructional program, be tested throughout the program to ensure that they are progressing, and then be tested at the end of the program to ensure they met all the objectives of the course.
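As a rough illustration of the objectives-based testing just described, the sketch below scores a student's entry and exit tests against per-objective cutoffs. The objective names and cutoff scores are invented:

```python
# Hypothetical per-objective mastery check for pre- and post-testing.
cutoffs = {"coupling/uncoupling": 0.80, "backing": 0.75, "space management": 0.80}

def unmet_objectives(scores: dict[str, float]) -> list[str]:
    """Return the objectives whose criterion the student has not yet met."""
    return [obj for obj, cut in cutoffs.items() if scores.get(obj, 0.0) < cut]

pre  = {"coupling/uncoupling": 0.40, "backing": 0.55, "space management": 0.70}
post = {"coupling/uncoupling": 0.85, "backing": 0.90, "space management": 0.78}

print(unmet_objectives(pre))   # all three: guides what must be taught
print(unmet_objectives(post))  # ['space management']: blocks course completion
```

The entry test reveals what must be taught, the interim tests track progress, and the exit test certifies that every objective's criterion has been met.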
In most cases, the successful passing of an end-of-course test results in a certificate or diploma: some documentation of successful completion. This final step is the end of the instructional process. It is not the end of measuring the effectiveness of the instruction.

Table 1. Kirkpatrick's four levels of evaluation.

Level 1, Reaction: measures how the students feel about the instruction. Evaluation tools and methods: surveys, interviews. Ease of measurement: quick, easy, and inexpensive.

Level 2, Learning: measures increases in skills and knowledge. Evaluation tools and methods: pre- and post-tests, observation. Ease of measurement: stems from learning objectives; may require complex testing.

Level 3, Performance: measures on-the-job performance. Evaluation tools and methods: interviews, observation, supervisor ratings. Ease of measurement: needs cooperation and participation by line supervisors.

Level 4, Results: measures organizational benefits from training. Evaluation tools and methods: ROI, productivity, organizational goals. Ease of measurement: often based on estimates rather than on hard data.

Performance refers to how students actually perform in the workplace. Students may score well on post-tests in the classroom, but the real question is whether any of the new knowledge and skills transfer to the job.
Level three evaluations attempt to answer whether or not students' behaviors actually change as a result of new learning. Ideally, this measurement should be conducted 3 to 6 months after the training program. Observation surveys can be used. In the case of skill-based jobs, current metrics may already be in place (e.g., number of widgets produced, accident rate). Surveys can be completed by the student, the student's supervisor, individuals who report directly to the student, and even the student's customers.

Results is the fourth level in the model. The goal is to evaluate the business impact of a particular training program. Kaplan and Norton (2001) have proposed a so-called "balanced scorecard" to look at the impact of training on a business operation from four perspectives:

• Financial: A measurement, such as return on investment (ROI), that shows a monetary return, or the impact itself, such as how the output is affected. Financial results can be either soft or hard.
• Customer: Improving an area in which the organization differentiates itself from competitors to attract, retain, and deepen relationships with its targeted customers.
• Internal: Achieving excellence by improving such processes as supply-chain management, the production process, or support processes.
• Innovation and Learning: Ensuring the learning package supports a climate for organizational change, innovation, and the growth of individuals.

Other measures might include improved safety performance, reduced waste, and recognized improved productivity. At least one writer (Phillips 2003) has suggested that a competent computation of the ROI of a training program should be considered a fifth evaluation level (a simple version of the arithmetic is sketched below). It is certainly true that the cost of training should be offset by the benefits accrued from that training.
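At its core, the ROI computation is net program benefits over program costs. The sketch below shows that arithmetic; the dollar figures are invented, and a real fifth-level evaluation would also have to isolate the effects of training from other factors before counting benefits:

```python
def training_roi_percent(benefits: float, costs: float) -> float:
    """Return on investment: net program benefits over program costs,
    expressed as a percentage."""
    return (benefits - costs) / costs * 100.0

# Invented example: $180,000 in measured benefits from a $120,000 program.
print(f"{training_roi_percent(180_000, 120_000):.0f}% ROI")  # 50% ROI
```

A positive ROI indicates the training paid for itself; the hard part in practice is attributing benefits (reduced accidents, improved productivity) to the training rather than to other organizational changes.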