7. Evaluating the Quality of Performance Measures: Content Representativeness
Pages 128-140



From page 128...
... Chapter 8 further explores the meaningfulness of hands-on test scores by examining their relationships with other variables of interest. Job performance tests attempt to replicate the full job as faithfully as possible within the constraints of time, cost, and assessment technique.
From page 129...
... (1) the hands-on performance measure was constructed by systematically sampling a set of tasks/behaviors from a universe of tasks defined by a job analysis and (2) the translation of those job tasks/behaviors into the test preserved the important features of the tasks themselves and the behaviors they require.
From page 130...
... For example, the Services generally eliminated tasks involving live fire because of safety, cost, or some combination of the two; only the Marine Corps retained a live-fire task. Likewise, all Services eliminated tasks that job experts judged to be redundant,
From page 131...
... One justification is that because hands-on performance measures contain a very limited sample of tasks (e.g., 15), each task must be carefully selected to reflect an important (or critical or difficult or frequent)
From page 132...
... increased.)

Random Sampling

The second school of thought holds that the better scientific ground for arguing content representativeness is provided by random selection of tasks from the job domain, because only random sampling permits one to make, with known margins of error, statements that can be generalized to the entire domain of tasks.
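A minimal simulation makes that claim concrete (all numbers below are hypothetical illustrations, not the JPM data): draw a simple random sample of tasks from a synthetic domain and attach an approximate 95 percent margin of error to the sample mean.

    import random
    import statistics

    # Hypothetical task domain: 500 task scores (values invented for
    # illustration; actual JPM domains are defined by job analysis).
    random.seed(1)
    domain = [random.gauss(3.5, 1.4) for _ in range(500)]

    n = 15                                    # tasks in the hands-on measure
    sample = random.sample(domain, n)         # simple random sample of tasks

    mean = statistics.mean(sample)
    se = statistics.stdev(sample) / n ** 0.5  # estimated standard error
    margin = 1.96 * se                        # approximate 95% margin of error
    print(f"sample mean = {mean:.2f}, margin of error = +/- {margin:.2f}")

No analogous, defensible error statement attaches to a purposively selected sample, which is the crux of this school's argument.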
From page 133...
... This concern gains credence from reports from the Army and the Navy that panels of job experts disagreed substantially on their judgments of important or critical tasks or samples of tasks for hands-on performance measures (e.g., Lammlein et al., 1987). Adherents of random sampling hold, in contrast to the purposive sampling school, that the purpose of a hands-on performance measure goes beyond rank ordering individuals in correlational analyses (relative decisions)
From page 134...
... The sampling distribution of means described above will be approximately normal, increasingly so as the task sample size grows. It will have a mean equal to the domain mean and a standard deviation equal to the domain standard deviation divided by the square root of the sample size.
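In symbols (a standard statistical result, stated here in our notation rather than the chapter's): if the task domain has mean \mu and standard deviation \sigma, the mean \bar{X} of a random sample of n tasks satisfies

    \mu_{\bar{X}} = \mu, \qquad \sigma_{\bar{X}} = \frac{\sigma}{\sqrt{n}}

so a larger task sample ties the sample mean more tightly to the domain mean.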
From page 135...
... Domain parameters are presented above the main diagonal; purposive sample statistics are presented below.

TABLE 7-2 Means, Standard Deviations (SD)
From page 136...
... We then divided this difference by the standard deviation of the sample means, the standard error. This produced a measure of the distance between the purposive sample and what would be expected with random sampling, in standard deviation (standard error)
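A sketch of that distance computation (the function name and example values are ours, for illustration; they are not the Table 7-2 figures):

    import math

    def distance_score(sample_mean, domain_mean, domain_sd, n):
        """Distance of a purposive sample mean from the domain mean,
        in standard-error units (a z-like score)."""
        standard_error = domain_sd / math.sqrt(n)
        return (sample_mean - domain_mean) / standard_error

    # Illustrative only: a 15-task purposive sample whose mean task
    # frequency sits above the domain mean.
    print(distance_score(sample_mean=4.2, domain_mean=3.5,
                         domain_sd=1.4, n=15))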
From page 137...
... Under this assumption, the interpretation does not change; the increase in the magnitudes of the distances emphasizes that the purposive sample contains "unrepresentative" tasks in terms of frequency.2 The last column in Table 7-3 provides distance scores based on the assumption that tasks were selected by stratified random sampling from a finite domain. The stratification reflects the process used by the Navy in creating content categories to ensure that the full range of critical tasks was included in the hands-on performance measure (see Table 7-1, Step 2).
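Sampling without replacement from a finite domain of N tasks shrinks the standard error; the usual adjustment (our rendering of the standard finite-population correction, not quoted from the text) is

    \sigma_{\bar{X}} = \frac{\sigma}{\sqrt{n}} \sqrt{\frac{N - n}{N - 1}}

and the smaller standard error in turn inflates the distance scores computed against it, consistent with the larger distances reported under this assumption.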
From page 138...
... By virtue of translating job tasks into assessment devices, some aspects of the job are ignored, which is the subject of the next section.

PERFORMANCE MEASUREMENTS AS JOB SIMULATIONS

Job performance measurements attempt to replicate job tasks as faithfully as possible within constraints imposed by time, cost, and assessment

3. That purposive samples tend to overrepresent frequently performed tasks in a job performance measure makes sense in light of the literature on judgment bias.
From page 139...
... Fourth, hands-on performance measures present a sequence of tasks to incumbents that may not fit the sequence of tasks typically encountered on the job. Changing the typical sequence of tasks, although necessary to sample job tasks adequately and to standardize the job performance measurement, introduces an artificiality into the hands-on performance measure.
From page 140...
... Computer simulations and walkthrough tests move along the abstractness continuum toward the concrete pole; they tend to be of higher fidelity than paper-and-pencil tests. Because hands-on performance measures can provide high-fidelity, concrete representations of jobs, the JPM Project considers them to have a certain inherent credibility.

