Measuring Job Competency
Pages 53-74



From page 53...
... The process of linking entrance standards to job performance is a more complex task requiring nontraditional methods and an expanded sense of policy perspectives. The committee feels strongly that if the Joint-Service Project is to effectively communicate information about the performance of enlisted personnel and the implications of changing standards, either internally to military policy makers or to Congress, then the scoring scale of the job performance tests needs to be given some sort of absolute meaning.
From page 54...
... Figure 1 shows predictor composite scores and performance scores that are related in the usual psychometric fashion, assuming a moderate validity correlation and roughly normal score distributions. The population is considered to be those who actively seek the job in question.
From page 56...
... FIGURE 2 Distribution of predictor composite scores: total group, predictor scores above lower cutoff, and predictor scores above upper cutoff.
From page 57...
... If these factors are considered important to the definition of the job, and if they can be made explicit, they can be included in the sampling and estimation procedure. Whether a random or purposive sampling scheme is adopted, it is clear that the initial definition of the job domain is the foundation of any later interpretation of performance test scores.
From page 58...
... Norm-referenced test scores have only relative meaning. For example, a person with an ASVAB standard score of 50 on the word knowledge test has a working vocabulary about as extensive as the average applicant, but apart from this relative statement, the score indicates nothing about the extent or adequacy of his or her vocabulary.
From page 59...
... Although both terms imply a content-referenced interpretation of test performance, criterion-referenced testing has become closely associated with minimum competency testing programs in recent years. In numerous states, high school students are required to demonstrate minimum levels of competence in language skills, mathematics, and possibly other areas of local or state interest as a prerequisite to graduation.
From page 60...
... , but validity for actual performance is only assumed. The Joint-Service Project will examine predictive validity by correlating entrance test scores with job performance scores.
From page 61...
... A competency analysis seems to the committee a particularly fruitful way to approach the problem of comparing jobs, since the competency designations developed for each job's performance measures could be correlated with the predictor tests given at entrance and could guide allocation. For example, if enlisted personnel in Job X who scored in the 50th percentile on the relevant ASVAB technical composite consistently achieve expert status by the end of the first term, one would want the allocation system to avoid waste by not assigning people to Job X if their score on the technical composite is very much above the 50th percentile.
From page 62...
... As part of the Omnibus Defense Authorization Act of 1985, the Senate Armed Services Committee required the Department of Defense to review military enlisted manpower quality requirements for the next five years. In order to make these projections, the Services had to rely on two indirect indicators of quality: high school education status and scores on the Armed Forces Qualification Test.
From page 63...
... However, the current instruments have not been designed for the ancillary applications, and we fear that expanding the use of these very job performance measures beyond the original intention of evaluating alternative enlistment standards could pose serious threats to their measurement validity. One type of problem is test fairness.
From page 64...
... OPERATIONALIZING THE COMPETENCY IDEA Having explored the rationale and potential benefits of a competency approach to job performance measurement, participants in the meetings on competency assessment took up the practical question of how to develop measures that permit interpretation of performance scores as representing degrees of job competency or job mastery. For this specific application, the fundamental need is for the measures to be representative of job requirements.
From page 65...
... The difficulty factor provides an illustration. The concept of difficulty in the context of selecting test content implies a rank order of skills and knowledge; that is, people who can perform the more difficult tasks can also perform the easier ones.
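The rank-order implication of difficulty can be made concrete by counting violations of that ordering in examinee response patterns; this is a hypothetical sketch (the function name and data are invented, not from the report):

```python
# Hypothetical sketch: a rank order of difficulty implies a Guttman-like
# pattern -- anyone who performs a harder task should also perform the
# easier ones. Each pattern lists 0/1 task scores, easiest to hardest.
def guttman_errors(pattern):
    """Count reversals where a harder task is passed but an easier one failed."""
    errors = 0
    for i, easy in enumerate(pattern):
        for hard in pattern[i + 1:]:
            if hard == 1 and easy == 0:
                errors += 1
    return errors

print(guttman_errors([1, 1, 1, 0, 0]))  # 0: consistent with the rank order
print(guttman_errors([0, 1, 1, 0, 1]))  # 4: easier tasks failed, harder passed
```

Many reversals across examinees would suggest the tasks do not form a single difficulty ordering for that job.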
From page 66...
... Test Scoring Strategies In creating scales, either to show individual differences or to assess level of competency, there are several ways of combining the binary scores on steps to get task scores and several ways to combine task scores to get a total test score. Furthermore, there may be some advantage in creating a profile of test scores for different duty areas as an intermediate level of analysis, as the Army has done, for example, with its common and occupation-specific tasks.
From page 67...
... Automatically adding up the number of successful steps may not be the wisest course, especially if some of the steps are critical. Combining Task Scores to Obtain Test Scores Compensatory, conjunctive, and disjunctive models, which were offered as strategies for scoring steps in a task, are also available for combining tasks to obtain a test score.
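The three combining models named above can be sketched in a few lines; the functions and the sample scores are illustrative stand-ins, not the report's actual scoring rules:

```python
# Hypothetical sketch of the three scoring models named in the text,
# applied to per-task scores on a 0-1 scale.
def compensatory(scores):
    """High scores on some tasks offset low scores on others."""
    return sum(scores) / len(scores)

def conjunctive(scores):
    """The examinee is only as competent as the weakest task."""
    return min(scores)

def disjunctive(scores):
    """Credit is given for the strongest single task."""
    return max(scores)

task_scores = [0.9, 0.7, 0.3]
print(round(compensatory(task_scores), 3))  # 0.633
print(conjunctive(task_scores))             # 0.3
print(disjunctive(task_scores))             # 0.9
```

The same examinee looks quite different under each model, which is why the choice matters when scores are to carry an absolute competency interpretation.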
From page 68...
... For example, if the job specification indicates that simple tasks occur with great frequency, the simple test tasks could be weighted accordingly. If, however, job experts report that the more characteristic feature of a particular job is the necessity for all incumbents to be able to perform a small set of extremely critical tasks, with the remaining tasks being the equivalent of sweeping up, then the tasks representing that critical subset could be very heavily weighted.
From page 69...
... However, if the performance scores are to be interpreted as measures of competency, with a given test score indicating a certain level of job performance, then the weighting scheme is important. It should be emphasized that an externally referenced meaning depends on attending to means and standard deviations as well as correlations.
From page 70...
... No matter how a performance test is constructed, the process of attaching meaning to the performance scores will involve some evaluation of test performance by subject matter experts. Some thoughts about how to elicit such judgments are provided in the appendix to this report.
From page 71...
... Policy makers have to deal with the totality of jobs, so the question of relating competency scales to one another becomes important. Earlier, in discussing the advantages of domain-referenced tests, we noted that for setting minimum standards for each separate job, it would be useful, after getting meaningful absolute scales of competence for each of several jobs, if the same fixed value (say 40-70)
From page 72...
... Suppose that a set of experts were asked to act as judges and assign points to a set of hypothetical individuals who are characterized by their hands-on performance test data in the form of task scores, including completion times when available. One method for collecting such judgments would be to provide 20 to 40 task score profiles (which could be a random sample of real performance profiles based on real job performance measurement)
From page 73...
... In addition to leaving open the question of what sort of scoring algorithm is possible and the option of varying that algorithm by military occupational specialty, it also provides a sort of final test of whether a set of job performance items has much to do with what experts consider important in a job incumbent. Of particular relevance is that it uses a metric that could, we hope, have applicability across military occupational specialties: for example, if in one the range of scores associated with a sample of examinees is 60-95, while in another it is 30-90, and in another it is 85-99, that seems potentially useful comparative information.
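One minimal way to pool such judgments and read off a specialty's score range is sketched below; the judges, profiles, and point values are invented for illustration:

```python
# Hypothetical sketch: average the points several judges assign to each
# hands-on task-score profile, then report the resulting score range for
# one military occupational specialty. All data are invented.
from statistics import mean

# points assigned by three judges to four profiles in one specialty
judgments = {
    "profile_A": [90, 95, 92],
    "profile_B": [60, 65, 58],
    "profile_C": [85, 80, 88],
    "profile_D": [95, 99, 97],
}

profile_scores = {p: mean(v) for p, v in judgments.items()}
lo, hi = min(profile_scores.values()), max(profile_scores.values())
print(f"score range for this specialty: {lo:.0f}-{hi:.0f}")
```

Ranges computed this way for different specialties (such as the hypothetical 60-95 versus 30-90 versus 85-99 above) would be on the judges' common point metric and thus comparable across specialties.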
From page 74...
... Green, Jr., eds. 1986 Assessing the Performance of Enlisted Personnel: Evaluation of a Joint-Service Research Project.

