Exploring Strategies for Clustering Military Occupations
Pages 305-332



From page 305...
... The third is the issue of setting predictor cutoffs for various MOS. For MOS for which ASVAB and criterion data are available, it is at least possible (even if not current practice)
From page 306...
... JOB ANALYSIS METHODS: THE CHOICE OF THE JOB DESCRIPTOR Numerous approaches to analyzing jobs exist. Textbooks in the fields of industrial/organizational psychology and personnel management commonly catalog 6-12 job analysis methods (e.g., functional job analysis, Position Analysis Questionnaire (PAQ), task checklist, job element method, critical incidents, ability requirement scales, threshold traits analysis)
From page 307...
... Of particular interest here are these last two: the purpose for which the job analysis information is collected and the job descriptor chosen.
From page 308...
... Examples include "number facility" and "fluency of ideas." Thus the ability requirements approach involves describing jobs in terms of a relatively limited number of abilities required for job performance. Pearlman's fourth category is labeled "overall nature of the job," referring to approaches that characterize jobs very broadly, such as by broad job family (managerial, clerical, sales)
From page 309...
... The job descriptors chosen are in all cases behavioral; they vary along a continuum from general behaviors to specific behaviors. Similarly, one may reach different conclusions about job similarities and differences if different categories of job descriptors are chosen (e.g., focusing on job activities versus focusing on abilities required for job performance)
From page 310...
... These same three jobs correlated .90, .92, and .88 with the paying and receiving teller when comparing the jobs based on similarity of rated ability requirements. Thus the use of different job descriptors leads to different conclusions about job similarity.
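To make the comparison concrete, the short sketch below operationalizes job-to-job similarity as the correlation between rated ability-requirement profiles. The job titles, ability labels, and ratings are hypothetical placeholders rather than data from the studies discussed here.

```python
# Sketch: job similarity computed as the correlation between two jobs'
# rated ability-requirement profiles. All names and ratings are hypothetical.
import numpy as np

abilities = ["number facility", "fluency of ideas", "perceptual speed", "oral comprehension"]
ratings = {
    "paying and receiving teller": [6.1, 3.0, 5.5, 4.2],
    "collection clerk":            [5.8, 3.2, 5.1, 4.5],
    "proof machine operator":      [6.0, 2.7, 5.9, 3.8],
}

def profile_similarity(job_a, job_b):
    """Pearson correlation between two jobs' ability-requirement profiles."""
    return np.corrcoef(ratings[job_a], ratings[job_b])[0, 1]

for job in ["collection clerk", "proof machine operator"]:
    print(f"{job}: r = {profile_similarity('paying and receiving teller', job):.2f}")
```

The same comparison based on a task or activity descriptor would use a different profile for each job and could rank the jobs quite differently.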
From page 311...
... Conceivably these two purposes could require different job descriptors for optimal clustering. Approaches to identifying the appropriate job descriptor for these purposes are discussed in a subsequent section of this paper.
From page 312...
... Thus, even if it were agreed that abilities required is the appropriate job descriptor for a given application, operationalizing ability as importance, frequency of use, contribution to variance in performance, or level required can lead to different conclusions about job similarity. It would seem logical to hypothesize that judgments about contributions to variance in job performance would be most appropriate for determining for which jobs a given test should have similar validity and that judgments about level required would be most appropriate for determining which jobs should have similar test cutoffs.
From page 313...
... Jobs very similar in the amount of time spent on the PAQ dimension "processing information" may be very different in the level of information processing involved. In short, it is suggested that careful attention be paid to both the selection of the job descriptor and to the operationalization of job element importance.
From page 314...
... To understand validity generalization, it is helpful to distinguish between "true validity" and "observed validity." True validity is the correlation that would be obtained with an infinitely large sample that is perfectly representative of the applicant pool of interest and with a criterion measure that is a perfectly reliable measure of true job performance. Observed validity is the correlation obtained in actual research, typically with smaller Ns than preferred, with samples that may not be perfectly representative of the job applicant population, and with less than perfect criterion measures (e.g., supervisory ratings of performance)
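The link between the two can be illustrated with the standard artifact corrections used in this literature. The sketch below applies the classical attenuation correction for criterion unreliability and the Thorndike Case II correction for direct range restriction; all numeric inputs are hypothetical, and operational validity generalization procedures treat these artifacts (and the order of correction) with more care.

```python
# Sketch linking observed validity to an estimate of true validity via the
# standard corrections for criterion unreliability and range restriction.
# All input values are hypothetical.
import math

r_obs = 0.25   # observed predictor-criterion correlation
r_yy  = 0.60   # criterion reliability (e.g., supervisory ratings)
u     = 0.70   # restricted / unrestricted predictor standard deviation

# Correct for criterion unreliability (attenuation formula).
r_unattenuated = r_obs / math.sqrt(r_yy)

# Correct for direct range restriction on the predictor (Thorndike Case II).
U = 1.0 / u
r_true_estimate = (U * r_unattenuated) / math.sqrt(1 + (U**2 - 1) * r_unattenuated**2)

print(f"estimated true validity: {r_true_estimate:.2f}")   # about 0.44 with these inputs
```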
From page 315...
... Furthermore, when the range restriction is severe, the extrapolation permitted by these assumptions is tenuous. A source of confusion in understanding and interpreting validity generalization/meta-analytic research lies in the failure to differentiate between two different statistical tests that can be performed on a set of validity coefficients; these are tests of the situational specificity hypothesis and the generalizability hypothesis.
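As a rough illustration of that distinction, the sketch below carries out the bare-bones Hunter-Schmidt bookkeeping: the variance in observed coefficients expected from sampling error alone is compared with the variance actually observed (bearing on situational specificity), and a 90 percent credibility lower bound is computed (bearing on generalizability). The coefficients and sample sizes are invented, and corrections for artifacts other than sampling error are omitted.

```python
# Bare-bones sketch of the two tests distinguished above. Validities and
# sample sizes are hypothetical; only sampling error is modeled.
import numpy as np

r = np.array([0.10, 0.35, 0.22, 0.42, 0.15])   # observed validities across jobs
n = np.array([120,  85,   150,  60,   200])    # per-study sample sizes

r_bar   = np.average(r, weights=n)                    # weighted mean validity
var_obs = np.average((r - r_bar) ** 2, weights=n)     # observed variance of r
var_err = (1 - r_bar ** 2) ** 2 / (n.mean() - 1)      # expected sampling-error variance
var_res = max(var_obs - var_err, 0.0)                 # residual ("true") variance

# Situational specificity: how much variance does sampling error explain?
print(f"variance accounted for by sampling error: {100 * var_err / var_obs:.0f}%")

# Generalizability: is the 90% credibility lower bound above zero?
print(f"90% credibility lower bound: {r_bar - 1.28 * np.sqrt(var_res):.2f}")
```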
From page 316...
... The one potentially important difference between the present set of validity studies and the cumulated literature on the validity of cognitive ability tests is the criterion used. Most validity generalization work to date has categorized studies as using training criteria (typically end-of-course knowledge test scores)
From page 317...
... Second, this approach presumes that it is sufficient to demonstrate positive nonzero validity; point estimates of true validity are not necessary. As discussed above, it is clear that the true validity of cognitive ability tests does vary across jobs; if one wishes to estimate true validity for MOS not included in the Job Performance Measurement Project, a system of relating variance in job descriptors to variance in validity coefficients is needed.
From page 318...
... Some is more sophisticated, such as concerns about the correction of validities for range restriction based on assumed rather than empirically determined measures of the degree of range restriction, or concerns about the statistical power of validity generalization procedures when applied to small numbers of validity coefficients (cf. Sackett et al., 1985). However, one indication of the degree of acceptance of validity generalization can be found in the 1987 Principles for the Validation and Use of Personnel Selection Procedures published by the Society for Industrial and Organizational Psychology, Division 14 of the American Psychological Association: "Current research has shown that the differential effects of numerous variables are not so great as heretofore assumed; much of the difference in observed outcomes of validation research can be attributed to statistical artifacts....
From page 319...
... For a number of jobs, validity coefficients for a given predictor/criterion combination are obtained. Information about each job is obtained through a structured job analysis questionnaire; job dimension scores are derived from this questionnaire and then used as predictors of the validity coefficients for each job.
From page 320...
... predictors of the validity coefficients. The job components used by McCormick are derived from the Position Analysis Questionnaire (PAQ), a 187-item structured worker-oriented job analysis instrument (McCormick et al., 1972).
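Stripped of the specifics of the PAQ, the general scheme can be sketched as a regression of observed validity coefficients on job dimension scores. The dimension scores, validities, and the dimension profile of the "new" job below are hypothetical.

```python
# Sketch: job dimension scores used as predictors of validity coefficients,
# then applied to a job with no criterion data. All values are hypothetical.
import numpy as np

# Rows = jobs with validity studies; columns = two job dimension scores.
X = np.array([
    [3.1, 1.2],
    [2.4, 2.8],
    [4.2, 0.9],
    [1.8, 3.4],
    [3.6, 2.0],
    [2.9, 1.6],
])
r = np.array([0.24, 0.19, 0.30, 0.16, 0.26, 0.23])   # observed validity per job

# Ordinary least squares with an intercept term.
X1 = np.column_stack([np.ones(len(r)), X])
beta, *_ = np.linalg.lstsq(X1, r, rcond=None)

# Estimate validity for a job outside the validation sample from its dimension scores.
new_job = np.array([1.0, 3.0, 1.5])   # leading 1.0 is the intercept term
print(f"predicted validity for the new job: {new_job @ beta:.2f}")
```

With only a handful of jobs such an equation is unstable, of course; the approach presupposes validity data on enough jobs to support the regression.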
From page 321...
... One possible explanation for McCormick et al.'s lack of success in predicting validity coefficients using PAQ dimensions is that PAQ dimensions do not constitute the most appropriate job descriptor for this purpose. Consider the array of job descriptors discussed in an earlier section of this paper: specific behaviors, general behaviors, ability requirements, and global descriptors.
From page 322...
... Commonality of underlying abilities required leads to similar validity despite lack of overlap in specific behaviors performed. This leads to the hypothesis that more general approaches, namely, ability requirements or general behaviors, are better suited to this purpose. Second, successful attempts at identifying moderators of validity across diverse jobs have used general rather than molecular job descriptors.
From page 323...
... It is critical to note that the importance of various criterion constructs is a policy issue as well as a scientific one, and to take issue with the notion that there is such a thing as "true" overall performance. Consider, for example, two potential dimensions of military job performance: current job knowledge and performance under adverse conditions.
From page 324...
... Note that in the J-coefficient literature there is evidence that job incumbent judgments of test-criterion relationships are predictive of empirical validity results, suggesting that work experience, rather than test validation experience, may be the critical factor. Thus there is some basis for positing both that experienced psychologists will be able to estimate validity for a wide variety of predictor-criterion combinations and that experienced nonpsychologists, such as job incumbents
From page 325...
... Paired comparison judgments could be obtained from psychologists for 20 of the 27 MOS in the Job Performance Measurement Project data base. These judgments could be pooled across raters and scaled, and the scaling solution then compared with obtained validity coefficients from the project.
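One way such judgments could be pooled and scaled is the classical Thurstone Case V procedure sketched below; the MOS labels, judgment counts, and number of raters are hypothetical, and other scaling models could serve as well.

```python
# Sketch: pooling paired-comparison judgments across raters and deriving
# scale values (Thurstone Case V style). All counts are hypothetical.
import numpy as np
from scipy.stats import norm

jobs = ["MOS A", "MOS B", "MOS C"]
# wins[i, j] = number of raters judging job i as requiring more of the
# predictor construct than job j (diagonal unused).
wins = np.array([
    [0, 14, 17],
    [6,  0, 12],
    [3,  8,  0],
])
n_raters = 20

p = np.clip(wins / n_raters, 0.01, 0.99)   # proportions, bounded away from 0 and 1
z = norm.ppf(p)                            # transform proportions to normal deviates
np.fill_diagonal(z, 0.0)                   # self-comparisons contribute zero
scale = z.mean(axis=1)                     # Case V scale value = row mean of z

for job, s in sorted(zip(jobs, scale), key=lambda t: -t[1]):
    print(f"{job}: scale value {s:+.2f}")
```

The resulting scale values could then be correlated with, or regressed against, the obtained validity coefficients from the project.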
From page 326...
... With criterion data and a large sample size, a different type of approach is possible. Based on expert judgment, the minimum acceptable level of criterion performance is identified, and the regression equation relating predictor and criterion is used to identify the predictor score corresponding to this minimum level of acceptable performance.
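A minimal sketch of this cutoff-setting logic follows, assuming hypothetical predictor and criterion data and an assumed minimum acceptable performance level supplied by expert judgment.

```python
# Sketch: fit the predictor-criterion regression, then solve for the predictor
# score corresponding to a minimum acceptable criterion level. Data are hypothetical.
import numpy as np

predictor = np.array([45, 52, 60, 63, 71, 78, 84, 90])          # e.g., composite test scores
criterion = np.array([2.1, 2.4, 2.9, 3.0, 3.4, 3.8, 4.0, 4.5])  # rated job performance

# Fit criterion = a + b * predictor by least squares.
b, a = np.polyfit(predictor, criterion, 1)

# Minimum acceptable criterion level identified by expert judgment (assumed here).
min_acceptable = 3.0

# Invert the regression to obtain the corresponding predictor cutoff.
cutoff = (min_acceptable - a) / b
print(f"predictor cutoff for minimally acceptable performance: {cutoff:.1f}")
```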
From page 327...
... However, the general strategy of using job information (e.g., PAQ dimensions) to predict needed predictor construct scores can be applied by substituting the regression-based predictor score corresponding to the needed minimum level of criterion performance for the mean predictor score.
From page 328...
... The analysis of correlations between ASVAB and hands-on measures is expected, at least by this author, to produce a pattern of findings similar to meta-analyses of cognitive ability tests using training or rating criteria; the expected strong relationship between hands-on measures and training criteria provides a link to the larger body of validity studies using training criteria. If point estimates of validity are needed, a number of possibilities have been proposed: synthetic validity, direct estimation of validity, and paired comparison judgments of job similarity.
From page 329...
... The synthetic validity approaches discussed in this paper offer the needed criterion. The magnitude of differences on an ability requirement scale needed to produce a change in cutoff score of a given magnitude can be determined, and then used to guide clustering decisions.
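A small sketch of that calculation, assuming hypothetical ability-requirement ratings, cutoff scores, and clustering tolerance, is given below.

```python
# Sketch: how large a difference on an ability-requirement scale corresponds
# to a given change in cutoff score, used as a clustering criterion.
# Ratings, cutoffs, and the tolerance are hypothetical.
import numpy as np

ability_rating = np.array([3.2, 4.1, 4.8, 5.5, 6.0])   # mean rated ability requirement per job
cutoff_score   = np.array([92,  97, 101, 105, 108])    # regression-based cutoff per job

slope, intercept = np.polyfit(ability_rating, cutoff_score, 1)
print(f"cutoff change per scale point: {slope:.1f}")

# Cluster two MOS together if their implied cutoffs differ by less than a tolerance.
tolerance = 3.0                  # maximum acceptable cutoff difference within a cluster
rating_a, rating_b = 4.1, 4.6    # ability ratings for two MOS under consideration
implied_difference = abs(slope * (rating_a - rating_b))
print(f"cluster together: {implied_difference < tolerance}")
```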
From page 330...
... Hunter, J.E. 1980 Test validation for 12,000 jobs: an application of synthetic validity and validity generalization to the GATB.
From page 331...
... 1955 Test Selection by Job Analysis (Technical test series, No.
From page 332...
... 1982 Synthetic validity and its application to the uniform guidelines validation requirements. Personnel Psychology 35:383-397.

