
4 Analytic Issues
Pages 41-54

The Chapter Skim interface presents the single passage we've algorithmically identified as most significant on each page of the chapter.


From page 41...
... adjust for students' starting level of achievement using prior test scores, but they do so in different ways. Some also adjust for student characteristics and school context variables.
From page 42...
... As workshop presenter Dale Ballou explained, to get around the problem of nonrandom assignment, value-added models adjust for preexisting differences among students using their starting levels of achievement. Sometimes a gain score model is used, so the outcome measure is students' growth from their own starting point a year prior; sometimes prior achievement is included as a predictor or control variable in a regression or analysis of covariance; and some models use a more extensive history of student test scores as control variables, as in William Sanders's work.
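The two adjustment strategies described above can be contrasted in a minimal sketch on synthetic data. All variable names and the simulated scores here are illustrative assumptions, not data or models from the workshop:

```python
# Sketch of two common value-added adjustments (illustrative data only).
import numpy as np

rng = np.random.default_rng(0)
n = 500
prior = rng.normal(50, 10, n)            # prior-year test score
teacher_effect = rng.normal(0, 2, n)     # true (unobserved) teacher contribution
current = 5 + 0.9 * prior + teacher_effect + rng.normal(0, 3, n)

# (1) Gain-score model: the outcome is each student's growth
#     from his or her own starting point a year prior.
gain = current - prior

# (2) Covariance-adjustment model: prior achievement enters as a
#     predictor (control variable) in a regression on current scores.
X = np.column_stack([np.ones(n), prior])
beta, *_ = np.linalg.lstsq(X, current, rcond=None)
residual = current - X @ beta            # variation left to attribute to teachers et al.
```

The models using a more extensive score history simply extend version (2) with additional prior-year columns in `X`.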
From page 43...
... The contributions of these factors, positive or negative, may end up being attributed to the teacher. Dan McCaffrey noted that most statistical models that have been used in practice have tended not to include student- or context-level predictor variables, such as race or socioeconomic status measures.
From page 44...
... fourth grade scores. Similarly, if the children in the advantaged schools do well on both the third and fourth grade tests, in part because such schools are able to hire better teachers, then, depending on the approach used, the model may attribute too much of the high fourth grade scores to the "quality of the students" reflected in the third grade scores and too little to the quality of the fourth grade teachers.
From page 45...
... Small sample sizes are more of a challenge for value-added models that seek to measure teacher effects rather than school effects. This is because estimates of school effects tend to be derived from test score data of hundreds of students, whereas estimates of teacher effects are often derived from data for just a few classes.
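The precision gap between school-level and teacher-level estimates follows directly from sample size: the standard error of a mean shrinks with the square root of n. The numbers below are illustrative assumptions, not figures from the workshop:

```python
# Sketch: why teacher-level estimates are noisier than school-level ones.
import math

sd = 10.0          # assumed student-level score standard deviation
school_n = 400     # hundreds of students feed a school estimate
teacher_n = 50     # a couple of classes feed a teacher estimate

se_school = sd / math.sqrt(school_n)    # 0.5
se_teacher = sd / math.sqrt(teacher_n)  # ~1.41, nearly 3x the school-level error
```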
From page 46...
... This year-to-year variability generally accounted for a much larger share of the variation in effects for elementary school teachers than for middle school teachers (perhaps because middle school teachers typically teach many more students in a single year than elementary teachers do)
From page 47...
... (This problem applies to using status test score data for teacher evaluation as well.) Complexity Versus Transparency Value-added models range from relatively simple regression models to extremely sophisticated models that require rich databases and state-of-the-art computational procedures.
From page 48...
... For example, most current tests are scored using item response theory, which is also very complex. However, test users generally accept the reported test scores, even though they do not fully understand the mathematical intricacies through which they are derived (i.e., the process for producing raw scores, scale scores, and equating the results to maintain year-to-year comparability)
From page 49...
... Indeed, he found that, for example, fifth grade teachers were nearly as strongly linked statistically to their students' fourth grade scores as were the students' fourth grade teachers. Rothstein also found that the relationship between current teachers and prior gains differs by time span: that is, the strength of the statistical association of the fifth grade teacher with fourth grade gains differs from that with third grade gains.
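The logic of this falsification check can be illustrated in a toy sketch: if current-teacher assignment is unrelated to students' prior gains, teacher indicators should explain almost none of the variation in those gains. The simulation below (an assumption for illustration, not Rothstein's actual econometrics) builds in strong tracking, so the association is large:

```python
# Toy illustration of a falsification check: can current-teacher
# indicators "explain" students' PRIOR gains? They should not.
import numpy as np

rng = np.random.default_rng(1)
n, k = 300, 10
prior_gain = rng.normal(0, 1, n)

# Simulate tracking: students are sorted by prior gains and assigned
# to teachers in blocks, violating random assignment.
ranks = prior_gain.argsort().argsort()   # each student's rank, 0..n-1
teacher = ranks // (n // k)              # perfectly tracked classrooms

# R^2 from regressing prior gains on current-teacher indicators:
group_means = np.array([prior_gain[teacher == t].mean() for t in range(k)])
fitted = group_means[teacher]
r2 = 1 - ((prior_gain - fitted) ** 2).sum() / ((prior_gain - prior_gain.mean()) ** 2).sum()
# Under random assignment r2 would be near zero; under tracking it is large.
```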
From page 50...
... Although Rothstein's study was intended to test the specification of the econometric models, it has important implications for the interpretation of estimates from statistical models as well, because dynamic classroom assignment would also violate the assumptions that Lockwood and McCaffrey (2007) establish for allowing causal interpretation of statistical model estimates.
From page 51...
... Using student fixed effects captures all unchanging (time-invariant) student characteristics and thus eliminates selection bias stemming from the student characteristics not included in the model, provided that the model is otherwise properly specified.
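The mechanics of student fixed effects can be shown with within-student demeaning on synthetic panel data (an illustrative sketch; the simulated "ability" variable is an assumption standing in for any time-invariant characteristic):

```python
# Sketch: student fixed effects via within-student demeaning.
import numpy as np

rng = np.random.default_rng(2)
students, years = 100, 4
ability = rng.normal(0, 5, students)              # time-invariant student trait
score = ability[:, None] + rng.normal(0, 1, (students, years))

# Demeaning each student's scores across years removes ALL time-invariant
# characteristics, observed or not, provided the model is otherwise
# properly specified.
within = score - score.mean(axis=1, keepdims=True)
```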
From page 52...
... If there is more information on some teachers, then those on whom there is less information will have less precisely estimated teacher effects, and these estimated effects will be shrunk more. Such teachers will rarely be found in the extreme tails of the distribution of value-added estimates.
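The shrinkage behavior described above can be sketched with an empirical-Bayes-style weighting; the variance figures are illustrative assumptions, not estimates from any real model:

```python
# Sketch: less precise teacher estimates are shrunk harder toward the mean.
tau2 = 4.0  # assumed variance of true teacher effects (illustrative)

def shrink(raw_estimate, sampling_var, grand_mean=0.0):
    """Pull a noisy estimate toward the grand mean by its reliability."""
    w = tau2 / (tau2 + sampling_var)   # reliability weight in [0, 1]
    return grand_mean + w * (raw_estimate - grand_mean)

# A teacher estimated from little data (large sampling variance) is
# shrunk more than one estimated from many students:
small_class = shrink(6.0, sampling_var=8.0)   # w = 1/3 -> 2.0
big_class = shrink(6.0, sampling_var=1.0)     # w = 0.8 -> 4.8
```

Because heavily shrunk estimates sit near the grand mean, such teachers rarely appear in the extreme tails of the distribution.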
From page 53...
... • What are the effects of violations of model assumptions on the accuracy of value-added estimates? For example, what are the effects on accuracy of not meeting assumptions about the assignment of students to classrooms, the characteristics of the missing data, and the needed sample sizes?
From page 54...
... But most thought that the degree of precision and stability does seem sufficient to justify low-stakes uses of value-added results for research, evaluation, or improvement when there are no serious consequences for individual teachers, administrators, or students.

