2 Nonresponse Bias
Pages 40-50

From page 40...
... Most important, nonresponse creates the potential for bias in estimates, in turn affecting survey design, data collection, estimation, and analysis. We discuss the issue of nonresponse bias in this chapter, as well as the relation of nonresponse bias to nonresponse rates.
From page 41...
... means only that the potential for nonresponse bias has increased, not necessarily that nonresponse bias has become more of a problem. That is because nonresponse bias is a function of both the nonresponse rate and the difference between respondents and nonrespondents on the statistic of interest. High nonresponse rates could therefore yield low nonresponse errors if the difference between respondents and nonrespondents is quite small or, in survey methodology terms, if nonresponse in the survey is ignorable and the data can be used to make valid inferences about the target population.
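The two components can be written together in a single expression. A standard deterministic formulation (a sketch consistent with the text above, not a quotation from the report) gives the bias of the unadjusted respondent mean as the product of the nonresponse rate and the respondent-nonrespondent difference:

    % \bar{y}_r: mean among the r respondents; \bar{y}_m: mean among the m nonrespondents;
    % \bar{y}_n: mean over the full sample of n = r + m cases; m/n: nonresponse rate.
    \[
      \operatorname{bias}(\bar{y}_r) \;=\; \bar{y}_r - \bar{y}_n \;=\; \frac{m}{n}\,\bigl(\bar{y}_r - \bar{y}_m\bigr)
    \]

Under this expression, a high nonresponse rate m/n produces little bias whenever respondents and nonrespondents are nearly alike on the statistic of interest, which is the ignorable case described above.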
From page 42...
... • The Belgian National Health Interview Survey, with a response rate of 61.4 percent, obtained a 19 percent lower estimate for reporting poor health than did the Belgian census, which had a response rate of 96.5 percent (Lorant et al., 2007).
From page 43...
... They ascribed this anomaly to social processes that determine survey participation, finding that people who do volunteer work respond to surveys at higher rates than people who do not. As a result, surveys with lower response rates will usually have higher proportions of volunteers among their respondents.
From page 44...
... On the one hand, survey managers have intensified data collection activities in order to improve response rates. On the other hand, the trend toward higher survey costs has led to shortcuts, shortened collection periods, and cheaper modes, all of which have had effects on survey response rates.
From page 45...
... Current challenges associated with paradata include identifying which data elements to collect and how to organize such data structures so that they aid both data collection and the creation of post-survey adjustments. One use of paradata has been to gain a better understanding of the characteristics of "converted" respondents (i.e., those who were persuaded to take part after refusing initially).
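As a purely illustrative sketch, call-record paradata might be organized as a sequence of contact-attempt outcomes per case and scanned for an initial refusal that is later followed by a completed interview; the field names and outcome codes below are hypothetical assumptions, not drawn from the workshop material.

    # Hypothetical call-record paradata: one list of contact-attempt outcomes per case.
    # Outcome codes ("noncontact", "refusal", "complete") are illustrative only.
    from typing import Dict, List

    call_records: Dict[str, List[str]] = {
        "case_001": ["noncontact", "refusal", "complete"],  # converted respondent
        "case_002": ["complete"],                           # cooperated at first contact
        "case_003": ["refusal", "refusal"],                 # final refusal
    }

    def is_converted(outcomes: List[str]) -> bool:
        """A case counts as 'converted' if a refusal precedes a completed interview."""
        if "complete" not in outcomes:
            return False
        return "refusal" in outcomes[: outcomes.index("complete")]

    converted = [case for case, outcomes in call_records.items() if is_converted(outcomes)]
    print(converted)  # -> ['case_001']

Once flagged this way, converted respondents can be compared with first-contact respondents on frame or paradata characteristics, which is the kind of analysis the excerpt describes.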
From page 46...
... government, this interest has been driven, in part, by Office of Management and Budget requirements that "sponsoring agencies conduct nonresponse bias analyses when unit or item response rates or other factors suggest the potential for bias to occur" (Office of Management and Budget, 2006, p. 8, italics added).
From page 47...
... Most important, its use in field operations may distort data collection practices, for example, by suggesting that interviewers should attempt to interview the remaining cases with the highest response propensity, which is not necessarily a strategy that will reduce bias. Because the nonresponse rate can be such a poor predictor of bias, researchers have turned to developing new metrics for depicting the risk of nonresponse bias.
From page 48...
... R-indicators can be monitored during data collection to permit survey managers to direct effort to cases with lower response propensities and, in so doing, to reduce the variability among subgroup response rates. The indicator can be computed while collection is under way because the response propensities can be estimated from frame information that is available for respondents and nonrespondents alike.
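For reference, the R-indicator discussed here is commonly defined in the survey methodology literature as R = 1 - 2*S(rho), where S(rho) is the standard deviation of the response propensities: R equals 1 when every case is equally likely to respond and falls as propensities become more variable. A minimal monitoring sketch, with made-up propensity values, might look like this:

    import statistics

    def r_indicator(propensities):
        """R-indicator R = 1 - 2 * S(rho), where S(rho) is the standard deviation
        of the estimated response propensities. A value of 1 indicates equally
        likely response across cases; smaller values indicate a less
        representative response."""
        return 1.0 - 2.0 * statistics.pstdev(propensities)

    # Illustrative propensities estimated from frame variables partway through collection.
    estimated_propensities = [0.62, 0.55, 0.48, 0.71, 0.40, 0.58]
    print(round(r_indicator(estimated_propensities), 3))

In practice the propensities would come from a response propensity model fit on frame (and paradata) variables and refreshed as fieldwork progresses, so the indicator can be tracked over time or used to prioritize low-propensity cases.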
From page 49...
... [Figure caption] ... for Current Population Survey cohorts by months in sample. NOTE: Top line includes 95 percent confidence interval error bars around month-in-sample (MIS) ...
From page 50...
... Such indicators are promising, but, as Wagner reminded the committee, a research agenda on alternative indicators of bias should include research on the behavior of different measures in different settings, the bounds on nonresponse bias under different assumptions (especially non-MAR), how different indicators influence data collection strategies, and how to design or build better frames and paradata.

