

4 Scientific Criteria for Recommended Measures
Pages 63-70



From page 63...
... The panel is not aware of unique criteria that specifically address testing and evaluation of demographic measures collected in clinical settings, but we found little reason to use different evaluation criteria to assess these measures across these settings. Nor is the panel aware of unique criteria for evaluating measures collected for administrative records.
From page 64...
... ;
• cognitive interviews, in which a small set of respondents is interviewed to discuss in detail their thought processes as they interpreted and responded to potential items (see, e.g., Willis, 2005; Desimone and Le Floch, 2004);
• respondent debriefings, in which respondents are provided with additional information about the data collection process and asked to provide feedback on specific questions after they have completed the instrument (see, e.g., Campanelli, Martin, and Rothgeb, 1991)
From page 65...
... Behavior coding and field pretests reveal problems with interviewer administration as well as with respondent comprehension. Split-ballot experiments produce indicators used to evaluate different question wordings, such as differential item nonresponse rates, refusal and don't-know rates, response bias, and response distributions.
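As an illustration only, the short sketch below shows one way such split-ballot indicators might be computed; the data, column names, and response codes are hypothetical and are not drawn from the report.

```python
# Minimal sketch: split-ballot evaluation indicators (hypothetical data).
import pandas as pd
from scipy.stats import chi2_contingency

# Each row is one respondent: the question wording they received ("A" or "B")
# and their coded answer (a substantive category, "refused", "dont_know", or missing).
df = pd.DataFrame({
    "ballot": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "answer": ["straight", "refused", "gay", None,
               "straight", "dont_know", "bisexual", "straight"],
})

# Item nonresponse, refusal, and don't-know rates by ballot arm.
rates = df.groupby("ballot")["answer"].agg(
    item_nonresponse_rate=lambda s: s.isna().mean(),
    refusal_rate=lambda s: (s == "refused").mean(),
    dont_know_rate=lambda s: (s == "dont_know").mean(),
)
print(rates)

# Compare substantive response distributions between the two wordings.
substantive = df[df["answer"].notna() & ~df["answer"].isin(["refused", "dont_know"])]
table = pd.crosstab(substantive["ballot"], substantive["answer"])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
```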
From page 66...
... . Adjustments to existing well-tested measures that appear on prominent national surveys, such as the National Health Interview Survey sexual orientation identity item, are often proposed by minority communities as a way of giving the community voice, better representation, or legitimation within the data collection process.
From page 67...
... This can occur when a person fits the definition of a category or experience but does not recognize the terminology provided, finds the response options offensive, or is otherwise uncomfortable reporting an identity that is marginalized or stigmatized. Although it is almost impossible to entirely eliminate false positives and false negatives, careful pretesting of items through cognitive interviews and experimental studies that compare results from different wordings helps to minimize these misclassifications and improve data validity.
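To make the false-positive/false-negative framing concrete, the following minimal sketch compares an item's implied classifications with a hypothetical reference classification (for example, one established through in-depth follow-up interviews) and reports the two misclassification rates; all values are illustrative and not taken from the report.

```python
# Minimal sketch: misclassification rates for a candidate survey item.
# The reference classification and item responses below are hypothetical.

# 1 = belongs to the category of interest, 0 = does not (reference standard).
reference = [1, 1, 0, 0, 1, 0, 1, 0]
# Classification implied by responses to the candidate item's wording.
item      = [1, 0, 0, 1, 1, 0, 0, 0]

false_negatives = sum(r == 1 and i == 0 for r, i in zip(reference, item))
false_positives = sum(r == 0 and i == 1 for r, i in zip(reference, item))
positives = sum(reference)
negatives = len(reference) - positives

print(f"false-negative rate: {false_negatives / positives:.2f}")  # members missed by the item
print(f"false-positive rate: {false_positives / negatives:.2f}")  # non-members wrongly included
```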
From page 68...
...
Summary
Although we recognize that all kinds of data can inform public policy and community action, the statement of task stipulated that the panel's recommendations be focused on the types of information collected in population-based surveys, large-scale administrative contexts, and other data collection activities that track entire populations or large general samples, not just those that target sexual and gender minorities.
From page 69...
... provides consistent estimates when measured across data collection contexts; and 6. tested or previously administered with adequate performance using multiple administration modes (i.e., web-based, interviewer-administered, computer-assisted, and telephone administration)
From page 70...
... For clinical settings, the panel reviewed the available information on measures, including data collection guidance from a variety of sources: government agencies, such as the Centers for Disease Control and Prevention and the National Institutes of Health, as well as research and practice from public and private health care organizations. We did not find any reason to modify our recommended measures for data collection in this setting.

