7 Analysis Techniques for Small Population Research
Pages 87-102

From page 87...
... He broke his presentation into the following parts: informative analysis, a definition of small in a data context, an explanation of the finite population correction factor, and design and measurement qualities that optimize research when samples are small. He concluded with a discussion of multivariate possibilities that might be applicable in small sample situations.
From page 88...
... Because small sample data analyses require compromises, it is difficult to justify a small-sample analysis in situations where it would be possible to do better.

Finite Population Correction

In a small sample situation, he said, and in particular when sample size is constrained by population size, one potential approach for increasing the power of statistical tests is to use the finite population correction.
From page 89...
... He explored the situation in which the sample size varied from 10 to 175, noting that the standard error associated with each sample is equal to 10 times the finite population correction. In this case, the finite population corrected standard error ranged from 9.77 to 3.54.
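The quoted range can be reproduced with a short sketch of the finite population correction, fpc = sqrt((N - n) / (N - 1)). The population size N = 200 is an inference, not stated in the excerpt: it is the value that yields both 9.77 (at n = 10) and 3.54 (at n = 175) when the uncorrected standard error is 10.

```python
import math

def fpc(N, n):
    """Finite population correction factor: sqrt((N - n) / (N - 1))."""
    return math.sqrt((N - n) / (N - 1))

def corrected_se(se, N, n):
    """Standard error scaled by the finite population correction."""
    return se * fpc(N, n)

# Assuming a population of N = 200 (inferred), with an uncorrected SE of 10:
for n in (10, 175):
    print(n, round(corrected_se(10.0, 200, n), 2))
# 10 -> 9.77, 175 -> 3.54, matching the range quoted in the text
```

As the sample approaches the whole population, the correction shrinks the standard error toward zero, which is what makes it attractive when sample size is capped by population size.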
From page 90...
... He noted that analytic models that allow the researcher to include all the data available in every analysis are preferred, and there are many accessible methods of dealing with missing data that create the possibility for leveraging the data that have been provided. Examples include multiple imputation and model-based methods such as full information maximum likelihood in structural equation modeling.
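The multiple imputation idea mentioned above can be illustrated with a minimal hot-deck sketch: fill each missing value by drawing from the observed values, repeat m times, analyze each completed dataset, and pool the results with Rubin's rules. This is a simplified stand-in for the speaker's examples, not a description of any specific software.

```python
import random
import statistics

def multiply_impute_mean(data, m=20, seed=0):
    """Hot-deck multiple imputation of a univariate mean, pooled by
    Rubin's rules.  Missing values are marked with None."""
    rng = random.Random(seed)
    observed = [x for x in data if x is not None]
    estimates, variances = [], []
    for _ in range(m):
        # Fill each missing slot with a random draw from the observed values.
        completed = [x if x is not None else rng.choice(observed) for x in data]
        estimates.append(statistics.mean(completed))
        variances.append(statistics.variance(completed) / len(completed))
    q_bar = statistics.mean(estimates)       # pooled point estimate
    w = statistics.mean(variances)           # within-imputation variance
    b = statistics.variance(estimates)       # between-imputation variance
    total_var = w + (1 + 1 / m) * b          # Rubin's total variance
    return q_bar, total_var
```

The key property is that the between-imputation variance term b carries the extra uncertainty due to the missing data, which single imputation would hide.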
From page 91...
... Hoyle noted that multilevel modeling assumes continuous measures, four to eight predictor variables, no missing data, two or fewer cluster-level random effects, at least five observations per cluster, and an intraclass correlation of about 0.2. For multilevel modeling, small might be considered fewer than 40 clusters.
From page 92...
... Hoyle noted an interesting, just-published application.3 It uses the finite population correction factor in a multilevel model when the level-two variable represents a small cluster population. Interestingly, the authors argue that the usual choice facing people using multilevel modeling is between a fixed effects model and a random effects model.
From page 93...
... Hoyle pointed to person-level dynamic modeling in a meta-analysis of social skills interventions for children with autism spectrum disorder.5 These models incorporate time to allow modeling of intra-individual change over time and use lagged covariance matrices that permit modeling of within-lag covariances between variables, autoregressive covariances (for stability), and cross-lagged covariances (for prospective relationships between variables).
From page 94...
... However, if the direct estimate has high variance relative to the regression estimate, the weight on the regression estimate will be higher.

Examples of Bayesian Approaches

Louis's first example was estimating the prevalence of modern contraceptive practices in Uganda and other African countries using a sample survey, in which each woman reported her contraceptive use (yes or no)
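The weighting rule described above can be stated as a minimal precision-weighted composite. This is a sketch of the general principle only; the Bayesian procedures discussed in the text involve full prior and posterior machinery beyond this two-term combination.

```python
def composite_estimate(direct, var_direct, regression, var_regression):
    """Precision-weighted combination of a direct survey estimate and a
    regression (model-based) estimate.  The weight on the regression
    estimate grows as the direct estimate's variance grows."""
    w = var_direct / (var_direct + var_regression)  # weight on regression
    return (1 - w) * direct + w * regression

# A precise direct estimate dominates; a noisy one defers to the regression:
print(composite_estimate(10.0, 1.0, 20.0, 99.0))   # stays near 10
print(composite_estimate(10.0, 99.0, 20.0, 1.0))   # moves near 20
```

Because the weights are inverse to variance, the composite has lower variance than either input used alone, which is the motivation for the small area estimates discussed below.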
From page 95...
... Louis also noted that another, perhaps underappreciated and underused, aspect of empirical Bayesian approaches is to stabilize variance estimates themselves. He noted that this is less controversial than stabilizing the direct estimates but is used relatively infrequently.
From page 96...
... . Constrained maximum likelihood estimation for model calibration using summary-level information from external big data sources (with discussion)
From page 97...
... The combination of the survey estimates and regression estimates using a Bayesian procedure produces estimates with lower mean squared error than would be possible using the survey data alone or regression estimates alone. SAIPE is an important example of the use of a Bayesian approach to produce small area estimates.12 The Bayesian approach to stabilizing estimates can be attractive, but Louis cited Normand and colleagues (2016)
From page 98...
... She noted that a researcher may want to estimate the size of a population to assess the existence or magnitude of a health issue experienced by the population, assess how resources should be allocated for program planning and management, aid other estimation methods, or assess population dynamics. A key issue in working with hidden populations is maintaining their confidentiality and privacy.
From page 99...
... In the second sample, researchers count the number of people who received the memorable item during the first sample; this count is the overlap. The size of the hidden population is estimated as the product of the sizes of the two samples divided by the number of individuals in the overlap.
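The estimator described here is the classical two-sample capture-recapture (Lincoln-Petersen) formula, which is simple enough to state directly:

```python
def lincoln_petersen(n1, n2, overlap):
    """Two-sample capture-recapture estimate of population size:
    (size of sample 1) x (size of sample 2) / (number seen in both)."""
    if overlap == 0:
        raise ValueError("no overlap between samples; estimate is undefined")
    return n1 * n2 / overlap

# E.g., samples of 200 and 150 with 30 people in both suggest ~1,000 total.
print(lincoln_petersen(200, 150, 30))  # 1000.0
```

The estimate assumes the two samples are independent and every member of the hidden population is equally likely to be captured, assumptions that are often strained in practice.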
From page 100...
... Obtaining the initial data source may be a challenge. Network Scale-up McLaughlin explained different variations of network scale-up methods and ongoing research into new variations.16 The general procedure involves asking, in a general population survey, how many people each individual knows and how many of those are in the hidden population.
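The general procedure just described reduces, in its basic form, to scaling the share of respondents' contacts who belong to the hidden population up to the total population. This is a sketch of the simplest network scale-up estimator only; the variations McLaughlin discussed adjust for effects (such as transmission and barrier effects) that this version ignores.

```python
def network_scale_up(known_hidden, network_sizes, total_population):
    """Basic network scale-up estimator.

    known_hidden[i]  -- how many members of the hidden population
                        respondent i reports knowing
    network_sizes[i] -- respondent i's total number of contacts
    """
    return total_population * sum(known_hidden) / sum(network_sizes)

# Hypothetical survey: 3 respondents, 400 total contacts, 3 hidden-population
# contacts, in a city of 1,000,000 -> estimate of 7,500.
print(network_scale_up([2, 1, 0], [100, 200, 100], 1_000_000))  # 7500.0
```

Estimating each respondent's network size is itself a survey problem, which is one reason the method remains an active research area.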
From page 101...
... . Estimating the size of hidden populations using respondent-driven sampling data: Case examples from Morocco.
From page 102...
... Marc Elliott (RAND) commented that the finite population correction (fpc)

