
The 2000 Census: Counting Under Adversity (2004)

Suggested Citation:"6 The 2000 Coverage Evaluation Program." National Research Council. 2004. The 2000 Census: Counting Under Adversity. Washington, DC: The National Academies Press. doi: 10.17226/10907.

CHAPTER 6
The 2000 Coverage Evaluation Program

We turn now to the final Revision II estimates of the population from the 2000 Accuracy and Coverage Evaluation (A.C.E.) Program. By early 2003 the Census Bureau had completed extensive reanalyses of the evaluation data sets used for the October 2001 preliminary revised A.C.E. estimates of net undercount. It released a new set of detailed estimates based on the original A.C.E. and evaluation data—the A.C.E. Revision II estimates—at a joint public meeting of our panel and the Panel on Research on Future Census Methods on March 12, 2003 (see http://www.census.gov/dmd/www/Ace2.html [12/22/03]). These estimates showed a small net overcount of 0.5 percent of the total population instead of a net undercount of 1.2 percent as originally estimated from the A.C.E. in March 2001. The latest demographic analysis estimates still showed a net undercount of the population, but it was negligible (0.1 percent) (Robinson, 2001b:Table 2).

At the joint panel meeting Census Bureau officials announced the Bureau’s recommendation, accepted by the secretary of commerce, to produce postcensal population estimates on the basis of the official 2000 census results. The A.C.E. Revision II estimates would not be used as the basis for developing estimates throughout the decade of the 2000s. The decision document (U.S. Census Bureau, 2003b:1) stated:

The Accuracy and Coverage Evaluation (A.C.E.) Revision II methodology represents a dramatic improvement from the previous March 2001 A.C.E. results. However, several technical errors remain, including uncertainty about the adjustment for correlation bias, errors from synthetic estimation, and inconsistencies between demographic analysis estimates and the A.C.E. Revision II estimates of the coverage of children. Given these technical concerns, the Census Bureau has concluded that the A.C.E. Revision II estimates should not be used to change the base for intercensal population estimates.

With the final decision not to adjust population estimates for measured net undercount in the 2000 census behind it, the Census Bureau announced its intention to focus exclusively on planning for the 2010 census. Plans for that census include work on possibly using computerized matching of the type conducted for the October 2001 and March 2003 adjustment decisions to eliminate duplicate enumerations as part of the census process itself. Bureau officials also expressed the view that coverage evaluation could not be completed and evaluated in a sufficiently timely fashion to permit adjustment of the data used for legislative redistricting (Kincannon, 2003).

In this chapter we assess the A.C.E. Revision II estimation methodology and the resulting estimates of population coverage in the 2000 census. We first review key aspects of the original A.C.E. Program (6-A) and then review the data sources, methods, and results of the A.C.E. Revision II effort (6-B). We discuss two kinds of enumerations that had more impact on coverage in 2000 than in 1990: (1) whole-person (including whole-household) imputations and (2) duplicate enumerations in the census and the A.C.E. (6-C). Section 6-D provides an overall summary of what we know and do not know about population coverage in 2000. Section 6-E provides our recommendations for coverage evaluation research and development for 2010.

6–A ORIGINAL A.C.E. DESIGN AND OPERATIONS

An important part of evaluating the Revision II A.C.E. population estimates for 2000 is to consider how well the original A.C.E. Program was designed and executed to produce the underlying input data. We consider below 10 aspects of the original A.C.E., followed by a summary of findings in Section 6-A.11:

  1. basic design;

  2. conduct and timing;

  3. definition of the P-sample—treatment of movers;

  4. definition of the E-sample—exclusion of “insufficient information” cases;

  5. household noninterviews in the P-sample;

  6. imputation for missing data in the P-sample and E-sample;

  7. accuracy of household residence information in the P-sample and E-sample;

  8. quality of matching;

  9. targeted extended search; and

  10. poststratification.

6–A.1 Basic Design

The design of the 2000 A.C.E. was similar to the 1990 Post-Enumeration Survey (PES). The goal of each program was to provide a basis for estimating two key components of the dual-systems estimation (DSE) formula for measuring net undercount or overcount in the census (see Section 5-A). They are:

  1. the match rate, or the rate at which members of independently surveyed households in a sample of block clusters (the P-sample) matched to census enumerations, calculated separately for population groups (poststrata) and weighted to population totals, and

  2. the correct enumeration rate, or the rate at which census enumerations in the sampled block clusters (the E-sample) were correctly included in the census (including both matched cases and nonmatched correct enumerations), calculated separately for poststrata and weighted to population totals.1

1 The E-sample by design excluded some census cases in the A.C.E. block clusters (see Appendix E.1.e and E.3).


Other things equal, the higher the match rate, the lower will be the DSE population estimate and the estimated net undercount in the census. Conversely, the more nonmatches, the higher will be the DSE population estimate and the estimated net undercount. In contrast, the higher the correct enumeration rate, the higher will be the DSE population estimate and the estimated net undercount. Conversely, the more erroneous enumerations, the lower will be the DSE population estimate and the estimated net undercount (for how this result obtains, refer to Equation 5.1 in Section 5-A).
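The relationships described above can be sketched numerically. The following is a minimal illustration of the dual-systems estimator of Equation 5.1 with hypothetical inputs (the counts and rates below are not actual A.C.E. figures):

```python
# Sketch of the dual-systems estimator (DSE) for one poststratum:
# the data-defined census count (total count minus "insufficient
# information" cases), scaled by the E-sample correct enumeration rate
# and the inverse of the P-sample match rate. Inputs are hypothetical.

def dual_systems_estimate(census_count, insufficient_info, ce_rate, match_rate):
    """DSE = (census count - IIs) * correct enumeration rate / match rate."""
    return (census_count - insufficient_info) * ce_rate / match_rate

base = dual_systems_estimate(1_000_000, 30_000, ce_rate=0.955, match_rate=0.92)

# Other things equal, a higher match rate lowers the DSE...
assert dual_systems_estimate(1_000_000, 30_000, 0.955, 0.93) < base
# ...while a higher correct enumeration rate raises it.
assert dual_systems_estimate(1_000_000, 30_000, 0.960, 0.92) > base
```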

The A.C.E. and PES design and estimation focused on estimating the net undercount and not on estimating the numbers or types of gross errors of erroneous enumerations (overcounts) or of gross omissions. There are no widely accepted definitions of components of gross error, even though such errors are critically important to analyze in order to identify ways to improve census operations. Some types of gross errors depend on the level of geographic aggregation. For example, assigning a census household to the wrong small geographic area (geocoding error) is an erroneous enumeration for that area (and an omission for the correct area), but it is not an erroneous enumeration (or an omission) for larger areas. Also, the original A.C.E. design, similar to the PES, did not permit identifying duplicate census enumerations as such outside a ring or two of blocks surrounding a sampled block cluster. On balance, about one-half of duplicate enumerations involving different geographic areas should be classified as an “other residence” type of erroneous enumeration at one of the two addresses because the person should have been counted only once, but this balancing may not be achieved in practice.

Several aspects of the original A.C.E. design were modified from the PES design in order to improve the timeliness and reduce the variance and bias of the results (see Section 5-D.1). Some of these changes were clearly improvements. In particular, the larger sample size (300,000 households in the A.C.E. compared with 165,000 households in the PES) and the reduction in variation of sampling rates considerably reduced the variance of the original A.C.E. estimates compared with the PES estimates (see Starsinic et al., 2001). The coefficient of variation for the originally estimated coverage correction factor of 1.012 for the total population in 2000 was 0.14 percent, a reduction of 30 percent from the comparable coefficient of variation in 1990.2 (The 0.14 percent coefficient of variation translates into a standard error of about 0.4 million people in the original DSE total household population estimate of 277.5 million.) The coefficients of variation for the originally estimated coverage correction factors for Hispanics and non-Hispanic blacks were 0.38 and 0.40 percent, respectively, in 2000, reduced from 0.82 and 0.55 percent, respectively, in 1990 (Davis, 2001:Tables E-1, F-1). However, some poststrata had coefficients of variation as high as 6 percent in 2000, which translates into a large confidence interval around the estimate of the net undercount for these poststrata and for any geographic areas in which they are a large proportion of the population.
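The standard error cited parenthetically above follows directly from the definition of the coefficient of variation (CV = standard error divided by the estimate), as a quick check confirms:

```python
# Back-of-envelope check: SE = CV * estimate, using the figures in the text.
dse_total = 277.5e6   # original DSE total household population estimate
cv_total = 0.0014     # 0.14 percent coefficient of variation
se_total = cv_total * dse_total
print(round(se_total / 1e6, 2))  # ≈ 0.39, i.e., "about 0.4 million people"
```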

Three other improvements in the A.C.E. design deserve mention. First, an initial housing unit match of the independent P-sample address listing with the Master Address File (MAF) facilitated subsequent subsampling, interviewing, and matching operations. Second, the use of computer-assisted interviewing—by telephone in the first wave—facilitated timeliness of the P-sample data, which had the positive effect of reducing the percentage of movers compared with the 1990 PES (see Section 6-A.3). Third, improved matching technology and centralization of matching operations probably contributed to a higher quality of matching than achieved in 1990 (see Section 6-A.8).

Another innovation—the decision to target the search for matches and correct enumerations in surrounding blocks more narrowly in the A.C.E. than in the PES—was originally suspected of having contributed balancing errors to the original (March 2001) DSE estimates, but subsequent evaluation allayed that concern (see Section 6-A.9). The treatment of movers was more complex in the A.C.E. than in the 1990 PES, but, primarily because there were proportionately fewer movers in the A.C.E. compared with the PES, on balance, movers had no more effect on the dual-systems estimates for 2000 than on those for 1990 (see Section 6-A.3).

In retrospect, the decision to exclude the group quarters population from the A.C.E. universe (see Section 5-D.1) was unfortunate, as it precluded the development of coverage estimates for group quarters residents, who appear to have been poorly enumerated (see Section 4-F). Finally, a flaw in the planning for not only the A.C.E., but also the census itself, was the failure to anticipate the extent to which certain groups of people (e.g., college students, prisoners, people with two homes) would be duplicated in more than one census record (see Section 6-A.7).

2 The coefficient of variation (CV) is the standard error of an estimate as a percentage of the estimate. The coverage correction factor, which would be used in implementing an adjustment, is the dual-systems estimate divided by the census count (including whole-person imputations and late additions); see Section 5-A.

6–A.2 Conduct and Timing

Overall, the original A.C.E. was well executed in terms of timely and well-controlled address listing, P-sample interviewing, matching, follow-up, and original estimation. Although the sample size was twice as large as that fielded in 1990, the A.C.E. was carried out on schedule and with only minor problems that necessitated rearrangement or modification of operations after they had been specified. Mostly, such modifications involved accommodation to changes in the MAF that occurred in the course of the census. For example, the targeted extended search (TES) procedures had to be modified to handle deletions from and additions to the MAF that were made after the determination of the TES housing unit inventory (Navarro and Olson, 2001:11).

Some procedures proved more useful than had been expected. In particular, the use of the telephone (see Appendix E.2) enabled P-sample interviewing to begin April 24, 2000, whereas P-sample interviewing for the PES did not begin until June 25, 1990. All A.C.E. processes, from sampling through estimation, were carried out according to well-documented specifications, with quality control procedures (e.g., reviews of the work of clerical matchers and field staff) implemented at appropriate junctures.

6–A.3 Defining the P-Sample: Treatment of Movers

The A.C.E. P-sample, partly because of design decisions made for the previously planned Integrated Coverage Measurement Program (see Section 5-D.1), included three groups of people and not two as in the 1990 PES. The three groups were: nonmovers who lived in a P-sample housing unit on Census Day and on the A.C.E. interview day; outmovers who lived in a P-sample housing unit on Census Day but had left by the A.C.E. interview day; and inmovers who moved into a P-sample housing unit between Census Day and the A.C.E. interview day. In the dual-systems estimation for each poststratum (population group), the number of matched movers was calculated by applying the estimated match rate for outmovers to the weighted number of inmovers.3 This procedure, called PES-C, assumed that inmovers would be more completely reported than outmovers. The Bureau also anticipated that it would be easier to ascertain the Census Day residence status of outmovers than to search nationwide for the Census Day residence of inmovers, as was done in the 1990 PES using the PES-B procedure.
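The PES-C calculation can be sketched as follows; the weighted counts used here are hypothetical, chosen only to show the mechanics:

```python
# Sketch of the PES-C treatment of movers: the match rate estimated from
# outmovers is applied to the weighted count of inmovers to estimate the
# number of matched movers for a poststratum. (Per footnote 3, poststrata
# with fewer than 10 outmovers instead used the weighted outmover count.)

def matched_movers_pes_c(outmover_matches, outmover_total, inmover_total):
    """Matched movers = (outmover match rate) x (weighted inmovers)."""
    outmover_match_rate = outmover_matches / outmover_total
    return outmover_match_rate * inmover_total

# e.g., 600 of 800 weighted outmovers matched, with 1,200 weighted inmovers:
print(matched_movers_pes_c(600, 800, 1_200))  # 900.0
```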

An analysis of movers in the A.C.E. P-sample conducted by the Census Bureau in summer 2001 supported the assumption of more complete reporting of inmovers: the total weighted number of outmovers was only two-thirds (0.66) the total weighted number of inmovers (the outmover/inmover ratio varied for broad population groups from less than 0.51 to more than 0.76—Liu et al., 2001:App.A). A subsequent evaluation found little effect on the dual-systems population estimates of using inmovers to estimate the number of movers (Keathley, 2002).

Noninterview and missing data rates were substantially higher for outmovers compared with inmovers and nonmovers (see Sections 6-A.5 and 6-A.6), so one might have expected to see an increase in variance of the dual-systems estimates from using outmovers to estimate match rates from the P-sample. Yet Liu et al. (2001) found that movers had no more and probably less of an effect on the dual-systems estimates in 2000 than in 1990. A primary reason for this result is that the percentage of movers among the total population was lower in the A.C.E. than in the PES—using the number of inmovers in the numerator, the A.C.E. mover rate was 5.1 percent, compared with a mover rate of 7.8 percent in the PES. In turn, the lower A.C.E. mover rate resulted from the 2-month head start that was achieved by telephone interviewing in the A.C.E. (see Section 6-A.2). The mover rate for A.C.E. cases interviewed after June 25, 2000, was comparable to the PES mover rate (8.2 and 7.8 percent, respectively); the mover rate for A.C.E. cases interviewed before June 25, 2000, was only 2.1 percent (Liu et al., 2001:5).

3 For 63 poststrata with fewer than 10 outmovers, the weighted number of outmovers was used instead.


6–A.4 Defining the E-Sample: Exclusion of “Insufficient Information” Cases

Dual-systems estimation in the census context requires that census enumerations be excluded from the E-sample when they have insufficient information for matching and follow-up (so-called IIs—see Section 5-A). The 2000 census had almost four times as many IIs as the 1990 census: 8.2 million, or 2.9 percent of the household population, compared with 2.2 million, or 0.9 percent. In 2000, 5.8 million people fell into the II category because they were whole-person imputations (types 1–5, as described in Section 4-D); another 2.4 million people were IIs because their records were not available in time for the matching process. These people were not in fact enumerated late; rather, they represented records that were temporarily deleted and subsequently reinstated on the census file as part of the special MAF unduplication process in summer–fall 2000 (see Section 4-E). In 1990 only 1.9 million whole-person imputations and 0.3 million late additions from coverage improvement programs fell into the II category.
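The II figures above are internally consistent, as two quick arithmetic checks show:

```python
# Consistency checks on the "insufficient information" (II) counts.
iis_2000, ii_share_2000 = 8.2e6, 0.029   # 2000: count and share of household pop.
iis_1990 = 2.2e6                          # 1990 count

print(round(iis_2000 / iis_1990, 1))      # 3.7 -- "almost four times as many"
print(round(iis_2000 / ii_share_2000 / 1e6))  # implied household population, ~283 million
print(5.8e6 + 2.4e6 == iis_2000)          # imputations + reinstated records = total IIs
```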

Because the phenomenon of reinstated cases in the 2000 census was new and the number of such cases was large, the Bureau investigated the possible effects of their exclusion from the E-sample on the dual-systems estimate. Hogan (2001b) demonstrated conceptually that excluding the reinstated people would have little effect so long as they were a small percentage of census correct enumerations or their A.C.E. coverage rate (ratio of matches to all correct enumerations) was similar to the E-sample coverage rate. To provide empirical evidence, a clerical matching study was conducted in summer 2001 of reinstated people whose census records fell into an evaluation sample of one-fifth of the A.C.E. block clusters (Raglin, 2001). This study found that 53 percent of the reinstated records in the evaluation sample duplicated another census record (and, hence, had no effect on the DSE), 25 percent matched to the P-sample, and 22 percent were unresolved (such a large percentage resulted from the infeasibility of follow-up to obtain additional information). Using a range of correct enumeration rates for the unresolved cases, the analysis demonstrated that the exclusion of reinstated records from the E-sample had a very small effect on the DSE for the total population (less than one-tenth of 1 percent). Moreover, because total and matched reinstated cases were distributed in roughly the same proportions among age, sex, race/ethnicity, and housing tenure groups, their exclusion from the E-sample had similar (negligible) effects on the DSE estimates for major poststrata.

Nonetheless, the large number of IIs in 2000 cannot be ignored in understanding patterns of net undercount. Although reinstated cases accounted for roughly the same proportion of each major poststratum group (about 1 percent) in 2000, whole-person imputations accounted for higher proportions of historically undercounted groups, such as minorities, renters, and children, than of historically better counted groups. We consider the role of whole-person imputations in helping to account for the measured reduction in net undercount rate differences among major population groups in 2000 from 1990 in Section 6-C.1.

6–A.5 Household Noninterviews in the P-Sample

The P-sample survey is used to estimate the match rate component of the dual-systems estimation formula. A small bias in the match rate can have a disproportionately large effect on the estimated net undercount (or overcount) because coverage error is typically so small relative to the total population (1–2 percent or less). To minimize variance and bias in the estimated match rate, it is essential that the A.C.E. successfully interview almost all P-sample households and use appropriate weighting adjustments to account for noninterviewed households.

Interview/Noninterview Rates

Overall, the A.C.E. obtained interviews from 98.9 percent of households that were occupied on the day of interview. This figure compares favorably with the 98.4 percent interview rate for the 1990 PES.4 However, the percentage of occupied households as of Census Day that were successfully interviewed in A.C.E. was somewhat lower—97 percent, meaning that a weighting adjustment had to account for the remaining 3 percent of noninterviewed households.

4 These percentages are unweighted; they are about the same as weighted percentages for 2000. Weighted percentages are not available for 1990 (see Cantwell et al., 2001).


The lower interview rate for Census Day households was due largely to the difficulty of finding a respondent for housing units in the P-sample that were entirely occupied by people who moved out between the time of the census and the A.C.E. interview (outmovers). Such units were often vacant, and it was not always possible to interview a neighbor or landlord who was knowledgeable about the Census Day residents. The interview rate for outmover households was 81.4 percent. Such households comprised 4 percent of Census Day occupied households in the P-sample.

Noninterview Weighting Adjustments

Two weighting adjustments—one for the A.C.E. interview day and one for Census Day—were calculated so that interviewed households would represent all households that should have been interviewed. Each of the two weighting adjustments was calculated separately for households by type (single-family unit, apartment, other) within block cluster.

For Census Day, what could have been a relatively large noninterview adjustment for outmover households in a block cluster was spread over all interviewed Census Day households in the cluster for each of the three housing types. Consequently, adjustments to the weights for interviewed households were quite low, which had the benefit of minimizing the increase in the variance of A.C.E. estimates due to differences among weights: 52 percent of the weights were not adjusted at all because all occupied households in the adjustment cell were interviewed; for another 45 percent of households, the weighting adjustment was between 1.0 and 1.2 (Cantwell et al., 2001:Table 2).
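The cell-level adjustment described above is simple to sketch; the cell sizes below are hypothetical:

```python
# Sketch of a P-sample noninterview weighting adjustment: within each
# cell (housing type within block cluster), the weights of interviewed
# households are scaled up to represent all occupied households that
# should have been interviewed.

def noninterview_adjustment(occupied, interviewed):
    """Weight multiplier for interviewed households in one cell."""
    return occupied / interviewed

# All occupied households interviewed -> no adjustment (as for 52 percent
# of the A.C.E. weights):
assert noninterview_adjustment(20, 20) == 1.0
# One missed interview in a 20-household cell -> a modest adjustment, in
# the 1.0-1.2 range that covered another 45 percent of households:
print(round(noninterview_adjustment(20, 19), 3))  # 1.053
```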

Evaluation

Although the P-sample household noninterview adjustments were small, a sensitivity analysis determined that alternative weighting adjustments could have a considerable effect on the estimated value of the DSE for the national household population. Three alternative noninterview adjustments were tested: assigning weights on the basis of characteristics other than those used in the original A.C.E. estimation, assigning weights only to late-arriving P-sample interviews, and replicating the weights for missing interviews from nearby households. All three alternatives produced weighted total household population estimates that were higher than the original (March 2001) DSE estimate (Keathley et al., 2001:Table 4). Two of the alternative estimates exceeded the original estimate by 0.5–0.6 million people, which translates into an added 0.2 percentage points of net undercount on a total household population of 277.2 million. The differences between these two estimates and the original estimate also exceeded the standard error of the original estimate, which was 0.4 million people.
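The translation from a population difference to percentage points of net undercount checks out:

```python
# Check: a 0.5-0.6 million difference on a 277.2 million household
# population base is about 0.2 percentage points of net undercount.
difference = 0.55e6   # midpoint of the 0.5-0.6 million range
base = 277.2e6
print(round(difference / base * 100, 1))  # 0.2
```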

6–A.6 Missing and Unresolved Data in the P-Sample and E-Sample

Missing and unresolved person data can bias the estimated P-sample match rate, the estimated E-sample correct enumeration rate, or both rates. Imputation procedures used to fill in missing values can also add bias and variance, so achieving high-quality P-sample and E-sample data is critical for dual-systems estimation.

Missing Characteristics Needed for Poststratification

Overall rates of missing characteristics data in the P-sample and E-sample were low, ranging between 0.2 and 3.6 percent for age, sex, race, Hispanic origin, and housing tenure. Missing data rates for most characteristics were somewhat higher for the E-sample than for the P-sample. Missing data rates for the 2000 A.C.E. showed no systematic difference (up or down) from the 1990 PES; see Table 6.1.

As would be expected, missing data rates in the P-sample were higher for proxy interviews, in which someone outside the household supplied information, than for interviews with household members; see Table 6.2. By mover status, missing data rates were much higher for outmovers than for nonmovers and inmovers, which is not surprising given that 73.3 percent of interviews for outmovers were obtained from proxies, compared with only 2.9 and 4.8 percent of proxy interviews for nonmovers and inmovers, respectively. Even “nonproxy” interviews for outmovers could have been from household members who did not know the outmover.


Table 6.1 Missing Data Rates for Characteristics, 2000 A.C.E. and 1990 PES P-Sample and E-Sample (weighted)

                         Percentage of People with Imputed Characteristics

                         2000 A.C.E.               1990 PES
Characteristic           P-Sample    E-Sample      P-Sample    E-Sample
Age                      2.4         2.9           0.7         2.4
Sex                      1.7         0.2           0.5         1.0
Race                     1.4         3.2           2.5         11.8
Hispanic Origin          2.3         3.4           —           —
Housing Tenure           1.9         3.6           2.3         2.5
Any of Above             5.4         10.4          —           —

NOTES: Accuracy and Coverage Evaluation (A.C.E.) E-sample imputations were obtained from the imputations performed on the census records; Post-Enumeration Survey (PES) E-sample imputations were performed specifically for the E-sample. A.C.E. E-sample “edits” (e.g., assigning age on the basis of the person’s date of birth, or assigning sex from first name) are not counted as imputations here. The base for the A.C.E. P-sample imputation rates includes nonmovers, inmovers, and outmovers, including people who were subsequently removed from the sample as nonresidents on Census Day. Excluded from the base for the A.C.E. P-sample and E-sample imputation rates are people eligible for the targeted extended search who were not selected for the targeted extended search sample and who were treated as noninterviews in the final weighting. —, not available.

SOURCE: Cantwell et al. (2001:Tables 3b, 3c).

Table 6.2 Percentage of 2000 A.C.E. P-Sample People with Imputed Characteristics, by Proxy Interview and Mover Status (weighted)

                            Percentage of People with Imputed Characteristics

                            Household    Proxy
Characteristic              Interview    Interview    Nonmover    Inmover    Outmover
Age                         2.1          7.9          2.3         2.3        6.0
Sex                         1.5          4.2          1.7         0.4        3.4
Race                        1.0          8.7          1.2         1.3        8.0
Hispanic Origin             1.8          11.0         2.1         0.8        9.0
Housing Tenure              1.7          5.2          1.9         0.4        2.4
Any of Above                4.4          21.9         5.0         3.7        17.4
Percent of Total P-Sample   94.3         5.7          91.7        4.8        3.4

NOTES: See notes to Table 6.1.

SOURCE: Cantwell et al. (2001:Table 3b).


P-sample imputations were performed separately for individual missing characteristics after all matching and follow-up had been completed. For example, tenure on the P-sample was imputed by using tenure from the previous household of the same type (e.g., single-family home) with tenure reported, while race and ethnicity were imputed when possible from the distribution of race and ethnicity of other household members or from the distribution of race and ethnicity of the previous household with these characteristics reported (see Cantwell et al., 2001). Imputations for missing characteristics in the E-sample records were obtained from those on the census data file (see Section 7-B). Because the overall rates of missing data were low, the imputation procedures had little effect on the distribution of individual characteristics (Cantwell et al., 2001:24–26). However, given the somewhat different procedures for the P-sample and the E-sample, imputation could misclassify people by poststrata and contribute to inconsistent poststrata classification for matching P-sample and E-sample cases (see Section 6-A.10).
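The sequential hot-deck idea described above can be sketched in a few lines. This is a simplified illustration of the general technique, not the Bureau's production code; the record layout and field names are hypothetical:

```python
# Minimal sketch of sequential hot-deck imputation for a missing item
# (here, tenure): the missing value is filled from the most recently
# processed household of the same type that reported the item.

def impute_tenure(households):
    """Fill missing 'tenure' from the last reported donor of the same housing type."""
    last_donor = {}  # housing type -> last reported tenure value
    for hh in households:
        if hh["tenure"] is None:
            hh["tenure"] = last_donor.get(hh["type"])  # donate prior value
        else:
            last_donor[hh["type"]] = hh["tenure"]      # update the donor pool
    return households

data = [
    {"type": "single-family", "tenure": "owner"},
    {"type": "apartment", "tenure": "renter"},
    {"type": "single-family", "tenure": None},  # imputed from the first record
]
print(impute_tenure(data)[2]["tenure"])  # owner
```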

P-sample Unresolved Residence Status

The use of the PES-C procedure to define the P-sample for estimating match rates (see Sections 5-D.1 and 6-A.3) made it necessary to impute a residence status probability to P-sample nonmover and outmover cases whose status as Census Day residents in an A.C.E. block cluster was not resolved after matching and follow-up. (Mover status was assigned before follow-up, which explains why “nonmovers” and “outmovers” could be reclassified as Census Day nonresidents or as having unresolved residence status.) On a weighted basis, unresolved residence cases accounted for 2.2 percent of all the cases considered for inclusion in the Census Day P-sample. Outmovers accounted for 29 percent of P-sample cases with unresolved residence status, although they were less than 4 percent of the total P-sample (Cantwell et al., 2001:Table 5b).

The imputation procedure assigned an average residence probability to each unresolved case taken from one of 32 cells defined by owner/renter, non-Hispanic white/other, and before-follow-up match status (eight categories). After imputation the percentage of Census Day residents among the original Census Day P-sample dropped slightly from 98.2 percent of resolved cases to 97.9 percent of all cases, because the imputation procedure assigned lower residence probabilities to unresolved cases (77.4 percent overall).5
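The cell-based assignment amounts to a simple lookup of the average probability among resolved cases in the unresolved case's cell. The cell keys and probabilities below are hypothetical, chosen only to show the mechanics of the 32-cell scheme:

```python
# Sketch of cell-based imputation for unresolved Census Day residence
# status: each unresolved case receives the average residence probability
# of resolved cases in its cell (tenure x race/ethnicity group x
# before-follow-up match status in the original A.C.E.).

cell_mean_residence = {
    ("owner", "non-Hispanic white", "matched"): 0.95,
    ("renter", "other", "not matched"): 0.62,
    # ...in the A.C.E. there were 32 such cells (2 x 2 x 8)...
}

def impute_residence_prob(case):
    """Return the cell-average residence probability for an unresolved case."""
    key = (case["tenure"], case["race_group"], case["match_status"])
    return cell_mean_residence[key]

print(impute_residence_prob(
    {"tenure": "renter", "race_group": "other", "match_status": "not matched"}
))  # 0.62
```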

P-sample Unresolved Match Status

The weighted percentage of P-sample cases with unresolved match status was only 1.2 percent. The denominator for the percentage is P-sample nonmovers and outmovers who were confirmed Census Day residents or had unresolved residence status; confirmed non-Census Day residents were dropped from the P-sample at this point. This percentage compares favorably with the 1.8 percent of cases with unresolved match status in the 1990 PES. Very little was known about the A.C.E. P-sample people with unresolved match status; 98 percent of them lacked enough reported data for matching (i.e., they lacked a valid name, at least two characteristics, or both).

The imputation procedure assigned an average match probability to each unresolved case taken from one of 16 cells defined by resolved/unresolved residence status, nonmover/outmover status, housing unit a match/not a match, and person had one or more characteristics imputed/no characteristics imputed (Cantwell et al., 2001:Table 9). After imputation, the percentage of matches dropped slightly, from 91.7 percent of resolved cases (matches and nonmatches) to 91.6 percent of all cases, because the imputation procedure assigned lower match status probabilities to unresolved cases (84.3 percent overall).
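The slight drop in the match rate follows from a simple mixture of resolved and imputed cases. The back-of-envelope check below ignores sampling weights, so it only approximates the published figures:

```python
# Unweighted mixture check: unresolved cases (1.2 percent of the base)
# received an average imputed match probability of 84.3 percent, pulling
# the overall rate down slightly from the resolved-case rate of 91.7.
resolved_rate = 0.917
unresolved_share = 0.012
imputed_prob = 0.843

overall = (1 - unresolved_share) * resolved_rate + unresolved_share * imputed_prob
print(round(overall * 100, 1))  # 91.6 -- the slight drop described in the text
```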

E-sample Unresolved Enumeration Status

The weighted percentage of E-sample cases with unresolved enumeration status was 2.6 percent, slightly higher than the comparable 2.3 percent for the 1990 PES. Most of the unresolved cases (89.4 percent) were nonmatches for which field follow-up did not resolve whether their enumeration was correct or erroneous.

The imputation procedure assigned an average correct enumeration probability to each unresolved case taken from one of 64 cells defined by 13 categories of before-follow-up match status (3 of which were tabulated separately for non-Hispanic whites and others), housing unit a match/not a match, and person had one or more characteristics imputed/no characteristics imputed (Cantwell et al., 2001:Table 10). After imputation, the percentage of correct enumerations dropped slightly, from 95.5 percent of resolved cases (correct and erroneous enumerations) to 95.3 percent of all cases, because the imputation procedure assigned lower correct enumeration probabilities to unresolved cases (76.2 percent overall).

5  

This figure is a correction from the original number in Cantwell et al. (2001:Table 8).

Evaluation

A sensitivity analysis determined that alternative procedures for imputing P-sample residence and match status probabilities and E-sample correct enumeration status probabilities could have a considerable effect on the estimated value of the DSE for the national household population, particularly when combined with alternative procedures for making P-sample household noninterview adjustments (see Section 6-A.5).6 One of the alternative imputation procedures substituted multivariate logistic regressions for the average cell values used in the original A.C.E. Another procedure, which assumed that unresolved cases differed significantly from resolved cases (what is termed nonignorable missingness), used 1990 PES data to develop alternative (lower) probabilities of residence, match, and correct enumeration status. These probabilities are illustrative; there is no evidence for their reasonableness compared with the probabilities used in the original A.C.E.

The results of the sensitivity analysis demonstrate the difference that alternative procedures could make: of the 128 combinations of noninterview adjustment and imputation procedures, about one-third differed from the average DSE population estimate by more than plus or minus 0.7 million household members; the remaining two-thirds differed by less than this amount (Keathley et al., 2001:2, as revised in Kearney, 2002:5).

6  

A sensitivity analysis was not conducted for imputations for missing demographic characteristics.

6–A.7 Accuracy of Household Residence Information

For dual-systems estimation to produce highly accurate population estimates, it is critical not only for there to be very low household and item nonresponse rates in the P-sample and E-sample, but also for household composition to be accurately reported. Two factors are critical to accurate reporting: first, the ability of the A.C.E. questionnaires and interviewing procedures for P-sample interviewing and follow-up of nonmatched P-sample and E-sample cases to elicit the Census Day residence status of household members; and, second, the willingness of respondents to answer the questions as they were intended.

As an example, a household in the census that was part of the E-sample may have claimed a college student or an institutionalized family member as a household member even though the person was enumerated in his or her group quarters according to census residence rules. The result would be a duplicate census enumeration. In the case when the household was missed by the P-sample, the matching and follow-up process should have identified the nonduplicated E-sample household residents as correct (nonmatched) enumerations and the duplicated college student as having been erroneously enumerated at the household address in the census. If, however, the household persisted in claiming the student as a household member, then the A.C.E. would incorrectly classify him or her as a correct (nonmatched) enumeration, thereby overstating the correct enumeration rate and the DSE estimate of the population. This example and one other involving undetected duplicates of census enumerations with nonmatched P-sample cases are described, along with their effects, in Box 6.1.

The A.C.E. questionnaires were improved over the PES questionnaires, and computer-assisted interviewing ensured that interviewers asked all of the questions as they were written. However, Census Bureau staff worried that the A.C.E. interviewing might not have ascertained Census Day household membership accurately in many cases because the original A.C.E. estimated only 4.6 million duplicate and “other residence” erroneous enumerations whereas the PES estimated 10.7 million of these types of erroneous enumerations (see Anderson and Fienberg, 2001:Table 2). Indeed, the Evaluation Follow-Up and Person Duplication Studies conducted by the Census Bureau in summer 2001 provided evidence that the A.C.E. failed to detect numerous instances in which census respondents listed one or more residents who should not have been counted as part of the household on Census Day. Consequently, the original (March 2001) A.C.E. underestimated duplicate enumerations in the census and correspondingly overestimated correct enumerations. It also overestimated P-sample Census Day residents, particularly nonmatches. The net effect (assuming an accurate estimate of omissions) was to overstate the correct enumeration rate, understate the match rate, and overstate the DSE estimate of the population by about 6.3 million people. Correcting this overstatement (before an adjustment for undercounting of men relative to women) would have produced an estimated net overcount of the population of 3 million people, or 1.1 percent of the household population (see U.S. Census Bureau, 2003c:Table 12; see also Sections 6-B and 6-C.2).

Box 6.1
Alternative Treatment of Duplicate Census Enumerations, Two Examples

  1. Census Household-to-Group Quarters Duplication; Household in E-Sample, Not P-Sample: College student is enumerated in group quarters (college dormitory) and by parents at home.

Proper treatment in the A.C.E. when parents’ household is not in the P-sample:

E-sample follow-up of the nonmatched household should classify the parents as correct enumerations and the student as an “erroneous enumeration, other residence” (i.e., he or she should have been enumerated at the college location only). In this instance, the A.C.E. would not label the college student as a duplicate because it would not know of the group quarters enumeration; the label would be “other residence,” meaning that the person should have been enumerated at the group quarters. Regardless, the A.C.E. would correctly classify the enumeration as erroneous.

Erroneous treatment:

Household persists in claiming the student in E-sample follow-up, so all three household members are classified as correct (nonmatched) enumerations.

Effect of erroneous treatment on DSE:

The extra “correct” enumeration raises the correct enumeration rate, which (incorrectly) raises the DSE estimate of the population and net undercount.

  2. P-Sample Resident Nonmover Household-to-Census Household Duplication Outside A.C.E. Search Area: P-sample household duplicates a census enumeration outside its block cluster and ring of surrounding blocks.

Proper treatment in the A.C.E.:

The P-sample interview should have reclassified the household as comprising inmovers (and hence not eligible for estimating the match rate) or dropped it from the sample as having been wrongly assigned to an A.C.E. block cluster.

Erroneous treatment:

The household is retained in the P-sample as a nonmover resident; it contributes to the denominator of the match rate and, if it matches a census enumeration inside the A.C.E. search area, to the numerator as well.

Effect of erroneous treatment on the DSE:

Depends on the match status of the erroneously retained P-sample resident nonmover cases. If they are predominantly matches, their inclusion (incorrectly) raises the match rate and lowers the DSE estimate of the population and net undercount.
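The direction of these effects follows from the structure of the dual-systems estimator. In simplified form (ignoring poststratification and survey weights), the estimator multiplies the data-defined census count by the correct enumeration rate and divides by the match rate, so overstating the former or understating the latter inflates the estimate. A sketch with illustrative numbers, not the actual A.C.E. figures:

```python
# Simplified dual-systems estimator for a single stratum:
#   DSE = (census count - whole-person imputations) * CE_rate / match_rate
# All numbers below are illustrative only.

def dse(census_count, imputations, ce_rate, match_rate):
    data_defined = census_count - imputations
    return data_defined * ce_rate / match_rate

original = dse(277_000_000, 8_000_000, 0.955, 0.916)
# Misclassifying duplicates as correct enumerations overstates the CE
# rate, and retaining duplicated P-sample nonmatches understates the
# match rate; both errors push the DSE upward:
biased = dse(277_000_000, 8_000_000, 0.965, 0.906)
print(biased > original)  # True: the estimate (and implied undercount) rises
```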

6–A.8 Quality of Matching

The A.C.E. (and PES) involved a two-stage matching process. The first stage of matching occurred after P-sample interviewing; it began with a computer match followed by a clerical review of possible matches and nonmatches in order to establish an initial match status (P-sample) or enumeration status (E-sample) for as many cases as possible. The second stage occurred after follow-up of specified nonmatched and unresolved cases to try to resolve their status using additional information from the follow-up interviews. The accuracy of the matching can be no better than the accuracy of the underlying data about household composition and residence (as discussed in Section 6-A.7). Assuming accurate information, the question is the quality of the matching itself.

Examination of data from the original (production) A.C.E. matching provides indicators that the quality was high. Specifically, initial match status codes were rarely overturned at a subsequent stage of matching: clerks confirmed a high percentage (93 percent) of computer-designated possible matches as matches; technicians and analysts who reviewed clerical matches rarely overturned the clerks’ decisions; and field follow-up most often confirmed the before-follow-up match code or left the case unresolved.7 Because of the dependent nature of the production matching, however, such indicators do not answer the question of whether the final match status codes were correct.

7  

From tabulations by panel staff of the P-Sample and E-Sample Person Dual-System Estimation Output Files, provided to the panel February 16, 2001 (see U.S. Census Bureau, 2001b). Tabulations weighted using TESFINWT.

In summer 2001, Census Bureau analysts completed a Matching Error Study to evaluate the quality of the A.C.E. matching criteria and procedures (Bean, 2001). The Matching Error Study involved an independent rematch in December 2000 by highly trained matching staff (technicians and analysts) of all of the P-sample and E-sample cases in one-fifth of the A.C.E. block clusters (2,259 clusters). The Matching Error Study used the original A.C.E. data on household composition and residence and not any data from evaluation studies, so that it could measure the extent of matching error only, not confounded with measurement error. The study assumed that agreement of the original (production) and rematch codes, or agreement of an analyst in conflicting cases with either the production or the rematch code would produce match codes as close to truth as was possible. The study also assumed that the production matching and the evaluation rematching were independent—Matching Error Study rematchers did not review clusters that they worked on during the original A.C.E. and did not have access to the original match codes. Bean (2001:3) notes some minor ways in which independence could have been compromised.

A comparison of the results of the Matching Error Studies for the A.C.E. (Bean, 2001) and the PES (Davis and Biemer, 1991a,b) provides evidence of improved matching quality in the A.C.E. over the PES. For the four final P-sample match codes (match, nonmatch, remove from the P-sample, and unresolved), the A.C.E. matching error study estimated only a 0.4–0.5 percent gross difference rate compared with a 1.5 percent gross difference rate for the PES.8 The net difference rate was also reduced in the A.C.E. from the PES (0.4 and 0.9 percent, respectively).9 Gross and net difference rates for classification of E-sample cases (correct enumeration, erroneous enumeration, unresolved) were also substantially reduced in the A.C.E. from the PES (0.5–0.6 percent and 2.3 percent gross difference rates, respectively; 0.2 and 1.1 percent net difference rates, respectively). A measure of proportionate error for matches and correct enumerations for 16 aggregated poststratum groups showed smaller errors and less variation in the degree of error among groups in the A.C.E. than in the PES (Bean, 2001:Tables 5b, 5c).

8  

The gross difference rate is the proportion of cases whose match codes differ in the production and the rematch.

9  

The net difference rate is the sum of the absolute differences between the production and rematch totals for all four match codes divided by the population total.
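Both rates can be computed directly from the two sets of codes. A minimal sketch with hypothetical codes; note how offsetting classification errors can make the net rate much smaller than the gross rate:

```python
# Gross and net difference rates for production vs. rematch match codes,
# following the definitions in the footnotes.  Codes are invented.
from collections import Counter

def gross_difference_rate(production, rematch):
    # Proportion of cases whose codes differ between the two passes.
    return sum(p != r for p, r in zip(production, rematch)) / len(production)

def net_difference_rate(production, rematch):
    # Sum of absolute differences between category totals, over the total.
    p, r = Counter(production), Counter(rematch)
    codes = set(p) | set(r)
    return sum(abs(p[c] - r[c]) for c in codes) / len(production)

production = ["match", "nonmatch", "match", "unresolved"]
rematch    = ["nonmatch", "match", "match", "unresolved"]
print(gross_difference_rate(production, rematch))  # 0.5
print(net_difference_rate(production, rematch))    # 0.0
```

Here two cases swap codes between passes, so half the cases disagree (gross rate 0.5) while the category totals are unchanged (net rate 0.0), which is why the two measures are reported separately.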

Despite the improved quality of matching in the A.C.E. compared with the PES, matching error still affected the original DSE population estimates for 2000. It significantly decreased the original P-sample match rates for the nation as a whole and for 2 of 16 aggregated poststratum groups (minority and nonminority renters in large or medium metropolitan mailback areas with high return rates). It did not, however (assuming correct information on residence), significantly affect the original E-sample correct enumeration rates for the nation or any of the 16 groups. The effect of matching error on the ratio of the match rate to the correct enumeration rate resulted in an overstatement of the 2000 DSE total population estimate by about 0.5 million people (Bean, 2001:20), which amounts to an overstatement of the net undercount of about 0.2 percentage points.

6–A.9 Targeted Extended Search

The TES operation in the A.C.E. was designed to reduce the variance and bias in match and correct enumeration rates that could result from geocoding errors (i.e., assignment of addresses to the wrong block) in the census or in the P-sample address listing. In a sample of block clusters for which there was reason to expect geocoding errors (2,177 of 6,414 such clusters), the clerical search for matches of P-sample and census enumerations and for correct E-sample enumerations was extended to one ring of blocks surrounding the A.C.E. block cluster. Sampling was designed to make the search more efficient than in 1990, as was targeting the search in some instances to particular blocks (see Appendix E.3.b).

For the P-sample, only people in households that did not match an E-sample address (4.7 percent of total P-sample cases that went through matching) were searched in the sampled block clusters. On the E-sample side, only people in households identified as geocoding errors (3 percent of total E-sample cases) were searched in the sampled block clusters. Weights were assigned to the TES persons in the sampled block clusters to adjust for the sampling. Correspondingly, persons who would have been eligible for TES but were not in a sampled block cluster were assigned a zero weight.

The TES had the desired effect of reducing the variance of the DSE estimates for poststrata. The reduction in the average and median coefficient of variation (the standard error of an estimate as a percentage of the estimate) for poststrata was 22 percent, similar to an average reduction of 20 percent for the nationwide extended search operation in 1990 (Navarro and Olson, 2001:7).
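The coefficient of variation, as defined here, is the standard error divided by the estimate, expressed as a percentage. The sketch below uses invented numbers only, to show what a 22 percent reduction in the CV means in practice:

```python
# Coefficient of variation: standard error as a percentage of the
# estimate.  Numbers are hypothetical, not actual A.C.E. poststrata.

def cv_percent(estimate, standard_error):
    return 100 * standard_error / estimate

before_tes = cv_percent(1_000_000, 20_000)   # 2.0 percent
after_tes  = cv_percent(1_000_000, 15_600)   # 1.56 percent
# Relative reduction in the CV, comparable to that reported for TES:
print(round(100 * (before_tes - after_tes) / before_tes))  # 22
```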

The TES operation in the A.C.E. was methodologically more complex than the corresponding operation in the 1990 PES. At the time of the decision in March 2001 not to use the original DSE estimates of the population to adjust census data for redistricting, the Census Bureau cited concerns that the TES may have been unbalanced, thereby introducing bias into the DSE. Suggestive of an imbalance, which could occur if the P-sample and E-sample search areas were not defined consistently for the TES, was the larger increase in the P-sample match rate (3.8 percentage points) compared with the E-sample correct enumeration rate (2.9 percentage points) (Navarro and Olson, 2001:Table 1). Such an imbalance may also have occurred in 1990, when the extended search increased the P-sample match rate by 4.1 percentage points and the E-sample correct enumeration rate by 2.3 percentage points. A follow-up study to the 1990 census was not able to determine whether balancing error had occurred (Bateman, 1991).

A subsequent evaluation, which used data from two TES follow-up studies that rechecked certain kinds of housing units (Adams and Liu, 2001:i), determined that the larger increase in the TES in the P-sample match rate compared with the E-sample correct enumeration rate was due to P-sample geocoding errors and E-sample classification errors that did not affect the DSE. P-sample geocoding errors were the primary explanation; they occurred when P-sample address listers mistakenly assigned addresses from surrounding blocks to A.C.E. block clusters. When the original A.C.E. clerk did not find matches for these cases in the A.C.E. block cluster because there were no corresponding census addresses, then a search for matches in the surrounding ring was likely to be successful. If the TES had not been conducted, these matches would have been missed, resulting in an underestimate of the P-sample match rate and an overestimate of the DSE population estimate and the net undercount.

The TES evaluation study did find about 246,000 P-sample nonmatches and about 195,000 E-sample correct enumerations that were located beyond the ring of surrounding blocks eligible for search. These cases should have been treated as nonresidents and geocoding errors, respectively; their inclusion as nonmatched residents and correct enumerations resulted in a slight overestimate in the DSE population estimate.

6–A.10 Poststratification

Poststratification is an important aspect of dual-systems estimation. Because research suggests that the probabilities of being included in the census or in the P-sample vary by individual characteristics, it is important to classify P-sample and E-sample cases into groups or strata for which coverage probabilities are as similar as possible within the group and as different as possible from other groups. The DSE then is performed stratum by stratum.
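A minimal sketch of stratum-by-stratum estimation, using the simplified DSE form and invented poststratum figures (the real A.C.E. used hundreds of poststrata and survey weights):

```python
# Stratum-by-stratum dual-systems estimation: each poststratum gets its
# own estimate, and the total is the sum.  Data are illustrative only.

def dse_total(poststrata):
    """poststrata: iterable of dicts with a census count (net of
    whole-person imputations), correct enumeration rate, and match rate."""
    return sum(s["count"] * s["ce_rate"] / s["match_rate"] for s in poststrata)

strata = [
    {"count": 180_000_000, "ce_rate": 0.96, "match_rate": 0.94},  # e.g., owners
    {"count":  90_000_000, "ce_rate": 0.93, "match_rate": 0.88},  # e.g., renters
]
total = dse_total(strata)
print(total > sum(s["count"] for s in strata))  # True: a net undercount here
```

Grouping cases into strata with homogeneous coverage probabilities is what lets each ratio reflect that group's actual coverage rather than an average across very different groups.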

Counterbalancing the need for finely defined poststrata are two considerations: each poststratum must have sufficient sample size for reliable estimates, and the characteristics used to define the poststrata should be consistently measured between the P-sample and the E-sample. As an example, a census respondent whose household was in the E-sample may have reported age 30 for a household member when a different respondent for the same household in the P-sample reported the person to be age 29. The matched person, then, would contribute to the P-sample match rate for the 18- to 29-year-old poststrata and to the E-sample correct enumeration rate for the 30- to 49-year-old poststrata. Misclassification can be consequential if the proportions misclassified are large and if the coverage probabilities vary greatly for the affected poststrata. Finally, a consideration for the Census Bureau for 2000 was the need to define the poststrata in advance and to specify categories for which direct estimates could be developed without the complications of the modeling that was used in 1990.

Taking all these issues into account, the Census Bureau specified 448 poststrata in advance of the original A.C.E. (collapsed to 416 in the estimation; see Table E.3 in Appendix E). The somewhat larger number of A.C.E. poststrata, compared with the 357 poststrata used for the revised 1990 PES estimation, was made possible by the larger A.C.E. sample size.10 On the face of it, the original A.C.E. poststratification seemed reasonable in terms of using characteristics (age, sex, race, ethnicity, housing tenure, mail return rate) that historically have related to coverage probability differences. There was some inconsistency of classification by poststrata between the P-sample and E-sample in the A.C.E., although whether the level of inconsistency was higher or lower than in 1990 cannot be determined because of the unavailability of data for 1990 matched cases. Overall, 4.7 percent of A.C.E. matched cases (unweighted) were inconsistently classified as owner or renter; 5.1 percent were inconsistently classified among age and sex groups, and 3.9 percent were inconsistently classified among race/ethnic domains (Farber, 2001a:Table 1).

Among race/ethnicity domains, inconsistent cases as a percentage of E-sample matches showed wide variation, ranging from 1.5 percent for American Indians and Alaska Natives on reservations, to 18.3 percent for Native Hawaiians and Pacific Islanders, to 35.7 percent for American Indians and Alaska Natives off reservations. The latter inconsistency rate is very high. The major factor is that a large number of people (relative to the Native American population) who identified themselves as non-Hispanic white or other race in one sample identified themselves as American Indian or Alaska Native off reservations in the other sample (see Section 8-C.2). The effect was to lower the coverage correction factor for the latter group below what it would have been had there been no inconsistency. However, the coverage correction factor would have been lower yet for American Indians and Alaska Natives off reservations if they had been merged with the non-Hispanic white and other races domain. The reverse flow of American Indians and Alaska Natives identifying themselves as non-Hispanic whites or other races had virtually no effect on the coverage correction factor for the latter group, given its very large proportion of the population.

10  

The original 1990 PES estimation used 1,392 poststrata together with a composite estimation procedure to smooth the resulting DSEs. The much smaller revised set of 1990 poststrata were developed by analyzing census results that had become available (e.g., mail return rates, imputation rates, crowding) to determine which characteristics that could be used for poststratification best explained variations in those results. The Census Bureau also analyzed 1990 data to determine the original A.C.E. poststratification (Haines, 1999a,b).

6–A.11 Original A.C.E.: Summary of Findings

From the extensive evaluations of the original A.C.E. data conducted by Census Bureau staff, we draw several conclusions. First, coverage evaluation using the dual-systems estimation method is a highly complex effort in the census context. Many things have to go very well, and errors need to be very small, particularly as the quantity being estimated—net undercount—is so small relative to the population. Second, as a major data collection, data processing, and estimation operation, the A.C.E. in fact went extremely well. It was conducted in a timely, controlled manner. Evaluation studies of matching error, the targeted extended search, poststratification inconsistency, and treatment of movers in the A.C.E. found generally improved performance over the PES and only small biasing effects on the DSE estimates—at least for national totals and aggregates of poststrata. Percentages of household noninterviews and cases with unresolved residence, match, or enumeration status were small (although evaluation found significant effects of plausible alternative reweighting and imputation methods on the DSE estimates).

Dwarfing all of these generally positive outcomes for the original A.C.E., however, were the findings from the Evaluation Follow-up and Person Duplication Studies of substantial underestimation of duplicate and other census erroneous enumerations and overestimation of P-sample residents, particularly nonmatched cases (see Sections 6-B.1 and 6-B.2). There was no counterpart of the Person Duplication Studies for the 1990 PES.

Finding 6.1: The 2000 Accuracy and Coverage Evaluation (A.C.E.) Program operations were conducted according to clearly specified and carefully controlled procedures and directed by a very able and experienced staff. In many respects, the A.C.E. was an improvement over the 1990 Post-Enumeration Survey, achieving such successes as high response rates to the P-sample survey, low missing data rates, improved quality of matching, low percentage of movers due to more timely interviewing, and substantial reductions in the sampling variance of coverage correction factors for the total population and important population groups. However, inaccurate reporting of household residence in the A.C.E. (which also occurred in the census itself) led to substantial underestimation of duplicate enumerations in 2000 in the original (March 2001) A.C.E. estimates.

6–B A.C.E. REVISION II ESTIMATION DATA, METHODS, AND RESULTS

The A.C.E. Revision II process began in spring 2002 and was completed in early 2003. It had five major goals, which we discuss in turn: (6-B.1) improve estimates of erroneous census enumerations; (6-B.2) improve estimates of census omissions; (6-B.3) develop new models for missing data; (6-B.4) enhance the estimation poststratification; and (6-B.5) consider adjustment for correlation bias (phrased simply, the assumption that some groups are disproportionately missed in both the census and the independent P-sample survey). We consider the combined results of these efforts in Section 6-B.6 and the findings from analyses of error in the Revision II estimates in Section 6-B.7. The Revision II work did not collect new data beyond the additional data collected in evaluation studies conducted in 2001. It used a combination of the original A.C.E. data and evaluation data to develop the revised dual-systems estimates of the population and coverage error (see Table 6.3).

6–B.1 Reestimation of Erroneous Census Enumerations

A major concern of Census Bureau staff in reviewing the original March 2001 A.C.E. estimates was the smaller number of duplicates and “other residence” erroneous enumerations identified in the A.C.E. compared with the 1990 PES. To develop better estimates of erroneous enumerations, the Revision II analysts relied on two major studies: the Evaluation Follow-up Study and the Person Duplication Studies.

Evaluation Follow-Up, E-Sample

The Evaluation Follow-Up (EFU) Study was planned as one of the longer term A.C.E. evaluations (i.e., evaluations that would be completed after March 2001); it resembled a similar study conducted for the 1990 PES (see West, 1991). In the EFU, interviewers revisited a subsample of the E-sample housing units in one-fifth of the A.C.E. block clusters in January–February 2001, obtaining data for about 70,000 people on their Census Day residence (about 10 percent of the total E-sample). The EFU subsample included E-sample cases in the evaluation block clusters who were followed up in the original A.C.E. (mostly nonmatches) and a sample of E-sample cases in the evaluation block clusters who were not followed up in the original A.C.E. (mostly matches). The EFU also interviewed households containing about 52,000 P-sample cases in the evaluation clusters; see Section 6-B.2. The EFU interview asked detailed questions about other residences for the entire household, while the original A.C.E. follow-up interview conducted after the first stage of matching focused on specific nonmatched individuals. Experienced matchers used the information from the EFU interview to determine the match status of the EFU cases.

Table 6.3 Data Sources and Evaluations Used in A.C.E. Revision II

Decennial Census
Sample size: Not applicable
Use: Source of C (total census enumerations) and II (whole-person imputations and reinstated cases) in the dual-systems estimation (DSE) formula; source of the E-sample of about 700,000 people in 11,000 block clusters. Conducted spring–summer 2000.

Accuracy and Coverage Evaluation (A.C.E.)

P-Sample Interview
Sample size: About 700,000 people in 11,000 block clusters
Use: Information on nonmovers, inmovers, and outmovers (as of Census Day) at households on the independent address list, for use in first-stage matching. Conducted April–August 2000.

Follow-up Interview (also known as Person Follow-Up, or PFU)
Sample size: Nonmatched E-sample cases and selected nonmatched P-sample cases in 11,000 block clusters
Use: Additional information to facilitate second-stage matching. Conducted fall 2000.

Matching Error Study (MES)
Sample size: About 170,000 P-sample and E-sample people in 2,259 block clusters
Use: Rematch by highly trained staff of an A.C.E. subsample, using original A.C.E. information, to estimate matching error. Conducted December 2000.

Evaluation Follow-up (EFU)
Sample size: About 70,000 E-sample cases and about 52,000 P-sample cases in 2,259 block clusters
Use: Additional information on residency, collected January–February 2001; highly trained staff rematched cases using the EFU information in summer 2001.

EFU Reanalysis (also known as PFU/EFU Review)
Sample size: About 17,500 E-sample cases in 2,259 block clusters
Use: Most experienced matchers used PFU (Person Follow-Up) and EFU data to determine enumeration status. Conducted summer 2001.

Revision II EFU Reanalysis (also known as Recoding Operation)
Sample size: About 77,000 E-sample cases and about 61,000 P-sample cases in 2,259 block clusters
Use: EFU Reanalysis recoding extended to the full EFU evaluation samples (plus other cases, e.g., those with insufficient information for matching); computer recoding used for about one-half of cases; results used to correct for measurement error in cases not linked outside the search area. Conducted summer 2002 (MES results also used for the P-sample correction).

Person Duplication Studies
Sample size: Full E-sample (700,000 cases) and full P-sample (700,000 cases)
Use: Matched to census enumerations by name and birthdate nationwide. Conducted summer 2001.

Further Study of Person Duplication (FSPD)
Sample size: Full E-sample (700,000 cases) and full P-sample (700,000 cases)
Use: Refinement of Person Duplication Study matching; results used to correct for duplicate E-sample cases linked to census enumerations outside the A.C.E. search area and to correct nonmover residence status for P-sample cases linked to census enumerations outside the A.C.E. search area. Conducted summer 2002.

NOTES: Adapted from Kostanich (2003b:Chart 1).

The rematching estimated that the A.C.E. should have classified an additional 2.8 million people as erroneous rather than correct census enumerations, most often because the person lived elsewhere on Census Day. The EFU also estimated that the A.C.E. should have classified an additional 0.9 million people as correct rather than erroneous census enumerations. On balance, the EFU estimated that the A.C.E. failed to measure 1.9 million erroneous census enumerations. The EFU did not resolve the status of an estimated 4.6 million census enumerations, or 1.7 percent of the total E-sample (by comparison, the unresolved rate estimated from the original A.C.E. for the E-sample cases in the EFU was 2.6 percent).11 Population groups that exhibited the highest percentages of classification errors included people ages 18–29 and nonrelatives of the household head (Krejsa and Raglin, 2001:i–ii).

Because the EFU estimate of 1.9 million (net) unmeasured erroneous census enumerations in the A.C.E. seemed high, a subset of the EFU sample (about 17,500 cases) was reanalyzed by Census Bureau staff with the most extensive experience in matching, using the information from the original A.C.E. follow-up and the EFU interview. The result was an estimate that, on balance, the A.C.E. had failed to measure about 1.5 million erroneous census enumerations. (This estimate was produced independently of the original EFU estimate of 1.9 million net unmeasured erroneous enumerations.) However, the reanalysis could not resolve the enumeration status of an estimated 15 million cases, including some cases in which the original A.C.E. and EFU enumeration status codes (correct or erroneous) conflicted (Adams and Krejsa, 2001:i).

11 All numbers and percentages are weighted to the household population.

The reanalysis examined the source of E-sample classification errors. Of cases that changed from correct to erroneous in the reanalysis, the three largest groups were: (1) people who duplicated a correct enumeration in a group quarters (34 percent, made up of 17 percent college dormitory, 8 percent nursing home, 8 percent other); (2) people who duplicated a correct enumeration at another residence (23 percent, made up of 7 percent other home, 4 percent joint custody, 5 percent visiting, 3 percent other home for work, 4 percent other type of second home); and (3) movers who had been incorrectly coded as residents (16 percent) (Adams and Krejsa, 2001:Table 3). Of cases that changed from erroneous to correct, the three largest groups were: (1) people who had been miscoded in the original A.C.E. (16 percent); (2) movers who had been incorrectly coded as nonresidents (14 percent); and (3) correctly enumerated people for whom a duplicate enumeration at another residence or a group quarters residence had been incorrectly accepted (14 percent) (Adams and Krejsa, 2001:Table 4).

The information from the reanalysis of 17,500 E-sample cases was used in developing the October 2001 preliminary revised estimates of the population and net undercount (see Section 5-D.3). For the Revision II reestimation, the reanalysis was extended to the full 70,000-case EFU sample plus 7,000 cases not included in the EFU. To make best use of the limited time available for the full reanalysis, computer recoding was tested and ultimately used for 39,000 cases; the rest were recoded by technicians and analysts, and a special review was conducted to resolve conflicting cases.

The A.C.E. Revision II E-sample reanalysis estimated that the A.C.E. should have classified an additional 2.7 million people as erroneous rather than correct census enumerations. It also estimated that the A.C.E. should have classified 0.7 million people as correct rather than erroneous enumerations. On balance, the Revision II reanalysis estimated that the A.C.E. failed to measure 2.0 million erroneous census enumerations (slightly higher than the EFU estimate). The Revision II reanalysis did not resolve the status of an estimated 6.4 million census enumerations (2.4 percent of the total weighted E-sample; see Krejsa and Adams, 2002:Table 6). The unresolved cases were imputed an enumeration status probability (see Section 6-B.3).

Person Duplication Studies, E-Sample

A new study was implemented in summer 2001 to measure duplicate enumerations in the census and provide the basis for determining how many such duplications were not detected in the A.C.E. In a first stage, all E-sample cases were processed by computer, searching for an exact match by first and last name and month and year of birth among all nonimputed census enumerations nationwide (including group quarters and reinstated cases). In a second stage, members of pairs of households for which an exact match had been identified were statistically matched to identify additional duplicate enumerations.12
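The first-stage computer search amounts to grouping records on an exact key of name and partial birthdate. The sketch below illustrates that idea only; the record layout, field names, and case normalization are assumptions for illustration, not the Bureau's actual matching specification (which, as noted below, was more complex).

```python
from collections import defaultdict

def find_exact_duplicates(records):
    """Group person records by an exact key of first name, last name,
    and month and year of birth; any key shared by two or more records
    flags a candidate duplicate (hypothetical record layout)."""
    groups = defaultdict(list)
    for rec in records:
        key = (rec["first"].upper(), rec["last"].upper(),
               rec["birth_month"], rec["birth_year"])
        groups[key].append(rec["id"])
    # Keep only keys shared by more than one enumeration
    return {k: ids for k, ids in groups.items() if len(ids) > 1}

# Toy example: records 1 and 2 agree on name and month/year of birth
census = [
    {"id": 1, "first": "Ann", "last": "Lee", "birth_month": 3, "birth_year": 1970},
    {"id": 2, "first": "ANN", "last": "lee", "birth_month": 3, "birth_year": 1970},
    {"id": 3, "first": "Bob", "last": "Lee", "birth_month": 7, "birth_year": 1965},
]
dups = find_exact_duplicates(census)
```

In the actual study, pairs flagged this way seeded the second-stage statistical matching of the two households' remaining members.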

This computer matching study estimated that 1.9 million census household enumerations (weighted) duplicated another household enumeration within the A.C.E. search area (block cluster and surrounding ring); another 2.7 million census household enumerations duplicated another household enumeration outside the A.C.E. search area; and 0.7 million census household enumerations duplicated a group quarters enumeration (Mule, 2001:Table 6). In addition, an estimated 2.9 million census household enumerations duplicated other enumerations in housing units that were deleted from the census as part of the special summer 2000 effort to reduce duplicate enumerations from duplicate MAF addresses. Only 260,000 of these links were outside the A.C.E. search area.

For the Revision II reestimation, a Further Study of Person Duplication was conducted, in which some refinements were made to the computer matching methodology, including greater use of statistical matching and computing a probability for establishing a duplicate link instead of using a model weight approach. Statistical matching was used when two or more duplicates were detected within a household; exact matching had to be relied on when there was a link for only one person to another household or to a group quarters. No efficiency correction was made, in contrast to the October 2001 preliminary revised estimates, when Fay (2001) increased the estimate of duplicate enumerations from the Person Duplication Studies by making an “efficiency” adjustment to allow for the likelihood that the computer matching did not detect all the duplicates that a combined computer and clerical matching process would have found. For the basis of the decision not to make an efficiency correction in Revision II, see U.S. Census Bureau (2003c:52).

12 See Mule (2001) for details of the matching, which was more complex than indicated in the text. Such matching could not be conducted for the 1990 PES because the 1990 census ascertained only year of birth and did not capture names for cases not in the PES E-sample.

In total, the Further Study of Person Duplication estimated 5.8 million duplicates in the 2000 census, an increase of 0.5 million over the original Person Duplication Studies. This number included 2.5 million duplicates involving another household in the A.C.E. search area; 2.7 million duplicates involving another household outside the A.C.E. search area; and 0.6 million duplicates involving group quarters, of which 0.5 million were outside the A.C.E. search area (Mule, 2002a:Table 2). The estimated percentages of duplicate household-to-household enumerations among total census household enumerations were higher for non-Hispanic blacks, Hispanics, and American Indians than for other race/ethnicity groups, and higher for children under 18 and young adults ages 18–29 than for older people. The estimated percentages of duplicate household-to-group quarters enumerations were higher for non-Hispanic blacks and Asians than for other race/ethnicity groups and higher for young men and women ages 18–29 than for children or older adults (Mule, 2002a:Tables F1, F3, F5, F7). We discuss in greater detail the characteristics of duplicate enumerations in Section 6-C.2.

The Further Study of Person Duplication was evaluated by two different studies. One study compared the Further Study of Person Duplication results with duplicates detected using the Census Bureau’s database of administrative records (the Census and Administrative Records Duplication Study—see Bean and Bauder, 2002). The other study used the Bureau’s elite matching team to clerically review samples of duplicates detected by the Further Study of Person Duplication statistical matching and by the administrative records review outside the A.C.E. search area (the Clerical Review of Census Duplicates Study—see Byrne et al., 2002). The clerical review concluded that the Further Study of Person Duplication was more effective than the administrative records review in finding duplicates that were geographically close. Conversely, the administrative records review identified more duplicates that were geographically distant, but many of them were questionable.

With regard to the accuracy of the Further Study of Person Duplication, the clerical review agreed 95 percent of the time when the Further Study of Person Duplication established an E-sample duplicate link outside the A.C.E. search area and 94 percent of the time when the Further Study of Person Duplication concluded that an apparent link was not in fact a duplication. For the 1.2 million additional duplicates found by the administrative records review, but not the Further Study of Person Duplication, the clerical review agreed with the administrative records review 37 percent of the time, disagreed 47 percent of the time, and was undecided about the rest (U.S. Census Bureau, 2003c:38–39). Overall, these evaluations indicate that the Revision II estimates of census duplicates, even though higher than the preliminary revised estimates, were still an underestimate of duplication in the census.

Estimating Correct Census Enumerations in Revision II

For the Revision II estimates of correct census enumerations to include in the DSE formula, the Census Bureau used both the EFU and the Further Study of Person Duplication results. It used the Further Study of Person Duplication directly to estimate correct enumerations from among the E-sample cases that duplicated another census household or group quarters enumeration outside the A.C.E. search area (including a small number of links to deleted housing unit enumerations). It used the EFU to estimate factors to correct the measurement error in the original A.C.E. estimate of correct enumerations among the remaining E-sample cases. However, because the EFU was a subset of the full E-sample (see Table 6.3), the correction factors were calculated only for a small number of aggregate poststrata and not for the full set of E-sample poststrata (see Kostanich, 2003b:Table 4).

For the estimate from the Further Study of Person Duplication of correct enumerations among E-sample cases linked to enumerations outside the A.C.E. search area, the Census Bureau had to assign probabilities of being a duplicate or not—the Further Study of Person Duplication by itself could not determine which of two linked enumerations was correct and which the duplicate. In some cases, a decision rule was used to assign enumeration status as 0 (erroneous) or 1 (correct). Thus, for E-sample links that were already coded as erroneous in the A.C.E., this code was accepted; for E-sample links with group quarters residents, the group quarters enumeration was assumed to be correct and the E-sample case erroneous; for E-sample links of people age 18 and over who were listed as a child of the householder in one source and not in the other, the enumeration for “not a child of” was accepted as correct. This rule handled adults living independently in housing units who were also listed at their parents’ residence, such as college students living off campus. For other duplicate links, a correct enumeration probability was assigned so that the weighted number of correct enumerations would be one-half the total weighted number of duplicate links. Probabilities were assigned separately within 18 categories defined by race/ethnicity (blacks, Hispanics, others), housing tenure, and type of linkage situation (entire household, children under age 18, all other links).
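The “one-half of the weighted links are correct” allocation can be sketched as solving for a single residual probability within a category, after the decision-rule cases have been fixed at 0 or 1. The function below illustrates that arithmetic only; the data layout is hypothetical, not the Bureau's production procedure.

```python
def residual_probability(decided, undecided_weights):
    """Solve for the single probability p assigned to undecided duplicate
    links in a category so that the weighted number of correct
    enumerations equals one-half the total weighted number of links.
    `decided` is a list of (weight, status) pairs with status fixed at
    1 (correct) or 0 (erroneous) by decision rules; layout illustrative."""
    total = sum(w for w, _ in decided) + sum(undecided_weights)
    decided_correct = sum(w for w, s in decided if s == 1)
    target = 0.5 * total
    undecided_total = sum(undecided_weights)
    if undecided_total == 0:
        return 0.0
    p = (target - decided_correct) / undecided_total
    return min(max(p, 0.0), 1.0)  # clamp to a valid probability
```

For example, if the decision rules fix one link (weight 10) as erroneous and one (weight 10) as correct, two undecided links of weight 10 each receive probability 0.5; when the fixed cases are skewed, the residual probability moves away from 0.5 (and is clamped at the [0, 1] bounds).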

The effect of the Revision II adjustments for measurement error and undetected census duplications on the estimated number of erroneous census enumerations was substantial: the estimate increased from 12.5 million in the original A.C.E. to 17.2 million in Revision II.13 Correspondingly, the estimated correct enumeration rate decreased from 95.3 percent in the original A.C.E. to 93.5 percent in Revision II, which had the effect of lowering the DSE estimate of the population and reducing the net undercount (see Section 6-B.6).
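The direction of these effects follows from the structure of the dual-systems estimator, in which the population estimate scales with the correct enumeration rate and inversely with the P-sample match rate. The stylized calculation below uses a made-up base count; only the four rates are taken from this chapter (the E-sample rates above and the P-sample match rates of 91.59 and 91.76 percent reported later in this section).

```python
def dse(data_defined_count, correct_enum_rate, match_rate):
    """Stylized dual-systems estimate: the census count of data-defined
    persons, scaled by the estimated correct enumeration rate and
    divided by the P-sample match rate."""
    return data_defined_count * correct_enum_rate / match_rate

base = 270_000_000  # hypothetical data-defined census count
original = dse(base, 0.953, 0.9159)  # original A.C.E. rates
revised = dse(base, 0.935, 0.9176)   # Revision II rates
```

A lower correct enumeration rate and a higher match rate both pull the estimate down, so `revised` falls several million below `original`, consistent with the shift from an estimated net undercount to an estimated net overcount.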

6–B.2 Reestimation of Census Omissions

The preliminary revised (October 2001) A.C.E. estimates did not take account of possible errors in the P-sample. The Revision II estimation did take account of measurement errors in the P-sample, using results from a reanalysis of the Evaluation Follow-Up Study and from the Further Study of Person Duplication (see Section 6-B.1), as well as results from the Matching Error Study (see Section 6-A.8).

13 From tabulations by panel staff (see National Research Council, 2001a:Table 7-5) of the Full Sample Poststratum-Level Summary File, provided to the panel June 20, 2003 (see Haines, 2002:5).

Evaluation Follow-Up Study, P-Sample

The original Evaluation Follow-Up Study completed in summer 2001 included a sample of 52,000 P-sample cases in addition to the sample of 70,000 E-sample cases discussed in Section 6-B.1 (see Raglin and Krejsa, 2001). For the preliminary revised October 2001 population estimates, no effort was made to reexamine the P-sample component of the EFU. For Revision II, all 52,000 P-sample cases were reanalyzed by a combination of computer matching and clerical review by the Bureau’s most experienced matchers using information from the EFU interview and the original A.C.E. follow-up interview.14 The purpose of the original EFU and the reanalysis for P-sample cases was to determine their residence status on Census Day; if they were not Census Day residents then they did not belong in the P-sample for estimating the match rate component of the DSE formula (see Section 6-A.3).

After a special review to minimize the number of conflicting cases, the Revision II P-sample reanalysis estimated that the A.C.E. should have classified 2.5 million Census Day residents as nonresidents; conversely, it should have classified 0.3 million nonresidents as residents. The net difference is 2.2 million people who should have been excluded from the P-sample match rate component because they were not Census Day residents. (Classification errors also occurred for inmovers who were used to estimate the number of movers in the DSE formula for most poststrata. On net, the reanalysis estimated a decrease in the estimated number of inmovers from 14.1 million to 13.3 million.) The Revision II reanalysis did not resolve the status of an estimated 7 million P-sample cases (compared with 5.8 million unresolved P-sample cases in the original A.C.E.) (Krejsa and Adams, 2002:Table 11). The unresolved cases were imputed a residence status probability (see Section 6-B.3).

14 See the description of the Revision II reanalysis operation in Section 6-B.1; the P-sample component of the reanalysis added another 9,000 cases to the 52,000 examined in the original EFU.
Further Study of Person Duplication, P-Sample

The original Person Duplication Studies implemented in summer 2001 matched all of the P-sample cases, as well as all of the E-sample cases, to all nonimputed census enumerations nationwide. The objective for the P-sample was to determine how many cases originally classified as Census Day residents matched census enumerations outside the A.C.E. search area and, hence, might not be part of the P-sample. The preliminary revised October 2001 population estimates did not use the P-sample results. For Revision II, the Further Study of Person Duplication was implemented for the P-sample as well as the E-sample (see Section 6-B.1). The Further Study of Person Duplication results for P-sample cases who had been resident nonmovers in the original A.C.E. were used in the Revision II estimation as summarized below.

For original P-sample resident nonmovers, the Further Study of Person Duplication found links for 5.4 million cases to census enumerations outside the A.C.E. search area. Of these, 5 million involved links of household members to members of other households, and 0.4 million involved links of household members to group quarters residents (Mule, 2002a:Table 5). Of these linked P-sample cases, 2.7 million had been originally coded as P-sample nonmatches and the other 2.7 million had been originally coded as P-sample matches (implying that the matched census enumerations might also be duplicates).

The estimated percentages of linked P-sample household-to-census household cases among total original P-sample resident nonmovers were higher for non-Hispanic blacks, Hispanics, Native Hawaiians and Other Pacific Islanders, and American Indians than for other race/ethnicity groups, and higher for children under 18 and young adults ages 18–29 than for older adults. The estimated percentages of linked P-sample household-to-census group quarters cases were higher for non-Hispanic blacks than for other race/ethnicity groups and higher for young men and women ages 18–29 than for children or older adults (Mule, 2002a:Tables G1, G3, G5, G7).

The Further Study of Person Duplication P-sample component was evaluated by the Census and Administrative Records Study and the Clerical Review of Census Duplicates Study (see Section 6-B.1), which generally supported the accuracy of the Further Study of Person Duplication links for P-sample nonmover residents (U.S. Census Bureau, 2003c:39–41). The clerical review agreed 96 percent of the time when the Further Study of Person Duplication established a P-sample duplicate link with a census enumeration outside the A.C.E. search area, but only 66 percent of the time when the Further Study of Person Duplication concluded that an apparent link was not in fact a duplication. For the 2.3 million additional duplicates found by the administrative records review, but not the Further Study of Person Duplication, the clerical review agreed with the administrative records review 29 percent of the time, disagreed 56 percent of the time, and was undecided about the rest (U.S. Census Bureau, 2003c:39–41). Overall, these evaluations indicate that the Revision II estimates of P-sample nonmover resident cases that duplicated a census enumeration outside the A.C.E. search area, while substantial in number, still probably underestimated cases that should have been classified as nonresident and dropped from the P-sample.

Estimating P-Sample Residents and Matches in Revision II

The use of evaluation study results for the P-sample in the Revision II estimates of the population was complex. It involved adding separate terms to the DSE formula for total and matched nonmovers not linked outside the search area, total and matched nonmovers linked outside the search area, total and matched outmovers, and total inmovers with a duplication adjustment (Kostanich, 2003b:15). We briefly summarize the main elements in the Revision II estimation.

The correction for nonmovers linked to census enumerations outside the search area was performed using the Further Study of Person Duplication P-sample component in a manner similar to that described for the E-sample in Section 6-B.1. The purpose of the correction was to remove some of the linked cases from the P-sample on the assumption that some of them were not truly residents in the A.C.E. search area on Census Day. Unlike the E-sample, however, there was no obvious assumption on which to base the correction—it might be that one-half of the linked P-sample cases were residents and one-half nonresidents, but it might be that the proportion of residents was not one-half. By default, the Revision II estimation assigned residence probabilities to each P-sample case linked to a census enumeration outside the A.C.E. search area in the same manner as for the E-sample duplicates described in Section 6-B.1 (see also U.S. Census Bureau, 2003c:58). The reduction in nonmover residents was proportionately greater for nonmatched cases than for matched cases, which had the result of raising the P-sample match rate and lowering the DSE and net undercount overall.

The correction for measurement error for P-sample outmovers and nonmover cases that did not match to a census enumeration outside the search area was based on the results of the reanalysis of the Evaluation Follow-up Study and the Matching Error Study. The EFU reanalysis results were used to adjust residence probabilities, and the Matching Error Study results were used to correct for false matches and false nonmatches. As was the case for the E-sample, the P-sample correction factors from the EFU reanalysis and the Matching Error Study were calculated only for a small number of aggregate poststrata and not for the full set of P-sample poststrata.

On balance, the effect of the adjustments to the P-sample was to raise the match rate slightly, from 91.59 percent to 91.76 percent. In turn, this increase lowered the DSE population and net undercount estimates (see Section 6-B.6).

6–B.3 New Models for Missing Data

In A.C.E. Revision II, as in the original A.C.E., not all E-sample cases could be clearly assigned a status as correct or erroneous, and not all P-sample cases could be clearly assigned a status as a resident or nonresident or, if a resident, as a match or nonmatch. Enumeration, residence, and match statuses had to be imputed to these unresolved cases. Evaluation of the original imputation procedures, which used only variables that were available after the initial matching and before follow-up to define imputation cells, indicated that they contributed significant variability, and possibly bias, to the original DSE population estimates (see Section 6-A.6). For Revision II, new imputation procedures were devised; they used variables from the evaluation follow-up data, such as whether the respondent provided an alternate Census Day address (Beaghen and Sands, 2002; Kostanich, 2003a:Ch.4). New procedures were also devised to adjust P-sample Census Day weights for households determined in the Evaluation Follow-Up reanalysis to be noninterviews and to impute residence and match status to the small number of E-sample and P-sample cases with conflicting information between the original A.C.E. follow-up and the Evaluation Follow-Up. The new procedures were estimated to have reduced the variance from imputation in the Revision II DSE estimates by 60 percent compared with the original A.C.E. procedures (Kearney, 2002:5).
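The general idea of cell-based imputation can be sketched as follows: unresolved cases receive the mean resolved status (0/1) of cases sharing the same imputation cell. This is a simplified illustration of the technique, with a hypothetical data layout and cell variable; the Bureau's actual Revision II procedures were more elaborate.

```python
from collections import defaultdict

def impute_status_probabilities(resolved, unresolved, cell_vars):
    """Cell-mean imputation sketch: each unresolved case gets the mean
    0/1 status of resolved cases in its imputation cell, where cells are
    defined by follow-up variables (e.g., whether an alternate Census
    Day address was reported).  Field names are illustrative."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for case in resolved:
        cell = tuple(case[v] for v in cell_vars)
        sums[cell] += case["status"]
        counts[cell] += 1
    overall = sum(c["status"] for c in resolved) / len(resolved)
    out = []
    for case in unresolved:
        cell = tuple(case[v] for v in cell_vars)
        # Fall back to the overall resolved mean when a cell is empty
        out.append(sums[cell] / counts[cell] if counts[cell] else overall)
    return out

resolved = [
    {"alt_addr": True, "status": 0},
    {"alt_addr": True, "status": 1},
    {"alt_addr": False, "status": 1},
]
unresolved = [{"alt_addr": True}, {"alt_addr": False}]
probs = impute_status_probabilities(resolved, unresolved, ["alt_addr"])
```

Using follow-up variables such as the alternate-address indicator to define the cells is what distinguished the Revision II procedures from the original ones, which could use only pre-follow-up variables.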

6–B.4 Refined Poststratification

The Revision II effort followed the advice of many statisticians to reassess the definition of the poststrata (population groups) for which DSE population estimates were calculated, by examining the usefulness of variables obtained in the A.C.E. itself.15 For the P-sample, for which considerable research had been done prior to 2000, the poststrata were retained largely unchanged from the original A.C.E. (see Section 6-A.10). The only change was to split people ages 0–17 into two groups: ages 0–9 and ages 10–17.

For the E-sample, the poststrata were substantially revised (U.S. Census Bureau, 2003c:18). For each of the seven race/ethnicity domains, separate E-sample poststrata were created for proxy responses, which were cases obtained by enumerators from landlords or neighbors. Generally, these cases had low correct enumeration rates (average 60 percent).16 For nonproxy cases for non-Hispanic whites, the revised E-sample poststratification dropped region, metropolitan population size and type of enumeration area, and mail return rate of census tract. In their place it used household size for heads of nuclear families and all others by type of census return (early mail return, late mail return, early nonmail return, late nonmail return). Housing tenure (owner/renter) was retained. For non-Hispanic blacks and Hispanics, the stratification was the same as for non-Hispanic whites except that it did not use household size. For Native Hawaiians and Other Pacific Islanders, non-Hispanic Asians, and American Indians off reservations, the stratification dropped housing tenure and used household relationship by the four categories of type of census return. For American Indians on reservations, the stratification used only household relationship (head of nuclear family, other). The revised poststrata captured more of the variation in correct enumeration rates than the original poststrata.

15 Several participants at a workshop in fall 1999 (National Research Council, 2001e) urged the Bureau to use modeling techniques to develop poststrata on the basis of the A.C.E. data themselves. The model would assess the best predictors of coverage, but the Bureau decided such an approach was not feasible for the original estimation.

16 Tabulations by panel staff from the Full E-Sample Post-Stratum-Level Summary File, provided to the panel June 20, 2003 (see Haines, 2002:5).
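A poststratification of this kind amounts to mapping each case to a key built by crossing stratification variables. The sketch below follows only the broad logic described above (proxy responses split out per domain; nonproxy cases crossed with tenure and type/timing of census return); the field names and codings are assumptions, and the household-size and relationship dimensions are omitted for brevity.

```python
def e_sample_poststratum(case):
    """Illustrative (simplified) assignment of a revised E-sample
    poststratum key; not the Bureau's actual specification."""
    if case["proxy"]:
        # Proxy responses form their own stratum within each domain
        return (case["domain"], "proxy")
    return_type = ("mail" if case["mail_return"] else "nonmail",
                   "early" if case["early"] else "late")
    return (case["domain"], "nonproxy", case["tenure"], return_type)
```

DSE correct enumeration rates are then computed separately within each key, so a sharper partition of the E-sample (as here, by proxy status and return type) captures more of the variation in those rates.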

On net, the new poststratification had little effect on the national net undercount. However, for some small geographic areas with large numbers of proxy responses, the new poststratification produced large estimated net overcounts of their population (5 percent or more—see U.S. Census Bureau, 2003c:Table 10). While such sizeable net overcount estimates could be accurate, they could have resulted instead from the lack of comparability of the Revision II E-sample and P-sample poststrata. Consider the situation in which proxy respondents (landlords or neighbors) tended not only to report people who should not have been counted (erroneous enumerations), but also to omit people who should have been counted (omissions). In such a case, the Revision II estimation would have calculated a high erroneous enumeration rate, based on the proxy poststrata, but it would not have calculated a comparably high nonmatch rate because there were no P-sample poststrata comparable to those for proxy respondents in the E-sample—all P-sample poststrata included household respondents as well as proxies. The result would have been an underestimate of the DSE population. Nationally, this underestimation would have little effect because only 3 percent of household members were enumerated by proxies. However, the effects could have been substantial for some subnational areas (see Section 6-D.4).

6–B.5 Adjustment for Correlation Bias

The last major goal of the Revision II A.C.E. work was to consider a further adjustment for census omissions among selected population groups that were assumed to be missed disproportionately in both the P-sample and the census; such groups contribute disproportionately to the people who are not included in the DSE estimate of the population. This phenomenon is referred to as “correlation bias.” The term correlation bias has been used in several ways in research and evaluation of dual-systems estimation (see National Research Council, 1999b:Ch.4). Here, we use the term loosely to refer to the phenomenon in which some groups are more apt than other groups to be missed in both systems (P-sample and census).

For several censuses, comparisons of demographic analysis results with those from dual-systems estimation have lent support to a hypothesis that adult black men (and nonblack men to a much lesser extent) contribute disproportionately compared with black (nonblack) women to the group of people not included in the DSE estimate of the population. This hypothesis is based on the finding of higher estimated net undercount rates from demographic analysis compared with dual-systems estimation for black men relative to black women, together with the finding that sex ratios (men per 100 women) appear reasonable in demographic analysis but are low in both the DSE and the census, particularly for blacks (see Table 6.4).

Census Bureau analysts previously conducted research on the possibilities of using the sex ratios from demographic analysis to reduce correlation bias in DSE population estimates (see Bell, 1993). However, not until the 2000 A.C.E. Revision II work did the Bureau seriously contemplate making such an adjustment to the DSE estimates.

A motivation for using a sex ratio adjustment for correlation bias emerged from the results of all the revisions incorporated into the A.C.E. Revision II population estimates summarized in Sections 6-B.1 through 6-B.4. The corrections for duplication of E-sample enumerations with census enumerations outside the A.C.E. search area and other measurement error in the E-sample decreased the estimated correct enumeration rate, while the correction for duplication of P-sample nonmover resident cases with census enumerations outside the search area increased the estimated match rate. The consequence was to reduce the DSE population estimate by 6.3 million people from the original March 2001 estimate, leaving an estimated net overcount of 3 million people, or 1.1 percent of the household population, instead of an estimated net undercount of 1.2 percent or 3.3 million people. Furthermore, several race domains, including American Indians on reservations, non-Hispanic blacks, non-Hispanic Asians, and non-Hispanic whites, went from an estimated net undercount in March 2001 to an estimated net overcount in Revision II (Mule, 2003:Table 2, cumulative column for the row labeled “P-sample coding corrections”). The Bureau was concerned that the DSE without a correlation bias adjustment could move the estimates further from the truth for groups that were truly undercounted but were estimated to be overcounted (U.S. Census Bureau, 2003c:50).

Table 6.4 Sex Ratios (Men per 100 Women) from the Census, Demographic Analysis (DA), Accuracy and Coverage Evaluation (A.C.E.) Revision II, and Post-Enumeration Survey (PES), 1990 and 2000

                              1990                          2000
Race/Age Group        DA      PES   Census   Revised DA(a)  A.C.E. Revision II(b)  Census
Black
  Total              95.2    90.4    89.6        95.1              90.8             90.6
  0–17 years        102.4   102.4   102.4          —                 —                —
  0–9                  —       —       —         102.7             103.1            103.1
  10–17                —       —       —         102.7             103.4            103.4
  18–29 years        99.3    92.1    94.0       100.2              94.0             93.9
  30–49 years        95.9    89.0    86.2        96.9              88.9             88.5
  50 or more years   78.3    72.1    71.5        77.2              73.4             73.4
Nonblack
  Total              97.2    96.5    95.9        98.1              97.6             97.1
  0–17 years        105.2   105.5   105.5          —                 —                —
  0–9                  —       —       —         104.8             105.2            105.2
  10–17                —       —       —         105.5             105.9            106.0
  18–29 years       104.9   104.6   103.8       106.7             107.1            105.3
  30–49 years       102.0   100.3    99.6       102.3             100.7            100.6
  50 or more years   80.8    79.9    79.4        84.2              83.3             83.1

NOTE: —, not available.

a “Revised” demographic analysis estimates are those from October 2001, incorporating changes to births and legal and illegal immigration; see Table 5.3.

b A.C.E. Revision II estimates are before adjustment for correlation bias.

SOURCE: Robinson (2001a:Table 8); Robinson and Adlakha (2002:Table 4).

Hence, the Bureau made an adjustment for correlation bias based on sex ratios from demographic analysis. Assuming that the DSE estimates for females were correct, the P-sample match rates were recomputed for black males ages 18–29, 30–49, and 50 and over, and for all other males ages 30–49 and 50 and over (U.S. Census Bureau, 2003c:5). The resulting match rate estimates are termed "census inclusion rates"; for the adjusted groups, they are lower than the originally estimated match rates (see "Match Rates" in Section 6-B.6). No correlation bias adjustment was made for children under age 18 or for nonblack men ages 18–29 because the sex ratios for these groups exceeded 100 by similar amounts in all three sources: demographic analysis, the A.C.E., and the census (see Table 6.4). Because of the limitations of demographic analysis, it was not possible to determine whether adjustments for correlation bias were warranted for particular groups of nonblack men age 30 and over (e.g., Asians or Hispanics).
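The mechanics of the adjustment can be sketched in a few lines. Because a poststratum's DSE is inversely proportional to its P-sample match rate, forcing the male estimate to reproduce the demographic analysis sex ratio against the accepted female estimate is equivalent to scaling the male match rate downward. This is a simplified illustration of the idea with invented numbers, not the Bureau's production formula:

```python
def adjusted_inclusion_rate(match_rate_male, dse_male, dse_female, da_sex_ratio):
    """Scale the male match rate so that the implied male DSE (which is
    inversely proportional to the rate) matches the demographic analysis
    sex ratio applied to the female DSE, taken as correct."""
    target_male = dse_female * da_sex_ratio / 100.0
    return match_rate_male * dse_male / target_male

# Invented example: female DSE of 5.0 million, unadjusted male DSE of
# 4.7 million, DA sex ratio of 96 men per 100 women. The adjusted
# ("census inclusion") rate falls below the unadjusted 90 percent,
# which raises the implied male estimate to 4.8 million.
rate = adjusted_inclusion_rate(90.0, 4.7e6, 5.0e6, 96.0)
```

When the unadjusted male DSE already satisfies the benchmark sex ratio, the function leaves the match rate unchanged, which matches the report's statement that no adjustment was made for groups whose ratios agreed across sources.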

6–B.6 Putting It All Together: The A.C.E. Revision II Estimates

Four tables present results from the A.C.E. Revision II effort for major population groups. They are: a table of correct enumeration rates (Table 6.5); a table of match rates (Table 6.6), including rates before and after the correlation bias adjustment described in Section 6-B.5; a table of net undercount rates (Table 6.7); and a table of the percentage contribution of each major change in the A.C.E. Revision II estimation method to the Revision II net undercount rates (Table 6.8).

Correct Enumeration Rates

Correct enumeration rates show marked changes between the A.C.E. Revision II estimates and the original March 2001 A.C.E. estimates (Table 6.5). The direction of the change is always toward a lower rate: about 2 percentage points for most groups and 5 percentage points for American Indians on reservations. Most often, the Revision II rate is below the corresponding PES rate, which is not surprising because there was no equivalent to the Further Study of Person Duplication in the PES, and the results of the Evaluation Follow-Up that was conducted were not used to adjust the PES. It is quite possible that a nationwide matching study would have turned up more erroneous enumerations in 1990 than the PES estimated, although whether the increase would have been as great as in the A.C.E. cannot be assessed.

Table 6.5 Correct Enumeration Rates Estimated from the E-Sample (percents), 2000 A.C.E. and 1990 PES, by Race/Ethnicity Domain and Housing Tenure (weighted)

| Domain and Tenure Group | Original A.C.E. (March 2001) | A.C.E. Revision II (March 2003) | 1990 PES (a) |
|---|---|---|---|
| American Indian/Alaska Native on Reservation | | | |
| Owner | 95.65 | 90.95 | 91.54 (b) |
| Renter | 96.15 | 91.23 | — |
| American Indian/Alaska Native off Reservation | | | |
| Owner | 94.56 | 92.42 | — |
| Renter | 93.16 | 91.09 | — |
| Hispanic Origin | | | |
| Owner | 96.25 | 94.33 | 95.56 |
| Renter | 92.79 | 90.88 | 90.58 |
| Black (Non-Hispanic) | | | |
| Owner | 94.25 | 92.28 | 92.84 |
| Renter | 91.16 | 88.96 | 89.19 |
| Native Hawaiian/Other Pacific Islander | | | |
| Owner | 93.79 | 91.86 | — |
| Renter | 92.33 | 90.67 | — |
| Asian (Non-Hispanic) (c) | | | |
| Owner | 95.84 | 93.70 | 93.13 |
| Renter | 92.45 | 91.41 | 92.22 |
| White and Other Races (Non-Hispanic) | | | |
| Owner | 96.70 | 95.10 | 95.84 |
| Renter | 93.20 | 91.25 | 92.61 |
| Total | 95.28 | 93.48 | 94.27 |

NOTES: Correct enumeration rates are correct enumerations divided by the sum of correct and erroneous enumerations; —, not estimated. See Appendix E (Table E.3) for definitions of race/ethnicity domains.

a Revision II and PES rates are not comparable because there was no equivalent to the Further Study of Person Duplication in 1990 and the results of the 1990 Evaluation Follow-Up were not used to adjust the estimated rates.

b Total; not available by tenure.

c 1990 correct enumeration rates include Pacific Islanders.

SOURCES: Original A.C.E. and PES correct enumeration rates from Davis (2001:Tables E-2, F-1, F-2); Revision II A.C.E. correct enumeration rates by race/ethnicity and housing tenure group from table provided by the U.S. Census Bureau to the panel, May 22, 2003; total Revision II correct enumeration rate from Fenstermaker (2002:Table 9).

Match Rates

Match rates show very small changes between the A.C.E. Revision II estimates and the original March 2001 A.C.E. estimates (Table 6.6). The direction of the change is usually toward a higher rate. For most groups, the Revision II rate is similar to the corresponding PES rate, even though Revision II estimated a much reduced net undercount (or a net overcount) compared with the PES (see Table 6.7). The reason for this result is the much larger number of census whole-person imputations and reinstated records that could not be included in the A.C.E. E-sample for matching (see Section 6-C.1).

Table 6.6 Match Rates and Census Inclusion Rates Estimated from the P-Sample (percents), 2000 A.C.E. and 1990 PES, by Race/Ethnicity Domain and Housing Tenure (weighted)

| Domain and Tenure Group | Original A.C.E. Match Rate (March 2001) | Revision II Match Rate (March 2003) | Revision II Census Inclusion Rate (March 2003) | 1990 PES Match Rate (a) |
|---|---|---|---|---|
| American Indian/Alaska Native on Reservation | | | | |
| Owner | 85.43 | 86.38 | 86.13 | 78.13 (b) |
| Renter | 87.08 | 87.34 | 87.14 | — |
| American Indian/Alaska Native off Reservation | | | | |
| Owner | 90.19 | 90.86 | 90.54 | — |
| Renter | 84.65 | 84.48 | 84.25 | — |
| Hispanic Origin | | | | |
| Owner | 90.79 | 91.25 | 90.96 | 92.81 |
| Renter | 84.48 | 84.57 | 84.34 | 82.45 |
| Black (Non-Hispanic) | | | | |
| Owner | 90.14 | 90.56 | 88.27 | 89.65 |
| Renter | 83.67 | 83.88 | 82.03 | 82.28 |
| Native Hawaiian/Other Pacific Islander | | | | |
| Owner | 87.36 | 87.46 | 87.15 | — |
| Renter | 82.39 | 83.49 | 83.27 | — |
| Asian (Non-Hispanic) (c) | | | | |
| Owner | 92.34 | 92.66 | 92.32 | 93.71 |
| Renter | 87.33 | 87.37 | 87.07 | 84.36 |
| White and Other Races (Non-Hispanic) | | | | |
| Owner | 94.60 | 95.02 | 94.63 | 95.64 |
| Renter | 88.37 | 88.43 | 88.14 | 88.62 |
| Total | 91.59 | 91.76 | 91.19 | 92.22 |

NOTES: Match rates are matches divided by the sum of matches and nonmatches; census inclusion rates are match rates adjusted for correlation bias for adult males using sex ratios from demographic analysis (see U.S. Census Bureau, 2003c:49–52); —, not estimated. See Appendix E (Table E.3) for definitions of race/ethnicity domains.

a Revision II and PES rates are not comparable because there was no equivalent to the Further Study of Person Duplication in 1990, and the results of the 1990 Evaluation Follow-Up and Matching Error Studies were not used to adjust the estimated match rates.

b Total; not available by tenure.

c 1990 match rates include Pacific Islanders.

SOURCES: Original A.C.E. and PES match rates from Davis (2001:Tables E-2, F-1, F-2); Revision II A.C.E. match rates and census inclusion rates for race/ethnicity and housing tenure groups from table provided by the U.S. Census Bureau to the panel, May 22, 2003; total Revision II rates from Fenstermaker (2002:Table 9).

Table 6.7 Estimated Net Undercount Rates for Major Groups (percents), Original 2000 A.C.E. (March 2001), Revision II A.C.E. (March 2003), and 1990 PES (standard error percents in parentheses)

| Major Group | Original A.C.E. Estimate (S.E.) | Revision II A.C.E. Estimate (S.E.) | 1990 PES Estimate (S.E.) |
|---|---|---|---|
| Total Population | 1.18 (0.13) | −0.49 (0.20) | 1.61 (0.20) |
| Race/Ethnicity Domain | | | |
| American Indian/Alaska Native on Reservation | 4.74 (1.20) | −0.88 (1.53) | 12.22 (5.29) |
| American Indian/Alaska Native off Reservation | 3.28 (1.33) | 0.62 (1.35) | — |
| Hispanic Origin | 2.85 (0.38) | 0.71 (0.44) | 4.99 (0.82) |
| Black (Non-Hispanic) | 2.17 (0.35) | 1.84 (0.43) | 4.57 (0.55) |
| Native Hawaiian or Other Pacific Islander | 4.60 (2.77) | 2.12 (2.73) | — |
| Asian (Non-Hispanic) (a) | 0.96 (0.64) | −0.75 (0.68) | 2.36 (1.39) |
| White and Other Races (Non-Hispanic) | 0.67 (0.14) | −1.13 (0.20) | 0.68 (0.22) |
| Age and Sex | | | |
| Under 10 years (b) | 1.54 (0.19) | −0.46 (0.33) | 3.18 (0.29) |
| 10–17 years | 1.54 (0.19) | −1.32 (0.41) | 3.18 (0.29) |
| 18–29 years, Male | 3.77 (0.32) | 1.12 (0.63) | 3.30 (0.54) |
| 18–29 years, Female | 2.23 (0.29) | −1.39 (0.52) | 2.83 (0.47) |
| 30–49 years, Male | 1.86 (0.19) | 2.01 (0.25) | 1.89 (0.32) |
| 30–49 years, Female | 0.96 (0.17) | −0.60 (0.25) | 0.88 (0.25) |
| 50 years and over, Male | −0.25 (0.18) | −0.80 (0.27) | −0.59 (0.34) |
| 50 years and over, Female | −0.79 (0.17) | −2.53 (0.27) | −1.24 (0.29) |
| Housing Tenure | | | |
| Owner | 0.44 (0.14) | −1.25 (0.20) | 0.04 (0.21) |
| Renter | 2.75 (0.26) | 1.14 (0.36) | 4.51 (0.43) |

NOTES: Net undercount is the difference between the estimate (A.C.E. or PES) and the census, divided by the estimate. Minus sign (−) indicates a net overcount. For 2000, total population is the household population; for 1990, it is the household population plus the noninstitutional group quarters population. See Appendix E (Table E.3) for definitions of race/ethnicity domains; —, not estimated.

a In 1990 includes Pacific Islanders.

b In the original A.C.E. and PES, children ages 0–17 were a single category.

SOURCE: U.S. Census Bureau (2003c:Table 1).

Census inclusion rates incorporate the sex-ratio-based correlation bias adjustments. They are about 2 percentage points below the Revision II match rates for blacks and only slightly below the match rates for other groups.

Net Undercount Rates

Net undercount rates show substantial changes between the A.C.E. Revision II estimates and the original March 2001 A.C.E. estimates (Table 6.7). The national net undercount rate declined from 1.2 percent of the household population to a slight net overcount (0.5 percent) of the population, a swing of 1.7 percentage points. By race and ethnicity, net undercount rates were reduced by 1.7 to 5.6 percentage points for every group except blacks, for whom the reduction was only 0.3 percentage points (from 2.2 percent in the original A.C.E. to 1.8 percent in Revision II). By age and sex, net undercount rates were reduced by 1.6 to 3.6 percentage points for every group except men ages 30–49, whose rate increased slightly (by 0.2 percentage points, from 1.9 to 2.0 percent; see Table 6.7), and men age 50 and over, for whom the reduction was only 0.6 percentage points. For homeowners, the net undercount rate decreased by 1.7 percentage points (from 0.4 percent in the original A.C.E. to a net overcount of 1.3 percent in Revision II). For renters, the net undercount rate decreased by 1.6 percentage points.
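Each cell in Table 6.7 follows the definition in the table notes: the difference between the coverage estimate and the census count, divided by the estimate. A minimal sketch; the population figures below are hypothetical, chosen only to reproduce roughly the headline −0.49 percent:

```python
def net_undercount_rate(estimate, census):
    """Net undercount as a percent of the coverage estimate
    (A.C.E. or PES); a negative value is a net overcount."""
    return 100.0 * (estimate - census) / estimate

# Hypothetical counts in thousands: the 2000 census household count
# paired with a Revision II estimate about 1.3 million lower yields
# approximately the -0.49 percent shown in Table 6.7.
rate = net_undercount_rate(272_244, 273_578)
```

Note that the denominator is the estimate, not the census count; Table 6.8 uses the census count as denominator instead, which is why its rates differ slightly from Table 6.7's.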

In regard to differential net undercount among population groups, which is more important than the level of net undercount, there were increases as well as decreases in net undercount rate differences between the original A.C.E. and Revision II. The difference in net undercount rates between Hispanics and non-Hispanic whites and other races narrowed somewhat, from 2.2 percentage points in the original A.C.E. to 1.8 percentage points in Revision II. However, the difference in net undercount rates between owners and renters increased slightly, from 2.3 percentage points in the original A.C.E. to 2.4 percentage points in Revision II, and the difference between non-Hispanic blacks and non-Hispanic whites and other races increased from 1.5 percentage points in the original A.C.E. to 3.0 percentage points in Revision II. By comparison with the 1990 PES, differences in Revision II net undercount rates between population groups were smaller: the 1990 PES differences between Hispanics and non-Hispanic whites and other races, between owners and renters, and between non-Hispanic blacks and non-Hispanic whites and other races were 4.3, 4.5, and 3.9 percentage points, respectively. The important changes in the Revision II estimation methods, however, impair the validity of comparisons with the PES (see Section 6-D.2).

We do not have information with which to examine differences in net undercount rates among states and other geographic areas, either for the original A.C.E. estimates compared with Revision II or for either set of 2000 estimates compared with the 1990 PES. However, given the reduction in net undercount rates and in differential net undercount observed in 2000 for poststrata defined for population groups, it is reasonable to infer that net undercount rates and differential net undercount were also probably reduced for geographic areas. The reason is that poststratification research to date has not strongly supported the use of geographic variables in preference to such variables as age, sex, race, and housing tenure (see, e.g., Griffin and Haines, 2000; Schindler, 2000). In the poststratification for the original 2000 A.C.E. and Revision II P-sample (but not E-sample), a regional classification was used for non-Hispanic whites along with a classification by size of metropolitan statistical area and type of enumeration area for non-Hispanic whites, blacks, and Hispanics. The Integrated Coverage Measurement design would have supported direct state estimates of net undercount, but that design was adopted principally because of concerns from Congress and other stakeholders that state population totals for reapportionment should be estimated from data for the individual state and not borrow data from other states (see Appendix A.4.a).

Components of Change

Specific components of change in the Revision II methods show somewhat different effects across four race and ethnicity groups (Table 6.8). Overall, the new poststratification and the adjustment for measurement error in the P-sample had little effect on Revision II net undercount rates compared with the original A.C.E. rates. The correction for E-sample duplications with census enumerations outside the A.C.E. search area had the largest effect of all the changes for most groups, reducing the net undercount rate by amounts ranging from 0.9 percentage points for non-Hispanic whites and other races to 1.5 percentage points for non-Hispanic blacks. E-sample measurement error corrections also reduced the net undercount rate, by about 1 percentage point for each group shown, and the correction for duplications of P-sample nonmover residents with census enumerations outside the A.C.E. search area reduced the net undercount rate by 0.3 to 0.5 percentage points for each group shown. Finally, the correlation bias adjustment had a small effect (an increase of less than 0.4 percentage points in the net undercount rate) for Hispanics, non-Hispanic Asians, and non-Hispanic whites and other races, but it increased the net undercount rate for blacks by 2.4 percentage points. Consequently, the correlation bias adjustment largely explains why black net undercount rates did not decline as much as the rates for Hispanics and other groups. It also explains why net undercount rates for men ages 30–49 and age 50 and over did not decline as much as the rates for women, children, and younger men (see Table 6.7).
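The cumulative change reported in Table 6.8 is, up to rounding, simply the sum of the individual component effects, and the Revision II rate is the original rate plus that sum. Using the non-Hispanic black column of Table 6.8:

```python
# Component effects for non-Hispanic blacks, in percentage points of
# the census count (Table 6.8).
components = {
    "new poststratification": 0.16,
    "E-sample duplication corrections": -1.50,
    "E-sample measurement error corrections": -0.98,
    "P-sample duplication corrections": -0.49,
    "P-sample measurement error corrections": 0.07,
    "correlation bias adjustment": 2.40,
}

original_rate = 2.21                      # original A.C.E., percent of census
cumulative = sum(components.values())     # about -0.34, as in Table 6.8
revision_ii_rate = original_rate + cumulative  # about 1.87 (1.88 in the table after rounding)
```

As the table notes caution, the individual effects come from introducing one change at a time in a fixed order, so the decomposition, though additive, is order-dependent.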

6–B.7 Assessment of Error in the A.C.E. Revision II Estimates

As part of the immense amount of data analysis and estimation undertaken for the A.C.E. Revision II estimates of net undercount in 2000, Census Bureau analysts conducted evaluations to estimate the possible bias (systematic error) and variance (random error) in the estimates. The evaluations of bias included the construction of 95 percent confidence intervals around the Revision II estimates and comparisons of the relative accuracy of the Revision II estimates and census counts for geographic areas (see Section 6-D.4 for the latter analysis). These evaluations were limited because data that were used to estimate bias in the original A.C.E. estimates were also used as part of the Revision II estimation. Thus, the bias evaluations did not take account of the following sources of error: response error or coding error in the Revision II determination of P-sample residency or match status or of E-sample correct enumeration status, which were based on the evaluation samples listed in Table 6.3; response error or coding error in the Revision II determination of P-sample mover status; error in the approach used to estimate the contribution to correct enumerations from E-sample cases with duplicate links; error in the demographic analysis sex ratios; and error in the model used to estimate correlation bias from these sex ratios (see U.S. Census Bureau, 2003c:68).

Table 6.8 Components of Change from the Original A.C.E. Net Undercount Rate to the Revision II Net Undercount Rate for Selected Race/Ethnicity Domains

| Component of Change | Total | Hispanic | Black (Non-Hispanic) | Asian (Non-Hispanic) | White (Non-Hispanic) |
|---|---|---|---|---|---|
| Census Count, Household Population (thousands) | 273,578 | 34,538 | 33,470 | 9,960 | 192,924 |
| Original A.C.E. Net Undercount as Percent of Census | 1.19 | 2.94 | 2.21 | 0.97 | 0.68 |
| Change in Estimated Net Undercount Rate as Percent of Census: | | | | | |
| New Poststratification | 0.01 | −0.09 | 0.16 | 0.16 | <0.01 |
| E-Sample Duplication Corrections | −1.03 | −1.07 | −1.50 | −1.07 | −0.93 |
| E-Sample Measurement Error Corrections | −0.89 | −0.97 | −0.98 | −0.92 | −0.85 |
| P-Sample Duplication Corrections | −0.40 | −0.45 | −0.49 | −0.26 | −0.38 |
| P-Sample Measurement Error Corrections | <0.01 | 0.07 | 0.07 | 0.01 | −0.02 |
| Correlation Bias Adjustment | 0.62 | 0.29 | 2.40 | 0.36 | 0.39 |
| Cumulative Change | −1.68 | −2.22 | −0.34 | −1.72 | −1.79 |
| Net Undercount as Percent of Census, A.C.E. Revision II | −0.49 | 0.72 | 1.88 | −0.75 | −1.11 |

NOTES: Net undercount rates differ slightly from those in Table 6.7 because of different denominators (census count in this table; A.C.E. or PES estimate of population group in Table 6.7); minus sign (−) indicates a net overcount or a decrease in the net undercount rate. See Appendix E (Table E.3) for definitions of race/ethnicity domains. Individual component effects are from introducing one change at a time in the estimation methodology. A different ordering of the revisions would result in slightly different component effects. E-sample and P-sample duplication corrections were based on links of cases to census enumerations outside the A.C.E. search area identified in the Further Study of Person Duplication. E-sample and P-sample measurement error corrections were based on the results of the Evaluation Follow-Up reanalysis for cases not linked to census enumerations outside the A.C.E. search area (the P-sample measurement error corrections also include the results of the Matching Error Study).

SOURCE: Adapted from Mule (2003:Tables 1, 2).

The evaluation to construct 95 percent confidence intervals around the Revision II estimates found possible bias for some population groups (see Mulry and ZuWallack, 2002). In particular, it appears that the Revision II population estimates for non-Hispanic black owners and renters may be too low (U.S. Census Bureau, 2003c:45). The results of the Census and Administrative Records Duplication Study largely account for this finding (see Section 6-B.1).

Looking simply at variance from sampling, imputation, and other sources, the Revision II estimates exhibited only slightly larger standard errors than the original A.C.E. estimates, and, in most cases, the Revision II standard errors were lower than the corresponding standard errors for the 1990 PES estimates (see Table 6.7). This result obtained even though many of the data sources used in the Revision II estimation were subsamples of the original A.C.E. The explanation is that the Revision II estimation used data from the full A.C.E. to estimate some components (e.g., E-sample duplications of census enumerations outside the A.C.E. search area), while, for other components (e.g., corrections for measurement error), the estimation used evaluation subsamples to develop correction factors to apply to the full A.C.E. samples. This strategy produced a complex DSE formula with multiple components for each of the elements in the basic formula (see Kostanich, 2003b:15), but it enabled the Revision II estimation to make use of all the available data and not just small evaluation samples as was done for the October 2001 preliminary revised estimates.
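The correction-factor strategy can be illustrated schematically: a rate measured on the full A.C.E. sample is multiplied by a factor estimated from an evaluation subsample, namely the reanalyzed rate divided by the originally coded rate on the same cases. The function and the numbers plugged in are illustrative assumptions, not the Bureau's actual factors or notation:

```python
def corrected_rate(full_sample_rate, reanalyzed_rate, original_coded_rate):
    """Carry a subsample-based correction over to the full sample:
    the correction factor is the ratio of the reanalyzed rate to the
    rate as originally coded on the same evaluation cases."""
    correction_factor = reanalyzed_rate / original_coded_rate
    return full_sample_rate * correction_factor

# Illustrative: a full-sample correct enumeration rate of 95.28 percent,
# with a subsample reanalysis showing 93.5 percent correct where the
# original coding showed 95.3 percent, yields about 93.5 percent.
rate = corrected_rate(95.28, 93.5, 95.3)
```

This construction is why the variance gain materializes: the full-sample rate contributes most of the precision, and the subsample enters only through the ratio.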

A caveat is in order, however. The various E-sample and P-sample correction factors that were developed from subsamples were computed only for highly aggregated poststrata. The variance calculations had to assume that the aggregate correction factors did not vary across individual poststrata within the larger aggregates. For this reason alone, the variance estimates for the Revision II A.C.E. estimates are likely biased downward.

6–C FACTORS IN COVERAGE

Two factors merit discussion for the role they played in producing an estimated net overcount in the 2000 census: computer-based whole-person census imputations and duplicate census enumerations.

6–C.1 Whole-Person Imputations

The much larger number of whole-person imputations in 2000 (5.8 million) compared with 1990 (1.9 million) helps explain one of the initial puzzles regarding the original A.C.E. estimates of net undercount.17 The puzzle was that the original A.C.E. correct enumeration and match rates were very similar to the PES rates (see Tables 6.5 and 6.6). Other things equal, these similarities should have produced similar estimates of net undercount. Yet the original A.C.E. estimates showed marked reductions in net undercount rates from 1990 levels for such groups as minorities, renters, and children and a consequent narrowing of differences in net undercount rates between historically less-well-counted and better-counted groups (see Table 6.7).

The explanation lies in those census cases that had to be excluded from the A.C.E. because they were wholly imputed and hence could not be matched or because they were available too late for matching—the II term in the DSE formula (see Section 5-A). There were so many more of these cases in 2000 than in 1990 that when they were added back to the census counts for comparison with the DSE population estimates, the result was to lower the net undercount estimates for 2000 compared with 1990.
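The arithmetic behind this explanation can be sketched with a stylized dual-systems estimator (a simplification of the formula in Section 5-A, with invented rates): the DSE is built only from the data-defined count, the census count minus the IIs, so with the E-sample and P-sample rates held fixed, a larger II pool mechanically lowers the estimated net undercount.

```python
def dse(census_count, ii, ce_rate, match_rate):
    """Stylized dual-systems estimate: the data-defined count
    (census minus IIs), discounted for erroneous enumerations
    (ce_rate) and inflated for census omissions (1 / match_rate)."""
    return (census_count - ii) * ce_rate / match_rate

def net_undercount(census_count, ii, ce_rate, match_rate):
    estimate = dse(census_count, ii, ce_rate, match_rate)
    return 100.0 * (estimate - census_count) / estimate

# Same correct enumeration and match rates, different II counts: the
# 2000-like case (many IIs) shows a smaller net undercount -- here
# actually a net overcount -- than the 1990-like case.
few_iis = net_undercount(100_000, 1_000, 0.95, 0.92)   # about +2.2 percent
many_iis = net_undercount(100_000, 5_000, 0.95, 0.92)  # about -1.9 percent
```

This is exactly the comparison the text describes: similar correct enumeration and match rates in 2000 and 1990, but a much larger II term in 2000, yielding lower estimated net undercounts.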

The IIs in 2000 included 2.4 million reinstated cases from the special summer 2000 MAF unduplication operation, whereas the IIs in 1990 included only 0.3 million late enumerations. However, the reinstated cases were distributed in roughly the same proportions across major population groups (see Table 6.9), so that they do not explain the narrowing of coverage differences observed in the A.C.E. compared with the PES. In contrast, the 5.8 million whole-person imputations in 2000 accounted for proportionately more of historically less well-counted groups than of better counted groups, whereas the much smaller number of 1.9 million whole-person imputations in 1990 did not show such large differences among groups (see Table 6.9).

17 Whole-person imputations include types 1–5 as described in Section 4-D and Box 4.2; see also Section G.4.

If all of the whole-person imputations were accurately assigned to poststrata and represented people who otherwise would have been omitted from the census, then the puzzle of similarly low P-sample match rates and yet lower net undercount estimates in 2000 compared with 1990 would have a ready explanation. The explanation is that these cases would have matched to the P-sample if they had been enumerated instead of imputed, so that the original A.C.E. would have exhibited higher match rates than the PES and lower net undercount estimates. Instead, because imputation was substituted for additional field work, the original A.C.E. had artificially low match rates, but once the IIs were added back to the census count for comparison with the DSE estimates, the original A.C.E. had lower net undercount estimates than the PES.

The question is the accuracy of the whole-person imputations in 2000 in terms of their numbers and imputed characteristics needed for poststratification. Of the 5.8 million whole-person imputations, a large number—2.3 million—were imputed in situations when the household size and characteristics of other members were known (type 1 imputations). Many of these were children in large households who could not be reported because of lack of room on the questionnaire. (The 2000 questionnaire had room for six persons, compared with seven in 1990.) Another large group—also about 2.3 million—were people imputed into households believed to be occupied for which household size, but not other information, was available (type 2 imputations).

These two types of whole-person imputations did not alter the numbers of people who were included in the census overall so long as household size was accurately reported. Moreover, because the imputation process used the characteristics of other household members and neighboring households of the same size, respectively, type 1 and 2 imputations may have been quite accurate with regard to their characteristics. Evidence from the 2000 Census Administrative Records Experiment (see Section 4-D.2) suggests that imputations of household size and demographic characteristics may not have been accurate in many individual cases, although whether distributions were biased is not known.

A problematic group with regard to accuracy—amounting to 1.2 million people in 2000 compared with only 54,000 people in 1990—comprised those who were imputed into the census when there was no information about the size of the household or, in some instances, about whether the address was occupied or was even a housing unit (imputation types 3–5). An alternative approach could have been to delete all of these addresses from the census; however, such an approach would undoubtedly have underestimated the true number of household residents (particularly when a unit was known to be occupied). The question is whether the numbers (and characteristics) of imputed people were larger (or smaller) than the true numbers and therefore contributed to overestimating (or underestimating) the population at these addresses. The effects of such over- or underestimation on coverage would be quite small for the nation as a whole, but they could be significant for particular geographic areas or population groups.

For example, geographic analysis of types of whole-person imputations for census tracts conducted by panel staff revealed considerable clustering of some imputation types by geographic area.18 In particular, type 5 imputations, in which status as a housing unit had to be imputed first, followed by imputation of occupancy status, household size, and, finally, household member characteristics, were heavily clustered in rural list/enumerate areas, such as the Adirondacks region of New York State and parts of Arizona and New Mexico. Although there were only 415,000 type 5 imputations nationwide (0.2 percent of the household population), for some counties in list/enumerate areas, type 5 imputations accounted for significant percentages of the population. The address list for list/enumerate areas was developed by enumerators in a single-stage operation, with a follow-up operation to recheck units classified as vacant. However, some of these addresses may have been temporary recreational lodging (e.g., fishing camps) for which it was difficult to determine housing status. In addition, the Census Bureau identified some data processing problems that produced larger-than-expected numbers of addresses for which housing status was not known (type 5 imputations) and that probably contributed to overimputation of households for which occupancy status was not known (type 4 imputations; see Section 4-D.2).

18 The analysis used a census tract summary file of whole-person imputations by type, provided to the panel April 4, 2002; see also Kilmer (2002).

Table 6.9 Percentage Distribution of People Requiring Imputation and Reinstated Records in the 2000 Census, and Percentage Distribution of Total People with Insufficient Information in 1990, by Race/Ethnicity Domain and Housing Tenure and by Age/Sex Categories

                                                   Percent of Household Population, 2000     Percent with
                                                  ---------------------------------------    Insufficient
                                                  People       Reinstated   Total with       Information,
                                                  Requiring    Records      Insufficient     1990(a)
                                                  Imputation                Information
Panel A: Domain and Tenure Group
American Indian/Alaska Native on Reservation
  Owner                                             5.13         0.97          6.00            3.16
  Renter                                            4.74         0.94          5.58           (Total)
American Indian/Alaska Native off Reservation
  Owner                                             2.36         1.20          3.51             —
  Renter                                            3.00         1.16          4.12             —
Hispanic Origin
  Owner                                             3.74         0.92          4.61            1.03
  Renter                                            3.99         1.00          4.96            1.56
Black (Non-Hispanic)
  Owner                                             2.84         1.00          3.81            1.20
  Renter                                            3.95         0.96          4.88            1.89
Native Hawaiian/Pacific Islander
  Owner                                             3.67         0.87          4.49             —
  Renter                                            3.83         0.92          4.70             —
Asian (Non-Hispanic)
  Owner                                             2.46         0.69          3.13            0.74
  Renter                                            3.35         0.77          4.10            1.71
White and Other Races (Non-Hispanic)
  Owner                                             1.24         0.71          1.93            0.46
  Renter                                            2.38         1.12          3.47            1.44
Total Owner                                         1.66         0.75          2.39            0.56
Total Renter                                        3.08         1.05          4.10            1.55

Panel B: Age/Sex Group
Children Under Age 18                               3.11         0.92          4.00            0.82
Men Ages 18–29                                      2.86         0.82          3.65            1.45
Women Ages 18–29                                    2.56         1.03          3.46            1.45
Men Ages 30–49                                      1.77         0.79          2.53            0.76
Women Ages 30–49                                    1.58         0.81          2.37            0.70
Men Age 50 and Over                                 1.25         0.81          2.04            0.69
Women Age 50 and Over                               1.30         0.80          2.08            0.79
Total                                               2.11         0.85          2.93            0.90

NOTES: The 2000 total with insufficient information is the unduplicated sum of people requiring imputation and reinstated records to the census; 1990 figures include small numbers of reinstated records to the census from coverage improvement operations. —, not available.

a Data exclude American Indians living on reservations; the Asian (non-Hispanic) data for 1990 include Pacific Islanders.

SOURCE: Data for 2000 are from tabulations by panel staff of U.S. Census Bureau, Pre-Collapsed Post-Stratum Summary File (U.S.), provided to the panel February 16, 2001; data for 1990 are from Davis (2001:Tables F.1, F.2).

6–C.2 Duplicate Census Enumerations

The 2000 census included several operations that were explicitly designed to reduce duplicate enumerations. They included the Primary Selection Algorithm (PSA) and the various operations to unduplicate MAF addresses and associated households. The MAF unduplication operations included a planned operation prior to nonresponse follow-up and the special unplanned operation in summer 2000, which temporarily deleted census records that appeared to duplicate records at another address and then reinstated some of them (see Section 4-E).

The purpose of the PSA was to determine which households and people to include in the census when more than one questionnaire was returned with the same MAF identification number (see Appendix C.5.c). Such duplication could occur, for example, when a respondent mailed back a census form after the cutoff date for determining the nonresponse follow-up workload and the enumerator then obtained a second form from the household. In all, 9 percent of census housing units had two returns and 0.4 percent had three or more returns that were eligible for the PSA operation. In most instances, the PSA discarded duplicate returns; less often, the PSA found additional people to assign to a basic return or identified more than one household at an address (Baumgardner et al., 2001:22–27).
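
The general shape of a primary-selection step can be sketched as follows. This is a deliberately simplified stand-in, not the Bureau's actual PSA: the real selection criteria are not described here, so the rule below (prefer the return reporting the most people, ties broken by earlier receipt) and all of the records are invented for illustration.

```python
# Toy sketch of a primary-selection step: when several returns share one
# MAF identification number, keep a single "basic" return and discard the
# rest as duplicates. The selection rule (prefer the return reporting the
# most people, ties broken by earlier receipt) is invented, not the
# Census Bureau's actual PSA criteria.

def select_primary(returns):
    """returns: dicts with 'maf_id', 'received', and 'persons' keys.
    Keep one basic return per MAF ID."""
    basic = {}
    for r in sorted(returns, key=lambda r: (-len(r["persons"]), r["received"])):
        basic.setdefault(r["maf_id"], r)
    return basic

returns = [
    {"maf_id": "A1", "received": 1, "persons": ["Ann", "Bob"]},          # mail return
    {"maf_id": "A1", "received": 2, "persons": ["Ann", "Bob", "Cal"]},   # enumerator return
    {"maf_id": "B2", "received": 1, "persons": ["Dee"]},
]
kept = select_primary(returns)
print(len(kept))                 # one basic return per MAF ID
print(kept["A1"]["persons"])
```

In practice the PSA also had to merge people across returns and split multiple households at one address; this sketch shows only the discard-duplicates case.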

Despite these operations, however, the original A.C.E. identified 1.9 million census duplicates (Anderson and Fienberg, 2001:Table 2). The original A.C.E. also identified 2.7 million “other residence” erroneous enumerations (e.g., the person should have been enumerated in group quarters or at another home), many of which were probably duplicates. On the basis of the Evaluation Follow-Up Study and the Further Study of Person Duplication, A.C.E. Revision II identified an additional 5.2 million duplicates and “other residence” erroneous enumerations in 2000 (see Section 6-B.1), for a total of 9.8 million such enumerations (Mule, 2003:2). Moreover, evaluations of the Further Study of Person Duplication determined that it underestimated duplicate census enumerations.
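
The nationwide duplicate search underlying these figures keyed on name and date of birth (see Section 6-D.2). A minimal sketch of the idea, using exact matching on a normalized (name, birthdate) key, looks like this; the Bureau's actual matching algorithms were far more sophisticated, and the records below are invented:

```python
from collections import defaultdict

# Minimal sketch of duplicate detection via exact match on a normalized
# (name, date-of-birth) key. The actual nationwide study used much more
# elaborate record-linkage methods; all records here are invented.

def normalize(name):
    return " ".join(name.lower().split())

def duplicate_groups(records):
    """records: (record_id, name, birthdate) tuples.
    Return lists of record ids that share a (name, birthdate) key."""
    groups = defaultdict(list)
    for rec_id, name, dob in records:
        groups[(normalize(name), dob)].append(rec_id)
    return [ids for ids in groups.values() if len(ids) > 1]

records = [
    (1, "Maria Lopez", "1982-04-09"),   # counted at the family home...
    (2, "maria  LOPEZ", "1982-04-09"),  # ...and again in a college dormitory
    (3, "James Hill", "1951-11-30"),
]
print(duplicate_groups(records))   # [[1, 2]]
```

Exact-key blocking like this misses misspellings and partial dates, one reason any such study tends to understate duplication.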

Because a similar Further Study of Person Duplication cannot be conducted for the 1990 census, it is hard to know how many duplicate enumerations occurred in that census beyond those identified in the PES (4.5 million plus some fraction of 6.2 million “other residence” erroneous enumerations; Anderson and Fienberg, 2001:Table 2). Societal trends, such as more children of divorced parents in joint custody and more people with winter and summer homes or weekday and weekend homes, could mean that duplication was more of a potential and actual problem in 2000 than in 1990, but the panel knows of no evidence on that point.

The Further Study of Person Duplication provided distributions of duplicate enumerations (Table 6.10), which indicate that they occurred disproportionately in 2000 for some historically less-well-counted groups. This result differed from previous censuses in which omissions were more concentrated than duplicates or other erroneous enumerations among hard-to-count groups (e.g., see Ericksen et al., 1991:Table 1, which examined omission and erroneous enumeration rates for census tracts grouped into deciles of mail return rates).

For race/ethnicity domains, the Further Study of Person Duplication estimated higher census household-to-household duplication rates for American Indians on reservations (2.7 percent), blacks (2.4 percent), Hispanics (2.4 percent), and American Indians off reservations (2.3 percent), compared with non-Hispanic whites and other races (1.8 percent). By age and sex, it estimated higher census household-to-household duplication rates for population groups under age 30 (rates of 2.3 percent or higher) than for population groups age 30 and over (rates of 1.8 percent or lower). The highest rates of census household-to-group quarters duplications among race/ethnicity domains were for blacks (0.4 percent, mostly in college dormitories and prisons) and Asians (0.4 percent, almost entirely in college dormitories). The highest rates of census household-to-group quarters duplications among age/sex groups were for men ages 18–29 (0.9 percent, mostly in college dormitories and prisons) and for women ages 18–29 (0.8 percent, almost entirely in college dormitories).

Table 6.10 Percent Duplicate Enumerations in 2000 Census by Type for Race/Ethnicity Domains and Age/Sex Groups from the Further Study of Person Duplication

                                                  Census Housing Unit to Housing Unit
                                                 -------------------------------------    Census Housing
                                                 Within A.C.E.   Outside A.C.E.           Unit to Group
Population Group                                 Search Area     Search Area     Total    Quarters
Race/Ethnicity Domain
  American Indian/Alaska Native off Reservation      0.91            1.38        2.29       0.20
  American Indian/Alaska Native on Reservation       0.97            1.77        2.74       0.21
  Hispanic                                           1.41            1.02        2.43       0.16
  Black (Non-Hispanic)                               1.27            1.19        2.46       0.36
  Native Hawaiian/Other Pacific Islander             0.66            0.96        1.63       0.16
  Asian (Non-Hispanic)                               1.22            0.85        2.08       0.35
  White and Other Races (Non-Hispanic)               0.79            0.97        1.76       0.22
Age and Sex
  Under 10 years                                     0.96            1.36        2.33       0.03
  10–17 years                                        1.05            1.45        2.50       0.11
  18–29 years, Male                                  1.03            1.32        2.35       0.88
  18–29 years, Female                                1.11            1.53        2.64       0.80
  30–49 years, Male                                  0.89            0.63        1.52       0.17
  30–49 years, Female                                0.89            0.61        1.51       0.06
  50 years and over, Male                            0.88            0.92        1.81       0.15
  50 years and over, Female                          0.84            0.74        1.58       0.18

NOTE: See Appendix E (Table E.3) for definitions of race/ethnicity domains. Housing unit to housing unit duplications include duplications to reinstated units.

SOURCE: Mule (2002a:Tables F1, F3, F5, F7).

At least two factors contributed to the large number of duplicates in 2000. One factor—the problem-plagued development of the MAF from new and multiple sources—has already been discussed. The second factor had to do with the “usual residence” rules for the census—each U.S. resident is supposed to be enumerated once at his or her usual residence—and how these rules did or did not match people’s living situations. The rules were often not explained on the questionnaires or were unclear. In some instances, respondents did not want to follow the rules as stated. The result was often duplication of enumerations. For example, many college dormitory residents and people in prisons were counted at those locations, according to Census Bureau rules, but they were also reported by their families back home, counter to the instructions on the questionnaire. Some divorced parents with joint custody reported the children at both parents’ homes, and people with two houses (e.g., in New York and Florida) were sometimes counted in both locations. (Such double counting probably occurred mainly in follow-up operations, when enumerators at the second house were told that the owners lived there.)

From the Evaluation Follow-Up Study and the Further Study of Person Duplication, it became evident not only that the census residence rules were not always recognized in the census, but that they were not always recognized in the A.C.E., either. Consequently, the original A.C.E. substantially underestimated census duplicates. In turn, corrections for undetected duplications substantially reduced the net undercount, particularly for blacks and young people.

6–D WHAT CAN WE CONCLUDE ABOUT COVERAGE ERROR IN 2000?

Our conclusions about coverage error in the 2000 census address eight issues. They are: (6-D.1) the quality of the A.C.E. Revision II analysis and documentation; (6-D.2) comparability with the 1990 PES; (6-D.3) net coverage error at the national level and for major population groups; (6-D.4) net coverage error for subnational areas; (6-D.5) net coverage error for group quarters residents; (6-D.6) gross coverage errors; (6-D.7) comparisons with demographic analysis; and (6-D.8) the Census Bureau’s decision not to use the Revision II estimates to adjust the census counts that form the base for updated population estimates over the decade. Section 6-D.9 summarizes our findings.

6–D.1 A.C.E. Revision II Estimation and Documentation

From the outset of the 2000 Accuracy and Coverage Evaluation Program, the Census Bureau commendably dedicated the staff and resources needed to meet high standards for quality of implementation, evaluation, and documentation. In particular, when it became clear in early 2001 that the original A.C.E. estimates required further evaluation before the results could be considered for adjustment purposes, the Bureau speeded up planned evaluations and mounted additional evaluations to address its concerns. Then, when the results of these evaluations suggested that the original A.C.E. substantially underestimated the number of duplicates and other erroneous census enumerations, the Bureau devoted yet additional resources to a full reestimation of the dual-systems population and net undercount estimates by using the original A.C.E. and evaluation data. All of these efforts were commendable in the highest degree.

The A.C.E. Revision II work exhibited an outstanding level of creativity and productivity devoted to a very complex problem. It provided useful, previously unavailable information about problems of erroneous enumerations in the census and the original A.C.E. from such evaluations as the Further Study of Person Duplication, for which algorithms were developed to permit nationwide matching of A.C.E. cases with census enumerations.

The Revision II work, however, shed no light on additional omissions from the census that the A.C.E. may have missed (beyond what could be inferred from sex ratios for a limited number of population groups for which there were demographic analysis estimates). Evaluation studies provided estimates of measurement error in the P-sample, such as misclassification of match and nonmatch status and of residency status. However, time and resource constraints did not permit investigation of methods to estimate census omissions that the P-sample did not identify. One such method, for example, could be to match administrative records with E-sample and P-sample records.19

Part of the Revision II estimation included a commendable effort to estimate bias, as well as variance, in the estimates. However, this effort was necessarily limited, because most of the available evaluation data were used in Revision II directly. The Census Bureau’s description of the Revision II methods identified areas of weakness that could not be assessed in the bias evaluations. At the national level for population groups, weaknesses included the uncertain and limited nature of the available data with which to adjust for correlation bias and the uncertainty in the selection of a model with which to determine Census Day residence for P-sample cases that linked to a census enumeration outside the A.C.E. search area (U.S. Census Bureau, 2003c:48). For subnational estimates (see Section 6-D.4), the decision to use separate E-sample and P-sample poststrata could have increased error for some small places.

The identification of potential weaknesses in Revision II exemplifies the Census Bureau’s praiseworthy thoroughness of documentation and explanation for every step of the effort. Complete documentation was prepared as well for every component of the original A.C.E. and evaluation studies and for the preliminary revised (October 2001) estimates. Commendably, the Bureau produced the extensive A.C.E. documentation in a timely manner: most documentation was released at the same time as or very shortly after each adjustment decision, in March 2001, October 2001, and March 2003.

6–D.2 Comparability with the 1990 PES

The original A.C.E. was comparable in design and execution in most respects to the 1990 PES. Most changes in the A.C.E. design and operations were intended to facilitate timely, accurate data collection and matching and to reduce variance. The two programs did differ in coverage: the PES covered the household and noninstitutional group quarters population, while the A.C.E. covered only the household population; the two programs also used somewhat different definitions of race/ethnicity domains and a different treatment of movers (PES-B for the PES, PES-C for the A.C.E.).

19 Research related to such a triple-systems estimation was conducted as part of the 2000 Census Administrative Records Experiment (see Bauder and Judson, 2003; Berning and Cook, 2003; Berning, 2003; Heimovitz, 2003).

In contrast, five features of the A.C.E. Revision II methodology significantly impaired comparability with the PES (see Mule, 2003:Table 1):

  1. the use in an expanded DSE formula of results from nationwide matching of the E-sample and the census on name and birthdate to estimate duplicate enumerations outside the A.C.E. search area, which reduced the estimated net undercount by 2.8 million people;

  2. the use of Evaluation Follow-Up Study results to reestimate correct enumerations among E-sample cases not linked to census enumerations outside the A.C.E. search area, which reduced the estimated net undercount by 2.4 million people;

  3. the use of results from nationwide matching of the P-sample and the census to estimate deletions from the P-sample for nonmover resident cases that linked to census enumerations outside the A.C.E. search area, which reduced the estimated net undercount by 1.1 million people;

  4. the use of Evaluation Follow-Up and Matching Error Study results to reestimate residents and matches among P-sample cases not linked to census enumerations outside the A.C.E. search area, which increased the estimated net undercount by 0.01 million people; and

  5. the use of sex ratios from demographic analysis for black men age 18 and over and nonblack men age 30 and over to adjust for correlation bias for these groups, which increased the estimated net undercount by 1.7 million people (0.8 million black men and 0.9 million nonblack men).
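
To make the list above concrete, the core dual-systems estimator for a single poststratum can be written as DSE = (data-defined census count) × (correct enumeration rate) ÷ (match rate). The sketch below, with invented numbers, shows the mechanism behind items 1 and 2: reclassifying additional E-sample cases as erroneous lowers the correct enumeration rate and hence the estimated population.

```python
# Basic dual-systems estimator for one poststratum (all numbers invented):
#   DSE = data-defined census count * (correct enumeration rate / match rate)
# where the correct enumeration rate comes from the E-sample and the match
# rate from the P-sample. Reclassifying newly found duplicates as erroneous
# lowers the correct enumeration rate and therefore the estimate.

def dual_systems_estimate(census_count, ce_rate, match_rate):
    return census_count * ce_rate / match_rate

census_count = 1_000_000   # data-defined persons in the poststratum
match_rate = 0.92          # P-sample matches / P-sample residents

original = dual_systems_estimate(census_count, ce_rate=0.95, match_rate=match_rate)
# Suppose nationwide matching reclassifies 2 percent of E-sample cases
# as duplicates, cutting the correct enumeration rate from 0.95 to 0.93:
revised = dual_systems_estimate(census_count, ce_rate=0.93, match_rate=match_rate)

print(round(original))   # 1032609
print(round(revised))    # 1010870 -- a smaller estimated population
```

A lower DSE against an unchanged census count means a smaller estimated net undercount, which is the direction of items 1 through 3 above.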

It would be possible (although probably not feasible) to reestimate the 1990 PES dual-systems estimates to include the fifth item (adjustment for correlation bias) and to use the results of the PES Evaluation Follow-Up Study to adjust the PES for E-sample and P-sample measurement error. However, the lack of a question on month of birth in the 1990 census and the fact that names were captured only for E-sample cases preclude the possibility of nationwide matching of the E-sample or P-sample to census enumerations in 1990.

One can only speculate about the likely results if all five changes in the A.C.E. Revision II methodology could be implemented for the PES. The correlation bias adjustments might be similar, given that sex ratios were similar for most groups in 1990 and 2000 for demographic analysis, the two censuses, and the A.C.E. and PES (refer to Table 6.4 above). Measurement error corrections might be similar also, given that evaluations estimated data quality advantages for the A.C.E. on some dimensions and for the PES on others (e.g., a smaller percentage of movers and a somewhat higher quality of matching in the A.C.E., but a somewhat higher household interview rate in the PES—see Sections 6-A.3, 6-A.5, and 6-A.8). Whether a Further Study of Person Duplication for 1990 would have found as many additional duplicate enumerations undetected by the PES as the 2000 study found for the A.C.E. is highly speculative.
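
The logic of a sex-ratio correlation bias adjustment (item 5 in the list above) can be sketched as follows. The premise is that dual-systems coverage of adult men is understated relative to women, so the male estimate is scaled to the demographic analysis sex ratio. The numbers below are invented, and the Bureau's actual implementation operated within specific race and age groups.

```python
# Sketch of a correlation-bias adjustment driven by demographic analysis
# (DA) sex ratios; every number is invented. The premise: the dual-systems
# estimate (DSE) for women is accepted, the DA sex ratio (males per 100
# females) is trusted over the survey's, and the male estimate is raised
# until the ratio matches.

def adjusted_male_estimate(dse_females, da_sex_ratio):
    return dse_females * da_sex_ratio / 100.0

dse_females = 10_000_000
dse_males = 9_200_000      # survey implies only 92.0 males per 100 females
da_sex_ratio = 95.0        # demographic analysis says 95.0

adjusted_males = adjusted_male_estimate(dse_females, da_sex_ratio)
print(int(adjusted_males - dse_males))   # 300000 men added by the adjustment
```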

6–D.3 Net Coverage Error in 2000, Nation and Poststrata

From the A.C.E. Revision II work, it appears that net undercount rates were lower in 2000 than the 1990 rates estimated by the PES and that differences in net undercount rates between historically less-well-counted groups (minorities, renters, and children) and better-counted groups were smaller as well. Thus, the A.C.E. Revision II and PES estimates of national net coverage error were a 0.5 percent net overcount and a 1.6 percent net undercount, respectively. Estimates of differences in net undercount rates between the Hispanic and non-Hispanic white domains were 1.8 percentage points in Revision II and 4.3 percentage points in the PES; estimates of differences between blacks and whites were 3 percentage points in Revision II and 3.9 percentage points in the PES; and estimates of differences between owners and renters were 2.4 percentage points in Revision II and 4.5 percentage points in the PES. The smaller decline in the differential net undercount for blacks and whites, compared with the declines for Hispanics and whites and for owners and renters, resulted from the large correlation bias adjustment for black men, which increased their net undercount estimate.


It is also apparent that whole-person imputations largely explain why the original A.C.E. estimated lower net undercount rates than the PES despite similar match and correct enumeration rates. In turn, census duplications and other erroneous enumerations not detected by the original A.C.E. played a major role in the further reduction in estimated net undercount rates from the original A.C.E. to Revision II.

Beyond that, it is hard to draw definitive conclusions about trends in coverage error from 1990 to 2000 because of the significant differences in the methods for Revision II compared with the PES. Given that the original A.C.E. also estimated lower net undercount rates than the PES and smaller differences in net undercount rates among population groups, we are fairly confident in concluding that net undercount and differences in net undercount rates were, by and large, reduced in 2000 from 1990. We are also fairly confident, despite the considerable reductions in estimated net undercounts from the original A.C.E. to Revision II, that differences in net undercount rates between such groups as minorities and others and owners and renters remain. Beyond these general statements, we cannot be more specific. We do not know the effect on the PES net undercount estimates that would have resulted from implementation of the changes in methods for A.C.E. Revision II.

6–D.4 Coverage Error in 2000, Subnational Areas

Assessment of net undercount rates and differences in net undercount for states and other geographic areas is an important part of census coverage evaluation because of the many uses of the data for small-area analysis. However, it is difficult to estimate error in subnational estimates, which are constructed by a synthetic method. In that procedure, coverage correction factors (the DSE estimate divided by the census count) are developed for individual poststrata; these factors are then multiplied by the population in each poststratum in each census block, the results rounded to whole people, and the rounded population estimates summed by block and, in turn, summed for larger geographic areas. This method makes the strong assumption that coverage probabilities and errors in estimation do not vary markedly across geographic areas within any poststratum.
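
The synthetic procedure just described can be sketched in a few lines; the poststrata, counts, and dual-systems estimates below are invented.

```python
# Sketch of synthetic estimation: each poststratum gets a coverage
# correction factor (DSE / census count); a block's adjusted count is the
# sum over poststrata of factor * block count, rounded to whole people.
# Poststrata and counts are invented.

def correction_factors(dse, census):
    return {ps: dse[ps] / census[ps] for ps in dse}

def adjust_block(block_counts, factors):
    """block_counts: {poststratum: census count in this block}."""
    return sum(round(factors[ps] * n) for ps, n in block_counts.items())

census = {"owner": 2_000_000, "renter": 1_000_000}   # national poststratum counts
dse    = {"owner": 1_990_000, "renter": 1_030_000}   # slight overcount / undercount

factors = correction_factors(dse, census)
block = {"owner": 120, "renter": 80}                 # one census block
print(adjust_block(block, factors))                  # 119 + 82 = 201
```

The synthetic assumption is visible here: every block receives the same factor for a given poststratum, regardless of how coverage actually varied from place to place.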


As discussed in Section 6-B.6, the reduction in net undercount rates and differences in net undercount rates for population groups estimated in 2000 compared with 1990 probably had the effect of lowering net undercount rates and differences in net undercount rates in 2000 for geographic areas as well. There is some evidence on this point, but the limited available comparisons are affected by the significant differences in methodology between the Revision II A.C.E. and the 1990 PES.

For states, it appears that differences in estimated net undercount and overcount rates were smaller in 2000 compared with 1990. Thus, the estimated net undercount (overcount) rates for states in 2000 from A.C.E. Revision II spanned a range of 2.2 percentage points (from Minnesota with the largest net overcount rate of 1.7 percent to Nevada with the largest net undercount rate of 0.5 percent; see Schindler, 2003:Table 1). By comparison, the estimated net undercount rates for states in 1990 from the revised PES spanned a range of 3 percentage points (from Rhode Island with the smallest net undercount rate of 0.1 percent to New Mexico with the largest net undercount rate of 3.1 percent; see Bureau of the Census, 1992:Attachment 4). There were similarities in patterns of coverage error among states between 2000 and 1990 and clustering by region. Most striking, most of the midwestern states were not only in the quartile with the highest estimated net overcount rates in 2000 (exceeding 1.1 percent) but also in the quartile with the lowest estimated net undercount rates in 1990 (smaller than 0.7 percent). Other states with high net overcount rates in 2000 and low net undercount rates in 1990 were in the Northeast region. At the other end of the distribution, a group of nine states in the South and West were not only in the quartile with estimated net undercount rates or the lowest net overcount rates in 2000 (below 0.1 percent net overcount) but also in the quartile with the highest estimated net undercount rates in 1990 (exceeding 2 percent): Georgia, Maryland, Louisiana, Texas, Colorado, Montana, Nevada, New Mexico, and California.

For large counties of 1 million or more population (33 in 2000, 30 in 1990), differences in estimated net undercount and overcount rates were also smaller in 2000 compared with 1990. In 2000, estimates ranged from net undercounts of less than 2 percent to net overcounts of less than 2 percent, with 27 counties having rates between 1 percent net undercount and 1 percent net overcount (U.S. Census Bureau, 2003c:Table 11). In 1990, the range was wider, from a net undercount of 4.9 percent to a net overcount of 0.8 percent, and only 16 counties were in a range of 2 percentage points (from 1 percent to 3 percent net undercount; Bureau of the Census, 1992:Attachment 12). For large places with 100,000 or more population, the story is similar (compare U.S. Census Bureau, 2003c:Table 10, with Bureau of the Census, 1992:Attachment 11). Whether estimated net overcount and undercount rates were smaller in 2000 than in 1990 for smaller areas is not clear, particularly given the very large net overcounts estimated for some small counties and places, as discussed below. Interested researchers can examine Revision II A.C.E. estimated net overcount and undercount rates for counties and places of all sizes from data files that are available at www.census.gov/dmd/www/ACEREVII_COUNTIES.txt and www.census.gov/dmd/www/ACEREVII_PLACES.txt, with record layouts described in Schindler (2003).20 However, similarly detailed information is readily available for 1990 only for counties.

The A.C.E. Revision II work included a loss function analysis to assess the relative accuracy of the Revision II estimates and census figures for population levels and shares for counties and places nationwide and within state (see Mulry and ZuWallack, 2002). The loss function analysis used estimates of sampling variance and nonsampling bias and variance to compute the weighted mean square error of the Revision II estimates. The analysis indicated that the Revision II estimates were more accurate than the census for every loss function considered except for places with 100,000 or more people, for which the error appeared to be concentrated in the Revision II estimates for the nine places with 1 million or more people (U.S. Census Bureau, 2003c:42). The Revision II loss function analysis was greatly improved over the analysis conducted for the original A.C.E. estimates, but it, too, excluded some potentially important sources of error, such as errors in the evaluation studies themselves and in the correlation bias adjustments (see U.S. Census Bureau, 2003c:43–44, 68; see also Section 6-D.1).
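
The flavor of a loss function comparison can be illustrated with a simple weighted squared-error loss. In the real analysis "truth" is never observed, so mean square error had to be estimated from modeled bias and variance terms; this sketch instead assumes a known truth and uses invented numbers purely to show the computation.

```python
# Illustration of a weighted squared-error loss comparison with invented
# data. Whichever set of estimates (census or adjusted) yields the smaller
# weighted loss against the truth would be preferred under this criterion.

def weighted_squared_loss(estimates, truth, weights):
    return sum(w * (e - t) ** 2 for e, t, w in zip(estimates, truth, weights))

truth    = [100_000, 250_000, 40_000]   # hypothetical true county populations
census   = [ 98_000, 251_000, 41_500]
adjusted = [ 99_500, 249_000, 40_500]
weights  = [1 / t for t in truth]       # down-weight errors in large areas

loss_census = weighted_squared_loss(census, truth, weights)
loss_adjusted = weighted_squared_loss(adjusted, truth, weights)
print(loss_census > loss_adjusted)   # True: the adjusted set wins this loss
```

Different choices of weights (population shares, squared shares, and so on) define different loss functions, which is why the Bureau reported results for several.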

20 The data files use census collection geography (the definitions and boundaries set before the census) rather than the finished 2000 census tabulation geography.


Of particular relevance for subnational estimates, the loss function analysis did not include any estimate of synthetic error. A simulation analysis using artificial populations that modeled patterns of coverage variation within poststrata was carried out to assess the effects, for states, counties, and places, of omitting synthetic error from the loss function analysis. The results did not in general contradict the loss function results, but the simulation analysis itself had limitations, principally that the variables chosen to construct artificial populations (e.g., people with two or more items imputed) did not correlate highly with estimated gross undercount or overcount (Griffin, 2002).
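
The artificial-population idea can be sketched as follows: give each area in a poststratum its own true coverage rate, adjust every area with the single synthetic factor, and observe the per-area error that the synthetic assumption conceals. Everything below is invented.

```python
import random

# Sketch of an artificial-population simulation: within one poststratum,
# areas have different true coverage rates, but the synthetic method applies
# a single correction factor to all of them. The stratum total is recovered
# while individual areas retain error. All numbers are invented.

random.seed(7)
true_pop = [1000] * 20                                   # 20 areas, one poststratum
coverage = [random.uniform(0.94, 1.00) for _ in true_pop]
census = [round(t * c) for t, c in zip(true_pop, coverage)]

factor = sum(true_pop) / sum(census)                     # one synthetic factor
synthetic = [n * factor for n in census]

errors = [abs(s - t) for s, t in zip(synthetic, true_pop)]
print(round(sum(synthetic)))    # 20000: the stratum total is exact
print(round(max(errors), 1))    # but individual areas are still off
```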

An examination of the A.C.E. Revision II subnational estimates by population size for counties and places identified high estimated net overcounts for some small counties and places. For example, 863 places with populations of fewer than 1,000 people (of a total of 10,421 such places) had estimated net overcount rates of 5 percent or more (106 had estimated net overcount rates of 10 percent or more). In contrast, places with 1,000 or more people rarely had estimated net overcounts of more than 5 percent, and places with 100,000 or more people had estimated net overcounts of no more than 2 percent (U.S. Census Bureau, 2003c:Table 10; Table 11 provides the same information for counties). A possible explanation would credit the high estimated net overcount rates for some small places to the presence of proportionately larger numbers of proxy census enumerations in these places. Proxy enumeration poststrata had low estimated correct enumeration rates, but there were no corresponding P-sample poststrata, so the use of correct enumeration rates for proxy enumerations and match rates spread over proxy and nonproxy enumerations would overestimate net overcount rates in these places if proxy enumerations also exhibited low match rates.

We note that accuracy of population estimates cannot be expected for very small geographic areas, such as blocks, whether the data are from an adjustment procedure or the census. A principal reason for low-quality block-level census counts is geocoding errors; that is, putting households in the wrong location (e.g., locating an apartment building on the opposite side of the street and hence in a different block from its true location). Research that expanded on the analysis of geocoding errors in the Housing Unit Coverage Study estimated that almost 5 percent of housing units in the 2000 census were geocoded to the wrong block, rising to as much as 11 percent of housing units in multiunit structures with 10 or more units (Vitrano et al., 2003b:76–77; see also Section 4-E.2). Hence, the usefulness of block statistics lies not in the data themselves, but in the facility they provide for the user to aggregate the data to larger areas defined by the user (e.g., congressional districts, local service areas). Because geocoding errors typically involve nearby areas, the data for aggregates of blocks will be more accurate than the data for individual blocks.

6–D.5 Coverage Error in 2000, Group Quarters

We can say virtually nothing about coverage error for group quarters residents—either erroneous enumerations or omissions. Net undercount estimates for residents of noninstitutional group quarters from the 1990 PES were based on very uncertain data because of the difficulty of tracking such populations as college students on spring or summer break. Net undercount estimates for group quarters residents were not available from the A.C.E., which did not include this population in its universe. All indications are that the group quarters enumeration process was poorly controlled (see Section 4-F), so that coverage errors for group quarters residents were probably large.

6–D.6 Gross Coverage Errors

Although coverage correction factors for adjustment are based on estimated net error rates, components of gross error—that is, types of census omissions and erroneous enumerations—are important to measure and analyze to understand the census process and how to improve it. In this regard, higher or lower net undercount does not relate directly to the level of gross errors. There can be a zero net undercount and high rates of gross omissions and gross erroneous enumerations.

However, there is not widespread acceptance of the definition of different types of gross errors (see Section 1-C.1). Moreover, some types of gross errors depend on the level of aggregation or are not clearly identified by type in the design used for the A.C.E. and PES. Many errors identified by the A.C.E. or PES involved the balancing of

Suggested Citation:"6 The 2000 Coverage Evaluation Program." National Research Council. 2004. The 2000 Census: Counting Under Adversity. Washington, DC: The National Academies Press. doi: 10.17226/10907.
×

a nonmatch on the P-sample side against an erroneous enumeration on the E-sample side—for example, when an E-sample case that should match was misgeocoded to another block. These kinds of balancing errors were not errors for such levels of geography as counties, cities, and even census tracts, although they affected error at the block cluster level. Also, the classification of type of gross error in the A.C.E. or PES was not necessarily clean. For example, without the nationwide matching used for A.C.E. Revision II, a duplicate enumeration involving an E-sample unit and a census enumeration in another state (perhaps winter and summer homes) would be classified as an “other residence” erroneous enumeration and not as a duplicate.

Gross errors in the original A.C.E. were smaller in number than gross errors in the PES. The original A.C.E. estimated 28.3 million gross errors of all types, including 12.5 million erroneous census enumerations and 15.8 million census omissions. By comparison, the PES estimated 36.6 million gross errors of all types, including 16.3 million erroneous census enumerations and 20.3 million census omissions (Anderson and Fienberg, 2001:Table 3). The A.C.E. Revision II estimated more gross errors than the original A.C.E. but still fewer than the PES; it estimated 33.1 million gross errors of all types, including 17.2 million erroneous census enumerations and 15.9 million census omissions.21 Many of these errors, as noted, were not consequential for larger geographic areas.

6–D.7 Comparison with Demographic Analysis

The use of sex ratios from the revised demographic analysis 2000 estimates to adjust net undercount rates for adult men in the A.C.E. Revision II estimation produced net undercount patterns for age and sex groups that more closely resembled the demographic analysis patterns than if no such adjustments had been made (see Table 6.11). The biggest remaining discrepancy was for children ages 0–9 years, for which demographic analysis estimated a sizeable net undercount

21  

Census omissions are estimated as a residual, by adding the net undercount estimate to the estimate of erroneous enumerations, which for Revision II is the original A.C.E. estimate plus 4.7 additional estimated erroneous enumerations (Mule, 2003:2).

Suggested Citation:"6 The 2000 Coverage Evaluation Program." National Research Council. 2004. The 2000 Census: Counting Under Adversity. Washington, DC: The National Academies Press. doi: 10.17226/10907.
×

compared with a small net overcount estimate from A.C.E. Revision II. No explanation for this discrepancy has been advanced.

Demographic analysis estimates themselves are subject to considerable uncertainty from such sources as the estimates for immigrants (particularly illegal immigrants). The revised demographic analysis estimates released in October 2001 used 2000 census long-form-sample data to estimate components of immigration and so are not independent of the census. In addition, they included revised assumptions for births and coverage of immigrants in the census that are based primarily on expert judgment. For example, expert judgment was used to conclude that registration of births after 1984 should be assumed to be 100 percent complete (see Robinson, 2001b:11). Consequently, the small correction for estimated net underregistration of births after 1984 was omitted from the revised estimation. Such judgments may be reasonable, but they retain sufficient uncertainty so that it is not appropriate to conclude that the revised demographic estimates are necessarily more accurate than the census or the A.C.E. Revision II.

6–D.8 March 2003 Decision Not to Adjust Base for Postcensal Estimates

The Census Bureau decided not to use the A.C.E. Revision II population estimates to adjust the 2000 census results for estimated coverage error and instead to base the Bureau’s postcensal small-area population estimates program on the unadjusted census counts. The program includes periodic release of updated estimates for states, counties, places, and school districts (see Citro, 2000e), so an adjustment of the 2000 base counts would need to be made for small areas to be useful for the estimates program. For updated population controls for the major household surveys, such as the Current Population Survey, an adjustment of the 2000 base counts would only need to be made for national-level population groups. However, no such adjustments are currently planned.

The three major factors cited by the Bureau in its decision against adjustment were: the uncertainty about the correlation bias adjustment; the possible errors for small places from the construction of disparate E-sample and P-sample poststrata; and the discrepancies between demographic analysis and A.C.E. Revision II net under-

Suggested Citation:"6 The 2000 Coverage Evaluation Program." National Research Council. 2004. The 2000 Census: Counting Under Adversity. Washington, DC: The National Academies Press. doi: 10.17226/10907.
×

Table 6.11 Estimated Net Undercount Rates (percents), Original 2000 A.C.E. (March 2001), Revised Demographic Analysis (October 2001), and A.C.E. Revision II (March 2003) by Race, Sex, and Age

Category

Original A.C.E.

Revised Demographic Analysis (March 2001)

A.C.E. Revision II

Total

1.15

0.12

−0.48

Black Male

 

All Ages

2.36

5.15

4.19

Under 10 yearsa

2.92

3.26

0.72

10–17 years

2.92

−1.88

−0.59

18–29 years

3.82

5.71

6.14

30–49 years

2.58

9.87

8.29

50 years and over

−0.68

3.87

2.43

Black Female

 

All Ages

1.77

0.52

−0.61

Under 10 yearsa

2.95

3.60

0.70

10–17 years

2.95

−1.20

−0.55

18–29 years

3.76

−0.66

0.00

30–49 years

1.26

1.28

−0.40

50 years and over

−0.84

−1.03

−2.51

Nonblack male

 

All Ages

1.40

0.21

−0.19

Under 10 yearsa

1.28

2.18

−0.68

10–17 years

1.28

−2.01

−1.46

18–29 years

3.39

−0.63

0.19

30–49 years

1.70

0.63

1.05

50 years and over

−0.20

0.14

−1.10

Nonblack female

 

All Ages

0.64

−0.78

−1.41

Under 10 yearsa

1.28

2.59

−0.68

10–17 years

1.28

−1.55

−1.44

18–29 years

1.83

−1.94

−1.54

30–49 years

0.91

−1.01

−0.63

50 years and over

−0.75

−1.18

−2.42

NOTES: Net undercount is the difference between the estimate (A.C.E. or demographic analysis) and the census, divided by the estimate. Minus sign (−) means a net overcount. Population is total population, including household members and group quarters residents.

a In the original A.C.E., children ages 0–17 were a single category.

SOURCE: U.S. Census Bureau (2003c:Tables 13, 14); Robinson (2001a:Table 4, Table 7, column labeled A.C.E. Model 1).

Suggested Citation:"6 The 2000 Coverage Evaluation Program." National Research Council. 2004. The 2000 Census: Counting Under Adversity. Washington, DC: The National Academies Press. doi: 10.17226/10907.
×

count estimates for children. We agree that these are important sources of concern about the reliability of basing adjustments for geographic areas on the A.C.E. Revision II results. We also have other concerns about the Revision II estimates. These concerns do not reflect on the high quality, exceptional effort, and innovative nature of the Revision II estimation, but rather stem primarily from the limitations of the data available for that work.

Generally, we are concerned that the Revision II results are too uncertain to be used with sufficient confidence about their reliability for adjustment of census counts for subnational geographic areas and population groups. We have identified at least six sources of uncertainty.

First, only small samples of the A.C.E. data (see Table 6.3) were available to provide estimates of classification of E-sample cases as erroneous and of P-sample cases as nonresidents in the A.C.E. search area. These subsamples were used to develop correction factors to apply to the full A.C.E. original samples, but these factors were subject to estimation error and were developed for aggregates of poststrata, not the full set of individual E-sample and P-sample poststrata.

Second, it was not possible, in many instances, to determine which of each pair of duplicates involving a census enumeration outside the A.C.E. search area in the E-sample component of the Further Study of Person Duplication was correct and which should not have been counted in the census. (The exceptions, based on Census Bureau residency rules, involved such cases as a duplication between the E-sample and a census group quarters enumeration.) Similarly, it was not possible to determine whether the P-sample case in a pair of duplicates involving a census enumeration outside the A.C.E. search area in the P-sample component of the Further Study of Person Duplication was correct or whether the census enumeration was correct and, consequently, the P-sample case was a nonresident and should be dropped from the sample. For residency status, it was not even clear whether the probability of being a resident or nonresident should be one-half or some other proportion.

Third, the correction for duplicate census enumerations with E-sample cases and with P-sample cases involved use of two separate evaluations—the EFU reanalysis for cases inside the A.C.E. search area and the Further Study of Person Duplication for cases outside

Suggested Citation:"6 The 2000 Coverage Evaluation Program." National Research Council. 2004. The 2000 Census: Counting Under Adversity. Washington, DC: The National Academies Press. doi: 10.17226/10907.
×

the search area. Differences in the methods of these two analyses could introduce biases into the estimates. In addition, there is evidence that the numbers of duplicates outside the A.C.E search area were underestimated in both studies for both the E-sample and the P-sample.

Fourth, the correlation bias adjustments incorporated strong assumptions that are not easily supported. A first assumption was that the DSE estimates for women (and children) were unbiased. The Census Bureau believes that the corrections for other known biases from the duplication studies and other analyses addressed the concern about possible bias in the DSE estimates for women, and the Revision II estimates for women accord reasonably well with the revised demographic analysis estimates (see Table 6.11). However, there remains a sizeable discrepancy between the Revision II estimates for children ages 0–9 and the revised demographic analysis estimates. A second assumption was that the adjustments for black men varied only by age group and not also by such categories as housing tenure, when it is plausible from findings about higher net undercount rates for renters compared with owners that correlation bias affected black male renters differently from black male owners. A third assumption was that the adjustments for nonblack men applied equally to all race/ethnicity groups in that broad category, when it is plausible that correlation bias affected Hispanics and other race groups differently from non-Hispanic whites. Of course, it was the absence of data that precluded the Census Bureau from making correlation bias adjustments for groups other than those defined by age and the simple dichotomy of blacks and all others.

Fifth, the Census Bureau had to pick a particular correlation bias adjustment model, but noted that alternative adjustment models could have been used. The selected model and the alternatives would all have produced estimates that were consistent with the demographic analysis sex ratios and the A.C.E. Revision II data at the national level, but they would have produced different subnational DSE estimates. The loss function analysis does not take account of the potential error from the choice of the correlation bias adjustment model, so we do not know the possible effects of this choice on subnational estimates.

Sixth, the use of different poststrata for the E-sample and the P-sample in the Revision II estimation could have increased the

Suggested Citation:"6 The 2000 Coverage Evaluation Program." National Research Council. 2004. The 2000 Census: Counting Under Adversity. Washington, DC: The National Academies Press. doi: 10.17226/10907.
×

error for some geographic areas. We understand that the revised E-sample poststrata better explained variations in correct enumeration rates compared with the original E-sample poststrata. However, there were no logical counterparts on the P-sample side for some of the E-sample poststrata (including those based on proxy response and type of return), and the use of different poststrata could have introduced bias for some estimates.

Because of these sources of uncertainty, our view is that the Census Bureau’s decision not to use the A.C.E. Revision II estimates to adjust the census data that provide the basis for postcensal estimates was justified. A consideration in our agreement with the bureau’s decision against adjustment was the finding that estimated net undercount rates and differences in net undercount rates for population groups (and, most probably, subnational areas) were smaller in 2000 than in 1990. The smaller the measured net coverage errors in the census, the smaller would be the effects of an adjustment on important uses of the data. Because the benefits of an adjustment would be less when net coverage errors are small, a high level of confidence is needed that an adjustment would not significantly increase the census errors for some areas and population groups. In our judgment, the A.C.E. Revision II estimates, given the constraints of the available data for correcting the original A.C.E. estimates, are too uncertain for use in this context. We do not intend by this conclusion, however, to set a standard of perfection whereby it would never be possible to carry out an adjustment that improved on the census counts. Indeed, had it been possible to implement the A.C.E. Revision II methodology from the outset on the original A.C.E. data and to make some other improvements in the estimation (see Section 6-E), it is possible that an adjustment of the 2000 census data could have been implemented that was well supported.

6–D.9 Revision II Coverage Evaluation Findings

See also Section 6-A.11.

Finding 6.2: The Census Bureau commendably dedicated resources to the A.C.E. Revision II effort, which completely reestimated net undercount (and overcount) rates for several hundred population groups (poststrata)

Suggested Citation:"6 The 2000 Coverage Evaluation Program." National Research Council. 2004. The 2000 Census: Counting Under Adversity. Washington, DC: The National Academies Press. doi: 10.17226/10907.
×

by using data from the original A.C.E. and several evaluations. The work exhibited high levels of creativity and effort devoted to a complex problem. From innovative use of matching technology and other evaluations, it provided substantial additional information about the numbers and sources of erroneous census enumerations and, similarly, information with which to correct the residency status of the independent A.C.E. sample. It provided little additional information, however, about the numbers and sources of census omissions.

Documentation for the original A.C.E. estimates (March 2001), the preliminary revised estimates (October 2001), and the A.C.E. Revision II estimates (March 2003) was timely, comprehensive, and thorough.

Finding 6.3: We support the Census Bureau’s decision not to use the March 2003 Revision II A.C.E. coverage measurement results to adjust the 2000 census base counts for the Bureau’s postcensal population estimates program. The Revision II results are too uncertain to be used with sufficient confidence about their reliability for adjustment of census counts for subnational geographic areas and population groups. Sources of uncertainty stem from the small samples of the A.C.E. data that were available to correct components of the original A.C.E. estimates of erroneous enumerations and non-A.C.E. residents and to correct the original estimate of nonmatches and the consequent inability to make these corrections for other than very large population groups; the inability to determine which of each pair of duplicates detected in the A.C.E. evaluations was correct and which should not have been counted in the census or included as an A.C.E. resident; the possible errors in subnational estimates from the choice of one of several alternative correlation bias adjustments to compensate for higher proportions of missing men relative to women; the inability to make correlation bias adjustments for population groups other than blacks and nonblacks; and the possible errors for some small

Suggested Citation:"6 The 2000 Coverage Evaluation Program." National Research Council. 2004. The 2000 Census: Counting Under Adversity. Washington, DC: The National Academies Press. doi: 10.17226/10907.
×

areas from the use of different population groups (poststrata) for estimating erroneous census enumerations and census omissions. In addition, there is a large discrepancy in coverage estimates for children ages 0–9 when comparing demographic analysis estimates with Revision II A.C.E. estimates (2.6 percent undercount and 0.4 percent net overcount, respectively).

Finding 6.4: Demographic analysis helped identify possible coverage problems in the 2000 census and in the A.C.E. at the national level for a limited set of population groups. However, there are sufficient uncertainties in the revised estimates of net immigration (particularly the illegal component) and the revised assumption of completeness of birth registration after 1984, compounded by the difficulties of classifying people by race, so that the revised demographic analysis estimates cannot and should not serve as the definitive standard of evaluation for the 2000 census or the A.C.E.

Finding 6.5: Because of significant differences in methodology for estimating net undercount in the 1990 Post-Enumeration Survey Program and the 2000 Accuracy and Coverage Evaluation Program (Revision II), it is difficult to compare net undercount estimates for the two censuses. Nevertheless, there is sufficient evidence (from comparing the 1990 PES and the original A.C.E.) to conclude that the national net undercount of the household population and net undercount rates for population groups were reduced in 2000 from 1990 and, more important, that differences in net undercount rates between historically less-well-counted groups (minorities, children, renters) and others were reduced as well. From smaller differences in net undercount rates among groups and from analysis of available information for states and large counties and places, it is reasonable to infer that differences in net undercount rates among geographic areas were also probably smaller in 2000 compared with 1990. Despite reduced differences in

Suggested Citation:"6 The 2000 Coverage Evaluation Program." National Research Council. 2004. The 2000 Census: Counting Under Adversity. Washington, DC: The National Academies Press. doi: 10.17226/10907.
×

net undercount rates, some groups (e.g., black men and renters) continued to be undercounted in 2000.

Finding 6.6: Two factors that contributed to the estimated reductions in net undercount rates in 2000 from 1990 were the large numbers of whole-person imputations and duplicate census enumerations, many of which were not identified in the original (March 2001) A.C.E. estimates. Contributing to duplication were problems in developing the Master Address File and respondent confusion about or misinterpretation of census “usual residence” rules, which resulted in duplication of household members with two homes and people who were enumerated at home and in group quarters.

6–E RECOMMENDATIONS FOR COVERAGE EVALUATION IN 2010

6–E.1 An Improved Accuracy and Coverage Evaluation Program

The complexities of the original A.C.E. and Revision II reestimation and the uncertainties about what the Revision II results tell us about net and gross coverage errors in the 2000 census could lead policy makers to question the value of an A.C.E.-type coverage evaluation program for the 2010 census. To the contrary, we recommend that research and development for the 2010 census give priority to improving an A.C.E.-type coverage evaluation mechanism and that it be implemented in 2010.

Without the 2000 original A.C.E. and Revision II estimation, we would not have acquired so much valuable information about strengths and weaknesses of the 2000 census. In particular, differences between the census counts, the original A.C.E., and the original demographic analysis estimates spurred the development of innovative methods for identifying duplicate census enumerations. These differences also motivated a reexamination of assumptions about immigration estimates in demographic analysis.

The plans for the 2010 census include the serious possibility that the matching methods used in the Further Study of Person Duplication would be used as part of the enumeration process, so that

Suggested Citation:"6 The 2000 Coverage Evaluation Program." National Research Council. 2004. The 2000 Census: Counting Under Adversity. Washington, DC: The National Academies Press. doi: 10.17226/10907.
×

duplicate enumerations could be identified, followed up, and eliminated from the census counts in real time (Smith and Whitford, 2003).22 If these plans reach fruition, then the 2010 census could be expected to have many fewer erroneous enumerations than the 2000 census. Because it is difficult to imagine the development of effective new ways of reducing census omissions, then a reduction in erroneous enumerations could well result in a significant net undercount in the 2010 census and an increase in differential undercoverage among population groups. Without an A.C.E.-type coverage evaluation program, it would not be possible to measure such an undercount or to adjust some or all of the census counts for coverage errors should that be warranted. Demographic analysis, while providing useful information about census coverage at the national level for a few population groups, could not substitute for an A.C.E.

We urge that the 2010 census testing program give priority to research and development for an improved A.C.E.-type coverage evaluation program. We see possibilities for improvements in many areas, such as the estimation of components of gross census error as well as net error, expansion of the search area for erroneous census enumerations and P-sample nonresidents, the inclusion of group quarters residents, better communication to respondents of residence rules (and reasons for them), understanding the effects of IIs on A.C.E. estimation, the treatment of movers, and the development of poststrata. The optimal size of a 2010 A.C.E. is also a consideration. The survey must be large enough to provide estimates of coverage errors with the level of precision that was targeted for the original (March 2001) A.C.E. estimates for population groups and geographic areas.

With regard to the estimates of erroneous enumerations and P-sample nonresidents outside the traditional search area, the nationwide matching technology developed for the 2000 A.C.E. Revision II would make it possible to incorporate the search for such errors

22  

Some observers may be concerned about privacy issues with regard to the capture of names on the computerized census records and their use for matching on such a large scale. The panel did not consider this issue, but we note that the Census Bureau has always been sensitive to such concerns, and Title 13 of the U.S. Code safeguards the data against disclosure.

Suggested Citation:"6 The 2000 Coverage Evaluation Program." National Research Council. 2004. The 2000 Census: Counting Under Adversity. Washington, DC: The National Academies Press. doi: 10.17226/10907.
×

as part of the 2010 A.C.E. production process, not waiting for subsequent evaluation. Such cases could be identified and followed up in real time, similar to what is planned for the census itself. Such a procedure could not only significantly reduce errors in the A.C.E., it could also provide valuable information about types of gross errors that has not previously been available.

The nationwide matching technology together with the possible increased use of administrative records for group quarters enumeration, could make it possible to include group quarters residents in the 2010 A.C.E. with an acceptable level of data quality. Administrative records for such group quarters as college dormitories, prisons, nursing homes, and other institutions could provide home addresses for use in the matching to identify duplicate enumerations. With this information, the follow-up to verify a duplicate enumeration of a college student, for example, would simply need to establish that the student was in fact the same person, and the census residence rules would then be applied to designate the group quarters enumeration as correct and the home enumeration as erroneous. There would be no need to make the family choose the student’s “usual residence.”

With regard to communication of residence rules, cognitive research on the A.C.E. questionnaires and interviewer prompts could lead to interviewing procedures that better help respondents understand the Bureau’s rules and reasons for them. The 2000 A.C.E. demonstrated that it is not enough to rely on respondents’ willingness to follow the rules (e.g., when parents report a college student at home), which is a major reason for incorporating nationwide matching technology into the 2010 A.C.E. process. However, cognitive research could probably improve the interview process in ways that would improve the quality of residence information reported to the A.C.E. (Such research is also likely to improve the census enumeration.)

Furthermore, if plans to use mobile computing devices and global positioning system (GPS) technology for address listing and nonresponse follow-up in 2010 come to fruition, then there is the opportunity to reduce geocoding errors in the E-sample. Such technology could also be used to minimize geocoding errors in the listing operations conducted to build the independent P-sample address list.

Suggested Citation:"6 The 2000 Coverage Evaluation Program." National Research Council. 2004. The 2000 Census: Counting Under Adversity. Washington, DC: The National Academies Press. doi: 10.17226/10907.
×

With regard to understanding the effects of census records that are excluded from the A.C.E. matching (IIs), such records in 2010 would probably be whole-person imputations. The large number of records that were reinstated from the special summer 2000 housing unit unduplication operation should not affect 2010, given that matching to detect duplicate addresses (and people) will probably be built into the enumeration process. However, there could still be large numbers of whole-person and whole-household imputations. In order to more fully understand the effects of an adjustment for population groups and geographic areas, it is important to analyze the distribution of such imputations and, in particular, how they may affect synthetic error.

For movers, with the nationwide matching capability that has been developed, it should be possible to use the PES-B procedure for a 2010 A.C.E., instead of the cumbersome PES-C procedure that was used in 2000. The speed of the 2000 P-sample interviewing reduced the number of movers and, consequently, their effects on the dual-systems estimates, but the quality of their data was markedly below that for nonmovers and inmovers. Finding where inmovers resided on Census Day would be greatly facilitated by nationwide matching, so that a PES-B procedure would be feasible and likely to provide improved data quality compared with its use in the 1990 PES.

Finally, with regard to poststratification, the Revision II effort to revise the E-sample poststrata on the basis of analyzing the A.C.E. data themselves was commendable. Further work on poststratification could be conducted with the 2000 data in planning for 2010, and plans for using the 2010 A.C.E. data to refine the poststrata could also be developed. Care should be taken to develop poststrata that do not produce the anomalies that were observed in Revision II from the use of E-sample poststrata for proxy enumerations for which no counterparts were developed on the P-sample side. The use of statistical modeling for developing poststrata from the A.C.E. data should also be part of the poststratification research for 2010.

We are confident that these and other improvements could be made in an A.C.E.-type coverage evaluation program for the 2010 census with sufficient attention to research, development, and testing in the next few years. The U.S. General Accounting Office (2003a:1) reported that the Census Bureau “obligated about

Suggested Citation:"6 The 2000 Coverage Evaluation Program." National Research Council. 2004. The 2000 Census: Counting Under Adversity. Washington, DC: The National Academies Press. doi: 10.17226/10907.
×

$207 million to [the 2000 census A.C.E. and the predecessor ICM program] from fiscal years 1996 through 2001, which was about 3 percent of the $6.5 billion total estimated cost of the 2000 Census” (see also U.S. General Accounting Office, 2002b). An equivalent expenditure for an A.C.E.-type program in 2010 would be money well spent to ensure that adequate data become available with which to evaluate not only net, but also gross coverage errors. Such errors could be more heavily weighted toward omissions, and not erroneous enumerations, in 2010 compared with the 2000 experience.

Recommendation 6.1: The Census Bureau and administration should request, and Congress should provide, funding for the development and implementation of an improved Accuracy and Coverage Evaluation Program for the 2010 census. Such a program is essential to identify census omissions and erroneous enumerations and to provide the basis for adjusting the census counts for coverage errors should that be warranted.

The A.C.E. survey in 2010 must be large enough to provide estimates of coverage errors that provide the level of precision targeted for the original (March 2001) A.C.E. estimates for population groups and geographic areas. Areas for improvement that should be pursued include:

  1. the estimation of components of gross census error (including types of erroneous enumerations and omissions), as well as net error;

  2. the identification of duplicate enumerations in the E-sample and nonresidents in the P-sample by the use of new matching technology;

  3. the inclusion of group quarters residents in the A.C.E. universe;

  4. improved questionnaire content and interviewing procedures about place of residence;

  5. methods to understand and evaluate the effects of census records that are excluded from the A.C.E. matching (IIs);

  6. a simpler procedure for treating people who moved between Census Day and the A.C.E. interview;

  7. the development of poststrata for estimation of net coverage errors, by using census results and statistical modeling as appropriate; and

  8. the investigation of possible correlation bias adjustments for additional population groups.

6–E.2 Improved Demographic Analysis for 2010

Demographic analysis remains useful for intercensal population estimation and for helping to identify possible enumeration problems in both the census and the A.C.E. It is therefore important for the Census Bureau to continue its efforts to obtain additional funding for research and development of demographic analysis methods, particularly for estimates of immigrants, and to develop methods for quantifying the uncertainty in demographic analysis population estimates. Such developmental work should be conducted with other federal statistical agencies that supply relevant data and that make use of the postcensal population estimates.
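The core of demographic analysis is the demographic balancing equation: a base population carried forward by births, deaths, and net immigration from administrative records. The sketch below uses invented placeholder values, not actual Census Bureau inputs, to show why net immigration, the hardest component to measure, dominates the uncertainty the text recommends quantifying.

```python
# Minimal sketch of the demographic balancing equation underlying
# demographic analysis estimates. All inputs are hypothetical placeholders.

def demographic_estimate(base_population, births, deaths, net_immigration):
    """Carry a base population forward using the components of change."""
    return base_population + births - deaths + net_immigration

pop = demographic_estimate(
    base_population=280_000_000,  # hypothetical decade-start population
    births=40_000_000,            # from birth registration records
    deaths=24_000_000,            # from death registration records
    net_immigration=9_000_000,    # the most uncertain component
)
print(f"{pop:,}")  # 305,000,000
```

Because birth and death registration in the United States is nearly complete, an error band on the final estimate is driven largely by the plausible range assumed for net immigration, which motivates the recommendation's emphasis on improving that component.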

Recommendation 6.2: The Census Bureau should strengthen its program to improve demographic analysis estimates, in concert with other statistical agencies that use and provide data inputs to the postcensal population estimates. Work should focus especially on improving estimates of net immigration. Attention should also be paid to quantifying and reporting measures of uncertainty for the demographic estimates.

6–E.3 Time for Evaluation and Possible Adjustment

The original A.C.E. data collection, matching, estimation, and initial evaluation were carried out according to carefully specified and controlled procedures with commendable timeliness (see Finding 6.1 in Section 6-A.11). However, the experience with the subsequent evaluations and A.C.E. Revision II demonstrates that the development of net coverage estimates judged sufficiently reliable for evaluating the census counts, and for possible adjustment, should not be rushed. Similarly, even if the process for evaluating census operations and data items is improved relative to 2000 (see Chapter 9), that process, which is important for verifying the quality of the census content, requires ample time. Consequently, the panel believes that adequate evaluation of the census block-level data for congressional redistricting is not possible by the current deadline of 12 months after Census Day. Congress should consider changing this deadline to provide additional time for evaluation and delivery of redistricting data.

Recommendation 6.3: Congress should consider moving the deadline for providing block-level census data for legislative redistricting, to allow more time for evaluation of the completeness of population coverage and the quality of the basic demographic items before they are released.
