The 2000 Census: Counting Under Adversity (2004)

Suggested Citation:"4 Assessment of 2000 Census Operations." National Research Council. 2004. The 2000 Census: Counting Under Adversity. Washington, DC: The National Academies Press. doi: 10.17226/10907.

CHAPTER 4
Assessment of 2000 Census Operations

WE BEGIN OUR EVALUATION OF THE 2000 CENSUS by describing two of its three most important achievements (4-A): first, halting the historical decline in mail response rates and, second, implementing data collection, processing, and delivery in a smooth and timely manner. (We describe and assess the third important achievement—the reduction in differential coverage errors among important population groups—in Chapters 5 and 6.) We then assess the contribution of seven major innovations in census operations, not only to the realization of the above-cited achievements, but also to data quality and cost containment. By quality, we mean that an estimate from the census accurately measures the intended concept, not only for the nation as a whole, but also for geographic areas and population groups. With regard to costs, although cost reduction drove the research and planning for the original 2000 design (see Chapter 3), it was not an explicit goal for the final 2000 design.

The seven innovations that we review include two that facilitated public response—namely, redesigned questionnaires and mailing materials and paid advertising and expanded outreach (4-B, see also Appendix C.2, C.4); three that facilitated timeliness—namely, contracting for data operations, improved data capture technology, and aggressive recruitment of enumerators and implementation of nonresponse follow-up (4-C, see also Appendix C.3, C.5); one that facilitated timeliness but may have had mixed effects on data quality—namely, greater reliance on computers to treat missing data (4-D, see also Appendix C.5); and one that was sound in concept but not well implemented—namely, the use of multiple sources to develop the Master Address File, or MAF (4-E, see also Appendix C.1). We do not assess an eighth major innovation in 2000, the expanded use of the Internet for release of data products to users; see http://factfinder.census.gov [12/12/03].

The final section in this chapter (4-F) assesses the problem-plagued operations for enumerating residents of group quarters. Sections 4-B through 4-F end with a summary of findings and recommendations for research and development for 2010.

4–A TWO MAJOR OPERATIONAL ACHIEVEMENTS

4–A.1 Maintaining Public Cooperation

The 2000 census, like censuses since 1970, was conducted primarily by delivering questionnaires to households and asking them to mail back a completed form. Procedures differed somewhat, depending on such factors as type of addresses in an area and accessibility; in all, there were nine types of enumeration areas in 2000 (see Box C.2 in Appendix C). The two largest types of enumeration areas in 2000—mailout/mailback and update/leave/mailback or update/leave—covered 99 percent of the household population; together, they constituted the mailback universe. The goal for this universe for the questionnaire delivery and mail return phase of the census was to deliver a questionnaire to every housing unit on the MAF and motivate people to fill it out and mail it back.

The Census Bureau expected that mail response would continue to decline, as it had from 1970 to 1990, due to broad social and economic changes that have made the population more difficult to enumerate. These changes include rising numbers of new immigrants, both those who are legally in the country and those who are not, who may be less willing to fill out a census form or who may not be able to complete a form because of language difficulties; increasing amounts of junk mail, which may increase the likelihood that households will discard their census form without opening it;1 and larger numbers of households with multiple residences, making it unclear which form they should mail back.

The Bureau budgeted for a decline in the mail response rate (mail returns as a percentage of all mailback addresses) to 61 percent in 2000, compared with a 70 percent budgeted rate in 1990. As the 2000 census got under way, the Census Bureau director initiated the ’90 Plus Five campaign, challenging state, local, and tribal government leaders to reach a mail response rate at least 5 percentage points higher than their 1990 rate. The Internet was used to post each jurisdiction’s goal and actual response rates day by day through April 11, 2000.

Maintaining the budgeted mail response rate was key to the Bureau’s ability to complete nonresponse follow-up on time and within budget, and, if the rate improved over the budgeted figure, that would be beneficial. Estimates produced in conjunction with the 1990 census were that each 1 percentage point decline in the mail response rate increased the nonresponse follow-up workload and costs of that census by 0.67 percent (National Research Council, 1995b:48). In addition, evidence from the 1990 census, confirmed by analysis of 2000 data, indicated that returns obtained in the field, on balance, were less complete in coverage and content than mail returns (see Appendix D and Chapter 7).
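As a rough illustration (not the Bureau's actual cost model), the cited sensitivity can be applied to the 2000 outcome; the function and its round-number inputs below are hypothetical:

```python
# Illustrative arithmetic only: applies the 1990-era rule of thumb cited
# above (each 1-point drop in mail response adds ~0.67 percent to the
# nonresponse follow-up workload and costs).

def nrfu_workload_change(budgeted_rate, actual_rate, elasticity=0.67):
    """Percent change in NRFU workload implied by a response-rate shift.

    A rate below budget increases the workload; a rate above budget
    (as in 2000: 64 percent achieved vs. 61 percent budgeted) decreases it.
    """
    points_lost = budgeted_rate - actual_rate
    return points_lost * elasticity

# 2000: budgeted 61 percent, achieved 64 percent at the NRFU cutoff,
# implying roughly 2 percent less follow-up workload than budgeted.
print(nrfu_workload_change(61, 64))
```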

Success in the 2000 Mail Response

A significant achievement of the 2000 census was that it halted the historical decline in the mail response rate (see Box 4.1). The rate achieved at the time when the workload for nonresponse follow-up was identified (April 18, 2000) was 64 percent—3 percentage points higher than the 61 percent rate for which the Bureau had budgeted follow-up operations. By the end of 2000 the mail response rate was 67 percent (Stackhouse and Brady, 2003a:2, Table 4). A significant proportion of the mail returns received after April 18, 2000, had arrived by April 30, so that the Census Bureau could perhaps have saved even more follow-up costs if it had set a later date to establish the nonresponse follow-up workload. However, the Bureau was not willing to risk possible delays in completing nonresponse follow-up by delaying the start of the effort (U.S. General Accounting Office, 2002a).

1 A post-1990 census study of household behavior found that 4 percent of households that reported receiving a questionnaire in the mail discarded it without opening it (Kulka et al., 1991).

Box 4.1
Mail Response and Return Rates

The mail response rate is defined as the number of households returning a questionnaire by mail divided by the total number of questionnaires sent out in mailback areas. Achieving a high mail response rate is important for the cost and efficiency of the census because every returned questionnaire is one less household for an enumerator to follow up in the field.

The mail return rate is defined as the number of households returning a questionnaire by mail divided by the number of occupied housing units that were sent questionnaires in the mailback areas. This rate is an indicator of public cooperation. Achieving a high mail return rate (at least to the level of 1990) is important because of evidence that mail returns are more complete than enumerator-obtained returns.

In 2000, because of the alternative modes by which households could fill out their forms, the numerator of both “mail” responses and “mail” returns included responses submitted on the Internet, over the telephone, and on “Be Counted” forms. The denominator of the mail response rate included all addresses on the April 1, 2000, version of the MAF, covering both mailout/mailback and update/leave areas. The denominator of the mail return rate excluded addresses on the MAF that field follow-up determined were vacant, nonresidential, or nonexistent and included addresses for occupied units that were added to the MAF after April 1.

Final Rates, 1970-2000 Censuses (as of end of census year)

                        1970   1980   1990   2000
Mail response rate:      78%    75%    65%    67%
Mail return rate:        87%    81%    75%    78%

Differences in Final Mail Return Rates: Short and Long Forms

Return rates of long forms are typically below the return rates of short forms. This difference widened substantially in 2000.

                        1970   1980   1990   2000
Short-form rate:         88%    82%    76%    80%
Long-form rate:          86%    80%    71%    71%
Difference:               2%     2%     5%     9%

NOTES: 1980 and 1990 rates shown here differ slightly from those in Bureau of the Census (1995b:1-24). Mail response and return rates are not strictly comparable across censuses because of differences in procedures used to compile the address list and in the percentage of the population included in the mailback universe (about 60 percent in 1970 and 95 percent or more in 1980–2000).

SOURCE: National Research Council (1995b:Table 3.1, App. A) for 1970 and 1980 rates; Stackhouse and Brady (2003a,b:v) for 1990 and 2000 rates.

In 1990, the mail response rate at the time when the workload for nonresponse follow-up was identified (late April) was 63 percent—1 percentage point lower than the corresponding 2000 rate and 7 percentage points lower than the 70 percent rate for which the Bureau had budgeted follow-up operations. By the end of 1990 the mail response rate was 65 percent, 2 percentage points lower than the final 2000 rate (Stackhouse and Brady, 2003a:2).

Most of the improvement in mail response in 2000 was because of improved response to the short form. At the time when the nonresponse follow-up workload was specified (April 18, 2000), the long-form mail response rate was 12 percentage points below the rate for short forms (54 percent and 66 percent, respectively). When late mail returns are included, the gap between short- and long-form mail response rates was reduced from 12 to 10 percentage points (Stackhouse and Brady, 2003a:17,18).

The 2000 census was also successful in stemming the historical decline in the mail return rate (mail returns as a percentage of occupied mailback addresses), which is a more refined measure of public cooperation than the mail response rate (see Box 4.1). At the end of 2000, the final mail return rate was 78 percent, compared with a final mail return rate of 75 percent in 1990 (Stackhouse and Brady, 2003b:v). Again, most of the improvement in mail return rates in 2000 was because of improved response to the short form: the final mail return rate for short forms was 80 percent in 2000 compared with 76 percent in 1990. For long forms, the final mail return rate was 71 percent in both years, so that the gap between short-form and long-form mail return rates in 2000 was 9 percentage points, compared with only 5 percentage points in 1990.
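The two rates differ only in their denominators. A minimal sketch of the Box 4.1 definitions, using hypothetical counts for a single mailback area (the real numerators also include Internet, telephone, and "Be Counted" responses):

```python
# Sketch of the mail response and return rate definitions in Box 4.1,
# with made-up counts for illustration.

def mail_response_rate(mail_returns, all_mailback_addresses):
    # Denominator: every address sent a questionnaire, occupied or not.
    return 100 * mail_returns / all_mailback_addresses

def mail_return_rate(mail_returns, occupied_mailback_addresses):
    # Denominator drops vacant, nonresidential, and nonexistent addresses,
    # so the return rate always equals or exceeds the response rate.
    return 100 * mail_returns / occupied_mailback_addresses

# Hypothetical area: 1,000 addresses, 860 of them occupied, 670 mail returns.
print(round(mail_response_rate(670, 1000)))  # 67
print(round(mail_return_rate(670, 860)))     # 78
```

With these made-up counts the two rates happen to match the final 2000 national figures (67 and 78 percent), showing how an 11-point gap can arise purely from vacant and nonexistent addresses in the denominator.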

Mail Return Patterns

A tabular analysis of all 2000 mail returns (Stackhouse and Brady, 2003b:Tables 8, 10, 12, 14, 16) found considerable variation in mail return rates for population groups:

  • Total mail return rates increased with the age of the householder: the final mail return rate for householders ages 18–24 was 57 percent, climbing to 89 percent for householders age 65 and older.

  • White householders had the highest total final mail return rate of any race group—82 percent. Householders of other race groups had final mail return rates of 75 percent or lower; the lowest rates were for black householders (64 percent), householders of some other race (63 percent), and householders of two or more races (63 percent).

  • Hispanic householders had a lower total final mail return rate (69 percent) than non-Hispanic householders (79 percent); Hispanic householders had a larger difference between short-form and long-form return rates (71 percent short-form rate, 57 percent long-form rate) than did non-Hispanic householders (81 percent short-form rate, 72 percent long-form rate).

  • Total final mail return rates were similar for most household size categories, except that two-person households had a higher rate (82 percent) than other categories and households with seven or more members had a lower rate (72 percent) than other categories, largely because of differences in long-form return rates for these two household types (76 percent for two-person households and 58 percent for households with seven or more members).

  • Owners had a higher total final mail return rate (85 percent) than renters (66 percent).

Exploratory regression analysis conducted by the panel of preliminary total mail return rates for 2000 and 1990 for census tracts found, in both years, a strong positive relationship between a tract’s return rate and its 1990 percentage of population over age 65, and strong negative relationships with its 1990 hard-to-count score, estimated percentage net undercount, percentage of people in multiunit structures, and percentage of people who were not high school graduates.2 Geographic region effects, which differed somewhat between 1990 and 2000, were also evident. While the analysis found no variables that explained changes in mail return rates for census tracts from 1990 to 2000, graphical displays identified clusters of tracts with unusually large increases and decreases in return rates between the two censuses, suggesting that local factors may have had an effect in those areas. Large clusters of tracts that experienced declines of 20 percent or more in mail return rates from 1990 to 2000 were found in central Indiana; Brooklyn, New York; and throughout Kentucky, Tennessee, and the Carolinas. At the other extreme, tracts that experienced increases of 20 percent or more in mail return rates were concentrated in the Pacific census division (particularly around Los Angeles and the extended San Francisco Bay Area) and in New England. In contrast, mail return rates for tracts in the Plains and Mountain states were very similar between 1990 and 2000.

2 See National Research Council (2001a:App. B). The data sets used were U.S. Census Bureau, Return Rate Summary File (U.S.), provided to the panel February 26, 2001, and the 1990 Planning Database (U.S. Census Bureau, 1999a).

Further investigation of the characteristics of clusters of tracts that experienced unusually large increases or decreases in mail return rates and of local operations and outreach activities in those areas would be useful to identify possible problems and successes to consider for 2010 census planning. More generally, research is needed on patterns of mail response and the reasons for nonresponse.
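The panel's actual model, covariates, and data files are documented in National Research Council (2001a:App. B); the sketch below only illustrates the general form of such a tract-level regression, fitting ordinary least squares to synthetic data constructed with the signs reported above:

```python
# Illustrative only: a least-squares fit of the kind described in the text,
# on synthetic tract data (not the panel's data or model).
import numpy as np

rng = np.random.default_rng(0)
n_tracts = 500

pct_over_65 = rng.uniform(5, 25, n_tracts)     # 1990 percent age 65+
hard_to_count = rng.uniform(0, 100, n_tracts)  # 1990 hard-to-count score
pct_multiunit = rng.uniform(0, 60, n_tracts)   # percent in multiunit structures

# Synthetic return rate with the signs reported in the text:
# positive on age 65+, negative on hard-to-count and multiunit shares.
return_rate = (70 + 0.5 * pct_over_65
               - 0.15 * hard_to_count
               - 0.10 * pct_multiunit
               + rng.normal(0, 2, n_tracts))

X = np.column_stack([np.ones(n_tracts), pct_over_65, hard_to_count, pct_multiunit])
coef, *_ = np.linalg.lstsq(X, return_rate, rcond=None)
print(coef)  # intercept, then one slope per covariate
```

The fitted slopes recover the assumed signs: positive for the elderly share, negative for the hard-to-count score and the multiunit share.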

4–A.2 Controlled Timing and Execution

The Census Bureau met its overriding goal to produce data of acceptable quality from the 2000 census for congressional reapportionment and redistricting by the statutory deadlines. This outcome may appear unremarkable because the Bureau has never failed to produce basic population counts on time. However, as we saw in Chapter 3, success was by no means a foregone conclusion given the serious problems that hampered planning and preparations for the census, which included lack of agreement on the 2000 design until early 1999.

Not only were the basic 2000 data provided on time, but most of the individual operations to produce the data, such as nonresponse follow-up and completion of edited data files, were completed on or ahead of schedule.3 Of the 520 census offices (excluding Puerto Rico), 68 completed nonresponse follow-up within 6 or fewer weeks, 290 finished within 7 weeks, 462 finished within 8 weeks, and all 520 finished within 9 weeks—a week ahead of the planned 10-week schedule for completion (U.S. General Accounting Office, 2002a:6).4 Few instances occurred in which operations had to be modified in major ways after Census Day, April 1, 2000. One exception was the ad hoc unduplication operation that was mounted in summer 2000 to reduce duplicate enumerations from duplicate addresses in the MAF (see Section 4-E).

3 The 2000 Accuracy and Coverage Evaluation operations were also carried out on or ahead of schedule; see Chapter 6.
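The office counts above can be restated as cumulative shares of the 520 local census offices:

```python
# The nonresponse follow-up completion pace reported in the text,
# expressed as cumulative shares of the 520 offices (excluding Puerto Rico).
offices_total = 520
finished_by_week = {6: 68, 7: 290, 8: 462, 9: 520}  # offices done by week N

for week, n_done in finished_by_week.items():
    print(f"week {week}: {n_done}/{offices_total} offices "
          f"({100 * n_done / offices_total:.0f}%)")
```

Roughly 13 percent of offices were done within 6 weeks, 56 percent within 7, 89 percent within 8, and all within 9.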

In contrast, the 1990 census experienced unexpected problems and delays in executing such key operations as nonresponse follow-up, which took 14 weeks to finish, and the Census Bureau had to return to Congress to obtain additional funding to complete all needed operations. The basic 1990 data were released on time, however, and delivery schedules for more detailed products were similar for 1990 and 2000 (see Bureau of the Census, 1995a:Ch.10; http://www.census.gov/population/www/censusdata/c2kproducts.html [1/10/04]).

4–B MAJOR CONTRIBUTORS TO PUBLIC RESPONSE

The Census Bureau had three strategies to encourage response to the 2000 census. Two of the three strategies very likely contributed to the success in halting the historical decline in mail response and return rates—a redesigned questionnaire and mailing package (4-B.1) and extensive advertising and outreach (4-B.2). The third strategy, which we do not discuss further, was to allow multiple modes for response. This strategy had little effect partly because the Census Bureau did not actively promote alternative response modes. Specifically, the Bureau did not widely advertise the option of responding via the Internet (an option available only to respondents who had been mailed the census short form) because of a concern that it could not handle a large volume of responses. The Bureau also did not widely advertise the option of picking up a “Be Counted” form at a public place because of a concern about a possible increase in the number of responses that would require address verification and unduplication. Finally, telephone assistance was designed primarily to answer callers’ questions and only secondarily to record responses. As it turned out, of 76 million questionnaires that were returned by households, 99 percent arrived by mail and only 1 percent by other modes (see Appendix C.2.b). In particular, only about 600,000 Be Counted and 200,000 telephone questionnaires were obtained. At the end of a process of geocoding, field verification, and unduplication, the Be Counted and telephone operations added about 100,000 housing units that would otherwise not have been included in the census count (see Vitrano et al., 2003a:29).

4 Subsequently, every housing unit in the workload of one district office, in Hialeah, Florida, was reenumerated because of problems that came to light in that office, and selected housing units were reenumerated in seven other offices for which problems were identified (see Appendix C.3.b).
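The response-mode figures reported above imply the following back-of-the-envelope shares (all counts approximate, rounded as reported in the text):

```python
# Rough shares implied by the figures in the text.
total_returns = 76_000_000   # questionnaires returned by households
be_counted = 600_000         # "Be Counted" forms obtained
telephone = 200_000          # telephone questionnaires obtained
added_units = 100_000        # housing units these modes added to the count

other_modes = be_counted + telephone
print(f"non-mail share of returns: {100 * other_modes / total_returns:.1f}%")

# Only about one in eight Be Counted/telephone forms survived geocoding,
# field verification, and unduplication as an otherwise-missed housing unit.
print(f"net yield: {100 * added_units / other_modes:.1f}% of these forms")
```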

4–B.1 Redesigned Questionnaire and Mailing Package

Strategy

Based on its extensive research in the early 1990s (see Section 3-B.2), the Census Bureau redesigned the census short-form and long-form questionnaires and mailing materials for 2000 as part of its effort to encourage the public to fill out questionnaires and return them in the mail (or by an alternative response mode). The questionnaires were made easier to read and follow; the respondent burden of the short form was reduced by providing room for characteristics for six household members instead of seven as in 1990 and by moving questions to the long form; an advance letter was sent out in mailout/mailback areas; the envelope with the questionnaire carried a clear message that response was required by law; and a reminder postcard (also used in 1990) was sent out in mailout/mailback and update/leave areas.

Evaluation

The success of the 2000 census in maintaining mail response and return rates at 1990 levels and above was of major importance for its success in terms of timely, cost-effective completion of operations. It seems highly likely that the changes to the questionnaire and mailing package and the use of an advance letter—despite or perhaps even because of the publicity due to an addressing error in the letter (see Appendix C.2.a)—contributed to maintaining the response and return rates, although how large a role each of these elements played in this achievement is not known. It is also unknown whether the redesigned questionnaire and mailing package had greater effect on some population groups and geographic areas than others, or whether the effect was to boost return rates about equally across population groups and geographic areas and thereby maintain differences in response rates from past censuses.

One disappointment of the mailing strategy and other initiatives to encourage response in 2000 was that they had much less effect on households that received the long form than on those that received the short form. As noted above, at the time when the nonresponse follow-up workload was specified (April 18, 2000), the long-form mail response rate was 12 percentage points below the rate for short forms. This difference was double the difference that the Bureau expected (U.S. General Accounting Office, 2000c:5) and far larger than differences between long-form and short-form response rates seen in previous censuses. One of the Bureau’s questionnaire experiments in the early 1990s, using an appealing form and multiple mailings, presaged this outcome: it found an 11 percentage point difference between short-form and long-form response rates (Treat, 1993), perhaps because the user-friendly long form tested was longer than the 1990 long form (28 versus 20 pages). The 2000 long form had about the same number of questions as the 1990 long form but twice as many pages.

When late mail returns are included, the gap between short- and long-form mail response and return rates in 2000 was somewhat reduced. In other words, some households that received the long form lagged in completing and mailing it back. However, the wide gap at the time of nonresponse follow-up was important because it meant that long forms made up a higher proportion of the follow-up workload, and experience in both 1990 and 2000 demonstrated that long forms were more difficult to complete in the field than short forms.

A drawback of the 2000 census mailing strategy was that the plan to mail a second questionnaire to nonresponding households had to be discarded, even though dress rehearsal results suggested that doing so might have increased mail response by 8 percentage points above what was achieved (Dimitri, 1999), or even by 10–11 percentage points as estimated from earlier tests (see Section 3-B.2). The Census Bureau did not work with vendors early in the decade to develop a feasible system for producing replacement questionnaires. Consequently, its plans for a targeted second mailing were stymied when the vendors selected for the dress rehearsal were not prepared to turn around the address list on the schedule required. Experience in the dress rehearsal suggested that mailing a second questionnaire to every address (instead of only to nonresponding households) would generate adverse publicity and increase the number of duplicate returns that would need to be weeded out from the census count, so no second mailing was made in 2000.

4–B.2 Paid Advertising and Expanded Outreach

Expansion of Efforts in 2000

A second important element of the Census Bureau’s strategy in 2000 to reverse the historical decline in mail response rates and to encourage nonrespondents to cooperate with follow-up enumerators was to advertise much more extensively and expand local outreach and partnership programs well beyond what was done in the 1990 census. An integral part of the advertising strategy was to pay for advertisements instead of securing them on a pro bono basis. The advertising ran from November 1, 1999, to June 5, 2000, and included separate phases to alert people to the importance of the upcoming census, encourage them to fill out the forms when delivered, and motivate people who had not returned a form to cooperate with the follow-up enumerators. Advertisements were placed on TV (including a spot during the 2000 Super Bowl) and radio and in newspapers and other media, in multiple languages. Informed by market research, the advertisements stressed the benefits to people and their communities from the census, such as better targeting of government funds to needy areas for schools, day care, and other services.

The Census Bureau also hired 560 full-time-equivalent partnership and outreach specialists in local census offices (three times the number hired in 1990—see U.S. General Accounting Office, 2001a:11), who worked with community and public interest groups to develop special initiatives to encourage participation in the census. The Bureau signed partnership agreements with over 100,000 organizations, including federal agencies, state and local governments, business firms, and nonprofit groups. (Estimates vary as to the number of partnerships—see Westat [2002c], which reports on a survey of a sample of about 16,000 partners on their activities, receipt of materials from the Census Bureau, and assessment of the materials’ effectiveness.)5 Finally, the Bureau developed a special program to put materials on the census in local schools to inform schoolchildren about the benefits of the census and motivate them to encourage their adult relatives to participate.

The advertising campaign appeared very visible and appealing, and, similarly, partnerships with local communities for outreach were more extensive and seemed more vigorous than in 1990. Correspondingly, costs of advertising and outreach more than tripled between 1990 and 2000, increasing from 88 cents per household in 1990 to $3.19 per household in 2000 (in constant fiscal 2000 dollars; U.S. General Accounting Office, 2002a:11).

Evaluation

We view it as likely that both the advertising and the local outreach efforts contributed to maintaining—even improving—the mail response and return rates in 2000 compared with 1990. However, linking the advertising campaign, much less specific advertisements or community-based communications, to individual behavior—and measuring the magnitude of the effects—is typically very difficult in market research, and the census is no exception. Similarly, the program to put materials about the census in local schools probably contributed to awareness of the census and may have contributed to participation, but evaluation cannot determine that (Macro International, 2002).

Three studies, one commissioned from the National Opinion Research Center (NORC), a second commissioned from the University of Michigan, and a third carried out by Census Bureau staff using data collected by InterSurvey, Inc. (now known as Knowledge Networks), shed light on the effects of advertising and outreach in 2000.

NORC Study The NORC study of the 2000 census marketing and partnership efforts (Wolter et al., 2002) obtained data from a total of 10,000 households interviewed by telephone in three separate cross-sectional waves in fall 1999, winter 2000, and spring 2000. Each wave’s sample comprised a population or core sample (about one-half of the total) and samples of American Indians, Asians, and Native Hawaiians. Under the most favorable calculation, the response rate to wave 1 was only 48 percent because it used a random-digit dialing telephone design with no field follow-up. Response rates for waves 2 and 3, which used samples from the MAF, were somewhat better, at 65 and 68 percent, respectively (Wolter et al., 2002:Table B-1, alternate response rate #4).

5 See also U.S. General Accounting Office (2001a:17), which notes that a database developed by the Census Bureau to track the activities of partnership programs had problems keeping up-to-date records, which reduced its usefulness for management and subsequent evaluation.

The study found that overall awareness of communications about the census increased significantly over time and was greater after the marketing program than before for the total sample and for six race/ethnicity groups analyzed separately. Examining types of communications, most populations recalled TV advertisements in greater proportions than they did magazine or other types of advertisements, and most were more aware of census job announcements, signs or posters, and informal conversations than they were of other types of community-based communications. Among Asian groups, English speakers were more likely to be aware of census communications than were non-English speakers.

With the exception of the American Indian population, the NORC survey found significant associations between awareness of the census and the development of more positive beliefs about the census prior to Census Day (e.g., “it lets government know what my community needs”). There was little evidence that census beliefs shifted after Census Day. Four race/ethnicity groups—non-Hispanic whites, blacks, Asians, and Native Hawaiians (but not Hispanics or American Indians)—became more likely to say they would return the census form after the marketing campaign began. Higher individual awareness of communications about the census also correlated with a greater intention of returning the questionnaire for each of the six groups except American Indians, for which small sample size may have been a problem for analysis.

However, a cross-sectional logistic regression analysis of households interviewed in waves 2 and 3 of the NORC survey and matched with their census returns found limited evidence of an association between awareness of census communications and the likelihood that households would actually complete and mail back a census form instead of having to be followed up by an enumerator. (The study was not designed to support causal inference.) In a model developed for respondents in the wave 2 core sample, there were no significant main or interaction effects on mailback propensities of indexes of awareness of mass media and community-based communications. In the model developed for respondents in the wave 3 core sample, who were interviewed toward the very end of the publicity and outreach campaign, there were a few significant interaction effects. Specifically, an index of awareness of mass media communications increased mailback propensities for race/ethnicity groups in the “other” category in the regression model (non-Hispanic American Indians, Asians, Native Hawaiians, and all others) compared with non-Hispanic whites. An index of awareness of community-based communications increased mailback propensities for people ages 55 and older compared with younger people; it also increased mailback propensities for blacks compared with non-Hispanic whites (Wolter et al., 2002:Table 97).
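The kind of model described above can be illustrated with a small sketch. This is not the NORC specification: the data are synthetic, and the variable names (a 0–4 awareness index, an "other" group indicator, a mailback outcome) are assumptions for illustration only. The sketch fits a logistic regression with an awareness-by-group interaction term by plain gradient ascent and recovers a positive interaction like the one reported for the wave 3 sample.

```python
import math
import random

def fit_logit(xs, ys, lr=0.5, epochs=300):
    """Fit a logistic regression by gradient ascent on the log-likelihood.

    xs: feature vectors (leading 1.0 is the intercept); ys: 0/1 outcomes
    (1 = household mailed the form back).
    """
    k = len(xs[0])
    n = len(xs)
    w = [0.0] * k
    for _ in range(epochs):
        grad = [0.0] * k
        for x, y in zip(xs, ys):
            z = sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))
            for j in range(k):
                grad[j] += (y - p) * x[j]
        w = [wi + lr * gj / n for wi, gj in zip(w, grad)]
    return w

# Synthetic data mimicking the wave 3 pattern: a media-awareness index
# raises mailback odds only via an interaction with the "other" group,
# not as a main effect. Coefficients here are invented.
random.seed(0)
xs, ys = [], []
for _ in range(600):
    awareness = random.randint(0, 4)   # hypothetical 0-4 awareness index
    other = random.randint(0, 1)       # 1 = "other" group, 0 = reference
    true_logit = -0.5 + 0.6 * awareness * other
    y = 1 if random.random() < 1.0 / (1.0 + math.exp(-true_logit)) else 0
    xs.append([1.0, awareness, other, awareness * other])
    ys.append(y)

w = fit_logit(xs, ys)
print(f"fitted interaction coefficient: {w[3]:.2f}")  # positive in this setup
```

In a design like this, a main-effects-only model would show little, which is one way an awareness campaign can matter for some groups while appearing ineffective overall.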

Comparison of the NORC survey results for 2000 with an Outreach Evaluation Survey conducted for 1990 is difficult. It appears that the 2000 marketing and partnership effort was more effective than the 1990 effort in raising awareness of the census; the 2000 program also appeared more effective in fostering the favorable belief that census responses cannot be used against respondents. Respondents in both years expressed generally similar, positive views about the importance of participating in the census.

Michigan Study The Michigan study drew on Surveys of Privacy Attitudes conducted with about 1,700 households from July through October 1999 and with about 2,000 households from April through July 2000. (The Gallup Organization collected the data by telephone using random-digit dialing.) Response rates to the two surveys were 62 and 61 percent, respectively (Singer et al., 2001:14).6

The results showed greater awareness of census data uses on the part of respondents in 2000 than in 1999 and, over the same time period, a significant decline in the belief that census data were likely to be misused for law enforcement—changes the analysts attributed to the effects of the advertising and outreach programs.

6  

The two Michigan surveys were similar to Surveys of Privacy Attitudes conducted in 1995 and 1996 (see Singer et al., 2001, for results from all four surveys).

Responses to a question in the 2000 survey on exposure to publicity about the census broke down as follows: 30 percent of the sample reported no exposure to publicity; 27 percent reported exposure to positive publicity only (e.g., the importance of being counted); 20 percent reported exposure to negative publicity only (e.g., concerns about privacy or confidentiality and also hearing controversy in the media about answering the long form, as discussed in the next section); and 22 percent reported exposure to both positive and negative publicity. Compared with the group who reported no exposure to publicity (referred to as the “control group” in subsequent analysis), the other three groups were significantly more likely to be knowledgeable of the census. Those exposed to positive publicity only were also significantly more likely to report a variety of positive attitudes about the census than the control group. Those exposed to both positive and negative publicity generally had more positive attitudes toward the census than the control group, but they were also significantly more likely than the control group to consider the census an invasion of privacy. Those exposed to negative publicity only were similar to the control group except for a greater awareness of the census (Singer et al., 2001:71).

As in the NORC study, respondents to the 2000 Survey of Privacy Attitudes were matched to their census forms, although a smaller-than-expected number of cases were successfully matched. (The analysts attempted to compensate for this and other biases in the matched sample.) Similar to the NORC study, a logistic regression model of matched cases in the 2000 Michigan survey showed no direct relationship of exposure to publicity and the propensity to complete and mail back the census form—indeed, the only group that was significantly more likely than the control group to mail back a form was the group exposed to negative publicity.

However, positive publicity may have had indirect effects on response in that exposure to such publicity generally led to more favorable attitudes toward the census, which in turn increased mailback propensities. In particular, those who believed that census data are not misused for any of three enforcement purposes (identifying illegal aliens, keeping track of troublemakers, and using census answers against respondents) mailed back their census forms at a rate of 86 percent. In contrast, those who believed that census data are misused for all three of these purposes mailed back their census forms at a rate of only 74 percent. The NORC study found a smaller difference that was not significant: respondents to the wave 2 NORC survey who had mildly or highly favorable attitudes toward the census mailed back their forms at a rate of about 80 percent compared with a rate of 73 percent for those with unfavorable attitudes toward the census; the comparable percentages for respondents to the wave 3 NORC survey were 75 and 68 percent, respectively (Wolter et al., 2002:Table 79). The NORC study did not use an index of census attitudes in logistic regression models.
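Why a 12-point gap can be significant while a 7-point gap is not turns largely on sample size. The sketch below computes a standard two-proportion z statistic for both contrasts; the group sizes are illustrative assumptions, not figures reported by either study.

```python
import math

def two_prop_z(p1, n1, p2, n2):
    """Pooled two-sample z statistic for a difference in proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Michigan-style contrast: 86% vs. 74% mailback (group sizes assumed).
z_large = two_prop_z(0.86, 400, 0.74, 400)
# NORC-style contrast: a smaller gap with smaller assumed groups.
z_small = two_prop_z(0.80, 60, 0.73, 60)
print(f"z (12-point gap, n=400/group): {z_large:.1f}")
print(f"z (7-point gap,  n=60/group):  {z_small:.1f}")
```

Under these assumed sizes, the first statistic comfortably exceeds the conventional 1.96 threshold while the second does not, consistent with the pattern of findings described above.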

InterSurvey Analysis Martin (2001) analyzed five cross-sectional surveys conducted by InterSurvey, Inc., over WebTV between March 3 and April 13, 2000, with sponsorship from private foundations. The data are subject to substantial nonresponse and sample selection biases, but they help fill in a picture of the effects of hearing negative publicity about the census long form. Several prominent politicians (including then presidential candidate George W. Bush and then Senate majority leader Trent Lott) and radio talk show hosts commented on the intrusiveness of the long form and were widely quoted in the press in late March, about the time when census forms were being mailed to the public.7 Martin found that mistrust in government, receipt of a census long form, and hearing about the long-form controversy were strongly associated with respondents’ level of privacy concerns about the census. In turn, Martin found that respondents who received a long form or who had privacy concerns were less likely to return their census form or to fill it out completely if they did return it. The Michigan study (discussed above) reported similar results from the 2000 Survey of Privacy Attitudes regarding the negative effects of receipt of a long form and of a high score on an index of general privacy concerns on mailback propensities (Singer et al., 2001:80).

7  

See, e.g., “Census 2000 too nosey? Republicans criticize long-form questions,” posted on CNN.com, March 31, 2000; http://www.cnn.com/2000/US/03/31/census.01/. See also “Nosy Census?,” transcript of March 30, 2000 NewsHour with Jim Lehrer, posted at http://www.pbs.org/newshour/bb/fedagencies/jan-june00/census_3-30.html.

4–B.3 Mail Response: Summary of Findings and Recommendations

The evidence on the effectiveness of the Census Bureau’s two major strategies to improve mail response to the census in 2000—redesigned questionnaire and mailing materials, and paid advertising and extensive outreach—is not definitive. In particular, there appear to have been few direct effects of advertising and outreach on mailback propensities, although there is evidence of indirect effects from favorable publicity inducing more favorable attitudes toward the census that, in turn, stimulated response. There is also evidence of positive effects of mass media or community-based communications for a few population groups, such as older people and blacks. Based on the higher response and return rates achieved in 2000 compared with 1990, however, it seems reasonable to conclude that the two strategies were successful, particularly the redesign of the questionnaire and mailing materials and particularly for the short form. Response to the long form may have been adversely affected by negative publicity, as well as by such factors as the length of the questionnaire.8

Finding 4.1: The use of a redesigned questionnaire and mailing strategy and, to a more limited extent, of expanded advertising and outreach—major innovations in the 2000 census—contributed to the success achieved by the Census Bureau in stemming the decline in mail response rates observed in the two previous censuses. This success helped reduce the costs and time of follow-up activities.

Looking to ways to further enhance mail response to the 2010 census, we note the strong base of evidence for the positive effects of mailing a second questionnaire to nonresponding households. Preliminary results from a 2003 test survey corroborate the positive effects found in earlier research (Angueira, 2003). Because handling a replacement questionnaire on the scale of a census presents logistical challenges, it is essential for the Census Bureau to begin work immediately on ways to surmount those challenges so that nonresponding households receive a second questionnaire on a timely basis and in a manner that minimizes the likelihood of duplicate enumerations.

8  

See Edwards and Wilson (2003), which summarizes findings from the NORC survey, the InterSurvey analysis, and process evaluations of the advertising and outreach programs, and reaches similar conclusions.

Recommendation 4.1: The Census Bureau must proceed quickly to work with vendors to determine cost-effective, timely ways to mail a second questionnaire to nonresponding households in the 2010 census, in order to improve mail response rates, in a manner that minimizes duplicate enumerations.

The Census Bureau also included in its 2003 test survey an emphasis on alternative response modes, including the Internet and telephone using interactive voice response. If widely accepted, such methods could lead to significant savings in costs and time for follow-up and data capture in 2010, even if there is no change in the overall response rate. Whether these methods would increase overall returns is not as clear, as they may mainly attract respondents who would fill out and mail back their questionnaires in any case.

Although mail response and return rate patterns have been extensively analyzed, there is much that is not known about factors that explain differences in return rates among population groups and geographic areas and over time. The Census Bureau now has available substantial data on local area characteristics from the 1990 and 2000 censuses that could be used to analyze geographic clusters of high and low return rates with a goal of suggesting testable innovations that might improve return rates in 2010, particularly for low-response areas. The Bureau also has extensive operational and characteristics data on a sample of 2000 census enumerations—a compilation known as the Master Trace Sample—that could be used to analyze demographic and socioeconomic return rate patterns with the goal of suggesting testable strategies for improving return rates for low-response groups. Such research should be pursued (see also Chapter 9).

4–C MAJOR CONTRIBUTORS TO TIMELINESS

4–C.1 Contracting for Operations

Strategy

A major innovation for the 2000 census was the use of outside contractors for key operations. One outside vendor was contracted to develop the systems for checking in questionnaires and putting the data into computerized form (data capture). Another contractor was hired to manage data capture at three processing centers; the Census Bureau’s long-established National Processing Center at Jeffersonville, Indiana, was the fourth data capture center. Also, outside vendors were used to provide telephone questionnaire assistance and to carry out telephone follow-up for questionnaires identified as possibly incomplete in coverage (e.g., households that reported more members than the number for which they answered individual questions—see Appendix C.5). The Census Bureau rarely used contractors for major operations in censuses prior to 2000, preferring to perform all operations in house (with some technical assistance from contractors), so the decision to contract out such large, important operations as data capture marked a significant departure from past practice.

Evaluation

Information with which to assess the performance of specific contractors on a range of dimensions is not currently available, except insofar as performance can be inferred from such outcomes as timely and accurate data capture. With regard to data capture (see Section 4-C.2), the system development contractor was commended by the U.S. General Accounting Office (2000c:16–18) for following good practices for software development in the context of the work to reconfigure the system to separate the completion of data capture for long-form-sample items from the basic items.

4–C.2 Improved Data Capture Technology

Strategy

The Census Bureau early on made a decision to use new data capture technology to replace its in-house FOSDIC system (Film Optical Sensing Device for Input to Computers). First developed for the 1960 census, FOSDIC was improved at each successive census through 1990 (see Salvo, 2000; Bureau of the Census, 1995b:Ch.8). FOSDIC basically involved microfilming census questionnaires and using optical mark recognition (OMR) software to read light spots representing respondents’ filled-in answer circles and write the output onto computer files. Write-in answers (e.g., name, race, occupation, industry, place of work) were captured by having clerks key the data from the paper forms into specified fields shown on computer screens (for keying write-in race entries, the clerks worked from microfilm readers).

For 2000 the Bureau decided to contract for the adaptation of commercial optical mark recognition and optical character recognition (OMR/OCR) technology in which questionnaires would be scanned directly into the computer and read by the OMR/OCR software. Clerks would supplement the process by keying data from computer images when the OCR technology could not make sense of the responses.

The new technology performed well enough in a 1995 test to be used in the 1998 dress rehearsal and well enough in the dress rehearsal to be recommended for use in 2000 (see Conklin, 2003:1). There remained, however, a considerable element of risk in the decision to use OMR/OCR in the 2000 census because of limited testing on the necessary scale prior to late 1999. Indeed, during testing conducted in early 2000, some problems were identified in the accuracy of the optical mark/character recognition, and changes were made to improve the accuracy rate and reduce the number of items that had to be keyed or rekeyed from images by clerks (U.S. General Accounting Office, 2000b). In addition, the large number of different census forms and delays in finalizing questionnaires and making prototypes available to the OMR/OCR contractor made development of the technology and the associated flow management software more difficult and costly (Brinson and Fowler, 2003).

Finally, the data capture system was redesigned at the last minute to postpone capture of long-form content that required keying by clerks. Both short forms and long forms were run through the OMR/OCR scanning technology and keying was performed, when necessary, for the basic items. However, keying of additional long-form responses that the automated technology could not process was performed in a second, separate operation. This change was made on the basis of operational tests of keying from images, which demonstrated that keying could not occur fast enough to process the additional long-form items and keep to the schedule for producing data for reapportionment and redistricting (U.S. General Accounting Office, 2000d).

Evaluation for Timeliness

During actual production, the new technology performed on schedule; there were no delays that affected field or other operations. Top census officials have attested that the decision to complete the long-form data capture process at a later stage was essential to timely completion of the basic census counts. Without the ample funding that was ultimately appropriated for the census (see Section 3-C.4), it would have been difficult to make this decision because of the increased costs needed to handle the long-form questionnaires twice.9

Evaluation for Accuracy

An important issue is the accuracy of the new technology in capturing responses. Citing a study by the Rochester Institute of Technology Research Corporation (2002), an overall assessment of the 2000 data capture system concluded that the system—which processed over 152 million questionnaires—exceeded specified performance goals for both mail and enumerator forms (Titan Corporation, 2003:11). Conklin (2003) provides detailed data on error rates for the new technology. In this study, the images and data capture information from the 2000 census processing were obtained for 1.5 million forms (divided equally among short and long forms, including English mailed-out/mailed-back forms, Spanish mailed-out/mailed-back forms, English update/leave/mailed-back forms, and English enumerator-obtained forms, plus four form types for Puerto Rico). Clerks then reprocessed the information by keying all of the responses from the computer images of the questionnaires. The 2000 data capture system was evaluated against the results of the independent rekeying operation to determine how well the system captured the intended response as determined by the clerks. For check-box responses, the content captured had to agree exactly with the clerical determination of the intended content to count as correct; for write-in responses, the agreement did not have to be letter for letter—a matching algorithm determined when the correspondence was within specified limits.

9  

Kenneth Prewitt, former Census Bureau director, personal communication.

Overall findings of the evaluation (Conklin, 2003:6) include:

  • Optical mark recognition had an error rate of 1.2 to 1.5 percent (97 percent confidence interval) for all check-box responses that the technology considered readable;

  • Optical character recognition had an error rate of 1.0 to 1.1 percent (97 percent confidence interval) for all write-in responses that the technology considered readable (79 percent of such responses); and

  • Key from image (KFI) had an error rate of 4.8 to 5.3 percent (97 percent confidence interval) for the responses that the OMR or OCR technology rejected as unclear.

No data are available for comparing the accuracy of the 2000 data capture technology with the 1990 technology, but a study for the dress rehearsal (cited in Conklin, 2003) reported a performance standard of 2 percent errors for the 1990 system. The 2000 OMR and OCR error rates (1.3 percent and 1.1 percent, respectively) were well below that standard.
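Interval estimates like the "1.2 to 1.5 percent (97 percent confidence interval)" quoted above are standard binomial confidence intervals for an observed error rate. As an illustration only (the error and sample counts below are invented, not Conklin's), a normal-approximation 97 percent interval can be computed as follows:

```python
import math

def wald_ci(errors, n, z=2.17):
    """Normal-approximation (Wald) confidence interval for an error rate.

    z = 2.17 corresponds roughly to a two-sided 97 percent interval.
    """
    p = errors / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

# Hypothetical: 130 capture errors observed in 10,000 rekeyed responses.
lo, hi = wald_ci(130, 10_000)
print(f"error rate 97% CI: {lo:.4f} to {hi:.4f}")
```

With a sample of this assumed size, an observed 1.3 percent error rate yields an interval of roughly 1.1 to 1.5 percent, which shows how intervals of the width reported in the evaluation can arise.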

The Conklin (2003) analysis for 2000 examined OMR, OCR, and KFI error rates by type of form, data capture center, and regional census center. Excluding forms for Puerto Rico, key findings include:

  • Averaged across type of technology (OMR, OCR, KFI), error rates for short-form content grouped into five categories (name, demographic, race, ethnicity, housing) were 3 percent or lower for all categories for all types of forms in the evaluation, with the exception of name, for which error rates were 4 percent or more on three of the eight form types: enumerator-obtained short forms (4.2 percent median error rate), Spanish mailed-out/mailed-back long forms (4.6 percent median error rate), and Spanish mailed-out/mailed-back short forms (7.1 percent median error rate).

  • Averaged across type of technology, mailed-back forms had slightly higher error rates than enumerator-obtained forms for some important items, such as race and ethnicity, perhaps because enumerators were instructed on how to write answers to facilitate accurate data capture.

  • Averaged across type of technology, error rates for long-form content grouped into five categories (occupation, income, education, military, disability) were 3 percent or lower for all categories for all types of long forms in the evaluation, with the exception of the military items on enumerator-obtained long forms, which had a median error rate of 3.4 percent.

  • There were no significant differences in error rates across the four data capture centers. This finding is important because it indicates that data capture site was not a source of geographic variation in the quality of the census data.

  • Forms originating from regional census center areas with above-average concentrations of immigrants had high error rates for name fields.

  • Analysis of individual items (defined as a specific response for a specific person line number on a specific form—2,996 in all) identified 150 person-items that had high error rates ranging from 8 percent to 91 percent. These 150 person-items appeared on at least 500 records in the sample; person-items with high error rates based on smaller sample sizes were excluded, as were person-items with high error rates on the Puerto Rico forms (Conklin, 2003:Table 8). Conklin (2003) recommends that these items be the subject of further investigation to improve data capture technology or questionnaire design or both for 2010.

The overall assessment of the 2000 data capture system in Titan Corporation (2003) has many recommendations for improvements in the system for 2010. They include changing the questionnaires to facilitate automated data capture, better and more timely specification of contractual requirements for data capture systems and management, and integrating the development of the 2010 data capture system with questionnaire design and printing (e.g., choosing a background color that makes it easier to distinguish responses than the color used in 2000).

4–C.3 Aggressive Recruitment of Enumerators and Implementation of Follow-Up

Critical to the success of the 2000 census was the ability to field a timely, complete follow-up of nonresponding households to enumerate them, to determine that the address was a vacant unit (and obtain some information about the type of housing), or to determine that the address should not have been included in the MAF (because the structure was commercial, had been demolished, or another reason). Nonresponse follow-up was a major problem in the 1990 census because the mail response rate not only dropped below the rate in 1980, it also dropped 7 percentage points below the budgeted rate. The Bureau had to seek additional funding, scramble to hire enough enumerators, and take much more time to complete the effort than planned.

Recruitment Strategy

In 2000, fears of a tight labor market that could make it difficult to hire short-term staff led the Bureau to plan aggressive recruitment of field staff from the outset. Generous funding made it possible for the Bureau to implement its plans, which included directing local offices to recruit twice as many enumerators as they expected to need; this funding also allowed the Bureau to offer part-time work schedules and above-minimum wages (which differed according to prevailing area wages).

Pressures to Expedite Follow-Up

The Census Bureau not only encouraged local offices to hire more enumerators than they expected to need, it also encouraged them to expedite nonresponse follow-up efforts. The U.S. General Accounting Office (2002a:23) documented that the Bureau “developed ambitious interim stretch goals. These goals called on local census offices to finish 80 percent of their nonresponse follow-up workload within the first 4 weeks of the operation and be completely finished by the eighth week,” even though the master schedule allowed 10 weeks for completion.

Evaluation for Timeliness

The Bureau’s recruitment strategy and imposition of stretch goals for nonresponse follow-up were very successful in terms of timely completion of the workload. Every local office attracted at least three applicants for each enumerator position to be filled, and about 80 percent of the offices achieved their recruiting goal of hiring twice as many enumerators as they were likely to need. Enumerator pay that exceeded locally prevailing wages, together with effective regional office management, was a strong determinant of local office recruiting performance (Hough and Borsa, 2003:8–9, citing Westat, 2002b). A midstream assessment of nonresponse follow-up concluded that it was going well in most offices (U.S. General Accounting Office, 2000c). A large percentage of offices finished within the stretch-goal target of 8 weeks, and all offices finished within 9 weeks—a week ahead of schedule. The timely completion of nonresponse follow-up was a major achievement of the 2000 census, which an evaluation commissioned by the Census Bureau attributed primarily to success in recruiting sufficient numbers of qualified applicants and retaining enumerators as long as they were needed (Hough and Borsa, 2003:12, citing Westat, 2002a).

Evaluation for Accuracy of the Population Count

One might expect that the success in completing nonresponse follow-up ahead of schedule, and, similarly, in fielding a more focused coverage improvement follow-up effort than in 1990 (see Appendix C.3), would contribute to fewer duplicates and other types of erroneous enumerations in 2000 than in 1990. In 1990, questionnaires for occupied housing units with later check-in dates (the date of entering the Census Bureau’s processing system) were more likely to include erroneous enumerations than were returns checked in earlier. Specifically, the percentage of erroneous enumerations increased from 2.8 percent for questionnaires checked in through April 1990 (largely mail returns), to 6.6 percent, 13.8 percent, 18.8 percent, and 28.4 percent, respectively, for those checked in during May, June, July, and August or later (largely enumerator-obtained returns).10

Although the correlation between timing of receipt and accurate coverage of household members on a questionnaire may be spurious, there are several plausible reasons to support such a relationship. For example, people who moved between Census Day and follow-up could well be double-counted—at both their Census Day residence and their new residence (e.g., people in transit from a southern winter residence to a northern summer residence or college students in transit between home and dormitory around spring or summer vacation). More generally, the later a household was enumerated, the less accurately the respondent might have described the household membership as of Census Day.

Given the delays in nonresponse follow-up in 1990, it appears that as much as 28 percent of the workload was completed after June 6, when erroneous enumeration rates were 14 percent or higher. Almost 10 percent of the nonresponse follow-up workload was completed in July or August. (These workload percentages do not include coverage improvement follow-up.) We have only limited information on the relationship of erroneous enumerations to the timing of enumeration in the 2000 census. Estimates from the 2000 E-sample indicate that nonresponse follow-up enumerations overall had a higher percentage of erroneous enumerations (15.5 percent) than did mail returns overall (5.3 percent).11 Also, in the A.C.E. Revision II estimation, late enumerator returns (defined as those received after June 1) had higher estimated erroneous enumeration rates than did early enumerator returns (see U.S. Census Bureau, 2003c:Table 6). Furthermore, 90 percent of the nonresponse follow-up workload was completed by June 6 and 98 percent by June 20 (Moul, 2003:App.G). Some returns for occupied units (2.4 million) were obtained through coverage improvement follow-up (which began at the end of June), but these were less than 10 percent of the total number of returns obtained for occupied units in nonresponse and coverage improvement follow-up (see Moul, 2002, 2003). Hence, although we cannot be sure, it is possible that the speedier completion of nonresponse follow-up in 2000 helped reduce erroneous enumerations.

10  

These estimates were derived from the 1990 Post-Enumeration Survey—see Ericksen et al. (1991:Table 2).

11  

Tabulations by panel staff from U.S. Census Bureau, E-Sample Person Dual-System Estimation Output File, provided to the panel February 16, 2001; weighted using the median value of TESFINWT within households. TESFINWT is the final weight, adjusted for the targeted extended search, for the E-sample cases that were included in the dual-systems estimation (see Appendix E.5).

Intensive analysis of the Accuracy and Coverage Evaluation (A.C.E.) data demonstrated a large number of duplicate enumerations in 2000 (see Chapter 6), which could have been partly due to errors in follow-up processes. Research on the effects of both nonresponse follow-up and coverage improvement follow-up on correct versus erroneous enumerations would be useful.

Evaluation for Effects on Differential Coverage of Population Groups

One evaluation of both the nonresponse follow-up and coverage improvement follow-up efforts in 2000 found that enumerator returns included higher percentages of traditionally hard-to-count groups (children, renters, minorities) compared with mail returns (Moul, 2003:Tables 18–22). Therefore, as was also true in 1990, these operations were important for helping to narrow differences in net undercount rates among groups. Thus, for household members enumerated in 2000:

  • 45 percent of nonresponse and coverage improvement enumerations were of people living in rented units, compared with 25 percent renters on mail returns;

  • 51 percent of nonresponse and coverage improvement enumerations were of men, compared with 48 percent men on mail returns;

  • 59 and 57 percent of nonresponse and coverage improvement enumerations, respectively, were of people under age 35, compared with 45 percent people under age 35 on mail returns;

  • 18 percent of nonresponse and coverage improvement enumerations were of Hispanics, compared with 12 percent Hispanics on mail returns;

  • 18 and 17 percent of nonresponse and coverage improvement enumerations, respectively, were of blacks, compared with 10 percent blacks on mail returns.

Evaluation for Completeness of Data Content

The U.S. General Accounting Office (2002a) conducted a study to determine whether pressure from headquarters on local census offices to speed the completion of nonresponse follow-up may have resulted in higher percentages of incomplete enumerations. The GAO examined data on when local offices finished their workload to see whether early closers had higher percentages of “partial” or “closeout” cases (as recorded by enumerators on the questionnaires). Partial interviews were defined as having less than the minimum amount of information for a complete interview but at least housing unit status and, for occupied units, the number of residents (see Moul, 2003:3 for the requirements for a complete interview). Closeout interviews were those obtained once a crew leader’s assignment area within a local census office reached a 95 percent completion rate, at which time enumerators made a final visit to each missing address to try to obtain a complete interview or, at a minimum, the unit status and number of residents.

The GAO analysis found no relationship between the week of completion and the percentage of partial or closeout interviews. Nor did the analysis find a relationship between the week of completion and the percentage of residual workload,12 or between week-to-week spikes or surges in production and the percentage of closeout or partial interviews.

These findings suggest that the pace of the follow-up work did not reduce the quality of the data. However, it would be useful to supplement this analysis with a study of the possible relationship between the speed of completion and the percentage of proxy interviews conducted with a neighbor or landlord and between the speed of completion and missing data rates, particularly for long-form items.

4–C.4 Timeliness: Summary of Findings

The Census Bureau employed a variety of innovative strategies in 2000 to facilitate the timely execution of the census, including not

12 The residual workload consisted of addresses in the original nonresponse follow-up workload for which questionnaires were not processed through data capture; local offices had to conduct residual nonresponse follow-up on these cases (122,000 of an original workload of 42 million) at a later date.


only strategies to improve mail response (see Section 4-B), but also strategies to facilitate data capture and ensure an adequate work force for nonresponse follow-up, as discussed in Section 4-C. A concern with an aggressive strategy for completion of nonresponse follow-up is that it could have led to higher rates of missing and erroneous data. The evidence to date suggests that the use of new data capture procedures and technology and aggressive goals for enumerator recruitment and work completion were important innovations that had positive effects on timeliness while not impairing data quality.

Finding 4.2: Contracting for selected data operations, using improved technology for capturing the data on the questionnaires, and aggressively recruiting enumerators and implementing nonresponse follow-up were significant innovations in the 2000 census that contributed to the timely execution of the census.

4–D TIMELINESS VERSUS COMPLETENESS: GREATER RELIANCE ON COMPUTERS TO TREAT MISSING DATA

4–D.1 Strategy

The 2000 census used computers whenever possible to replace tasks that were previously performed in part by clerks or enumerators. Notably, questionnaires went directly to one of the four processing centers for data capture instead of being reviewed first by clerks in local census offices, as occurred for much of the workload in 1990. Editing and imputation of individual records to supply values for missing responses to specific questions or to reconcile inconsistent answers were handled entirely by computer; there was no clerical review or effort to telephone or revisit households to obtain more content information, as occurred in 1990 and previous censuses. Mail returns for households with more than six members, and some other returns that appeared not to have filled out the basic information for one or more members, were followed up by telephone to collect the information for the missing members. However, in contrast to 1990, there was no attempt to collect missing items for already


enumerated household members, and there was no field follow-up when telephone follow-up was unsuccessful (see Appendix C.5).

After completion of all follow-up procedures, computer routines were used as in previous censuses, not only to impute responses for individual missing items but also to supply census records for household members that were missing all basic characteristics and for whole households when not even household size was known for the address. These imputation routines used records from neighboring households or people who matched as closely as possible whatever information was available for the household or individual requiring imputation (see Appendixes F, G, and H). The advantages expected from greater computerization of data processing included savings in cost and time to complete the data records. Also, it was expected that computer systems for editing and imputation would be better controlled and less error-prone than clerical operations.

4–D.2 Evaluation

Evaluation of Computer Data Processing Systems for Timeliness

The 2000 census computer systems for data processing appear to have worked well. Although delays in developing specific systems occurred because of the delays in determining the final census design, completion of software at the last minute appears to have had little adverse effect on the timing of other operations. Software development problems did delay the implementation of the coverage edit and telephone follow-up operation by a month (the operation began in late May instead of late April—see Appendix C.5.b). Moreover, the requirements for data processing operations were often not fully specified in advance, and much of the software was developed without adequate testing and quality assurance procedures, which put many data processing steps at risk (see Alberti, 2003:32–33; U.S. General Accounting Office, 2000a; U.S. Department of Commerce, Office of Inspector General, 1997). Fortunately, there were no actual breakdowns in the performance of critical software systems, although errors occurred in specific routines, such as the processing of information on forms obtained from enumerators about occupancy status and write-in entries for race, which are discussed below.

Evaluation of Imputation Procedures for Accuracy of the Population Count

The 2000 census, like previous censuses, included records for people who were wholly imputed because they lacked even basic information. In some cases, every household member lacked basic information, and the household required “whole-household imputation” or “substitution” (the Census Bureau’s term), which was performed by duplicating the record of a nearby household.

Box 4.2, under subheadings “B” and “C,” defines the five types of situations in which whole-person imputation (as distinct from individual item imputation, “A”) may be required in the census (see also Appendix G). Table 4.1 shows the number of each of these types of imputations and their percentage of the population for the 1980, 1990, and 2000 censuses. In total, wholly imputed people represented a small percentage of the population in each year: 1.6 percent (3.5 million people) in 1980, 0.9 percent (2 million people) in 1990, and 2.2 percent (5.8 million people) in 2000. The larger numbers of wholly imputed people of all types in 2000 compared with 1990 help explain part of the reduction in differences in net undercount rates between children and adults, renters and owners, and minorities and non-Hispanic whites. If these people had not been imputed into the census, there would have been a net undercount of about 1.7 percent overall (2.2 percent whole-person imputations minus the 0.5 percent estimated net overcount), and differences in net undercount rates among population groups would have been wider (see Section 6-C.1).
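The arithmetic behind the 1.7 percent figure can be made explicit. The values below are the approximate percentages cited in the text, not exact Bureau estimates.

```python
# Back-of-the-envelope check: removing the wholly imputed people from
# the 2000 count turns the estimated 0.5 percent net overcount into
# roughly a 1.7 percent net undercount.
net_overcount_pct = 0.5             # estimated net overcount, 2000
whole_person_imputations_pct = 2.2  # wholly imputed people, 2000

net_undercount_without_imputations = (
    whole_person_imputations_pct - net_overcount_pct
)
```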

The largest group of wholly imputed people in 2000 comprised those in mail return households that did not provide characteristics for all of their members (type 1). These people were genuine household enumerations; the computer was used to impute their basic characteristics, item by item, on the basis of the considerable information known about the household. The much larger number of type 1 imputations in 2000 compared with 1990 and 1980 was a direct outcome of the decision in 2000 to reduce the space on the questionnaire for reporting household member characteristics and to follow up by telephone only, with no field follow-up as occurred in 1990, those households with more members than the form had room to report. Type 1 imputations were not required for large households enumerated


Box 4.2
Imputation Types for Basic (Complete-Count) Characteristics

Following data capture from census questionnaires and the various levels of nonresponse follow-up, the Census Bureau uses editing and imputation procedures to fill in apparent gaps. The level of imputation required depends on the number of household members who are data defined—that is, whose census records have at least two basic data items reported (counting name as an item). In descending order of known information—and ascending order of required imputation—the various types of imputation performed in the census are:

  A. Item imputation. All members of a household are data defined, but some basic items are not reported or are reported inconsistently; these missing values are supplied through “hot-deck” imputation (termed allocation by the Census Bureau) or through procedures called assignments or edits. Broadly speaking, edit and assignment procedures make use of other information provided by the person; imputation procedures make use of information from other household members or a similar individual in a similar, nearby household.

  B. Whole-person imputation. At least one member of a household is data defined as in (A), but not all members are so defined.

    1. Individual person(s) imputed in an enumerated household. For the members of the household who are not data defined, all basic information is imputed or assigned, item by item, on the basis of information about the other household members. An example in 2000 would be a household of seven members that had data reported for six members, and the telephone follow-up failed to obtain information for the seventh person on the household roster (the mail questionnaire allowed room to report characteristics for only six members instead of seven as in 1990). This type 1 imputation is called whole-person allocation by the Census Bureau.

  C. Whole-household imputation. There is no data-defined person at the address. Imputation is performed using information from a similar, nearby household or address. Collectively, types 2–5 below are termed whole-household substitution by the Census Bureau.

    2. Persons imputed in a household for which the number of residents is known (perhaps from a neighbor or landlord), but no characteristics are available for them.

    3. Persons imputed in a housing unit known to be occupied for which there is no information on household size.

    4. Persons imputed in a housing unit for which occupancy status and household size have to be imputed first (from among housing units for which occupancy or vacancy status is not known).

    5. Persons imputed in a housing unit for which housing unit status and occupancy status have to be imputed first (from among addresses for which not even status as a housing unit is known).

Types 3–5 were the focus of legal action by the state of Utah. In June 2002, the U.S. Supreme Court declined to characterize such imputation as “sampling,” and hence permitted its use to contribute to state population counts for congressional reapportionment (see Box 2.2).
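The sequential "hot-deck" approach to item imputation mentioned under subheading A can be sketched as follows. This is a minimal illustration of the general technique, not the Census Bureau's actual specification: the fields (`block`, `tenure`), the imputation class, and the cold-deck starting value are all hypothetical.

```python
# Hedged sketch of sequential hot-deck item imputation: a missing item
# is filled from the most recently processed reported value in the same
# imputation class (here, the same block), falling back to a fixed
# "cold-deck" starting value when no donor has been seen yet.
def hot_deck_impute(persons, item, class_key, cold_deck_value):
    """Fill missing `item` values in place from the last reported value
    seen in the same imputation class."""
    last_donor = {}  # imputation class -> most recent reported value
    for p in persons:
        key = p[class_key]
        if p.get(item) is None:
            p[item] = last_donor.get(key, cold_deck_value)
        else:
            last_donor[key] = p[item]
    return persons

people = [
    {"block": "A", "tenure": "owner"},
    {"block": "A", "tenure": None},   # filled from prior donor in block A
    {"block": "B", "tenure": None},   # no donor yet: cold-deck value used
    {"block": "B", "tenure": "renter"},
]
hot_deck_impute(people, "tenure", "block", cold_deck_value="owner")
```

Because the donor is always a nearby, recently processed record, hot-deck imputation tends to preserve local distributions of the imputed item, which is the property the Bureau's allocation procedures rely on.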


Table 4.1 People Requiring Imputation of All Basic Characteristics by Type of Imputation, 2000, 1990, and 1980 Censuses

                                                           2000a     1990b     1980b

As Percentage of Household Population:
  Whole-Person Imputations in Enumerated Households
    (type 1)                                                0.90      0.20      0.10
  Whole-Person Imputations in Wholly Imputed Households:
    Characteristics (type 2)                                0.83      0.64      1.17
    Count, Occupancy Status, Housing Status (types 3–5)     0.43      0.02      0.34
      Count (type 3)                                        0.18       —         —
      Occupancy Status (type 4)                             0.10       —         —
      Housing Status (type 5)                               0.15       —         c
  Subtotal, types 2–5                                       1.26      0.66      1.51
  Total, types 1–5                                          2.16      0.86      1.61

Number of Persons (millions):
  Whole-Person Imputations in Enumerated Households
    (type 1)                                                2.330     0.373     0.152
  Whole-Person Imputations in Wholly Imputed Households:
    Characteristics (type 2)                                2.270     1.547     2.580
    Count, Occupancy Status, Housing Status (types 3–5)     1.171     0.054     0.761
      Count (type 3)                                        0.496      —         —
      Occupancy Status (type 4)                             0.260      —         —
      Housing Status (type 5)                               0.415      —         c
  Subtotal, types 2–5                                       3.441     1.601     3.341
  Total, types 1–5                                          5.771     1.974     3.493

NOTES: See Box 4.2 for definitions of imputation types. —, not available separately.

a Tabulations by panel staff of U.S. Census Bureau, File of Census Imputations by Poststratum, provided to the panel July 30, 2001 (Schindler, 2001).

b Calculated from Love and Dalzell (2001). Whole-person imputations in enumerated households include a small number of whole-person imputations for group quarters residents.

c Housing status imputation (type 5) was not used in 1980.


in person because of the use of continuation forms (see Section 3-B.2). However, in list/enumerate areas, because of a data processing error, the continuation forms for large households were lost, and the imputation process may not have imputed all of the people on the lost forms back into the census (Rosenthal, 2003b:6).

The numbers of people requiring imputation in households for which only occupancy status and household size were reported (type 2) were roughly similar in all three censuses. The larger number of wholly imputed people in households of types 3–5 (for which, at a minimum, household size had to be imputed) in 2000 compared with 1990 and even 1980 is difficult to explain, although processing errors may have contributed to this outcome (see end of this section). Information is not available with which to evaluate the reasonableness of the numbers of type 4 and 5 whole-household imputations as percentages of the numbers of addresses for which imputations could have been possible; that is, the Census Bureau has not provided the denominators with which to calculate imputation rates for these two categories.

Whole-person and whole-household imputations varied by geographic area. In particular, the small number of people imputed in 2000 when it was not even clear whether the address was a housing unit (type 5) were concentrated in rural list/enumerate areas (e.g., the Adirondacks region of New York State, rural New Mexico).13 In these areas, enumerators developed an address list and enumerated the units at the same time. Although there was a follow-up operation to recheck the status of units that the enumerators classified as vacant (Hough and Borsa, 2003:21), the status of some units was apparently not resolved.

A question is how many of the imputed people in households for which a household size had to be imputed (types 3–5) represented correct enumerations—some of them may have been erroneous in that the household did not contain as many people as were imputed into it. Alternatively, not enough people may have been imputed into some households. A related question is whether the basic characteristics assigned to imputed people in all of the imputation types accurately reflected the true distributions.

13 From analysis by panel staff of U.S. Census Bureau, Census Tract Imputation File, provided to the panel April 4, 2002.


There are limited pieces of evidence that suggest some problems in the imputation of whole persons. An administrative records experiment conducted in five counties in which national records were matched with 2000 census addresses found that, when the linked census record was not imputed, 51 percent of the matches agreed on household size, compared with only 32 percent agreement when the linked census record was a whole-household imputation. Moreover, while the household size discrepancies involving links of administrative records to nonimputed census households were symmetric, those involving links to imputed census households were asymmetric. Thus, 41 percent of imputed census households were larger in size than the linked administrative households, while 27 percent of imputed census households were smaller in size than the linked administrative households (Bauder and Judson, 2003:24). What fraction of the discrepancies were due to errors in the imputations or in the administrative records is not known.
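The agreement and asymmetry measures reported in this comparison can be expressed compactly. The linked household-size pairs below are made up for illustration; only the form of the calculation follows the study.

```python
# Illustrative sketch of the household-size comparison described above:
# for linked census/administrative-record households, compute the shares
# that agree on size, where the census household is larger, and where it
# is smaller. An asymmetric split (larger share >> smaller share) is the
# pattern the study found for imputed census households.
def size_comparison(pairs):
    """pairs: (census_size, admin_size) tuples for linked households."""
    n = len(pairs)
    agree = sum(c == a for c, a in pairs)
    census_larger = sum(c > a for c, a in pairs)
    census_smaller = sum(c < a for c, a in pairs)
    return {
        "agree_pct": 100 * agree / n,
        "census_larger_pct": 100 * census_larger / n,
        "census_smaller_pct": 100 * census_smaller / n,
    }

# Hypothetical linked pairs, not data from Bauder and Judson (2003).
result = size_comparison([(3, 3), (4, 2), (2, 3), (5, 5)])
```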

In addition, Alberti (2003) documents two data processing problems that resulted in larger numbers of households requiring occupancy status imputation (type 4) and housing status imputation (type 5). First, because of an error in processing enumerator forms, about 145,000 housing units were classified as occupancy status unknown, when they were all most probably vacant at the time of the census and none of them should have been imputed as occupied (Alberti, 2003:34–35). These units represented 74 percent of all of the units eligible for occupancy imputation (type 4 imputation; Alberti, 2003:26). Second, because over 2.3 million group quarters responses that indicated a usual residence elsewhere were erroneously sent to the process for verifying these and other newly identified addresses (see Section 4-F.2), the verification process encountered delays. As a consequence, the data for 207,000 housing units were not included in the census, and their status as a housing unit (or as a record to be deleted from the census) had to be imputed. These units represented 70 percent of all of the units eligible for housing status imputation (type 5 imputation; Alberti, 2003:33–34).

Evaluation of Imputation Procedures for Accuracy of Basic Data

Response to the basic data items for enumerated (data-defined) people on the short and long forms—age, sex, race, ethnicity,


household relationship, and housing tenure—was good in 2000. Rates of missing data for individual basic items were low (1–4 percent), as was also generally the case in 1990 (although some population groups and geographic areas had higher basic item imputation rates—see Sections 7-B and 8-C.2). Moreover, it was often possible to infer a missing value from other information on the household’s own questionnaire instead of having to use information from neighboring households. The decision to capture names for all enumerations helped in this regard, as names could be used in assigning sex and, in some instances, ethnicity. The Bureau made additional improvements in its editing and imputation routines for missing and inconsistent basic items (see Appendix G), although an error in the processing of write-in entries for race led to an overestimate of people reporting more than one race (see Chapter 8). Assuming that the Bureau otherwise maintained good quality control of the basic data editing and imputation specifications and implementation in 2000, the use of computer routines to provide values for specific missing items should have had little adverse effect on the quality of the basic items. The resulting data products are more complete and therefore useful for a broader range of purposes.

Evaluation of Imputation Procedures for Accuracy of Additional Long-Form Data

In contrast to the basic items, and of considerable concern, missing data rates for the additional long-form-sample population and housing items were in many cases high and generally higher than the comparable rates in 1990 (see Section 7-C and Appendix H). The Census Bureau relied for imputation of these items on procedures that it has used for many censuses with little evaluation of their appropriateness or effectiveness. We recommend research on imputation procedures in Chapter 7 (see also Appendix F). Below we recommend tests of the effects on response accuracy of the 2000 strategy of relying entirely on computerized imputation for missing values, compared with the intensive telephone and field follow-up efforts used in 1990 to reduce the extent of missing data. A third alternative of following up in the field a sample of households with missing data should also be investigated, as it could balance


considerations of cost and timing and the need for accuracy better than the two extremes.

4–D.3 Reliance on Imputation: Summary of Findings and Recommendations

A major design strategy for the 2000 census was to limit as much as possible costly, time-consuming follow-up (particularly in-person follow-up) for missing data and instead to rely more heavily than in 1990 on computer-based imputation. Two considerations initially led to the adoption of this strategy. The first was the desire to contain census costs. The second was the desire to allow enough time in the schedule for completion of all coverage evaluation activities so that the state population totals for reapportionment could incorporate the results of a large independent survey under the originally proposed Integrated Coverage Measurement design (ICM—see Section 3-C.2). Even after the ICM design was discarded, the Census Bureau retained its plan to limit follow-up in favor of more extensive use of imputation.

In contrast, the 1990 census design emphasized the role of repeated telephone and field follow-up to reduce not only the extent of missing content but also the number of addresses for which no information was obtained about occupancy status or the number and characteristics of residents. In this way, the Census Bureau hoped to defuse possible arguments on whether the use of whole-household imputation constituted statistical adjustment (see Sections 3-A.1 and 6-C.1).

A concern about greater reliance on imputation in place of follow-up efforts is that the imputed data would contain more error than the information that could have been obtained by recontacting the household. To date there is little evidence on this point. Given the higher rates of imputation in 2000 than in 1990, particularly for individual people and for many long-form items, the Census Bureau should conduct experiments to test the relative costs and accuracy of more imputation versus more follow-up before deciding whether to continue the 2000 strategy in 2010 (see also Appendix F on imputation methods). Assessing relative accuracy is not easy because of the problem of establishing a reference data source that can serve as a “gold standard.” A possible method for tackling this


problem is to use multiple data sources, including household surveys, administrative records, and content reinterviews (see Chapter 9). Such research should investigate the cost-effectiveness of field follow-up of a sample of households and people with missing data. The data from such a follow-up effort should permit the development of imputation models that much more accurately reproduce the characteristics of nonrespondents, compared with models that are limited (as in 2000) to using respondents’ characteristics for imputation to nonrespondents.

Finding 4.3: The greater reliance on imputation routines to supply values for missing and inconsistent responses in 2000, in contrast to the greater reliance on telephone and field follow-up of nonrespondents in 1990, contributed to the timely completion of the 2000 census and to containing the costs of follow-up. It is not known whether the distributions of characteristics and the relationships among characteristics that resulted from imputation (particularly of long-form content) were less accurate than the distributions and relationships that would have resulted from additional follow-up.

Recommendation 4.2: Because the 2000 census experienced high rates of whole-household nonresponse and missing responses for individual long-form items, the Census Bureau’s planning for the 2010 census and the American Community Survey should include research on the trade-offs in costs and accuracy between imputation and additional field work for missing data. Such research should examine the usefulness of following up a sample of households with missing data to obtain information with which to improve the accuracy of imputation routines.

4–E MASTER ADDRESS FILE DEVELOPMENT: FLAWED EXECUTION

4–E.1 Strategy and Implementation

With the bulk of the population enumerated by mailout/mailback or update/leave/mailback techniques, the quality of the 2000 address


list was essential to the completeness and accuracy of population coverage. The Census Bureau early in the 1990s made a decision to partner with other organizations and use multiple sources to develop the MAF. Contributing to the 2000 census MAF (termed the Decennial MAF or DMAF by the Census Bureau) were the 1990 address list augmented by updates from the U.S. Postal Service (in mailout/mailback areas), a full block canvass by Census Bureau staff, input from localities that participated in the Local Update of Census Addresses (LUCA) Program, and census field operations (see Vitrano et al., 2003a:3–5).

The goal of using multiple sources to build as complete a list as possible was a worthy one. However, because many of the procedures were new, implementation was not always smooth:

  • The decision to conduct a complete (instead of targeted) block canvass in urban areas in 1998 was made late in the decade and required substantial additional funding to implement. The change became necessary when the Census Bureau learned that the U.S. Postal Service Delivery Sequence File (DSF) was not accurate enough or sufficiently up to date to serve as the primary means of updating the 1990 census address list (see Section 3-B.1).

  • The decision to provide localities in city-style-address areas an opportunity to add addresses for units newly constructed in January–March 2000 was made even later, in March 1999.

  • Original plans for a sequential series of steps in the LUCA Program, involving back-and-forth checking with localities, had to be combined under pressures of time, and many LUCA components were delayed by the Census Bureau (see Table 4.2). Except for the stage of appealing to the U.S. Office of Management and Budget, localities were not given additional time for their review.

  • Questionnaire labeling had to occur before the Bureau had the opportunity to check most of the addresses supplied by LUCA participants, increasing the opportunity for erroneous enumerations, such as duplications.

  • Integration of the address list for special places (group quarters) with the housing unit address list was delayed, and, consequently, local review of the special places address list was delayed and often did not occur.

Table 4.2 Original and Actual Timelines for the Local Update of Census Addresses (LUCA) Program

Planned Dates                 Actual Dates                  Activity

LUCA98a
November 1997                 February 1998                 Census Bureau sent invitation letters to eligible localities
April–August 1998             May 1998–March 1999           Bureau sent initial materials (address list, maps) to participants
May–December 1998             May 1998–June 1999            LUCA participants conducted initial review of materials from Bureau
Not part of original plan     January–May 1999              Bureau conducted full (instead of targeted) block canvassing
May–October 1999              July–December 1999            Bureau verified participants’ addresses in the field (reconciliation) (original plan was to send results to localities to obtain feedback before sending final determination materials)
March–November 1999           October 1999–February 2000    Bureau sent detailed feedback/final determination materials to participants
Original deadline, January 14 November 1999–April 3, 2000   LUCA participants filed appeals (addresses were visited in coverage improvement follow-up)
April–August 1998             December 1999–April 2000      Special Places LUCA (Bureau did not complete Special Places list until November 1999)
Added operation               January–March 2000            Participating localities submitted new construction addresses (addresses were visited in coverage improvement follow-up)

LUCA99b
July 1998–February 1999       July 1998–February 1999       Census Bureau field staff listed addresses
September–October 1998        September–October 1998        Bureau sent invitation letters to eligible localities
January–April 1999            January–August 1999           Bureau sent initial materials (block counts, maps) to participants
January–April 1999            January–October 1999          LUCA participants conducted review of initial materials from Bureau
March–May 1999                May–October 1999              Bureau verified participants’ block counts in the field (reconciliation)
March–June 1999               September 1999–February 2000  Bureau sent detailed feedback/final determination materials to participants
Original deadline, January 14 October 1999–April 3, 2000    LUCA participants filed appeals for specific addresses
January–April 1999            December 1999–April 2000      Special Places LUCA

a This program was conducted in areas with mostly city-style addresses.

b This program was conducted in areas with mostly rural route and post office box addresses.

SOURCE: Adapted from Working Group on LUCA (2001:Figure 1).

  • Significant numbers of errors in assigning individual group quarters to geographic areas occurred, perhaps because they were coded to the address of the special place (e.g., a university) and not to the location of the specific group quarters.

  • In some mailout/mailback areas Postal Service workers returned several million questionnaires as undeliverable because the address label used a city-style street name and house number when the Postal Service delivery address was to a post office box. The Census Bureau had to arrange for enumerators to redeliver the questionnaires to these addresses, which should have been included in the update/leave operation. They were instead included in the mailout/mailback operation because of inappropriate designation of the “blue line” separating mailout/mailback from update/leave areas (see Appendix C.1.a). Inaccuracies in designating the blue line also caused problems for LUCA participants (Working Group on LUCA, 2001:Ch. 4).

The Bureau recognized early on that the MAF was at risk of including duplicate and other erroneous addresses. The risk of omitting valid addresses was also present, but MAF procedures were expected to reduce the level of omissions from previous censuses. An increased risk of including duplicate addresses in the 2000 MAF resulted not only from the planned use of multiple sources but also from the operational problems just reviewed. To minimize duplication, the Bureau used a combination of field checking and internal consistency checks of the MAF file (see Appendix C.1.d). One computer check was performed prior to nonresponse follow-up; another check—not included in the original plans—was performed in summer 2000.

The special summer unduplication operation resulted from evaluations of the MAF conducted by Population Division staff between January and June 2000, in which MAF housing unit counts were compared to estimates prepared from such sources as building permits. The results led the Census Bureau to conclude that there were probably still a sizable number of duplicate housing unit addresses on the MAF despite prior computer checks. Field verification carried out in June 2000 in a small number of localities substantiated this conclusion.

Consequently, the Bureau mounted an ad hoc operation to identify duplicate MAF addresses and associated census returns. Housing unit and person records flagged as likely duplicates were deleted from the census file and further examined. After examination, it was determined that a portion of the deleted records were likely to be separate housing units not already included in the census, and they were restored to the census file. At the conclusion of the operation, 1.4 million housing units and 3.6 million people were permanently deleted from the census; 1 million housing units and 2.4 million people were reinstated (see Table 4.3, which charts additions to and deletions from the MAF due to census operations in 2000).
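The Bureau's duplicate identification relied on matching address records (and, in the summer 2000 operation, person records as well). A minimal sketch of the address-matching idea, using a hypothetical normalized-key approach — the Bureau's actual matching rules were far more elaborate, and the field and record layouts here are invented for illustration:

```python
from collections import defaultdict

def flag_likely_duplicates(records):
    """Group address records that collide on a normalized key.

    records: list of dicts with 'house_number', 'street', 'unit', 'zip'.
    Returns lists of record indices that share a key; these are
    candidates for review, not automatic deletion.
    """
    groups = defaultdict(list)
    for i, rec in enumerate(records):
        key = (
            rec["house_number"].strip(),
            rec["street"].strip().upper(),
            rec["unit"].strip().upper(),
            rec["zip"].strip(),
        )
        groups[key].append(i)
    return [ids for ids in groups.values() if len(ids) > 1]

maf_records = [
    {"house_number": "12", "street": "Main St", "unit": "A", "zip": "20233"},
    {"house_number": "12", "street": "main st ", "unit": "a", "zip": "20233"},
    {"house_number": "14", "street": "Main St", "unit": "", "zip": "20233"},
]
print(flag_likely_duplicates(maf_records))  # [[0, 1]]
```

As the text describes, records flagged this way were first deleted from the census file and then partially reinstated after further examination.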

4–E.2 Overall Assessment of MAF Development

We assess the MAF development process as sound in concept but flawed in execution. On the plus side, all elements of the MAF development process were completed despite the various implementation problems noted. Moreover, delays in component MAF operations did not appear to affect other census operations, such as mailout, field follow-up, and data processing.14

With regard to accuracy, however, errors detected in the MAF required an ad hoc unduplication operation at the height of census processing—a laudable effort, but one that subsequent evaluation determined to contain some flaws (Mule, 2002a). Of the 1.4 million housing unit records deleted from the census, 0.3 million were not in fact duplicates; in contrast, of the 1 million potential duplicate housing unit records that were reinstated, 0.7 million were in fact duplicates that should not have been reinstated.15 On net, the unduplication operation failed to delete an additional 0.4 million duplicate housing unit records. However, some of the reinstated duplicates were retained to compensate for omissions in small multiunit structures that lacked individual apartment addresses. For example, the household in the apartment designated as unit “A” on the MAF by the Census Bureau may have picked up and completed the questionnaire for unit “B,” while the “B” household failed to respond. In follow-up the Census Bureau would return to apartment “A” and obtain a second interview. Reinstating the second questionnaire for “A” would effectively result in a whole-household imputation for “B” (see Hogan, 2000b).

14  

However, the special unduplication effort complicated estimation for the Accuracy and Coverage Evaluation Program, as reinstated people could not be included in the A.C.E. matching process (see Section 6-A.4).

15  

This estimate is derived from estimates by Mule (2002a) of people who were erroneously reinstated from the special unduplication operation.

Table 4.3 Additions to and Deletions from the 2000 MAF from Major Census Operations in 2000

Addresses Added from Field Operations
  During questionnaire delivery (update/leave, update/enumerate, list/enumerate areas): 2.3 million
  During nonresponse and coverage improvement follow-up: 1.7 million
  Total addresses added from listed operations: 4.0 million adds

Addresses Deleted as Nonexistent or Duplicative from Computer Checks Prior to Nonresponse Follow-Up
  Nonexistent deletions (mainly LUCA addresses that had not been verified): 2.5 million
  Duplicate deletions (from LUCA and other sources): 1.1 million

Addresses Deleted as Nonexistent or Duplicative from Nonresponse and Coverage Improvement Follow-Up: 5.4 million

Addresses Deleted as Duplicative from Ad Hoc Computerized Unduplication Operation in Summer 2000
  Addresses deleted originally: 2.4 million
  Addresses restored upon further examination: 1.0 million
  Net deletions: 1.4 million

Total deletions from listed operations: 10.3 million deletions

MAF at Completion of Census: 115.9 million housing unit addresses

NOTE: The housing unit coverage study (Barrett et al., 2001, 2003) estimated additional duplicates and other erroneous enumerations, as well as omissions, in the MAF; see text.

SOURCE: Farber (2001b: Tables 1, 2); Miskura (2000a); Nash (2000); Mule (2002a); for details of operations, see Appendix C.1.

Duplicates and other erroneous housing unit addresses in the MAF that were not involved in the special unduplication operation were probably present in the MAF, and the MAF probably omitted some valid addresses as well. A housing unit coverage study, which developed dual-systems estimates for housing units from the A.C.E. data (Barrett et al., 2001:Table 2), estimated that 1.5 percent of occupied housing units in the A.C.E. E-sample of census enumerations (excluding reinstated records) were erroneously enumerated (about 1.6 million units, weighted up to national totals) and that 2.6 percent of occupied housing units in the independent A.C.E. P-sample were omitted from the MAF (about 2.7 million units, weighted). The dual-systems estimate of occupied units, when compared with the census count (including reinstated records), produced an estimated net undercount of 0.3 percent, or about 0.4 million net occupied units omitted from the census.16
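The dual-systems logic behind these housing unit estimates can be sketched in a few lines. The rates below are illustrative only: the published figures rest on weighted samples and on special handling of reinstated records, so this unweighted toy calculation does not reproduce the 0.3 percent net undercount estimate cited above.

```python
def dual_systems_estimate(census_count, erroneous_rate, omission_rate):
    """Unweighted capture-recapture estimate of the true number of units.

    erroneous_rate: share of census enumerations judged erroneous (E-sample).
    omission_rate: share of independent P-sample units missing from the MAF.
    """
    correct_enumerations = census_count * (1 - erroneous_rate)
    coverage_rate = 1 - omission_rate  # chance the census captured a true unit
    return correct_enumerations / coverage_rate

census_count = 105_000_000  # hypothetical occupied-unit count
estimate = dual_systems_estimate(census_count, 0.015, 0.026)
net_undercount_pct = 100 * (estimate - census_count) / estimate
print(round(net_undercount_pct, 1))  # 1.1 in this simplified example
```

The point of the sketch is structural: the estimate scales correct enumerations up by the P-sample coverage rate, so erroneous inclusions and omissions pull in opposite directions and can largely cancel in the net figure.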

By type of structure, the housing unit coverage study estimated that small multiunit structures (those with two to nine housing units) had greater percentages of erroneous enumerations and omissions than either single-family units or large multiunit structures (Barrett et al., 2001:Table 6; see also Jones, 2003a; Ruhnke, 2003). This finding accords with evidence from the special unduplication study and the experience of LUCA participants.

16  

The estimated net undercount of occupied housing units has not been fully reconciled with the estimated net overcount of people. See Robinson and Wolfgang (2002:i–ii), who find broad agreement among coverage patterns for regions and housing tenure between Revision II A.C.E. and the housing unit coverage study, but differences for some race/ethnicity groups. Revision II A.C.E. also estimates greater reductions in differential coverage errors between 1990 and 2000 than does the housing unit coverage study (which was based on the original A.C.E. and never reestimated).


All of these results are subject to sampling and nonsampling error and should be interpreted cautiously. Nonetheless, assuming that the estimates are roughly accurate, it appears that the 2000 MAF may have had a negligible percent net undercount of occupied units. A small net coverage error, however, masked large numbers of gross errors of omission and erroneous inclusion—as many as 2.3 million erroneously enumerated occupied units (an estimated 0.7 million units that should have been deleted in the special unduplication operation, but were not, and 1.6 million erroneously enumerated units estimated from the A.C.E.), and as many as 2.7 million omitted units. The omitted units included not only 1.4 million units that were never on the MAF but also 0.4 million units that were incorrectly dropped from the MAF before Census Day, 0.6 million units that were incorrectly deleted from the MAF on the basis of such operations as nonresponse follow-up, and an estimated 0.3 million units that were incorrectly deleted from the MAF in the special unduplication operation (Vitrano et al., 2003b:Table 33). Without the special unduplication operation, even with its problems, the gross errors in the MAF would have been larger yet.17
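The gross-versus-net arithmetic in the preceding paragraph can be laid out explicitly (figures in millions, taken from the estimates cited above; the "as many as" qualifiers are omitted for simplicity):

```python
# Erroneous inclusions of occupied units (millions)
not_deleted_duplicates = 0.7   # should have been removed in unduplication
ace_erroneous = 1.6            # erroneous enumerations estimated from A.C.E.
erroneous_inclusions = not_deleted_duplicates + ace_erroneous

# Omissions of occupied units (millions)
never_on_maf = 1.4
dropped_before_census_day = 0.4
deleted_in_followup = 0.6
deleted_in_unduplication = 0.3
omissions = (never_on_maf + dropped_before_census_day
             + deleted_in_followup + deleted_in_unduplication)

net_omitted = omissions - erroneous_inclusions   # small net error...
gross_errors = omissions + erroneous_inclusions  # ...masking large gross error
print(round(erroneous_inclusions, 1), round(omissions, 1),
      round(net_omitted, 1), round(gross_errors, 1))  # 2.3 2.7 0.4 5.0
```

The contrast between the 0.4 million net figure and the roughly 5 million gross errors is the central caution of this passage.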

Among the erroneously enumerated occupied housing units identified in the Housing Unit Coverage Study were census units in A.C.E. block clusters that were incorrectly geocoded to a block in a ring of surrounding blocks; such geocoding errors were estimated as 0.4 percent of total housing units (Barrett et al., 2001:Table 10). A study that looked for geocoding errors for census housing units in individual A.C.E. blocks by searching the entire tract and one ring of surrounding census tracts estimated that 4.8 percent of total housing units were misgeocoded (Ruhnke, 2003:iv). Most or all of these geocoding errors, however, would not contribute to gross errors for larger geographic areas (e.g., towns, cities, counties, states).

We do not know whether the errors in the MAF contributed more or less to population coverage errors than did omissions and erroneous inclusions of people within correctly identified housing units. We also know little about the variability in the accuracy of the MAF across population groups and geographic areas that could be due to the LUCA Program and other factors.

17  

Even higher percentages of errors of erroneous enumeration and omission occurred for vacant housing units (Barrett et al., 2001:Table 2). See Vitrano et al. (2003b:65–73) for an analysis of gross errors in the MAF.

4–E.3 Variable Participation in LUCA

All counties, places, and minor civil divisions on the Census Bureau’s list of functioning governmental units—39,051 units in all—were eligible to participate in the Local Update of Census Addresses (LUCA) Program through LUCA 98 (conducted in city-style-address areas), LUCA 99 (conducted in areas with large numbers of rural route and post office box addresses), or both. However, preliminary data show that only 25 percent of eligible units fully participated and that rates of full participation varied across several dimensions (see Table 4.4; Working Group on LUCA, 2001:Ch.2). By full participation, we mean that a government informed the Census Bureau of needed changes to the address list for its area; see Box 4.3.

Factors that relate to participation include:

  • Geographic region. Jurisdictions in some areas of the country participated at higher rates than those in other areas (56 and 37 percent, respectively, of jurisdictions in the Pacific and Mountain states participated, compared with only 18 and 19 percent, respectively, of jurisdictions in the New England and West North Central states).

  • Population size. Jurisdictions with larger populations participated at higher rates than those with smaller populations (75 percent of those with 1 million or more people participated, compared with only 14 percent of those with 1,000 or fewer people).

  • Type of government. Places and counties participated at higher rates (37 and 27 percent, respectively) than minor civil divisions (15 percent).

  • Type of program. Areas eligible for LUCA 98 or both LUCA 98 and LUCA 99 participated at higher rates (42 and 37 percent, respectively) than areas eligible only for LUCA 99 (14 percent).

Table 4.4 Participation of Local Governments in the 2000 Local Update of Census Addresses (LUCA) Program

Columns: (1) number eligible for LUCA 98 only; (2) percent of those participating; (3) number eligible for LUCA 99 only; (4) percent participating; (5) number eligible for both programs; (6) percent of those participating in LUCA 98 only; (7) percent in LUCA 99 only; (8) percent in both; (9) total number eligible; (10) percent participating in one or both programs.

Category                      (1)     (2)      (3)     (4)     (5)     (6)     (7)     (8)      (9)    (10)
Total                       9,044    41.6   21,760    14.2   8,247    17.7     7.6    12.0   39,051    25.4

Geographic Division
  New England                 518    41.5    1,047     6.7      66    12.1     3.0     1.5    1,631    18.1
  Middle Atlantic           2,034    43.3    2,133    16.2     662    19.2     7.0    12.7    4,829    30.7
  East North Central        3,944    35.0    3,187    10.6   3,483    15.4     5.3     5.1   10,614    24.6
  West North Central          548    40.3    9,437    13.3   1,414    17.4    10.7    19.1   11,399    18.8
  South Atlantic              697    52.7    1,681    24.1     585    18.1    11.5    21.4    2,963    36.1
  East South Central          411    35.5    1,046    11.0     427    14.8     7.3    11.7    1,884    21.5
  West South Central          241    37.8    1,914    14.1     886    13.5     9.8    14.0    3,041    22.7
  Mountain                    113    68.1      966    23.7     328    35.7     6.4    22.3    1,407    36.7
  Pacific                     538    72.1      349    20.1     396    33.6     9.6    21.0    1,283    55.5

Population Size (1998 est.)
  1,000 or fewer            1,743    26.0   15,100    12.1   1,436    14.2    10.5     6.3   18,279    14.8
  1,001–10,000              4,550    40.9    6,080    18.9   4,044    17.8     6.6    11.7   14,674    30.4
  10,001–50,000             2,157    50.2      563    21.1   1,827    17.6     8.1    12.6    4,547    41.8
  50,001–100,000              364    64.7       17    23.5     444    19.6     7.4    16.9      825    52.7
  100,001–1,000,000           217    58.5        0     n/a     469    25.2     6.4    23.2      686    56.0
  1,000,001 or more            13    69.2        0     n/a      27    25.9     7.4    44.4       40    75.0

Government Type
  County                      122    46.7      982    17.1   1,956    10.3     9.3    10.6    3,060    26.7
  Minor civil division      3,624    29.8    9,887     7.7   3,082    13.7     4.2     5.2   16,593    15.4
  Place                     5,298    49.6   10,891    19.8   3,209    25.9     9.9    19.4   19,398    33.8

NOTES: See Box 4.3 for definition of participation in LUCA 98 and 99. Not all regions have minor civil divisions. The analysis excludes tribal governments, the county-level municipios of Puerto Rico, and three places for which 1998 population estimates were unavailable.

SOURCE: Tabulations by panel staff from preliminary U.S. Census Bureau data (LUCA 98 and LUCA 99 spreadsheets, June 2000), modified by assigning county codes to minor civil division and place records and augmenting the file with 1998 population estimates and variables from the Census Bureau’s 1990 Data for Census 2000 Planning (1990 Planning Database, on CD-ROM) (see Working Group on LUCA, 2001:Tables 2-2, 2-3).

Box 4.3
Defining Participation in the Local Update of Census Addresses (LUCA) Program

It is not straightforward to determine participation in LUCA. On the preliminary data records available to the Working Group on LUCA (2001:Ch.2), the Census Bureau coded eligible jurisdictions into one of four categories:

  • The government did not participate at all (0);

  • The government signed a confidentiality agreement, which entitled it to receive materials from the Census Bureau for review (1);

  • The government signed a confidentiality agreement and “returned materials” to the Census Bureau (2);

  • The government signed an agreement, returned materials, and, in the submission for a LUCA 98 jurisdiction, provided at least one address action record (addition, correction, deletion, or notation that the address was outside the jurisdiction or nonresidential), or, in the submission for a LUCA 99 jurisdiction, challenged the address count for at least one block (3).

In an analysis of the LUCA 98 Program with final Census Bureau data, Owens (2003:v, 9) used a broad definition of participation, under which 53 percent of eligible governments participated by signing the required confidentiality agreement. These are governments in categories 1–3; they include approximately 92 percent of 1990 housing units in eligible areas. However, about one-third of the number “participating” did not return any address changes to the Census Bureau, so that only 36 percent participated fully in LUCA 98 as the Working Group on LUCA defined participation—that is, category 3 only. Some governments in categories 1 and 2 that did not provide address changes may have been satisfied with the MAF for their areas, but, more likely, they did not have time or resources to conduct a full review (see Owens, 2003:14).

In Table 4.4, we define participation in LUCA 98 and LUCA 99 in the same manner as the Working Group on LUCA—that is, category 3 above, excluding categories 1 and 2.

For estimating the housing unit coverage of participating areas, a further complication relates to overlapping areas of jurisdictions. Specifically, for counties participating in LUCA, there is no information on whether they reviewed all of the addresses in the county, only those addresses not covered by the constituent places and minor civil divisions, or only some of the addresses not covered by constituent places and minor civil divisions. Case studies conducted by the Working Group on LUCA (2001:Ch.4) indicated that county participation varied in the extent of address coverage.

Because of ambiguities about the nature and extent of LUCA participation, the panel cautions against using the broader definition in Owens (2003) to estimate the percentage of LUCA participants or the percentage housing unit coverage of participating jurisdictions.

A multiple regression analysis found that, among counties and places that signed up to participate in LUCA, the estimated 1990 census net undercount rate was a strong predictor that a jurisdiction would participate fully. Case studies also identified instances in which a vigorous coordination effort by a state or regional government facilitated participation by local jurisdictions (Working Group on LUCA, 2001:Ch.4).
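The regression finding can be illustrated with a simple one-predictor linear probability model: regress a participation indicator on the jurisdiction's estimated 1990 net undercount rate. The data below are fabricated for illustration only; the panel's actual analysis used multiple covariates and the real LUCA files.

```python
def slope_intercept(x, y):
    """Ordinary least squares fit for a single predictor."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var = sum((a - mean_x) ** 2 for a in x)
    slope = cov / var
    return slope, mean_y - slope * mean_x

undercount_rate = [0.5, 1.0, 1.5, 2.0, 3.0, 4.0]  # fabricated, percent
participated =    [0,   0,   1,   0,   1,   1]    # 1 = full participation
slope, intercept = slope_intercept(undercount_rate, participated)
print(slope > 0)  # True: higher undercount rate, higher participation
```

A positive slope is the toy analogue of the panel's finding that jurisdictions with larger estimated 1990 undercounts were more likely to participate fully.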

The governments that participated in LUCA appeared to cover a higher proportion of the nation’s housing stock than the proportion of participating governments to eligible governments would suggest. From preliminary data, places that participated fully in LUCA 98 accounted for 67 percent of the 1990 housing stock in eligible places, even though they included only 48 percent of eligible places. (It is not possible with the available data to construct reliable estimates of housing coverage for all fully participating governments, given such problems as lack of information about the portions of a county covered by county review [see Box 4.3], nor is it possible to construct reliable estimates of housing coverage for the two programs [LUCA 98 and LUCA 99] combined.) Even though coverage was higher for housing than for governments, which would be expected given the greater propensity of larger-size areas to participate, substantial portions of the MAF were not accorded local review.

A recent evaluation of the LUCA 98 program (Owens, 2003) estimated that fully participating jurisdictions (counties, places, minor civil divisions) submitted 6.3 million additional addresses (6.5 percent of their housing stock), of which 5.1 million addresses were not already on the MAF. The census enumeration process identified 3.1 million of these added addresses as valid occupied or vacant housing units, but only 506,000 valid housing unit addresses (0.4 percent of the final census MAF) could definitely be attributed to LUCA 98 only and not also to some other address source. Of the addresses provided uniquely by LUCA 98, 62 percent came from local governments in six states—New York (31 percent, principally New York City); Illinois (9 percent); California (7 percent); Georgia (6 percent); Michigan (5 percent); and Florida (4 percent) (Owens, 2003:App.K).

Problems with determining sources of addresses because of overlapping operations and hard-to-interpret source codes on the MAF records complicate the assessment of the unique contribution of any one source. Consequently, the figure of 506,000 housing units contributed uniquely by LUCA 98 may be an underestimate. Indeed, Vitrano et al. (2003b:Table 13) estimate that 659,000 housing units were contributed uniquely by LUCA 98. The same study reports that another 512,000 housing units were contributed both by LUCA 98 and the block canvass, another 396,000 housing units were contributed by both LUCA 98 and the September 1998 DSF, and 289,000 housing units were contributed uniquely by LUCA 99. A thorough assessment of the contribution of LUCA to the MAF is needed, including not only the effects of LUCA on the completeness of the census count in participating areas but also the possible effects on the counts in other areas from not having had a LUCA review.

4–E.4 2000 MAF Development: Summary of Findings

The concept for building the 2000 MAF from many sources, including the previous census address file, was an important innovation that the Census Bureau plans to fully implement for the 2010 census. For 2000 the newness of many MAF-building operations led to significant problems of implementation. In turn, these problems not only contributed to errors in the MAF but also complicated a full evaluation. The variables and codes originally included on the MAF records made it difficult to trace address sources. To facilitate analysis, Census Bureau evaluation staff subsequently developed an “original source” code for each address record and used it to identify sources that contributed the highest numbers of addresses to the final 2000 MAF (Vitrano et al., 2003b). However, they could not always identify a single source or even any source for some addresses. In addition, there are no data with which to determine the contribution of each source to correct versus erroneous addresses on the final MAF. Other problems in the MAF coding, such as overstatement of the number of units at a basic street address in some instances and omission of this information for non-city-style addresses, hampered evaluation (see Vitrano et al., 2003a:5). Finally, very little analysis has been done to date of geographic variations in the effectiveness of various sources for building the 2000 MAF.

Finding 4.4: The use of multiple sources to build a Master Address File—a major innovation in 2000—was appropriate in concept but not well executed. Problems included changes in schedules and operations, variability in the efforts to update the MAF among local areas, poor integration of the address list for households and group quarters, and difficulties in determining housing unit addresses in multiunit structures. Changes were made to the MAF development plan late in the process: a determination late in the decade that a costly complete block canvass was needed to complete the MAF in mailout/mailback areas, and a determination as late as summer 2000 that an ad hoc operation was needed to weed out duplicate addresses. Problems with the MAF contributed to census enumeration errors, including a large number of duplicates.

Finding 4.5: The problems in developing the 2000 Master Address File underscore the need for a thorough evaluation of the contribution of various sources, such as the U.S. Postal Service Delivery Sequence File and the Local Update of Census Addresses Program, to accuracy of MAF addresses. However, overlapping operations and unplanned changes in operations were not well reflected in the coding of address sources on the MAF, making it difficult to evaluate the contribution of each source to the completeness and accuracy of the MAF.

4–E.5 2010 MAF Development: Recommendation

The Census Bureau plans to update the MAF over the 2000 decade. It has embarked on a major effort to reengineer its TIGER geocoding and mapping database to make it more accurate and easier to maintain.18 For the TIGER reengineering, the Bureau plans to the extent possible to work with local governments, many of which have geographic information systems that are of better quality than TIGER. In addition, although plans are not yet well developed, the Bureau intends to conduct a LUCA Program as part of constructing the 2010 census MAF. We discuss three areas for improvement in the 2010 MAF development process below.

18  

TIGER—Topologically Integrated Geographic Encoding and Reference System—is used to assign MAF addresses to the proper geographic areas; see National Research Council (2003a) for an assessment of the MAF/TIGER reengineering project.

Housing Unit Addresses

The Bureau must recognize that obtaining an accurate address for a structure (single-family home, apartment building) is not sufficient for a complete, accurate MAF. It is also critical to develop accurate information on the number of housing units that exist within structures. Several reports, including the housing unit coverage study (Barrett et al., 2001, 2003) and the case studies prepared by the Working Group on LUCA (2001:Ch.4), document the problems in developing complete, correct address lists for small multiunit structures, which often have a single mail drop. For example, in neighborhoods with a stock of single-family or two-family residences and heavy in-migration since the last census, additional, hard-to-identify housing units may have been carved out of an apparently stable housing stock. Conversely, some structures with two or three units may have been turned into bigger single-family residences (e.g., as the result of gentrification). Indeed, in areas in which it is hard to keep track of housing units, as opposed to structures, it may be necessary to consider another enumeration procedure, such as update/enumerate, in place of mailout/mailback, in order to obtain an accurate count of households and people.

LUCA Redesign

The Bureau needs to redesign its cooperative programs with state and local governments for MAF/TIGER work. The LUCA effort in the late 1990s was one-sided. The Bureau offered each local government the opportunity to participate but did not actively encourage participation or provide financial or other support beyond training materials. Also, to protect confidentiality, the Bureau required participating localities to destroy all MAF-related materials after the review was completed. Consequently, localities could not benefit from the efforts they made, which were often substantial, to update the MAF. Because of TIGER inaccuracies, localities with better geographic information systems had to force their data to fit the TIGER database before they could begin their local review.

We believe that a successful federal-state cooperative program involving the level of effort of LUCA must be a two-way street in which there are direct benefits not only to the Census Bureau but also to participating localities. The Census Bureau could adapt features of other federal-state cooperative statistical programs toward this end. For example, the Bureau should consider paying for a MAF/TIGER/LUCA coordinator in each state, who would be a focal point for communication with localities.

The Bureau should also give serious consideration to providing localities with updated MAF files, which would not only facilitate continuous updating of the MAF for the Bureau’s purposes but would also provide a useful tool for local planning and analysis. An issue for concern would be that sharing of MAF files might violate the confidentiality of individuals—for example, by disclosing overcrowding of housing units in violation of local codes. However, our view is that the confidentiality issues could be resolved; street addresses do not, of themselves, identify information about individual residents or even indicate whether an address is occupied. For structures that may have been divided into multiple housing units illegally, the Bureau could protect confidentiality through the use of data perturbation or masking techniques. For example, the Bureau could provide a street address for the structure and code the number of units as, say, 1 to 3, or 2 to 4, in a way that preserves confidentiality; alternatively, it could randomly assign a value for the number of units in a structure for a small sample of structure addresses. A further protection would be to provide the MAF to localities under a pledge to use it for statistical purposes only, not for enforcement purposes, and not to provide it to others. However, Title 13 of the U.S. Code would probably require amendment similar to the 1994 legislation that authorized LUCA, since U.S. Supreme Court precedent views the MAF as covered under Title 13 confidentiality provisions.19 There is national benefit in having an accurate address list for statistical uses that can be continuously maintained in a cost-effective manner. This national benefit should challenge the Census Bureau to think creatively about ways to share the MAF with localities while protecting the confidentiality of individual residents.
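The coarsening and random-assignment ideas just described can be made concrete. This is a hypothetical sketch: the range widths, sampling rate, and function names are invented, and any production scheme would require formal disclosure review.

```python
import random

def mask_unit_count(true_units):
    """Report a small range instead of the exact count (e.g., 3 -> '2 to 4')."""
    low = max(1, true_units - 1)
    return f"{low} to {true_units + 1}"

def perturb_sample(addresses, rate=0.05, seed=12345):
    """Randomly reassign the unit count for a small sample of structures.

    addresses: list of (street_address, unit_count) pairs.
    """
    rng = random.Random(seed)  # fixed seed so the perturbation is reproducible
    out = []
    for addr, units in addresses:
        if rng.random() < rate:
            units = rng.randint(1, 4)  # hypothetical replacement range
        out.append((addr, units))
    return out

print(mask_unit_count(3))  # 2 to 4
```

Either transformation leaves structure addresses intact while blurring exactly the attribute (units per structure) that could reveal code violations.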

19  In Baldridge v. Shapiro, 455 U.S. 345 (1982), the U.S. Supreme Court ruled that the Census Bureau’s “address list … is part of the raw census data intended by Congress to be protected” under the confidentiality provisions of Title 13. Accordingly, the Court concluded that the Bureau’s address list is not subject to disclosure under the Freedom of Information Act or under the discovery process in civil court proceedings.

Evaluation

The Bureau needs to build a continuous evaluation program into its 2010 MAF/TIGER operations from the outset. That program should specify clear codes for tracing the source(s) of each address to facilitate evaluation. It should also include ongoing quality control and review procedures to determine what is working well and what is not so that corrective actions can be taken in a timely manner. Built into the MAF quality control and evaluation efforts should be a capability to analyze geographic variations in the effectiveness of address sources and in the completeness and accuracy of the list.
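The address source codes called for above lend themselves to simple tabulations. The fragment below is a hypothetical sketch (the tract identifiers and source-code labels such as "DSF" and "LUCA" are illustrative, not the Bureau's actual coding scheme) of tallying which sources contributed each address, overall and by geography, so that geographic variation in source effectiveness can be examined.

```python
from collections import Counter

# Hypothetical MAF evaluation records: (census tract, set of source codes
# that contributed the address). Labels are illustrative, not official codes.
records = [
    ("tract-101", {"DSF", "LUCA"}),           # postal Delivery Sequence File + local update
    ("tract-101", {"DSF"}),
    ("tract-102", {"block-canvass"}),
    ("tract-102", {"LUCA"}),
    ("tract-102", {"DSF", "block-canvass"}),
]

# Tally how often each source contributed addresses, overall and by tract.
overall = Counter()
by_tract: dict[str, Counter] = {}
for tract, sources in records:
    overall.update(sources)
    by_tract.setdefault(tract, Counter()).update(sources)

print("overall:", dict(overall))
for tract, counts in sorted(by_tract.items()):
    print(tract, dict(counts))
```

Keeping the source codes on every record, as the panel recommends, is what makes breakdowns like this possible after the fact.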

Recommendation 4.3: Because a complete, accurate Master Address File is not only critical for the 2010 census but also important for the 2008 dress rehearsal, the new American Community Survey, and other Census Bureau surveys, the Bureau must develop more effective procedures for updating and correcting the MAF than were used in 2000. Improvements in at least three areas are essential:

  1. The Census Bureau must develop procedures for obtaining accurate information to identify housing units within multiunit structures. It is not enough to have an accurate structure address.

  2. To increase the benefit to the Census Bureau from its Local Update of Census Addresses (LUCA) and other partnership programs for development of the MAF and the TIGER geocoding system, the Bureau must redesign the program to benefit state and local governments that participate. In particular, the Bureau should devise ways to provide updated MAF files to participating governments for statistical uses and should consider funding a MAF/TIGER/LUCA coordinator position in each state government.

  3. To support adequate assessment of the MAF for the 2010 census, the Census Bureau must plan evaluations well in advance so that the MAF records can be assigned appropriate address source codes and other useful variables for evaluation.

4–F GROUP QUARTERS ENUMERATION

4–F.1 Strategy

The population that resides in group quarters (e.g., college dormitories, prisons, nursing homes, juvenile institutions, long-term care hospitals and schools, military quarters, group homes, shelters, worker dormitories), and not in individual apartments or homes, is a small fraction of the total population—about 2.7 percent in 1990 and 2000.20 Individual group quarters often contain large numbers of people, however, so group quarters residents can represent a significant component of the population of particular small areas (e.g., college students living in dormitories in a college town). The census is the only data source that covers group quarters residents as well as household members, so enumeration of this population, even though small, is important.

Mailout/mailback techniques and the questionnaires used for household enumeration are not appropriate for many group quarters, so there was a separate operational plan for group quarters enumeration in 2000, as in censuses since 1970. This plan used procedures similar to those for previous censuses, with a few exceptions (see Citro, 2000c; U.S. Census Bureau, 1999b). Group quarters residents were asked to fill out individual (one-person) questionnaires called Individual Census Reports; somewhat different questionnaires were used for enumeration of four groups: armed forces personnel in group quarters, military and civilian shipboard residents, people enumerated at soup kitchens and mobile food vans (part of service-based enumeration), and all other group quarters residents.

There was no effort to enumerate the homeless population not encountered at designated locations, which included some nonsheltered outdoor locations in addition to shelters, soup kitchens, and regularly scheduled mobile food vans. The original design for 2000 included a plan to estimate the total homeless population by using information, for people enumerated at shelters and soup kitchens, on how many nights each of them used the service. The actual responses to the service usage questions exhibited high nonresponse and response bias, so plans to include adjusted counts for the homeless in any adjusted population totals were dropped (Griffin and Malec, 2001; see also U.S. General Accounting Office, 2003b). Such adjusted counts could not be used for reapportionment totals, in any case, because of the U.S. Supreme Court decision in January 1999 that precluded the use of sampling for census counts for reapportionment.

20  In Census Bureau terminology, one or more group quarters make up a special place. Such places (e.g., a university) are administrative units; the individual group quarters (e.g., dormitories) are where people sleep. A structure that houses a group quarters may also include one or more housing units (e.g., the apartment for a resident faculty member in a dormitory).
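The dropped estimation plan described above rested on a multiplicity idea: a person who uses a shelter on only some nights has a correspondingly smaller chance of being present on the enumeration night, and so can be weighted up. The toy calculation below illustrates only that general idea; the numbers and the simple seven-night model are our assumptions, not the Bureau's actual estimator, and, as the text notes, heavy nonresponse and response bias undermined the approach in practice.

```python
# Toy multiplicity-style estimator: a person who uses a shelter on k of 7
# nights has chance k/7 of being present on the single enumeration night,
# so each enumerated person stands in for 7/k people.
reported_nights = [7, 7, 1, 2, 7, 1]   # nights per week each enumerated person used the service

estimate = sum(7 / k for k in reported_nights)
print(round(estimate, 1))
```

The sensitivity of such weights to the reported nights (a report of 1 night versus 7 changes a person's weight sevenfold) shows why response bias in the usage questions was fatal to the plan.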

4–F.2 Implementation Problems

For reasons that are not clear, the design and implementation of enumeration procedures for group quarters residents experienced numerous problems (see Jonas, 2002, 2003b):

  • There was no definition of a group quarters as distinct from a large household of unrelated individuals. In 1980 and 1990 any residential unit with 10 or more unrelated people was tabulated as a group quarters; no such rule was set for 2000. Moreover, some group quarters (e.g., halfway houses) are located in structures that appear to be housing units. Such ambiguities contributed to problems in developing an accurate, nonoverlapping MAF for housing unit and group quarters addresses.

  • The development of an inventory of special places was handled by the Census Bureau’s Population Division staff, and special place addresses were not integrated with the main MAF (maintained by the Bureau’s Geography Division) until November 1999. Consequently, the Special Places LUCA operation was delayed by 18 months from the original schedule, and many localities could not participate given demands on their resources for census outreach and other activities close to Census Day.

  • A number of group quarters, including long-established prisons and college dormitories, were assigned incorrect geographic codes. For example, a prison would be geocoded to a neighboring town or county. While not affecting the total population count, these errors significantly affected counts for some small geographic areas. Such errors account for a large fraction of the population counts that local jurisdictions challenged in the Census Bureau’s Count Question Resolution Program.21

  • There was no system, such as a preprinted group quarters identification code, for tracking individual questionnaires from group quarters residents. Instead, a total count of questionnaires received was recorded on a control sheet for each group quarters at several steps in the process, and discrepancies and errors crept into the processing as a result. The lack of a good tracking system also impeded evaluation.

  • A small but undetermined number of group quarters questionnaires were never returned to a local census office and never included in the census. Also, some questionnaires were assigned to a different group quarters after receipt by a local office.

  • In May 2000 the National Processing Center at Jeffersonville, Indiana—which was solely responsible for capturing the information on group quarters questionnaires—reported that many questionnaires were not properly associated with a “control sheet” and therefore did not have a group quarters identification number on them. A team of census headquarters staff reviewed an estimated 700,000 group quarters questionnaires to resolve this problem (no official records were kept of this special operation).

  • In July 2000 two special telephone operations were implemented to follow up group quarters with no recorded population (presumably refusals) and group quarters for which the number of data-captured questionnaires fell far below the population count obtained in advance visits to special places conducted in February–March 2000. Population counts were obtained for these group quarters, and the results were used to impute group quarters residents as needed. More than 200,000 group quarters records (almost 3 percent of the group quarters population) were wholly imputed as a consequence of the telephone follow-up and the reconciliation of multiple population counts on the group quarters control sheets.

21  This program occurred mainly in 2001. It permitted local governments to challenge their census population counts for geocoding errors and other errors that could be well documented. The Census Bureau reviewed such challenges and issued letters to local jurisdictions when their population count was increased (or decreased) after review. Localities could cite the letters for such uses as obtaining federal program funds; however, the revised counts could not be used for reapportionment or redistricting.

  • Some group quarters residents were mailed a housing unit questionnaire. If they returned it and the address was matched to a group quarters address, they were added to the appropriate group quarters count, but there was no provision to unduplicate such enumerations with those obtained through the group quarters enumeration procedure. A clerical review of a sample of cases in selected types of group quarters (excluding prisons, military bases, and service-based facilities such as soup kitchens) estimated that 56,000 group quarters enumerations were duplicates of other enumerations within the same group quarters.

  • Residents at some types of group quarters (e.g., soup kitchens, group homes, worker dormitories) could declare a “usual home elsewhere” and be counted at that address if the address could be verified. This procedure was not implemented as designed, and the result was that a net of 150,000 people were moved erroneously from the group quarters to the housing unit universe. Of these, 31,000 people were erroneously omitted from the census altogether.

  • A higher-than-expected proportion of group quarters enumerations were obtained from administrative records of the special place. Of the 83 percent of enumerations for which enumerators indicated the source of data, 59 percent were filled out from administrative data, 30 percent were filled out by the resident, and 12 percent were filled out by an enumerator interviewing the resident. Types of group quarters with high percentages of enumerations obtained from administrative data included nursing homes, hospitals, group homes, and prisons. These types of group quarters had especially high rates of missing data for long-form items (see Section 7-D).

  • Because of poor results in the 1990 Post-Enumeration Survey Program, a deliberate decision was made to exclude group quarters residents from the 2000 A.C.E. Program (see Section 5-D.1 for the basis of this decision). As a consequence, there was no way to assess omissions of group quarters residents or the full extent of erroneous enumerations for this population.
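Several of the problems listed above (forms never captured, counts diverging from control sheets, duplicate enumerations within the same group quarters) are exactly what a per-questionnaire identifier would make detectable. The sketch below is hypothetical (the IDs, names, and record layout are invented for illustration): if every captured form carried a preprinted group quarters identification code, captured counts could be reconciled against control-sheet counts and repeated respondents flagged.

```python
from collections import defaultdict

# Expected number of forms per group quarters, from the control sheets.
control_counts = {"GQ-0001": 3, "GQ-0002": 2}

# (group quarters ID, respondent) for each data-captured questionnaire.
captured = [
    ("GQ-0001", "A. Smith"),
    ("GQ-0001", "B. Jones"),
    ("GQ-0001", "B. Jones"),   # duplicate enumeration within the same GQ
    ("GQ-0002", "C. Lee"),     # one form short of the control-sheet count
]

forms = defaultdict(list)
for gq_id, person in captured:
    forms[gq_id].append(person)

discrepancies = {}   # GQ ID -> (expected, captured) where the counts differ
duplicates = {}      # GQ ID -> respondents appearing more than once
for gq_id, expected in control_counts.items():
    got = forms.get(gq_id, [])
    if len(got) != expected:
        discrepancies[gq_id] = (expected, len(got))
    dupes = {p for p in got if got.count(p) > 1}
    if dupes:
        duplicates[gq_id] = dupes

print("count discrepancies:", discrepancies)
print("possible duplicates:", duplicates)
```

A tracking scheme of this kind would have supported both real-time case management and the after-the-fact evaluation that the panel found impossible in 2000.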

4–F.3 Group Quarters: Summary of Findings and Recommendations

Overall, we conclude that the procedures for enumerating group quarters residents and processing the information collected from them were not well controlled or carefully executed. There is evidence of errors of omission, duplication, and miscoding by geography and type of group quarters, although there are no data with which to conduct a definitive evaluation, partly due to the lack of procedures to track individual enumerations. The extent of imputation required for group quarters residents was high, not only in terms of the number of whole-person imputations required, but also in terms of item imputations, particularly for long-form-sample items (see Section 7-D).

Finding 4.6: The enumeration of people in the 2000 census who resided in group quarters, such as prisons, nursing homes, college dormitories, group homes, and others, resulted in poor data quality for this growing population. In particular, missing data rates, especially for long-form-sample items, were much higher for group quarters residents than for household members in 2000 and considerably higher than the missing data rates for group quarters residents in 1990 (see Finding 7.3). Problems and deficiencies in the enumeration that undoubtedly contributed to poor data quality included: the lack of well-defined concepts of types of living arrangements to count as group quarters; failure to integrate the development of the group quarters address list with the development of the Master Address File; failure to plan effectively for the use of administrative records in enumerating group quarters residents; errors in assigning group quarters to the correct geographic areas; and poorly controlled tracking and case management for group quarters. In addition, there was no program to evaluate the completeness of population coverage in group quarters.

Enumeration procedures for group quarters residents need rethinking and redesign from top to bottom if the 2010 census is to improve on the poor performance in 2000. Questionnaire content and design also need to be rethought to obtain higher quality data (see Section 7-D), including the possibility of asking residents for alternate addresses to facilitate unduplication (e.g., home addresses for college students and prisoners). Given the growing population in some types of group quarters and the lack of other data sources for group quarters residents, improvement in the enumeration of group quarters should be a high-priority goal for the 2010 census. Tracking systems should be built into the enumeration not only to facilitate a high-quality operation but also to support subsequent evaluation.

Recommendation 4.4: The Census Bureau must thoroughly evaluate and completely redesign the processes related to group quarters populations for the 2010 census, adapting the design as needed for different types of group quarters. This effort should include consideration of clearer definitions for group quarters, redesign of questionnaires and data content as appropriate, and improvement of the address listing, enumeration, and coverage evaluation processes for group quarters.
