National Academies Press: OpenBook

Reengineering the Survey of Income and Program Participation (2009)

Chapter: 3 Expanded Use of Administrative Records

Suggested Citation:"3 Expanded Use of Administrative Records." National Research Council. 2009. Reengineering the Survey of Income and Program Participation. Washington, DC: The National Academies Press. doi: 10.17226/12715.


3 Expanded Use of Administrative Records

In reengineering the Survey of Income and Program Participation (SIPP), the Census Bureau has from the outset envisioned a role for administrative records. Although the bureau backed away from the notion of using administrative records to replace a large portion of the SIPP questionnaire content (see Chapter 2), it has continued to stress the contribution that administrative records could make to improving the quality of SIPP data (see Johnson, 2008).

This chapter addresses the role that administrative records can play in a reengineered SIPP. The chapter first outlines a framework for evaluating the benefits and costs of different uses of administrative records for SIPP. Using the framework as a guide, the chapter reviews the uses of administrative records in SIPP’s history to date, along with other uses of administrative records at the Census Bureau that are relevant to SIPP. It then addresses the feasibility of acquiring and linking different federal and state administrative records and the benefits and costs of the following seven ways of using such records in a reengineered SIPP:

1. evaluating the accuracy of survey responses in the aggregate by comparison with aggregate estimates from administrative records;
2. evaluating the accuracy of survey responses at the individual respondent level by comparison with exactly matched administrative records;
3. improving the accuracy of imputation routines used to supply values for missing survey responses and of survey weighting factors used to improve coverage of the population;

4. providing values directly for missing survey responses;
5. adjusting survey responses for underreporting or overreporting;
6. using administrative records values instead of asking survey questions; and
7. appending administrative records values to survey records.

The first three uses we term “indirect,” in that administrative data are never actually recorded on SIPP data files; the last four uses are “direct,” in that administrative data become part of the SIPP data files to a greater or lesser extent.

Following the discussion of uses, the chapter considers methods of confidentiality protection and data access that would be appropriate for a reengineered SIPP. Our conclusions and recommendations are presented at the end of the chapter.

A FRAMEWORK FOR ASSESSING USES OF ADMINISTRATIVE RECORDS

SIPP’s primary goal—which is to provide detailed information on the short-term dynamics of economic well-being for families and households, including employment, earnings, other income, and program eligibility and participation—requires a survey as the main source of data. There are no administrative records from federal or state agencies that, singly or in combination, could eliminate the need for survey data collection, even if it were feasible to obtain all relevant records and the custodial agencies did not object to their use for this purpose. Consider the following examples of shortcomings in administrative records:

• Records for programs to assist low-income people, such as the Supplemental Security Income (SSI) Program or the Food Stamp Program (since 2008 termed the Supplemental Nutrition Assistance Program or SNAP), contain information only for beneficiaries and not also for people who are eligible for the program but do not apply for or are erroneously denied benefits. Being able to estimate the size of the eligible population, including participants and nonparticipants, is important to address the extent to which an eligible population’s needs are being met, what kinds of people are more or less likely to participate in a program, and other policy-relevant questions.

• Program records do not always accurately distinguish new recipients of benefits from people who received benefits previously, had a spell of nonparticipation, and are once more receiving benefits. One of SIPP’s important contributions to welfare program policy analysis has been to make possible the identification of patterns of program participation over time, including single and multiple spells.

• Federal income tax records on earnings and other income exclude some important income sources that recipients do not have to report, such as Temporary Assistance for Needy Families (TANF) and pretax exclusions from gross wage and salary income. Pretax employer-sponsored health insurance contributions, which are a growing share of wage and salary income, do not have to be reported on Internal Revenue Service (IRS) 1040 individual income tax returns, nor are they always reported on W-2 wage and tax statements.

• Federal income tax records do not define some income sources in the manner that is most useful for assistance program policy analysis. Thus, self-employment income is reported to tax authorities as gross income minus expenses, including depreciation of buildings and equipment, which can result in a net loss, even when the business provided sufficient income to the owner(s) for living expenses. In contrast, the SIPP questionnaire asks for the “draw” that self-employed people take out of their business for their personal living expenses.

• The recipient or filing unit that is identified in administrative records often differs from the family or household unit that is of interest for policy analysis. For example, minor children may be claimed as dependents on the income tax return of the noncustodial parent, and unmarried cohabitors will be two distinct income tax filing units but only one survey household and (assuming they share cooking facilities) one food stamp household. (It is not always possible to accurately identify tax and transfer program filing units in survey data, either.)
Despite these and other problems, it is clearly the case, as we demonstrate in later sections, that administrative records can be helpful to SIPP in a number of ways, as they have been helpful in the past (see “SIPP’s History with Administrative Records” below). Indeed, the Census Bureau hopes that significantly greater use of administrative records can be achieved in a reengineered SIPP to improve the quality of reporting of income and program participation.

The benefits and costs of using administrative records for a reengineered SIPP must be carefully assessed, and each of the seven possible uses identified above implies a different mixture of benefits and costs. We provide below a cost-benefit framework for considering alternative uses of administrative records for SIPP, including not only records from federal agencies, but also records that state agencies use to administer such programs as the Children’s Health Insurance Program (CHIP), food stamps, general assistance, Medicaid, school lunch and breakfast programs, the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC), SSI (in states that supplement federal benefits), TANF, unemployment insurance (UI), and workers’ compensation (WC) (referred to as “state records” in this chapter).

Note: General assistance, or general relief, is a name for state programs to provide cash benefits to adults without dependent children.

Benefits

There are potentially two types of benefits for a reengineered SIPP from using administrative records, such as Social Security payments to beneficiaries or food stamp allotments to families: (1) providing higher-quality data in comparison to survey reports (the one benefit specifically identified by the Census Bureau) and (2) providing additional data that would be more difficult or expensive to obtain in interviews. For improving data quality, administrative records may also have the advantage that the ongoing costs of using them for this purpose are modest—at least once an initial investment has been made in acquiring and processing them—compared with efforts to improve the quality of survey reporting (see “Costs” below).

Improved Data Quality

There is substantial evidence, summarized in Chapter 2, that survey reports of program participation and sources of income are often incomplete and inaccurate—despite considerable efforts to improve the quality of reporting by redesigning questions, adding probes, and the like. SIPP, with its detailed, probing questionnaire, historically has a record of obtaining more complete reporting of program participation than other surveys, but its reporting of program participation still falls short of administrative benchmarks. Moreover, the amounts reported by acknowledged participants often differ from administrative benchmarks in the aggregate and on an individual basis. There are both underreporting and overreporting errors, typically with a net underreporting on balance. Consequently, administrative records have the potential to provide significantly more accurate data on many sources of income and types of programs.

In assessing the benefits of improved data quality from using a particular administrative records source, such as Social Security or food stamp records, it is important not to take at face value that the administrative record is always of better quality than the corresponding survey response.
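The aggregate benchmark comparisons discussed above reduce to simple arithmetic on weighted survey totals. The following is a minimal sketch of that arithmetic; the field names, weights, and benchmark amount are invented for illustration and are not actual SIPP or program-agency figures.

```python
# Sketch of an aggregate survey-vs-administrative comparison (use 1 in
# the list opening this chapter). All field names, weights, and dollar
# amounts below are hypothetical, not real SIPP or program data.

def weighted_total(records, amount_field, weight_field="weight"):
    """Survey estimate of an aggregate: sum of amount x sampling weight."""
    return sum(r[amount_field] * r[weight_field] for r in records)

def net_reporting_ratio(survey_records, admin_benchmark, amount_field):
    """Ratio of the survey-weighted total to the administrative total.

    A ratio below 1.0 indicates net aggregate underreporting; above 1.0,
    net overreporting. Individual under- and overreports can offset each
    other inside this single number.
    """
    return weighted_total(survey_records, amount_field) / admin_benchmark

# Three illustrative sample households, each representing ~10,000
# households after weighting, compared with a $9.0 million benchmark.
sample = [
    {"fs_amount": 250.0, "weight": 10_000},
    {"fs_amount": 310.0, "weight": 10_000},
    {"fs_amount": 0.0,   "weight": 10_000},  # nonreporter or nonrecipient
]
ratio = net_reporting_ratio(sample, admin_benchmark=9_000_000.0,
                            amount_field="fs_amount")
print(f"survey/administrative ratio: {ratio:.2f}")  # prints 0.62
```

In practice such comparisons are only meaningful after the survey and the records have been aligned on reference period, population universe, and program definitions.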

In this regard, it is important to distinguish among the data items recorded on administrative records. On one hand, for example, in the case of a record for a food stamp recipient, it is highly likely that the amount provided to a beneficiary is accurately recorded (even though, in some cases, the payment may have been made to someone who was not in fact eligible for the program, or an erroneous amount may have been provided to an eligible recipient). On the other hand, the ancillary information on the record, such as the person’s employment, income, and family composition, may have contained errors when it was collected or may have become out of date. Moreover, for some programs, records for people who no longer receive benefits may be commingled with records for current beneficiaries, and, for most if not all programs, the program unit of one, two, or more people is typically not the same as the survey unit of a family or household.

The information in administrative records, even when accurate, may differ sufficiently in definition from the information sought by the survey designer as to make the administrative information unusable for the survey’s purpose. As noted earlier, self-employment income from federal income tax records is an example—although the gross and net income amounts from tax records may be of interest for some analyses, they do not satisfy SIPP’s purposes of understanding the economic resources available to individuals and families.

Additional Data

Some administrative records may contain valuable information that would be difficult to obtain in a survey context. For example, the Social Security Administration (SSA) has records not only of benefits paid to retirees, people with disabilities, and others, but also histories of earnings received each year for everyone who is or has been in covered employment, which SSA receives annually from W-2 and Self-Employment Income forms filed with the IRS. Such earnings histories, which may extend back for decades of an individual’s work life, would be difficult to collect in a survey unless it began following individuals from an early age, but they could be valuable for some types of research, such as research on the determinants of the decision to retire.

Note: Prior to 1978, SSA files contain quarterly indicators of covered employment in addition to annual earnings.

Costs

The use of administrative records for a reengineered SIPP cannot be cost free. Staff time and other resources must be expended for acquisition and processing of records. Moreover, the use of some kinds of records could potentially incur two other types of costs: (1) increased delays in releasing data products due to delays in obtaining records from the cognizant agencies and (2) increased risks of disclosure of individuals in SIPP, which in turn could necessitate more restricted conditions for use of the data.

Additional Resources

The strictly monetary costs of using administrative records for a reengineered SIPP would include staff and other resources for acquisition of records, data quality review and associated cleaning of records, and processing of records for the particular application, such as evaluation or imputation. In some cases, the costs of acquisition could be substantial, at least initially. For example, time-consuming negotiations could be required to draw up acceptable memoranda of understanding and other legal documents to obtain an agency’s records, although once agreed-upon procedures were in place, the marginal costs of acquiring records in subsequent years could be minimal. There could also be significant costs when an agency’s records are not well maintained, requiring Census Bureau staff to engage in substantial back-and-forth with agency staff to clean up the data. Processing costs would vary with the type of application. For example, aggregate comparisons of survey responses with administrative records are likely to be considerably less costly than the use of administrative records in imputation models.

In its original concept for a new Dynamics of Economic Well-Being System (DEWS), the Census Bureau had hoped that administrative records could be used directly to supply so much of the needed content as to make possible a significant reduction in the costs of the system compared with the current SIPP.
The cost savings would come from reduced frequency of interviews and reduced content of each interview, with the remaining needed content obtained by matching administrative records for individuals to the corresponding survey records. However, users were concerned that such a major role for administrative records would not only be unfeasible, given the difficulties of acquiring all of the needed records from state and federal agencies, but would also curtail the bureau’s ability to release public-use microdata files because of increased disclosure risk. These concerns led the bureau to scale back its plans in this regard. The Census Bureau now plans to achieve cost savings by conducting annual interviews with event history calendars to obtain intrayear information and by requiring agencies to pay for supplements with variables not included in the core questionnaire (see Chapter 4).

Reducing the frequency of interviews assuredly reduces the costs of a survey, but whether reducing the content of a particular interview by substituting administrative records reduces costs is not clear. The main cost of an interview is making contact with the respondent; moreover, acquiring and processing administrative records adds costs. Hence, we think that the use of administrative records to replace survey content should be judged primarily on criteria other than cost savings, such as the effects on data quality, timeliness, and accessibility.

Increased Delays

Administrative records systems are managed first and foremost to facilitate the operation of assistance programs. The Census Bureau’s need for timely information from records systems for statistical purposes is of secondary importance, at best, for program agencies. Consequently, while it may be possible for the Census Bureau to obtain and process some records with little delay, the acquisition of other records may lag the survey data collection by significant periods of time (see “Statistical Administrative Records System” below). One response to this situation could be to further delay the data products from SIPP in order to be able to use the administrative information to improve imputations or substitute for questionnaire content. This outcome would be distressing to users. Other responses could be to project the administrative information from a prior year forward to the survey data year, to issue preliminary and revised data products, or to confine the use of administrative information to evaluation of the survey content, which would not be as time sensitive.

Increased Disclosure Risks

On one hand, because the data collected in SIPP are of great interest to policy analysts, researchers, and other users, it is essential to make the data available in some form to these varied constituencies. On the other hand, the Census Bureau is ethically and legally obligated to protect the confidentiality of SIPP participants’ identities and attributes. Thus, unfettered access to all collected SIPP data is not likely to be achievable. Rather, as recommended by previous National Research Council panels on data access (2005, 2007), an appropriate strategy for the Census Bureau is to provide access to data of differential detail, and hence differential disclosure risk, depending on the goals for data use and the trustworthiness of the likely data users (see Box 3-1 for a summary of the risk and utility trade-off in data dissemination).

Note: We do not discuss confidentiality threats that might originate from inside the Census Bureau. The bureau has sufficient expertise on internal confidentiality protection that it does not need our panel to comment. Evidence of its dedication to confidentiality protection is the practice, adopted for its Statistical Administrative Records System, of substituting personal identification keys for Social Security numbers on matched files.
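The key-substitution practice mentioned in the note above can be illustrated generically. The sketch below derives a stable anonymous key from an identifier with a keyed hash (HMAC-SHA256); this is a common generic technique, not a description of the Census Bureau's actual procedure, and the secret key, field names, and records shown are purely illustrative.

```python
# Generic illustration of replacing a direct identifier (here an SSN)
# with a stable anonymous key before files are matched. This is NOT
# the Census Bureau's actual procedure; it is a common keyed-hash
# technique, and the secret key below is purely illustrative.
import hashlib
import hmac

SECRET_KEY = b"illustrative-secret-held-only-by-the-linking-unit"

def identification_key(ssn: str) -> str:
    """Derive a non-reversible key from an SSN with HMAC-SHA256.

    The same SSN always yields the same key, so records still link
    across files, but without the secret the key cannot be inverted.
    """
    return hmac.new(SECRET_KEY, ssn.encode(), hashlib.sha256).hexdigest()

def strip_ssns(records):
    """Return copies of the records with 'ssn' replaced by a 'pik' key."""
    cleaned = []
    for r in records:
        r = dict(r)  # leave the caller's records intact
        r["pik"] = identification_key(r.pop("ssn"))
        cleaned.append(r)
    return cleaned

# Hypothetical survey and administrative extracts for the same person.
survey = strip_ssns([{"ssn": "000-00-0001", "reported_income": 12_500}])
admin = strip_ssns([{"ssn": "000-00-0001", "recorded_benefit": 1_200}])

# The files can now be matched on 'pik'; neither contains an SSN.
assert survey[0]["pik"] == admin[0]["pik"]
assert "ssn" not in survey[0] and "ssn" not in admin[0]
```

Because the derived key is useless without the secret, files matched this way can circulate within the linking organization with less exposure than files carrying raw Social Security numbers.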

BOX 3-1
The Risk and Utility Trade-Off in Data Dissemination

The Census Bureau and other disseminators of data collected under a pledge of confidentiality for statistical purposes strive to release data that are (1) safe from attacks by ill-intentioned data users seeking to learn respondents' identities or sensitive attributes, (2) informative for a wide range of statistical analyses, and (3) easy for users to analyze with standard statistical methods (Reiter, 2004). These goals are often in conflict. For example, releasing fine details about individuals enables accurate analyses, but it also provides ill-intentioned users with more and higher quality resources for linking records in released data sets to records in other databases. Releasing highly aggregated summaries of data protects confidentiality, but it severely limits the analyses that can be done with the data. Data disseminators usually choose policies that lie in between these two extremes, sacrificing absolute protection (possible only when not releasing any data) and perfect data usefulness (possible only when releasing all data as collected) for a compromise.

Most data disseminators are concerned with two types of disclosures. One type is identity disclosure, which occurs when ill-intentioned users correctly identify individual records using the released data. Efforts to quantify identity disclosure risk in microdata (records for individual respondents) generally fall into two broad categories: (1) estimating the number of records in the released data that are unique records in the population and (2) estimating the probabilities that users of the released data could determine the identities of the records in the released data by using the information in that data. The other type is attribute disclosure, which occurs when ill-intentioned users learn the values of sensitive variables for individual records in the data set. Quantification of attribute disclosure risk is often folded into the quantification of identity disclosure risk, since ill-intentioned users typically need to identify individuals before learning their attributes. Other types of disclosures include perceived identification disclosure, which occurs when intruders incorrectly identify individual records in the database, and inferential disclosure, which occurs when intruders accurately predict sensitive attributes in the data set using the released data. For a discussion of metrics for quantifying identification and attribute disclosure risks, see Duncan and Lambert (1989), Federal Committee on Statistical Methodology (1994), Lambert (1993), National Research Council (2005, 2007), and Reiter (2005).

Agencies must also consider the usefulness of the released data, often called data utility. Existing utility measures are of two types: (1) comparisons of broad differences between the original and released data and (2) comparisons of specific estimates computed with the original and released data. Broad difference measures are based on statistical distances between the original and released data, for example, differences in distributions of variables. Comparison of specific models is often done informally. For example, data disseminators look at the similarity of point estimates and standard errors of regression coefficients after fitting the same regression on the original data and on the data proposed for release.

Ideally, the agency releasing data optimizes the trade-off between disclosure risk and data utility when selecting a dissemination strategy. To do so, the agency can make a scatter plot of the quantified measures of disclosure risk and data utility for candidate releases. This has been termed the "R-U confidentiality map" in the statistical literature (Duncan, Keller-McNulty, and Stokes, 2001). Making this map can enable data disseminators to eliminate policies with risk-utility profiles that are dominated by other policies (e.g., between two policies with the same disclosure risk, select the one with higher data utility).

SIPP's great value for policy analysis and research on short-term dynamics of economic well-being requires that users have access to microdata and not only aggregate summaries. Administrative records could potentially add valuable information to SIPP microdata, but the more information that is added, the greater the risk that individuals in the SIPP sample could be identified in public-use microdata files. Disclosure risk is also increased because people in the agency supplying the administrative data have knowledge that could be used to identify individuals in SIPP files. Countering such increased risk could require the use of disclosure protection techniques that would diminish the value of the public-use microdata products and compel users who require the confidential data for their research to seek access to one of the Census Bureau's Research Data Centers (RDCs). Yet for policy analysis that is in any way time sensitive, the alternative of accessing microdata in an RDC is daunting because it adds delays in making a successful application to the delays that are already incurred in release of the files from the Census Bureau.

A related risk of directly using administrative data in SIPP could be a decline in the willingness of people to participate in the survey once they were made aware of the planned uses of their administrative records. However, when 2004 SIPP panel respondents were informed halfway through the panel that administrative records might be used to reduce the need to ask them so many questions, less than one-half of 1 percent requested that record matches not be made for them (David Johnson, chief, Housing and Household Economic Statistics Division, U.S. Census Bureau, personal communication to panel, February 3, 2009; see also "Direct Uses" below).
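The dominance rule described at the end of Box 3-1 can be expressed directly. The sketch below (a Python illustration; the policy names and risk/utility scores are invented) filters a set of candidate release policies down to those that are not dominated on an R-U map:

```python
# Illustrative sketch of eliminating dominated policies on an R-U map.
# Policies and their (risk, utility) scores are invented for this example.

def undominated(policies):
    """Keep policies for which no other policy has both lower-or-equal
    disclosure risk and higher-or-equal utility, with at least one strict."""
    keep = []
    for name, risk, util in policies:
        dominated = any(
            (r <= risk and u >= util) and (r < risk or u > util)
            for n, r, u in policies if n != name
        )
        if not dominated:
            keep.append(name)
    return keep

candidates = [
    ("full microdata",     0.90, 0.95),
    ("topcoded microdata", 0.40, 0.80),
    ("noisy microdata",    0.40, 0.60),  # same risk as topcoding, less utility
    ("aggregate tables",   0.05, 0.30),
]
print(undominated(candidates))
# → ['full microdata', 'topcoded microdata', 'aggregate tables']
```

Here "noisy microdata" is eliminated because topcoding offers the same disclosure risk with higher utility; the remaining policies form the frontier among which the agency must still choose.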

Trading Off Benefits and Costs

Different types of uses of administrative records in a reengineered SIPP will present different pictures of the likely benefits and costs. For a given use, the benefits and costs may also differ by the type of record or even by the agency responsible for the record. For example, program agencies in some states may be more willing to share records with the Census Bureau for use with SIPP than are agencies in other states.

In determining when a particular use of a specific type of record warrants the investment, it is important always to bear in mind the goals of SIPP and that it cannot be all things to all users. For example, while SSA records of past earnings histories would be useful for research on lifetime patterns of employment and related issues, they might not contribute greatly to SIPP's primary focus on the short-term dynamics of economic well-being. Moreover, the addition of earnings histories to SIPP would substantially increase the risks of disclosure and consequently the need to restrict the use of data products containing them (see "SIPP Gold Standard Project" below). Some of the trade-offs involved in working with different types of administrative records for different purposes become evident in reviewing the history of uses of administrative records in SIPP and other Census Bureau programs.

SIPP's HISTORY WITH ADMINISTRATIVE RECORDS

In order to achieve SIPP's goals of improving information on the economic well-being of the population and short-term changes in income and program participation, the survey's designers at the outset envisioned at least three major roles for administrative records (see National Research Council, 1993:31-33):

1. to increase sampling efficiency by providing supplementary frames of participants in specific assistance programs or persons with other specified characteristics;
2. to provide additional data (e.g., by matching with Social Security earnings records to obtain longitudinal earnings histories to add to the SIPP files); and
3. to compare and validate specific items common to both SIPP and administrative records by means of record-check studies.

ISDP Use of Records

The Income Survey Development Program (ISDP) used administrative records extensively to evaluate the quality of survey responses and

to improve question wording and interviewer training procedures (see Kasprzyk, 1983; Logan, Kasprzyk, and Cavanaugh, 1988). The primary method used was the forward record check, in which people included in independent samples from administrative sources (including IRS and federal and state program records) were administered the ISDP interviews. This method eliminates the need to match survey and administrative records, but it permits identifying only false-negative responses (people with an administrative record of program participation who say they did not participate in the particular program) and not also false-positive ones, which a full record-check study would support. Aggregate comparisons of income and program participation reported in the 1979 ISDP panel with administrative records sources were also conducted. These comparisons necessitated, in many cases, extensive adjustments of one or both sources (SIPP or the applicable administrative records source) for comparability of the population and income concept and reporting period covered. The ISDP also drew supplementary samples from administrative records to augment the 1978 and 1979 ISDP panel main samples. However, the data were never analyzed, because data files that included the main and supplementary samples with appropriate weights could not be produced before the ISDP was shut down in 1981 (see Kasprzyk, 1983).

SIPP's Use of Records, 1983-1993

During SIPP's first decade, the Census Bureau was hard-pressed to operate the survey in full production mode and to accommodate budget reductions that necessitated cutbacks in sample size or number of interview waves or both for most panels (see Chapter 2). Bureau staff had limited time and resources to exploit the potential value of administrative records. Consequently, no supplementary sampling frames were developed from administrative records for SIPP during this period, although some work went forward on evaluation and related uses of administrative records.

The Census Bureau carried out a handful of matches of SIPP panels with administrative records, which were facilitated by a successful program to obtain Social Security numbers from SIPP respondents and match them to SSA files for validation purposes. These matches included (1) a match of the 1984 SIPP panel with SSA records conducted for SSA under an agreement that limited its use to SSA analysts for a 2-year period; (2) a match of a small number of variables in IRS tax records with the 1984 panel conducted as part of an effort (which did not come to fruition) to develop weighting factors from IRS tax records for reducing the variance of income estimates from SIPP (Huggins and Fay, 1988); and (3) a match of IRS tax records with the 1990 panel conducted as part of an effort to develop a simulation model for estimating after-tax income in SIPP (which also did not come

to fruition). An analysis of the 1990 panel-IRS match for married couples with earnings highlighted the contribution of imputation procedures to the long-standing pattern by which SIPP estimates of earnings have fallen short of IRS estimates in the aggregate (Coder, 1992). Another analysis used the 1990 matched file to estimate the extent to which eligible families applied for and received the earned income tax credit (Scholz, 1994).

Census Bureau staff also performed aggregate comparisons of selected estimates from administrative records sources and SIPP. Such comparisons were made for the 1984 panel for aggregate income amounts for nine sources (Jabine, King, and Petroni, 1990:Tables 10.1, 10.2, 10.3); for the 1986-1987 panels for the value of several types of assets and liabilities (Eargle, 1990:Table D.2); and for the 1990 panel for recipients and aggregate amounts for about 20 sources of income (Coder and Scoon-Rogers, 1996).

Bureau staff carried out a single full record-check study, which matched SIPP records in four states for the first two waves of the 1984 panel with records from eight federal and state programs—Aid to Families with Dependent Children (AFDC, the predecessor to TANF), food stamps, unemployment insurance, workers' compensation, federal civil service retirement, Social Security, SSI, and veterans' pensions and compensation. The study was designed to identify both false-negative and false-positive reports of program participation and benefit amounts in SIPP (Marquis and Moore, 1989, 1990a, 1990b). It encountered serious delays because of the time required to negotiate the acquisition of records from state agencies (indeed, the Census Bureau was never able to obtain the requested records from one state) and also because of problems in conducting the matches and preparing analysis files. Almost 5 years elapsed from the study's initiation in 1984 to the publication of detailed results, and many potentially useful analyses were never undertaken—in particular, the study did not examine discrepancies in benefit amounts but only in program participation.

Nonetheless, the SIPP record-check study made important contributions. Regarding the seam bias problem (see Chapter 2), it found that, in general, SIPP nonseam change estimates tended to underestimate true change, and change estimates at the seam tended to be too high. Regarding reporting bias, it confirmed the results of aggregate comparisons that participation in most programs was underreported (although there were overreports as well). It also found confusion among programs on the part of respondents, such as confusing AFDC with general assistance or Social Security with SSI. These findings stimulated research on questionnaire design to improve reporting in the survey.

Hotz and Scholz (2002:275-315), in a comprehensive review of surveys and administrative records sources for measuring employment and income of low-income populations, discuss some of the studies cited in the text.
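The distinction between a forward record check and a full record-check study comes down to which error types the design can observe. A minimal sketch, with invented match outcomes, of how linked survey and administrative participation indicators are classified:

```python
# Invented example: classifying linked survey and administrative
# participation indicators, as a full record-check study can.
# A forward record check, which samples from administrative participants
# only, can observe false negatives but not false positives.
from collections import Counter

def classify(survey_says, admin_says):
    if admin_says and not survey_says:
        return "false negative"   # participant who did not report the program
    if survey_says and not admin_says:
        return "false positive"   # reported participation with no record found
    return "agreement"

# (survey report, administrative record) pairs for five linked cases
cases = [(True, True), (False, True), (False, True), (True, False), (False, False)]
counts = Counter(classify(s, a) for s, a in cases)
print(counts["false negative"], counts["false positive"])  # → 2 1
```

A forward record check draws only cases where the administrative indicator is true, so the `(True, False)` row, the false positive, can never appear in its sample.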

SIPP's Use of Records, 1996-2006

Census Bureau staff have performed only one study comparing SIPP aggregates with independent estimates in the past decade. In this study, Roemer (2000) compared aggregate amounts for 1990-1996 from the 1990, 1991, 1992, 1993, and 1996 panels for about 16 sources of income with benchmarks derived from the National Income and Product Accounts and from the Current Population Survey (CPS). Recently, Meyer, Mok, and Sullivan (2009), researchers at the Universities of Chicago, Northwestern, and Notre Dame, compared aggregate amounts for nine income assistance programs for five surveys, including SIPP, the American Community Survey (ACS), the Consumer Expenditure Survey, the CPS, and the Panel Study of Income Dynamics (PSID), for years extending as far back as data were available (1983-2005 for SIPP). They also compared average monthly participation for eight of the nine programs for SIPP, the ACS, CPS, and PSID.

The SSA sponsored SIPP interviews in January 2003 and January 2005 for systematic samples drawn from its records of SSI recipients and Disability Insurance beneficiaries (supplementary sampling frames). The Census Bureau conducted the interviews with these beneficiaries and processed the interviews using standard SIPP procedures. These supplemental SIPP interviews are for use by SSA only and are not publicly available from the Census Bureau (see DeCesaro and Hemmeter, 2008, for a description of characteristics of SSI recipients and disability insurance beneficiaries from the SIPP January 2003 supplemental sample interviews).

The Census Bureau has continued to provide exact matches of SIPP and SSA records for use by SSA staff for research and modeling of the Old-Age, Survivors, and Disability Insurance (OASDI) and SSI Programs. The availability of these files not only has enabled SSA staff to conduct policy research that contributes to planning for OASDI and SSI Program needs (see, e.g., Butrica, Iams, and Smith, 2003; Iams and Sandell, 1996), but also has supported studies on the quality of SIPP reporting of OASDI and SSI benefits. Thus, Huynh, Rupp, and Sears (2001) used OASDI and SSI administrative records matched to the 1993 and 1996 SIPP panels to assess discrepancies in SIPP reports of benefit receipt and benefit amounts for four sample months. They found quite accurate reporting by people who received only Social Security and by people who received no benefits from either Social Security or SSI. However, there was substantial underreporting by people who received only SSI and by people who received both Social Security and SSI. They also found confusion between Social Security and SSI and much higher errors for imputed compared with reported benefit amounts for both programs.

Roemer (2002) used an exact match of the Detailed Earnings Record (DER) file from SSA with SIPP and CPS data for 1990, 1993, and 1996

to study the accuracy of annual estimates of wages from both surveys. He found net underreporting in SIPP along with reporting errors in both positive and negative directions: fully 75 percent of SIPP respondents in the study reported wages that differed from the DER amount by more than 5 percent, although the correspondence was substantially better when comparing percentile ranks—only 37 percent of SIPP respondents differed in percentile rank between the two sources. Stinson (2008) developed a model of measurement error in both SIPP and the DER by analyzing matched cases from the 1996 SIPP panel.

Use of Records for Reengineering SIPP

The ongoing effort to reengineer SIPP at the Census Bureau is making use of two administrative records projects. They are the SIPP "gold standard" project and planned matches of administrative records from Illinois, Texas, and possibly other states with survey data.

SIPP Gold Standard Project

Begun in 2002 with funding from the Census Bureau, SSA, and the National Science Foundation, the SIPP Gold Standard project aims to develop a rich resource for retirement income and disability analysis that can be widely used. The gold standard file, which can be analyzed only at the Census Bureau, includes variables from the 1990, 1991, 1992, 1993, and 1996 SIPP panels matched with IRS summary earnings records (annual FICA-taxable earnings, 1937-2003), IRS detailed earnings records (annual job-level data, uncapped, 1978-2003), and SSA benefits data through 2002 from the Master Beneficiary Record, Supplemental Security Income Record, and Payment History Update System 831 file (Abowd, 2007). A prototype public-use Version 4.1 of the gold standard file is available for researcher use as a beta test file through application to SSA, with the promise that the Census Bureau will run the researcher's application on the gold standard file for comparison purposes.
Version 4.1 contains all person-level SIPP and IRS variables from the Gold Standard Version 4.0, plus the benefit and type of benefit for a person's initial SSA benefit (if any), as of April 1, 2000. There are 16 replicates of the 4.1 file, representing four different sets of imputations for missing data and four different syntheses of selected variables for each set of imputations to protect confidentiality. Each replicate has a consistent panel weight for the civilian, noninstitutionalized population as of April 1, 2000.

The gold standard project refers to earnings histories as IRS records because they are provided to IRS as well as to SSA by employers. The very stringent confidentiality provisions of Title 26 and related regulations apply to the earnings data whether they are obtained from SSA or from IRS.

The ultimate goal of this work is to create public-use files that not only are useful for research on retirement and disability, but also protect against identifying SIPP respondents in the already available public-use files by applying state-of-the-art synthesizing techniques to selected variables. Such techniques perturb or alter specified variables according to specified statistical models that are designed to preserve key univariate and multivariate distributions to the extent possible. The challenge for synthetic techniques is whether they can fully protect confidentiality and at the same time permit inferences from the data that are as valid as would be obtained from a gold standard file. Initial evaluations of Version 4.1, which incorporated synthesized values for the vast majority of variables, showed excellent results for estimates of earnings histories for white men and women, not-quite-so-good results for estimates of earnings histories for black men and women, and underestimation of early retirement and also of retirement at age 65 compared with other years (Abowd, 2007).

SIPP reengineering staff are interested to learn about the experiences and reactions of researchers who work with Version 4.1 to determine if this approach would be acceptable to the SIPP user community as a way to provide SIPP public-use files that are enriched with administrative records data. To date, half a dozen researchers are working with the beta file, and SSA commissioned an in-depth evaluation of it, which was completed in spring 2009 (see Urban Institute/NORC Evaluation Team, 2009). The conclusions of this evaluation are discussed in "Adding New Variables" below.
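As an illustration only (this is not the Census Bureau's synthesis procedure, and the dollar amounts are invented), the core idea of synthesizing a sensitive variable can be sketched as fitting a simple model to the confidential values and releasing draws from it instead of the originals:

```python
# Illustration only -- not the Census Bureau's synthesis method.
# A sensitive variable is replaced with draws from a simple model
# (here, a normal distribution) fit to the confidential values, so the
# released records are not the originals but summaries stay similar.
import random
import statistics

random.seed(2009)  # fixed seed so the sketch is reproducible

confidential = [42_000, 38_500, 51_200, 47_800, 39_900, 55_100, 44_300, 49_000]

mu = statistics.mean(confidential)
sigma = statistics.stdev(confidential)
synthetic = [random.gauss(mu, sigma) for _ in confidential]

# Every released value is a model draw, not an observed amount.
print(len(synthetic))
```

Real synthesis models condition on many other variables so that multivariate relationships survive; the challenge noted in the text is precisely whether such models preserve enough of the joint distribution for valid inference.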
Illinois and Texas Matching Project

Work with Illinois and Texas program records began prior to the SIPP reengineering effort with a project to match 1999-2003 subsidized child care and TANF files from the two states with Census Bureau survey data as part of a study funded by the U.S. Department of Health and Human Services. In June 2008, the Census Bureau entered into agreements with both states to obtain administrative records for a Demonstration of Administrative Records Improving Surveys (DARIS) project. The goals of DARIS (see University of Texas and U.S. Census Bureau, 2008:3) are to "demonstrate methods of integrating data from surveys and administrative records, produce data sets that more accurately represent the target population's characteristics than survey data alone, conduct experiments in disclosure-proofing hybrid data sets, and document feasibility." The files provided by Illinois and Texas include the previously provided child care and TANF files for 1999-2003, extended through 2007, and food stamp participation files for 2004-2007.

The SIPP reengineering effort is taking advantage of the DARIS project to evaluate the quality of the responses obtained for a sample of 2004 SIPP panel members in Illinois and Texas who were interviewed in spring 2008 using a paper-and-pencil event history calendar to obtain information for calendar year 2007. This evaluation sample includes SIPP panel members who were dropped from the survey for budgetary reasons as well as continuing panel members who provided responses covering 2007. The administrative records data for 2007 for the evaluation sample are being compared with the responses in regular SIPP interviews covering 2007 (for the continuing panel members) and with the responses obtained in event history calendar test interviews (for continuing and dropped panel members). In a subsequent evaluation of an electronic event history calendar test, scheduled for early 2010, the Census Bureau hopes to compare the survey results with administrative records data not only from Illinois and Texas, but also from other states, including Maryland (with which the Census Bureau already has an arrangement for obtaining program records—see below) and California, Massachusetts, New York, and Wisconsin. We discuss the event history calendar tests in Chapter 4.

Although not directly related to SIPP, we note that two other projects have demonstrated the value of exact matches of state administrative records with survey responses for evaluation purposes. For one project, the Census Bureau exactly matched administrative records from Maryland's Client Automated Resource and Eligibility System (CARES), which contains records for beneficiaries of food stamps, TANF, and several other public assistance programs, with the 2001 test version of the American Community Survey. Analysis of the matched files documented significant underreporting of program participation in the 2001 ACS (Lynch et al., 2008; Taeuber et al., 2004).
The work was sponsored by the Economic Research Service, U.S. Department of Agriculture, and the Maryland Department of Human Resources. For the other project, the Census Bureau exactly matched administrative records from Illinois, Maryland, and Texas, including TANF records, child care records, and employment and earnings records, with the 2001 test version of the ACS. The analysis examined child care subsidy participation and the effects on employment among low-income families in the three states (Goerge, 2009). This work was funded by the Child Care Bureau of the Administration for Children and Families, U.S. Department of Health and Human Services.

Illinois, Maryland, and Texas, along with six other states (California, Florida, Georgia, Missouri, Ohio, and Washington), participate in the Administrative Data Research and Evaluation (ADARE) alliance. ADARE is a partnership among research organizations, which have developed data-sharing agreements with their respective states to obtain administrative records databases for the TANF, Unemployment Insurance, Workforce Investment Act, and other employment-related programs for employment- and welfare-related research and evaluation. ADARE is funded by the Employment and Training Administration, U.S. Department of Labor, and managed by the Jacob France Institute at the University of Baltimore (see http://www.ubalt.edu/jfi/adare/about-ADARE.cfm).

OTHER CENSUS BUREAU USES OF ADMINISTRATIVE RECORDS

The Census Bureau has increasingly made use of administrative records in other programs, and many of these uses are relevant to a reengineered SIPP. Three major programs are briefly described below: the Longitudinal Employer-Household Dynamics (LEHD) Program; the Small-Area Income and Poverty Estimates/Small-Area Health Insurance Estimates (SAIPE/SAHIE) Programs; and the Statistical Administrative Records System (StARS).

Longitudinal Employer-Household Dynamics

LEHD is a 10-year-old program, supported by the Census Bureau, the National Science Foundation, the National Institute on Aging, and the Sloan Foundation, which seeks to link the Census Bureau's household and business surveys in ways that can advance knowledge of the dynamic relationships of workers, jobs, households, and businesses. A component of LEHD is the Local Employment Dynamics Program, in which the Census Bureau obtains quarterly employment and earnings information from state employment security agencies and, in return, provides quarterly workforce indicators (QWI) for labor market areas in each state. The states collect employment and earnings from almost all employers in order to manage their unemployment insurance programs; the QWI data are developed by the Census Bureau by merging local demographic information with the employment and earnings information (see http://lehd.did.census.gov/led/led/led.html). As of early 2009, 47 states (excluding only Connecticut, Massachusetts, and New Hampshire), the District of Columbia, and Puerto Rico are or are about to be part of the LEHD Program through separate memoranda of understanding between each state and the Census Bureau.
Researchers have made extensive use of LEHD information linked across time and other LEHD data sets for innovative analyses that have enriched understanding of labor markets (see, e.g., Brown, Haltiwanger, and Lane, 2006). Not only does the LEHD Program provide information on employment and earnings that could potentially be used to evaluate or augment SIPP data, but also the history of the initiation and growth of the program from a handful of states to its present almost-complete coverage may hold lessons for a reengineered SIPP (see “Acquisition of Administrative Records, State Records” below).

Small-Area Income and Poverty Estimates/Small-Area Health Insurance Estimates

The Census Bureau, with support from other federal agencies, created the Small Area Income and Poverty Estimates Program in the mid-1990s to provide more current estimates of selected income and poverty statistics (e.g., poor, school-age children) than those from the most recent decennial census for small geographic areas. The program creates estimates for school districts, counties, and states using statistical models that incorporate data from the ACS (beginning with the 2005 estimates; previously, CPS data were used), together with administrative records data on food stamp recipients and federal income tax filers at the county level. The estimates are used in allocation of federal education funds to local jurisdictions. More recently, the Census Bureau began the Small-Area Health Insurance Estimates Program to provide state and county estimates of health insurance coverage using similar statistical models with CPS data and administrative records data for counties on food stamp recipients, federal income tax filers, and enrollees in Medicaid and CHIP.

SAIPE and SAHIE are examples of estimates that do not rely on a single source—survey or administrative records—but instead combine data from multiple sources in statistical models to reduce sampling and nonsampling errors in the estimates (see http://www.census.gov/did/www/saipe/ and http://www.census.gov/did/www/sahie/; see also National Research Council, 2001). It is possible that statistical models could be used to develop "best estimates" of selected key indicators, such as poverty rates, from SIPP (or other surveys), but we do not discuss this approach further.
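The SAIPE and SAHIE models are far more elaborate than this, but the core idea of combining a noisy direct survey estimate with an administrative-records-based prediction can be sketched as a precision-weighted composite; all numbers below are invented:

```python
# Sketch of a precision-weighted composite estimate in the spirit of
# small-area models: the direct survey estimate gets more weight when
# its sampling variance is small relative to the model prediction's.
# All numbers are invented for illustration.

def composite(direct, var_direct, model_pred, var_model):
    w = var_model / (var_direct + var_model)  # weight on the direct estimate
    return w * direct + (1 - w) * model_pred

# Hypothetical county poverty rate: noisy survey estimate of 18 percent,
# administrative-records-based prediction of 14 percent.
est = composite(direct=0.18, var_direct=0.0016, model_pred=0.14, var_model=0.0004)
print(round(est, 3))  # → 0.148: shrinks toward the model prediction
```

Because the direct estimate here is four times as variable as the model prediction, it receives only one-fifth of the weight; for a large county with a precise survey estimate, the weights would reverse.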
Statistical Administrative Records System

In the early 1990s the Census Bureau began a program to develop an integrated set of administrative records that could be used for a variety of purposes to reduce reporting burden and to minimize the cost of obtaining needed information. The bureau inventoried potentially available administrative records files and created an administrative records research staff. The staff built a prototype of a combined and unduplicated set of administrative records (StARS 1999) that would include basic demographic information (age, race, ethnicity, and gender) similar to the decennial census short-form content. One of the 2000 census experiments compared census counts with estimates of population and demographic characteristics for census tracts and blocks in five counties derived from the 1999 StARS (National Research Council, 2004b:199-202).

Following the 2000 census, Census Bureau staff developed a model for imputing race and ethnicity from 2000 census data to improve on the

available information in the Census Numident (numerical identification) file, which in turn is used to input demographic information to StARS. The Census Numident file is an edited version of the SSA Numident file that stores information contained in applications for Social Security numbers (SSNs), including the name of the applicant, place and date of birth, and other information for all SSNs since the first number was issued in 1936. In addition, Census Bureau staff built a person validation system (PVS) that can match and verify records containing SSNs against the Census Numident file or, if the records do not contain SSNs, determine a valid SSN either by matching on address against the geokey reference file or by matching on name and date of birth against the name reference file. The geokey reference file is generated from StARS and contains all addresses for each SSN; the name reference file is also generated from StARS and contains all combinations of alternate names and dates of birth for each SSN. The PVS replaces the SSNs with person identification keys (PIKs) to enhance the level of confidentiality protection.

The PVS system is very important for SIPP, which stopped collecting SSNs midway through the 2004 panel because of increasingly poor response. An evaluation of the PVS using CPS 2001 records—47 percent of which lacked SSNs—found that the PVS achieved a verified matching rate for the total CPS sample of 93 percent, using address, name, and date of birth, compared with a rate of 94 percent when SSNs were also used in the match when available (Wagner, 2007:slide 14—the match excluded CPS records with no name and refusals).
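The matching cascade described above can be sketched as follows. The lookup tables, field names, and identifiers are all invented, and the real PVS is far more sophisticated than a sequence of exact lookups; the sketch only shows the fallback order and the substitution of PIKs for SSNs:

```python
# Hedged sketch of the PVS-style cascade: verify an SSN first, fall back
# to address, then to name + date of birth, and return a person
# identification key (PIK) rather than the SSN. All data are invented.

NUMIDENT = {"123-45-6789": "PIK001"}              # verified SSN -> PIK
GEOKEY   = {"1 Main St":   "PIK002"}              # address -> PIK
NAME_DOB = {("DOE, JANE", "1970-01-01"): "PIK003"}  # (name, dob) -> PIK

def assign_pik(record):
    ssn = record.get("ssn")
    if ssn and ssn in NUMIDENT:
        return NUMIDENT[ssn]
    addr = record.get("address")
    if addr and addr in GEOKEY:
        return GEOKEY[addr]
    key = (record.get("name"), record.get("dob"))
    return NAME_DOB.get(key)                      # None if no match at any step

print(assign_pik({"ssn": "123-45-6789"}))                      # → PIK001
print(assign_pik({"address": "1 Main St"}))                    # → PIK002
print(assign_pik({"name": "DOE, JANE", "dob": "1970-01-01"}))  # → PIK003
```

The point of the substitution is that downstream files carry only PIKs, so linked data sets can be joined to one another without any file retaining the SSN itself.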
Once a set of records, such as SIPP survey responses, has been matched via the PVS, it is then possible to use the resulting PIKs to match the survey records with other records that the Census Bureau has acquired as part of its initiative to integrate and make better use of administrative records. The core StARS, which is designed to contain short-form-only content, at present includes over 300 million person records and over 150 million address records developed by merging and unduplicating seven national files. These files are IRS 1040 records, IRS 1099 records, and Medicare Part B records, along with two sets of records from the U.S. Department of Housing and Urban Development, a set of records from the Indian Health Service, and the Selective Service System registration file. In addition, the Census Bureau regularly acquires Master Beneficiary Record files from SSA for survey records to which it has assigned PIKs (and could acquire the complete files if so desired), Medicaid files from the Centers for Medicare and Medicaid Services, and SSI record files from SSA, along with quarterly wage records from states that participate in the LEHD Program and counts of food stamp recipients by county. Applications of StARS and other administrative records acquired by the Census Bureau to date include

• research on using StARS records to assign age, race, gender, and Hispanic origin for census respondents who fail to report one or more of these characteristics;
• research on using StARS records to determine the demographic characteristics of households that do not respond to the CPS;
• work to develop near-real-time population estimates for areas that experienced disasters, such as a devastating hurricane—for this purpose, the Census Bureau acquired the U.S. Postal Service's National Change of Address File and the Federal Emergency Management Agency's emergency management and flood insurance files; and
• work to match Medicare and Medicaid files to CPS ASEC and National Health Interview Survey data to understand the reasons for discrepancies in survey reports of health insurance coverage under these programs.

The work on StARS and the other administrative records acquired by the Census Bureau to date represents an excellent start on building the infrastructure to support widespread use of administrative records in Census Bureau programs and in exploring uses of different kinds of records. The bureau's administrative records program, both now and in the future as it adds new sets of records and analysis capabilities, will be an important resource for applications of administrative records in a reengineered SIPP. Beginning with the acquisition of records through data linkage, types of uses, confidentiality protection, and data access, we address issues to consider for SIPP's use of records and outline a goal-oriented approach to identifying the most fruitful applications of administrative records in SIPP for the short and longer terms.

ACQUISITION OF ADMINISTRATIVE RECORDS

The first hurdle for the use of administrative records in a reengineered SIPP is to determine the feasibility and costs of acquiring records from agencies that have custody of them.
This hurdle turns out to be much higher for records held by state agencies than for records held by federal agencies.

Federal Records

Through its StARS Program, described in the preceding section, the Census Bureau already has arrangements in place to acquire, update, link, unduplicate, and evaluate information from a large number of administrative records systems from federal agencies. These records provide national

coverage for the programs to which they apply. They vary in timeliness. For example, the 2008 StARS file contains the following:

• IRS 1040 records filed any time in 2008, pertaining to 2007 income, which are provided to the Census Bureau in two waves—in October for weeks 1-39 and in January for weeks 40-52.
• IRS 1099 records filed in weeks 1-41 of 2008, pertaining to 2007 income (the bureau does not acquire 1099 records filed in weeks 42-52).
• Medicare Part B enrollment records filed any time in 2008.
• U.S. Department of Housing and Urban Development, Indian Health Service, and Selective Service System records provided to the Census Bureau in May 2008.

SSA files are provided to the Census Bureau with very little delay. The longest time lag is for Medicaid files, which the Census Bureau does not receive from the Centers for Medicare and Medicaid Services (CMS) until 3 years after the reference date.

In addition to the files enumerated above, the Census Bureau is seeking to acquire files from the U.S. Department of Veterans Affairs (VA), and it has access to, but has not used, the Free Application for Federal Student Aid (FAFSA) files from the U.S. Department of Education. The bureau to date has not attempted to, but presumably could, obtain records for Medicare Part D (prescription drug coverage).

The Census Bureau's program to acquire federal administrative records demonstrates a high level of professionalism and competence in negotiating data acquisition and use agreements specific to each provider agency; developing and refining procedures for accurate matching, unduplication, and imputation of missing demographic characteristics; and building systems to enhance the level of confidentiality protection. The research and development work that underlies the StARS Program should greatly facilitate the reengineering process for SIPP, in both the short and longer terms.
In concert with its work to develop StARS and associated records as a Census Bureau–wide resource, we encourage the bureau to systematically outline a plan for acquiring additional federal agency administrative records that are germane to SIPP's goal of providing detailed information on the short-term dynamics of economic well-being for families and households, including employment, earnings, other income, and program eligibility and participation. Acquisition and use of the VA and Medicare Part D files mentioned above should be part of the bureau's plan. (The FAFSA files are of limited use because of the limited nature of the population that applies for this aid.) Another possibly

useful source of information is the Federal Case Registry of Child Support Orders maintained by the Administration for Children and Families (ACF) in the U.S. Department of Health and Human Services, although access to this file is difficult to obtain (see http://www.acf.hhs.gov/programs/cse/newhire/fcr/fcr.htm).

The ACF Office of Child Support Enforcement also maintains the National Directory of New Hires (NDNH), which was mandated by the 1996 Personal Responsibility and Work Opportunity Reconciliation Act to assist state child support agencies in locating parents and enforcing child support orders. The NDNH includes quarterly reports from states of new hires in the state (information reported on W-4 forms by employers), quarterly reports of employment and earnings from state workforce agencies (the same data obtained by the Census Bureau's LEHD Program), and quarterly reports from state workforce agencies of unemployment insurance claimants. Federal agencies also report new hires and employment and earnings to the NDNH. The authorizing legislation lists several entities that are entitled to request NDNH information for specific purposes, such as the secretary of education for collection of student loans. The Office of Child Support Enforcement requires a memorandum of understanding and cost reimbursement for each request of NDNH data. The Census Bureau is not listed as an authorized user of the NDNH; however, "researchers/others" may request NDNH information for research purposes "found by the Secretary of HHS to be likely to contribute to achieving the purposes of Part A or Part D of the Social Security Act" (see http://www.acf.hhs.gov/programs/cse/newhire/library/ndnh/background_guide.htm).
The Census Bureau should investigate this source to determine if it could provide employment, earnings, and unemployment benefits information for all states for use in SIPP and other bureau programs without having to negotiate with individual states.

For purposes of a reengineered SIPP, some federal records are more useful than others, not only because they are available with a relatively short time lag, but also because the provisions governing their use are more flexible. In contrast, some federal records, such as IRS records, are very tightly restricted, so that they could be used indirectly but not directly in SIPP. Even for indirect uses, such as evaluation, the available federal records are not a comprehensive resource for SIPP. They do not cover some important sources of income, such as income from some state-administered programs, as well as detailed components of asset income, including dividends and interest by specific asset types (e.g., savings accounts versus money market funds).

State Records

The picture for state administrative records is much less promising. At present, the Census Bureau has records for selected programs for specific

years for a few states, including TANF records for Illinois, Maryland, and Texas; food stamp records for Illinois, Maryland, Minnesota, and Texas; general assistance records for Illinois and Maryland; and child care subsidy records for Illinois, Maryland, and Texas. These records have all been acquired for specific research and evaluation purposes (described above). In addition, the Census Bureau has quarterly employment and earnings records from 47 states and the District of Columbia on an ongoing basis through the LEHD Program. The Census Bureau can also in some instances of state-administered programs (e.g., food stamps) obtain counts of recipients by state or county.

To determine the feasibility of acquiring state agency administrative records for a reengineered SIPP, the panel commissioned a study of state laws on confidentiality and access for all 50 states for the TANF, Medicaid, UI, WC, and other cash benefit (principally general assistance) programs. The study was able to find applicable statutes (or determine that the state had no statutes about confidentiality and access for any of these programs) for all 50 states. The study classified states into three categories (Sylvester, Bardin, and Wann, 2008:5; the category names are the panel's):

1. Ready access—states or state agencies for which the authors could either find (a) specific enactments empowering a state agency to provide access to program records for purposes that could include their use for a reengineered SIPP or (b) no statute or administrative section that applied to the confidentiality or use of program records.
2.
Restricted access—states or state agencies for which the authors could find specific enactments allowing the release of records for purposes that could include their use for a reengineered SIPP but that contain codified restrictions on access, disclosure, or use that the Census Bureau would need to agree to in a memorandum of understanding.
3. No access—states or state agencies for which the authors could find either (a) general (constitutional, judicial, or statutory) laws prohibiting access to state-held program records for a purpose such as their use for a reengineered SIPP or (b) specific laws prohibiting a state agency from releasing program records for a purpose such as their use for a reengineered SIPP.

Table 3-1 shows how Sylvester, Bardin, and Wann (2008) classified states among categories 1, 2, and 3 for four programs the authors studied—TANF, Medicaid, UI, and other assistance (e.g., general assistance).

TABLE 3-1  Classification of State Legal Codes Regarding Access to Records of Four Programs—Medicaid, Temporary Assistance for Needy Families (TANF), Unemployment Insurance (UI), and Other Cash Benefits—for Possible Use by the Census Bureau for SIPP

State Category 1 Programs Category 2 Programs Category 3 Programs
Alabama All four programs
Alaska All four programs
Arizona All four programs
Arkansas All four programs
California All four programs
Colorado All four programs
Connecticut UI* Medicaid, Other cash benefits, TANF
Delaware All four programs
Florida All four programs
Georgia All four programs
Hawaii UI Medicaid, Other cash benefits, TANF
Idaho UI Medicaid, Other cash benefits, TANF
Illinois Medicaid, UI Other cash benefits, TANF
Indiana Other cash benefits* Medicaid, TANF, UI
Iowa All four programs
Kansas Medicaid* Other cash benefits, TANF, UI
Kentucky All four programs
Louisiana All four programs
Maine All four programs
Maryland UI* Medicaid, Other cash benefits, TANF
Massachusetts All four programs
Michigan Other cash benefits, TANF UI Medicaid
Minnesota All four programs
Mississippi All four programs
Missouri UI Medicaid, Other cash benefits, TANF
Montana All four programs
Nebraska Other cash benefits,* TANF,* UI* Medicaid
Nevada Other cash benefits, TANF* UI Medicaid
New Hampshire UI* Medicaid, Other cash benefits, TANF
New Jersey Medicaid, Other cash benefits, TANF UI
New Mexico Other cash benefits, TANF, UI Medicaid
New York All four programs*
North Carolina UI* Medicaid, Other cash benefits, TANF
North Dakota UI* Medicaid, Other cash benefits, TANF
Ohio UI* Medicaid, Other cash benefits, TANF
Oklahoma All four programs
Oregon UI Medicaid, Other cash benefits, TANF
Pennsylvania Other cash benefits, TANF, UI Medicaid
Rhode Island All four programs*
South Carolina UI* Medicaid, Other cash benefits, TANF
South Dakota Other cash benefits,* TANF,* UI* Medicaid
Tennessee All four programs*
Texas Other cash benefits,* TANF,* UI* Medicaid
Utah Medicaid* Other cash benefits, TANF, UI
Vermont All four programs*
Virginia Medicaid Other cash benefits, TANF, UI
Washington UI Medicaid, Other cash benefits, TANF
West Virginia All four programs
Wisconsin TANF* Other cash benefits, UI Medicaid
Wyoming UI* Medicaid, Other cash benefits, TANF

NOTES: Other cash benefits include such programs as general assistance; the categorization of UI also applies to workers' compensation.
Category 1 includes states or state agencies with (a) specific enactments empowering a state agency to provide access to program records for purposes that could include their use for a reengineered SIPP or (b) no statute or administrative section that applied to the confidentiality or use of program records.
Category 2 includes states or state agencies with specific enactments allowing the release of records for purposes that could include their use for a reengineered SIPP but with codified restrictions on access, disclosure, or use that the Census Bureau would need to accept in a memorandum of understanding.
Category 3 includes states or state agencies with either (a) general (constitutional, judicial, or statutory) laws prohibiting access to state-held program records for a purpose such as their use for a reengineered SIPP or (b) specific laws prohibiting a state agency from releasing program records for a purpose such as their use for a reengineered SIPP.
* Indicates that Sylvester, Bardin, and Wann (2008) could not find any statute about confidentiality or use of program records.
SOURCE: Classification by Sylvester, Bardin, and Wann (2008:8-9, 12-13, 15-17); panel staff resolved several inconsistent classifications by examining the statutory language provided in Sylvester, Bardin, and Wann (2008:Appendix C).

In total, with 50 states and 4 programs, there are 200 state-program combinations. Of these, the study classified 45 state-program combinations (in 22 states) in Category 1 (ready access)—because the state either explicitly permits or does not prohibit the use of program records by an agency such as the Census Bureau for statistical purposes (mainly the latter). The study classified another 42 state-program combinations (in 18 states) in Category 2 (restricted access) because the state would permit the use of program records by the Census Bureau under more or less restricted conditions. Finally, the study classified 113 state-program combinations (in 38 states) in Category 3 (no access) because the state generally or specifically prohibits the use of program records by an agency such as the Census Bureau for statistical purposes.

The classifications in Table 3-1 represent the authors' judgments based on their review of state constitutions and legal codes, excluding regulations, executive orders, and other possible kinds of interpretations that might allow access to records by the Census Bureau for statistical purposes. Indeed, members of the panel are aware of instances in which records from some of the states listed in the "no access" category have been used for research, although some of these instances may have applied to records not covered by Sylvester, Bardin, and Wann (e.g., food stamps) and to uses within the state and not by a federal agency.

Nonetheless, it would appear from the analysis of Sylvester, Bardin, and Wann (2008) to be impossible for the Census Bureau to acquire records for all programs of interest for all 50 states and difficult for it to acquire records for more than a handful of states. Not only do 38 states in their analysis apparently preclude access to records for at least 1 of the 4 programs studied (Category 3), but another 18 states place restrictions on access (Category 2).
Some of the legislative provisions for Category 2 states are relatively benign, such as requiring access to be "in the public interest" or for official purposes. The legislative provisions for other states in this category are more onerous, such as requiring advance notification and consent from individuals in a program.

(The categorization shown in Table 3-1 for UI also applies to workers' compensation—Sylvester, Bardin, and Wann treated UI and WC as a single program in their review because both programs are administered by the same office in each state.)

(For example, Hotz, Mullin, and Scholz (2003, 2005) have analyzed matches of California public assistance, unemployment insurance, and tax records, but the matches were performed by the California Franchise Tax Board, which delivered aggregated results to the researchers. The ADARE alliance provides access to state records by authorized researchers but not by federal agencies.)

Finally, of the 22 states that would appear able to provide records to the Census Bureau for a reengineered SIPP for at least 1 of the 4 programs studied (Category 1), only 2 states (Arizona for all 4 programs and Michigan for TANF and other cash benefits) have statutes

that explicitly allow for data sharing with a federal agency. The remaining states are in Category 1 because Sylvester and his coauthors could not find any statutes pertaining to confidentiality and access, yet such states may well have regulations that limit access.

The situation is far from hopeless, however. One pattern that emerges from the data collected by Sylvester and his coauthors is that access to UI records—and perhaps WC records—may be possible in many states, either because of a statute that permits access (albeit often with restrictions) or because there appears to be no applicable statute that would prohibit access. Moreover, many states are statutorily allowed to provide records to other states or even federal agencies for purposes of program administration. Although the Census Bureau is not a program administration agency, the data from SIPP could be useful to states for program evaluation and improvement. Just as the Census Bureau provides quarterly workforce indicators in return for access to state employment security agency employment and earnings records for the LEHD Program, it might be possible to develop an appropriate quid pro quo that would benefit state agencies that provide records for a reengineered SIPP (see "Strategic Planning for Acquisition" below).

Finally, it is important to note that the distribution of program benefits is not uniform across the states, which means that coverage of a significant proportion of the caseload for such programs as TANF and food stamps could be obtained by acquiring records from a relatively small number of states. For example, the TANF records for the two states of Illinois and Texas currently available to the Census Bureau cover about 8 percent of TANF recipients nationwide.
If it were possible to acquire TANF records from just five more states—California, Michigan, New York, Ohio, and Pennsylvania—coverage could be extended to one-half of TANF recipients nationwide, greatly facilitating indirect uses of administrative records in SIPP, such as evaluation and improved imputation procedures.

Strategic Planning for Acquisition

We applaud the Census Bureau's work on acquiring federal administrative records, which have great potential value for a reengineered SIPP in addition to many other bureau programs. The Census Bureau should continue that work and seek to acquire additional federal records to the extent possible, such as VA records. For federal records, the costs of acquisition, matching, and editing appear to be low compared with the benefits and have the advantage that they can be spread over many Census Bureau programs.

In contrast, the costs for the Census Bureau in attempting to acquire and use state program records would be substantial. These costs would

include the time and effort to make contact with appropriate state agencies, verify the provisions of state statutes and regulations that pertain to confidentiality and data access, and develop acceptable memoranda of understanding. In addition, there would be costs, subsequent to acquiring records, to clean and edit the data, which would probably necessitate time-consuming interactions with state agency staff, or with research organizations that are knowledgeable about the state files, to answer questions and resolve discrepancies. Moreover, some attempts to acquire records would be likely to come to naught, even with the expenditure of substantial time and resources to develop a mutually acceptable memorandum of understanding for data acquisition.

Given these challenges, the Census Bureau will need to think strategically about acquisition of state records and develop a well-thought-out plan for acquisition in the short and longer terms. By "think strategically," we mean that the Census Bureau will need to develop priorities for acquisition of state records in light of the goals of SIPP and the importance of different kinds of program records for those goals. Three criteria for establishing priorities are the importance of the income source for lower income households, particularly in times of economic distress; the relative ease of acquiring the records; and the ability to cover a large proportion of the program caseload by acquiring records from a relatively small number of states.

As an example, consider UI benefits. Subsequent to the enactment of welfare reform in 1996, more low-income single mothers with children entered the workforce and so were able to turn to UI benefits when they lost a job. By 2002, more single mothers with children received UI than TANF benefits (Assistant Secretary for Planning and Evaluation, 2005:Figure C).
Such findings, coupled with the importance of being able to analyze the contribution of UI benefits to ameliorating recessionary economic conditions and the fact that UI records may be easier to obtain than other kinds of state records, suggest that UI records could be a target of opportunity for the Census Bureau. Moreover, the relationships built by the LEHD Program with state employment security agencies may facilitate obtaining not only employment and earnings records, as is done in the LEHD Program, but also UI records. It may also be possible, as noted above, to acquire UI records for all states from the federally maintained National Directory of New Hires, which could be an efficient, low-cost source for acquiring these data, providing the Census Bureau could obtain permission to use the data for improving SIPP.

In contrast, a program such as WC contributes less to aggregate income than the UI Program. Moreover, duration of benefit receipt tends to be longer, while aggregate amounts of benefits paid show no particular trends over time. These factors suggest that obtaining WC records is of lower priority for SIPP's primary purpose of supporting policy analysis and research on intrayear dynamics of program participation and income. Of course, if it were readily possible to acquire WC records at the same time and under the same provisions as UI records, the Census Bureau should not hesitate to do so.

In addition to setting priorities among program records for acquisition, the Census Bureau will need to take account of acquisition issues in determining the types of uses to which it will put the records it acquires. In the short term, promising to restrict use of records to indirect uses, such as evaluation and perhaps improvement of imputation methods, could facilitate acquisition because the threats to confidentiality would be substantially lower than if the records were to be used directly in a reengineered SIPP. In the longer term, it may be possible to move toward direct uses once ongoing relationships have been built with state agencies and by developing ways to provide states with useful information, as has been done in the LEHD Program. For example, sample size might be added for states that are very cooperative about providing program records, so that the SIPP data for those states would be statistically reliable for analysis at the state level. Adding sample could significantly increase SIPP's costs, but there could be substantial benefits of higher quality data given that the survey historically produces net underestimates of many sources of income.

Overall, the reengineering of SIPP will need to proceed on the assumption that significant use of state administrative records cannot be part of the plan in the short and medium term. Nevertheless, a strategy for the acquisition of high-priority types of state records and their use for such purposes as evaluation of SIPP data quality should be developed and implemented as resources permit.
In addition, the reengineering plan should envision a wide variety of uses of federal records.

LINKAGE OF ADMINISTRATIVE AND SURVEY RECORDS

Many applications of administrative records in a reengineered SIPP require matching of the administrative and survey data for individuals and households. Fellegi and Sunter (1969) provided the first formal mathematical model for probabilistic record linkage techniques, building on ideas introduced by Newcombe and colleagues (1959). Beginning in the late 1980s, Census Bureau staff have been leaders in the development and continuous improvement of computer-based record linkage software of the kind that underlies the StARS database (see Winkler, 2006, for a review article on research and development in the record linkage field). Census Bureau staff and others have addressed such challenges as standardizing names and addresses across data files to reduce rates of false negatives (failure to find a match when one exists); developing algorithms to compare strings of characters (e.g., names) among data files that allow for typographical errors in one or both files (even after standardization) being matched; forcing one-to-one matches to reduce rates of false positives (matching two records that are not for the same individual); developing methods to block or group records in ways that make the searching and matching processes more efficient; and developing methods to use auxiliary data files to improve the match between two files.

The Census Bureau clearly knows how to conduct efficient, high-quality matches of data files, even when SSNs are not available, as has been the case with SIPP responses since about 2006 (midway through the 2004 panel). While never perfect, such matching has been shown to achieve good results. For example, as noted above, a match of the 2001 March CPS with the Numident file using the person verification system was successful 94 percent of the time using SSNs (available for about 53 percent of the CPS records) and 93 percent of the time using only name, address, and date of birth. (The universe for matching excluded refusals and records lacking a name.) Extensive review estimated the false match rate to be very low—between 0.13 and 0.20 percent. The estimated false nonmatch rate was higher—4.65 percent (Wagner, 2007).

Another evaluation compared the demographic composition of records from the 2001 ACS that matched and did not match the Numident file on the basis of name, address, and date of birth (SSNs are not collected in the ACS). The matched cases (91 percent of the total eligible for matching) were very similar in distribution by gender, race, Hispanic origin, age group, and income group to the full ACS file. The not-matched cases (9 percent of the total) differed significantly in composition: Compared with the full ACS file, the not-matched cases included higher proportions of minorities, younger people, and lower income groups.
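The linkage steps just described (name standardization, blocking, and typo-tolerant string comparison) can be illustrated with a toy example. This sketch uses the standard-library difflib similarity ratio in place of the specialized comparators (such as Jaro-Winkler) and probabilistic Fellegi-Sunter weights used in production systems; all records, field names, and thresholds are hypothetical.

```python
# Toy record linkage: standardize, block, then compare within blocks.
# Illustrative only -- production systems use specialized string
# comparators and probabilistic (Fellegi-Sunter) match weights.
from difflib import SequenceMatcher

def standardize(name):
    """Uppercase and strip punctuation so "O'Brien, Jr." becomes "OBRIEN JR"."""
    kept = "".join(c for c in name.upper() if c.isalpha() or c.isspace())
    return " ".join(kept.split())

def block_key(record):
    """Group records by ZIP code and first letter of surname so that only
    plausible pairs are compared, shrinking the search space."""
    return (record["zip"], standardize(record["surname"])[:1])

def similarity(a, b):
    """Similarity in [0, 1] that tolerates typos and punctuation."""
    return SequenceMatcher(None, standardize(a), standardize(b)).ratio()

def link(file_a, file_b, threshold=0.85):
    """Return (index_a, index_b) pairs whose dates of birth match exactly
    and whose surnames agree closely within a block."""
    blocks = {}
    for j, rec in enumerate(file_b):
        blocks.setdefault(block_key(rec), []).append(j)
    pairs = []
    for i, rec in enumerate(file_a):
        for j in blocks.get(block_key(rec), []):
            other = file_b[j]
            if rec["dob"] == other["dob"] and \
               similarity(rec["surname"], other["surname"]) >= threshold:
                pairs.append((i, j))
    return pairs

survey = [{"surname": "Johnson,", "dob": "1970-01-15", "zip": "60601"}]
admin = [{"surname": "JOHNSON", "dob": "1970-01-15", "zip": "60601"},
         {"surname": "JOHNSTON", "dob": "1980-03-02", "zip": "60601"}]
print(link(survey, admin))  # [(0, 0)] -- typo-tolerant match within the block
```

Blocking trades a small risk of false negatives (true matches split across blocks) for a large gain in efficiency, which is why production systems typically run several passes with different blocking keys.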
These results could reflect not only that minorities, younger people, and lower income groups are less likely to have SSNs, but also that the information on name, address, and date of birth for these groups is more likely to differ between the Numident and other files.

Matching errors should not be ignored, particularly false negatives, which understate the true match rate and negate the possibility of using administrative records for people who should be but are determined not to be a match. However, the error rates evident in the evaluations of which we are aware appear to be smaller than the missing data rates that surveys often experience in reports of income, employment, and other characteristics. We encourage the Census Bureau to view the errors in administrative records and in matches of them with survey records in the same manner that the bureau and other statistical agencies have commonly viewed nonresponse and reporting errors in surveys—namely, as problems to address but not a brick wall. Some of the same techniques that are used to evaluate survey

reporting errors, such as reinterviews of samples of respondents and efforts to track down nonrespondents, could well be applied to evaluating and perhaps correcting data quality problems with administrative records and matching.

INDIRECT USES

We now come to the question of the kinds of uses that administrative records can play in a reengineered SIPP. We begin with indirect uses, in which the data from administrative records never replace or add to the data in SIPP public-use microdata files. The advantage of indirect uses of administrative records is that they do not increase (in the case of evaluation), or only minimally increase (in the case of their use in imputation models), the risk of identification of SIPP respondents in public-use files. Consequently, these uses do not necessitate much if any in the way of additional confidentiality protection procedures. The disadvantage is that indirect uses of administrative records may not improve data quality to the extent possible with direct use.

Aggregate Comparisons

The history of SIPP's uses of administrative records outlined above notes several examples of using aggregate estimates from administrative records to evaluate corresponding aggregate estimates from the survey, such as aggregate benefits received from an assistance program or average monthly participation in a program. This use of administrative data is relatively inexpensive; the major difficulty lies in making appropriate adjustments to the administrative data estimates or survey estimates or both to make them as comparable as possible with regard to the universe of people covered, the time period covered, and the definition of participation and income. Also, this use of administrative data, given that comparisons are at the aggregate level, is only the starting point of work to evaluate and improve the quality of the survey data.
Yet aggregate comparisons are an important first step, one which we think the Census Bureau should put on a regular schedule and routinize to the extent possible. The reason that aggregate comparisons should be made on a regular basis is evident from examining some of the comparisons that have been performed. For example, Meyer, Mok, and Sullivan (2009:Table 2) found that SIPP estimates of aggregate dollar benefits from AFDC and its successor TANF as a ratio of program estimates have fluctuated over time, with a pronounced downward trend beginning in 1998. In contrast, SIPP estimates of average monthly participation in AFDC/TANF have not shown a time trend up or down (Meyer, Mok, and Sullivan, 2009:Table 11). These

disparate findings suggest avenues of research for the Census Bureau to explore, such as evaluating individually matched records for the states that have provided them to the bureau and engaging in questionnaire design research to try to make the reporting of benefit amounts at least as accurate as the reporting of program participation.

Most aggregate-level comparisons have to be made at the national level for the population as a whole, given the limitations of available data. Of course, when the Census Bureau has access to 100 percent of program records, as in the case of such federal programs as SSI, it can perform comparisons at any level of aggregation that is desired, including at the individual record level (see "Individual-Level Comparisons" below). For state-administered programs, it may be possible in some instances to obtain more disaggregated estimates for comparison. For example, the Food and Nutrition Service provides state and county counts of monthly food stamp recipients to the Census Bureau for its SAIPE/SAHIE programs. These estimates could be used to develop ratios of monthly participants in SIPP versus the monthly program counts by geographic area that could illuminate differences in reporting patterns that warrant research. For example, ratios of SIPP reporting to administrative totals in central-city counties (e.g., Chicago, Los Angeles) may differ from the ratios in suburban and rural counties. The SIPP data would need to be combined to form groups of counties for which SIPP estimates were sufficiently reliable for comparison purposes. (Aggregating more than 1 year of data could be helpful in this regard.) The Employment and Training Administration in the U.S. Department of Labor makes available weekly counts of unemployment benefit claims by state, which could be analyzed in a similar fashion (see http://workforcesecurity.doleta.gov/unemploy/finance.asp).
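The kind of geographic ratio comparison described above can be sketched in a few lines. All names and numbers below are invented for illustration; a real comparison would use survey weights from SIPP files and county-level program counts after the comparability adjustments discussed in this section.

```python
# Hypothetical sketch: ratios of survey-weighted monthly program participants
# to administrative program counts, computed for groups of counties.
# Data are invented; "county group" stands in for the combined-county strata
# the text says would be needed for reliable SIPP estimates.

def reporting_ratios(survey_records, admin_counts):
    """survey_records: list of (county_group, weight) pairs for respondents
    who report receiving benefits; admin_counts: dict mapping county_group to
    the administrative monthly participant count. Returns a dict mapping each
    county group to the ratio of weighted survey count to administrative count."""
    weighted = {}
    for group, weight in survey_records:
        weighted[group] = weighted.get(group, 0.0) + weight
    return {g: weighted.get(g, 0.0) / admin_counts[g] for g in admin_counts}

# Toy example in which central-city counties report at a lower rate.
survey = [("central_city", 1500.0)] * 40 + [("suburban", 1500.0)] * 30
admin = {"central_city": 100_000, "suburban": 50_000}
ratios = reporting_ratios(survey, admin)
# central_city: 40 * 1500 / 100,000 = 0.6; suburban: 30 * 1500 / 50,000 = 0.9
```

A ratio well below 1.0 for one group of counties but not another, as here, would flag the kind of differential reporting pattern the text suggests is worth research.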
Some state-administered programs, such as TANF, food stamps, and unemployment insurance, also provide periodic reports to the relevant federal agencies on characteristics of benefit recipients, most often drawn from samples of state administrative records (see, for example, http://aspe.hhs.gov/HSP/alt-outcomes00/app_d.htm and http://workforcesecurity.doleta.gov/unemploy/chariu.asp). These statistics could be useful for comparison purposes, although they would be subject to sampling error and also nonsampling error, in that reports of characteristics of program caseloads, such as other sources of income, may be less accurate than benefit amounts. Nonetheless, some characteristics in the administrative statistics, such as type of TANF recipient unit—single adult and children, two-parent family, or children only—may be deemed accurate enough to be useful for comparison with SIPP estimates.

To facilitate a program of regular aggregate comparisons, which should include not only SIPP, but also the CPS and perhaps other surveys that ask about income and program participation, the Census Bureau should

explore with the Office of Management and Budget Statistical and Science Policy Office the establishment of an interagency technical working group to support the effort. Staff from such agencies as the Administration for Children and Families (which oversees TANF), the Food and Nutrition Service (which oversees the food stamp, school meal, and WIC programs), the Internal Revenue Service (which oversees income reported on tax forms), and other agencies could be detailed to work with Census Bureau staff to develop the most comparable estimates possible for their programs. In this way, aggregate comparisons could be prepared on a recurring basis that would make use of the program knowledge in the agencies and the survey research knowledge in the Census Bureau to ensure the highest quality and most useful comparisons. Such comparisons, regularly disseminated, should be very useful to policy analysts and other data users in the public and private sectors. The members of the interagency technical working group could also contribute to the use of administrative records for other purposes, such as evaluating and improving imputation models for missing data. (See Chapter 4 for a related recommendation on obtaining assistance from researchers and policy analysts with regard to aggregate comparisons, imputation models, and other applications of administrative records.)

Individual-Level Comparisons

In addition to aggregate comparisons, individual-level comparisons of matched administrative and survey records are important to carry out because they make it possible to estimate the extent of gross errors—that is, overreporting and underreporting—whereas aggregate comparisons make it possible to estimate only net errors.
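The distinction between net and gross error can be made concrete with a small sketch. The matched pairs below are invented; in practice they would come from linking SIPP responses to program records for the same individuals.

```python
# Sketch (with invented data): net vs. gross reporting error for a binary
# participation indicator, computed from individually matched survey and
# administrative records. Overreports and underreports can offset each other
# in the net figure but both count toward the gross figure.

def error_rates(pairs):
    """pairs: list of (survey_report, admin_value) tuples, each value 0 or 1.
    Returns (net_error, gross_error) as fractions of all matched cases."""
    n = len(pairs)
    over = sum(1 for s, a in pairs if s == 1 and a == 0)   # overreports
    under = sum(1 for s, a in pairs if s == 0 and a == 1)  # underreports
    return (over - under) / n, (over + under) / n

# 10 matched cases: 2 underreports and 1 overreport among them.
pairs = [(1, 1)] * 5 + [(0, 0)] * 2 + [(0, 1), (0, 1), (1, 0)]
net, gross = error_rates(pairs)
# net = (1 - 2) / 10 = -0.1 (net underreporting); gross = 3 / 10 = 0.3
```

Here an aggregate comparison would show only the modest net shortfall of 0.1, while the matched comparison reveals that 30 percent of cases are misreported, which is the point of the paragraph above.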
Individual-level comparisons can shed light on whether reporting errors are random or systematic and, if the latter, whether they relate to other characteristics of respondents in ways that could suggest improvements to questionnaire design or other aspects of a survey. Examples of systematic error are the confusion between Social Security and SSI benefit receipt found by Huynh, Rupp, and Sears (2001) and also their finding that imputed benefits are much less accurate than reported benefits. If the gross errors for an income source are very large, then that may suggest giving serious consideration to using the administrative data to correct the survey reports.

For evaluation of income sources and program participation for state-administered programs, it is not necessary to acquire records for all or a large proportion of states in order to generate useful findings. The comparisons currently under way of TANF and food stamp reporting from administrative records with SIPP survey and event history calendar reports for 2007 for a subsample of the 2004 SIPP panel in the two states of Illinois and Texas should yield useful findings that suggest further avenues for fruitful

research. (See Chapter 4 for a discussion of the limitations of the comparisons with the event history calendar reports, which are paper based.)

Resources permitting, the Census Bureau should not stop with the Illinois-Texas comparisons for TANF and food stamps—and, indeed, the bureau is endeavoring to obtain administrative records from other states for use in evaluating the results of its electronic event history calendar test in early 2010 (see Chapter 4). Working from a strategic plan, developed in consultation with SIPP data users, that considers the importance of an income source for low-income households and the feasibility of acquiring records for a significant proportion of program participants, the Census Bureau should identify priority programs and states to pursue for the purpose of acquiring records under mutually acceptable memoranda of understanding.

In addition to following a targeted strategy for the acquisition of selected state records, the Census Bureau should carry out individual-level evaluations for federal records that it already holds as part of its StARS database and for the state records of employment and earnings that it acquires for the LEHD Program. Again, the bureau should plan strategically for which programs to evaluate in the short and longer terms.

Use of Administrative Records in Weighting

Like other surveys, SIPP assigns weights to each person in the sample so that estimates from the data, obtained by applying the appropriate weights, represent the survey universe. The Census Bureau provides cross-sectional and longitudinal (panel) weights on SIPP data records to facilitate different uses of the data.
SIPP weighting routines, as in other Census Bureau surveys, not only make use of the inverse of the sampling probability and adjustment factors for whole-household nonresponse, but also include adjustment factors to bring estimates for age, gender, and race and ethnicity categories into agreement with independently estimated population control totals for these groups. The use of population controls is essential in the weighting process because without them the survey would significantly underrepresent important demographic groups, such as young minority men (see, e.g., U.S. Census Bureau, 1998:Tables 3-4, 3-5, 3-6). The Census Bureau develops population controls from the decennial census updated with administrative records on births, deaths, and net international migration.

However, demographically based controls do not take account of other characteristics that may distinguish well-represented from underrepresented groups in the survey. In this regard, we encourage the Census Bureau to revisit its earlier research on using IRS tax record data in SIPP weighting to reduce the variance of income estimates to see if

that research could be worth pursuing for a reengineered SIPP (see "SIPP's History with Administrative Records" above).

Improving Imputations

Imputation Methods in SIPP

SIPP, like other surveys, has missing data, which the Census Bureau processes so that the resulting data file represents the population that was sampled and has values for every item for every person and household in the file. There are three main types of missing information:

1. Whole-household nonresponse, which is handled by a nonresponse adjustment in the calculation of weights for the responding households.

2. Partial household nonresponse, in which a member of an otherwise responsive household fails to respond or provides too few items of information. Called Type Z noninterviews, these cases are typically handled by a procedure in which the entire record of another respondent that is similar to the nonrespondent on demographic characteristics that are available for both is substituted for the nonrespondent.

3. Item nonresponse, in which a respondent answers some but not all questions. Values for missing items are supplied through edits based on other information in the person's own record or, more often, from hot-deck imputation, which also is used for some Type Z noninterviews.

Hot-deck imputation for item nonresponse has a long history at the Census Bureau, beginning with the 1960 decennial census (see National Research Council, 2004b:458-459), and is widely used in the bureau's census and survey programs. To explain, but oversimplifying: the records in a data file are sorted, usually by geographic area of residence; valid responses for a variable are continually entered into the cells of an appropriate imputation matrix as the data file is processed; and the most recent (hottest) valid value is substituted for a missing response. The geographic sort helps ensure that responses are imputed from a person living in the same or nearby area.
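The hot-deck procedure just described can be sketched in miniature. This is a deliberately simplified illustration with invented records, not the Census Bureau's production routine: the imputation matrix here is keyed by an age category and sex, with a single collapsed fallback and a prespecified cold-deck start value.

```python
# Minimal hot-deck sketch: records are processed in geographic order, valid
# responses refresh the cells of an imputation matrix, and the most recent
# ("hottest") value fills each missing response. When a cell has no donor,
# the matrix is collapsed to a coarser key; as a last resort the prespecified
# cold-deck starting value is used. All data are invented.

COLD_VALUE = 0  # hypothetical prespecified starting ("cold") value

def hot_deck(records):
    """records: list of dicts with keys 'area', 'age_cat', 'sex', and 'value'
    (None when missing). Returns observed/imputed values in input order."""
    matrix = {}  # (age_cat, sex) -> hottest valid value
    coarse = {}  # age_cat -> hottest valid value (collapsed cells)
    out = [None] * len(records)
    order = sorted(range(len(records)), key=lambda i: records[i]["area"])
    for i in order:  # geographic sort: donors tend to come from nearby areas
        r = records[i]
        if r["value"] is not None:
            matrix[(r["age_cat"], r["sex"])] = r["value"]
            coarse[r["age_cat"]] = r["value"]
            out[i] = r["value"]
        else:
            out[i] = matrix.get((r["age_cat"], r["sex"]),
                                coarse.get(r["age_cat"], COLD_VALUE))
    return out

recs = [
    {"area": 1, "age_cat": "30s", "sex": "F", "value": 200},
    {"area": 1, "age_cat": "30s", "sex": "F", "value": None},  # exact cell donor
    {"area": 2, "age_cat": "30s", "sex": "M", "value": None},  # collapsed cell
    {"area": 3, "age_cat": "60s", "sex": "M", "value": None},  # cold-deck value
]
vals = hot_deck(recs)
```

The third record illustrates the tradeoff the text goes on to discuss: collapsing cells finds a donor, but one less similar to the record being imputed.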
The imputation matrix for a variable or a collection of related variables usually includes demographic characteristics, such as age category, gender, race, and ethnicity, and may also include other variables. The intent is to supply a hot-deck value from a donor record that is very similar to the record being imputed; when this is not possible, the matrix categories are collapsed as necessary to find a donor. As a last resort, the starting,

or cold, value for the variable, which is prespecified, is used to supply a response.

A problem with the hot-deck method, as it has been employed for SIPP, is that the variables that define the categories in a particular matrix are often not carefully tailored to the variable being imputed. Without careful tailoring, program participation, for example, may be imputed to people whose incomes from other sources would render them ineligible to receive benefits, or, alternatively, implausibly high income amounts for, say, wages or property income may be imputed to people who report that they are participating in a means-tested assistance program (see Appendix A; see also McKee and McBride, 2008). Yet the more variables that are included in the matrix, the harder it may be to find a donor, and the more often a single record may be used to supply values for large numbers of records with missing responses. Collapsing matrix cells provides more donors but at the cost of greater heterogeneity of the donor pool.

Model-Based, Multiple Imputations

The Census Bureau could better handle missing data in SIPP with modern, flexible model-based imputation techniques, which take account of more information than the hot-deck method. In fact, bureau staff are beginning research on model-based approaches for SIPP imputation (Stinson, 2008). To illustrate how a model-based approach might be useful in SIPP, we suppose there are missing values of program participation status (assumed to be a binary indicator variable) for only one particular month; no other variables are missing. To handle missing values for multiple variables simultaneously, the Census Bureau can use the multivariate imputation approach of Raghunathan et al. (2001). This approach relies on a collection of imputation models, one for each variable with missing values, so that the general principles for the one-variable scenario carry over to multivariate scenarios.
The first step of the process is to fill in any missing values that are determined by program rules. For example, if program participation is contingent on income not exceeding some threshold, all people whose incomes exceed that threshold are imputed to be nonparticipants (i.e., status = 0).10

10 Program eligibility rules, in practice, are more complicated than a simple income threshold; they may involve not only income level, but also family composition, citizenship status, the value of certain types of assets, work expenses, out-of-pocket medical care expenses, shelter expenses, and the like. The more closely the Census Bureau can mimic the eligibility rules for a particular program in an imputation model, the better; however, applying even a simple income threshold is preferable to allowing program participation to be imputed to any record with a missing value.

Or, if participants in one program, such as SSI, cannot also participate in another program, such as TANF, all people reporting, or imputed, to receive SSI would be imputed to be nonparticipants in TANF. Such checks can be automated in an imputation software routine.

The second step is to impute values for people eligible for participation (i.e., status ≠ 0). To do so, the Census Bureau could estimate a logistic regression of the participation status indicator on predictors associated with program participation. Only records eligible for participation are used to fit the regression. The predictors might include demographic variables such as age and gender, economic variables such as income, participation status from other months and other programs, and even data from other waves of SIPP. In general, it is prudent to include all variables thought to be associated with participation status, as this improves the chances that important relationships will be preserved in the completed data sets. If the Census Bureau suspects that the regression coefficients differ by population group, it could split the sample by these population groups and estimate the regression separately for each. Once the model is estimated, the Census Bureau would compute the resulting predicted probabilities and randomly sample missing participation status values from Bernoulli distributions with these probabilities. In addition, proper imputations would also use Bayesian methods to account for the uncertainty in the predicted probabilities. Standard imputation software incorporates this uncertainty automatically.

If using model-based imputation, the Census Bureau should strongly consider creating multiple imputations rather than single imputations for each missing datum.
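The two steps just described, repeated m times to yield multiple completed data sets, can be sketched structurally as follows. Everything here is invented for illustration: a real implementation would fit the logistic regression on the eligible records and draw its coefficients from a posterior distribution; the crude stand-in below (the observed participation rate among eligible respondents) simply marks where those predicted probabilities would go, and it omits the Bayesian parameter-uncertainty step the text calls for.

```python
import random

# Structural sketch of the two-step imputation with multiple completed data
# sets. Invented data; the predicted probability is a stand-in for a fitted
# logistic regression, and proper posterior draws are omitted (see lead-in).

INCOME_LIMIT = 1000  # hypothetical eligibility threshold

def impute_once(records, rng):
    """records: list of (income, status) with status in {0, 1, None}.
    Returns one completed list of participation statuses."""
    # Step 1: deterministic edits from program rules -- income above the
    # threshold forces nonparticipation (status = 0).
    partial = [(inc, 0 if st is None and inc > INCOME_LIMIT else st)
               for inc, st in records]
    # Stand-in predicted probability for eligible records.
    eligible = [st for inc, st in partial
                if inc <= INCOME_LIMIT and st is not None]
    p = sum(eligible) / len(eligible) if eligible else 0.0
    # Step 2: draw remaining missing statuses from a Bernoulli(p).
    return [st if st is not None else int(rng.random() < p)
            for inc, st in partial]

def multiply_impute(records, m=5, seed=1):
    """Produce m completed data sets (multiple imputation)."""
    rng = random.Random(seed)
    return [impute_once(records, rng) for _ in range(m)]

records = [(500, 1), (800, 0), (2000, None), (600, None), (900, 1)]
completed = multiply_impute(records, m=3)
# Every completed data set codes the high-income record (index 2) as 0 by the
# deterministic rule; only the eligible missing record (index 3) varies.
```

An analyst would then compute an estimate on each of the m completed data sets and combine the results with the rules referenced in the next paragraph.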
Multiple imputations allow users to incorporate estimates of the uncertainty introduced by imputation into calculations of standard errors, by using standard complete-data methods and simple rules for combining estimates from the multiple data sets. For details on the benefits of multiple imputation, see Rubin (1987) and Schafer (1997). For examples of the use of multiple imputation in large federal surveys, see Schenker and colleagues (2006), which describes multiple imputation of income and earnings data in the National Health Interview Survey (see also Parker and Schenker, 2007), and Kennickell (2006), which describes multiple imputation of assets and liabilities data in the Survey of Consumer Finances, implemented when that survey was redesigned in 1989.

Use of Administrative Records in Model-Based Imputations

As Stinson (2008:7) notes, "all imputation methods that use survey data exclusively are built on the assumption that the relationships between survey variables are the same for everyone, regardless of missing data." This is the "missing at random" (MAR) assumption. However, if the relationship between a variable such as program participation and variables that are predictive of participation differs when program participation is not reported, then an imputation that uses survey data alone will be flawed.

Administrative records could be used to evaluate and improve model-based imputations in this regard.11 For example, the Census Bureau recently conducted an evaluation of earnings responses and imputations in the 2004 SIPP panel compared with earnings information reported on W-2 records to which it has access from IRS (Stinson, 2008:9-13). For this evaluation, the Census Bureau divided SIPP respondents into four groups on the basis of the number of months in which earnings were imputed for one or more jobs reported for calendar 2004: no months of imputed or missing data; 1-4 months of imputed data; 5-8 months of imputed data; and 9-12 months of imputed data. Regressing the W-2 earnings on SIPP demographic characteristics for each of the four groups, predicting earnings for each group using the coefficients from each of the four regression equations, and averaging the differences of the W-2 earnings from the predicted earnings should give results of about zero for each group if the missing data are MAR. However, the evaluation results indicated that for Group 2 (1-4 months of imputed data), the imputed earnings appear to be too high on average, while for Group 4 (9-12 months of imputed data), the imputed earnings appear to be too low. Similarly, the work cited earlier by Huynh, Rupp, and Sears (2001:Table 7) documented that the current hot-deck model for imputing SSI and OASDI benefits does not do a good job—it imputes benefits that are too high, on average, compared with program records, particularly for SSI, indicating that nonrespondents differ from respondents in ways that are not captured in the hot-deck matrix.
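The logic of the W-2 evaluation just described can be illustrated with a reduced example. The data are invented, and a single predictor (age) stands in for the vector of SIPP demographic characteristics: each group's administrative earnings are regressed on the predictor, and each group's earnings are then predicted with another group's coefficients; a nonzero average gap signals a departure from MAR.

```python
# Sketch of the MAR diagnostic, with invented data and one predictor standing
# in for the demographic regressors in the text.

def ols(xs, ys):
    """Simple one-predictor least squares; returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def cross_gap(target, source):
    """Average of the target group's administrative earnings minus predictions
    made with the source group's coefficients; near zero under MAR."""
    a, b = ols([x for x, _ in source], [y for _, y in source])
    return sum(y - (a + b * x) for x, y in target) / len(target)

# Invented groups: respondents with no imputed months vs. heavily imputed
# months, where the heavily imputed group earns less at every age.
group_none = [(25, 30000), (35, 40000), (45, 50000)]
group_heavy = [(25, 26000), (35, 36000), (45, 46000)]
# Fitted on group_none: earnings = 5000 + 1000 * age, so the heavy group's
# earnings sit 4000 below that line at every age -- a MAR violation signal.
gap = cross_gap(group_heavy, group_none)
```

A gap of zero within a group's own regression is automatic for least squares; the informative quantity is the cross-group gap, which here is -4000 and mimics the pattern the bureau found for Group 4.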
On the basis of these kinds of evaluations, the Census Bureau could profitably revise its imputation models to include administrative records in order to improve the accuracy of the imputed values. An advantage of this use of administrative records is that timely availability of the records would not be critical. Presumably, imputation models would be developed on the basis of the most recent data available and reestimated with newer data only every few years.

Specific ways in which SIPP imputation models could use administrative records will vary, depending on such factors as whether the Census Bureau has access to a particular set of records nationwide or only for some states, whether it has access to the individual administrative records or only to aggregated information, and whether participation or benefits or both together are being imputed. As just one example, consider a federal

11 Using administrative records in hot-deck imputation matrices, while possible, does not make sense unless the administrative values are also used to substitute for or adjust the survey responses; otherwise the imputed values will be inconsistent with the reported survey values, given the net underreporting of participation and benefits for many programs.

program such as SSI, for which the bureau has access to 100 percent of the records from the SSA. Instead of a hot-deck imputation, the Census Bureau could match the survey and SSI program records and then develop a model to jointly predict actual SSI participation and benefit amounts from characteristics reported in SIPP. Because there is relatively little net underreporting of SSI participation or benefits in SIPP (even though respondents often confuse SSI with OASDI receipt; see Huynh, Rupp, and Sears, 2001:Table 2), there would be no need to adjust the predictions from the imputation model for consistency with the actual SIPP reporting, as might be the case for income sources for which there is significant reporting bias.

For state-administered programs, the same kind of modeling of participation and benefits could be done as described for the federal SSI program, except that the modeling would likely be limited to only a few states, given the difficulties described above in gaining access to state records for Census Bureau use. Use of a model developed on a subset of states to impute missing values for other states would have to be undertaken with care because of differences among state program rules and policies. For programs with important state variations, the use of an imputation model developed from selected states would probably not be desirable.

In developing, evaluating, and improving model-based imputations in these and other ways, the Census Bureau should be guided by a strategic plan that prioritizes its work according to such criteria as the importance of the income source for key population groups, such as lower income people and the elderly, and the feasibility of acquiring records.
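The joint prediction of participation and benefit amounts from matched records, described above for SSI, might be sketched as follows. This is a toy stand-in for a real joint model: invented data, coarse cells of SIPP-reported characteristics in place of a full regression, and a cell mean in place of a drawn benefit amount.

```python
import random

# Hypothetical sketch: from matched SIPP-and-administrative records, estimate
# within cells of SIPP-reported characteristics (a) the probability of actual
# program receipt and (b) the mean benefit among recipients, then impute the
# (participation, amount) pair jointly. All numbers are invented.

def fit_cells(matched):
    """matched: list of (cell, admin_participates, admin_benefit) tuples.
    Returns cell -> (participation probability, mean benefit among recipients)."""
    cells = {}
    for cell, part, ben in matched:
        n, k, total = cells.get(cell, (0, 0, 0.0))
        cells[cell] = (n + 1, k + part, total + (ben if part else 0.0))
    return {c: (k / n, total / k if k else 0.0)
            for c, (n, k, total) in cells.items()}

def impute_pair(cell, model, rng):
    """Jointly impute participation and benefit for one record."""
    p, mean_benefit = model[cell]
    if rng.random() < p:
        return 1, mean_benefit  # a fuller model would also draw the amount
    return 0, 0.0

matched = [("elderly", 1, 600.0), ("elderly", 1, 700.0), ("elderly", 0, 0.0),
           ("other", 0, 0.0), ("other", 0, 0.0), ("other", 1, 500.0)]
model = fit_cells(matched)
part, amount = impute_pair("elderly", model, random.Random(0))
```

Imputing the pair jointly, rather than participation and amount separately, is what keeps the two consistent: a record imputed as a nonparticipant never receives a positive benefit.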
In addition, as part of an ongoing program for acquiring and using administrative records in a reengineered SIPP, the Census Bureau should establish a schedule for periodic reevaluation and improvement of model-based imputation routines with administrative records. Imputation routines should not be frozen for years and decades at a time, as has happened historically with SIPP. They should be revisited as records become available from more sources for evaluation purposes (e.g., from additional states) and as programs and economic conditions change in ways that suggest the need to revise one or more imputation models.

Confidentiality Concerns

Including in a file imputations that incorporate information from administrative records introduces far fewer risks to confidentiality protection than does direct substitution of actual values (see "Direct Uses" below). An intruder—namely, someone who tries to reidentify individuals in the file by matching with other data sources, such as data available on the web—cannot be certain that matches based on imputed values are true, since the imputed values are predicted ones that are not necessarily the true values.

In general, for hot-deck or other single imputation strategies, the Census Bureau should compute disclosure risks (refer back to Box 3-1) using the SIPP records both before and after imputation. Using the incomplete records (i.e., with missing responses) mimics an intruder who does not trust the imputations and bases matches only on the values known to belong to the data records. Using the completed records (i.e., with imputed values) mimics an intruder who matches on whatever values are released. For multiple imputation strategies, the Census Bureau should match on each completed data set (in multiple imputation, some number, m > 1, of data sets are released) and average the risk measures across the data sets, as well as quantify disclosure risks based on just the incomplete data. These kinds of analyses will help the bureau determine whether its current confidentiality protection procedures for SIPP public-use microdata files are unnecessarily stringent, are about right, or need to be enhanced. When administrative records are used to inform model-based multiple imputations, comparisons of disclosure risks for multiply imputed files with and without input from administrative records would indicate what additional confidentiality protection, if any, might be needed for the models that incorporate administrative records.

DIRECT USES

Direct uses of administrative data are uses in which administrative data are incorporated to a greater or lesser extent into survey records. These kinds of uses include providing values directly for missing survey responses; adjusting survey responses for net underreporting or overreporting; using administrative records values in place of asking one or more survey questions; and appending administrative records values to survey records.
Direct uses of administrative records raise confidentiality concerns, which, in turn, could make it more difficult for the Census Bureau to release useful public-use microdata files. Such uses also raise concerns about the possible effects on timeliness and survey response. Administrative records may not be available on a schedule that permits their inclusion with the corresponding survey data on a timely basis. Moreover, most direct uses of administrative records, in contrast to indirect uses, would require that SIPP respondents be informed about such uses, which could increase refusal rates for the survey. Experience with 2004 SIPP panel respondents suggests that the effect on response rates might be small, but that would need to be tested more fully. The custodial agencies would also have to agree to direct uses of their records in a reengineered SIPP.

On the positive side, direct uses of administrative records promise significant improvements in the quality of SIPP estimates of income and program participation. Moreover, should the use of annual interviews with an

event history calendar for eliciting intrayear information on employment, income, family composition, and program participation prove significantly less effective than desired (see Chapter 4), it could be important to consider ways to use administrative records directly. Otherwise, the quality of data on intrayear dynamics of change would be impaired unless SIPP continues to interview respondents every 4 months and forgoes the cost savings from moving to an annual interview schedule.

Replacing Missing Survey Responses with Values from Records (Direct Imputation)

For income sources and programs for which the Census Bureau has access to administrative records, the records could be used to supply values for missing survey responses on a one-to-one basis—that is, an individual's record of participation and income amounts would be matched with and directly entered onto his or her SIPP record without using any type of imputation procedure. An imputation model would be used only for people who did not match to an administrative record or, in the case of state records, for people in states that did not provide records to the Census Bureau.

This use of administrative records seems obviously preferable to model-based imputations that incorporate administrative records, in that the actual values are by definition more accurate than any imputation model could be. Yet direct imputation also raises concerns about the possible adverse effects on timeliness, consistency of reports, and disclosure risk for the resulting public-use microdata files. Direct imputation further assumes that not only the survey respondents but also the cognizant custodial agency officials have agreed to the use of records for this purpose.
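The one-to-one replacement logic with a model fallback can be sketched as follows. The match key name, the stub model, and all records are invented for illustration; in practice the match would run through the bureau's record linkage process and the fallback would be a fitted imputation model.

```python
# Sketch of direct imputation: a missing survey response is replaced by the
# matched administrative value; unmatched records (or records from states
# that provided no files) fall back to model-based imputation. Invented data;
# 'match_key' and the stub model are hypothetical placeholders.

def model_impute(record):
    """Stub standing in for the model-based imputation described earlier."""
    return 0

def direct_impute(survey, admin_by_key):
    """survey: list of dicts with 'match_key' and 'benefit' (None if missing);
    admin_by_key: match key -> administrative benefit amount.
    Returns new records with every 'benefit' filled in."""
    out = []
    for rec in survey:
        rec = dict(rec)  # leave the input records untouched
        if rec["benefit"] is None:
            if rec["match_key"] in admin_by_key:
                rec["benefit"] = admin_by_key[rec["match_key"]]  # direct value
            else:
                rec["benefit"] = model_impute(rec)  # no match: model fallback
        out.append(rec)
    return out

survey = [{"match_key": "A1", "benefit": 250},
          {"match_key": "B2", "benefit": None},
          {"match_key": "C3", "benefit": None}]
admin = {"B2": 310}
filled = direct_impute(survey, admin)
# Reported values pass through; B2 takes the administrative value; C3,
# with no match, takes the model-imputed value.
```

Note that the second record's filled value is exact while the third is modeled, which is precisely the mix of accuracy and consistency concerns the following paragraphs take up.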
Timeliness is a concern with direct imputation because directly replacing missing responses with actual values requires records that relate to the survey reference period and are available soon enough after it that SIPP processing and data release are not delayed. As discussed above, some records are available on such a schedule, and others are not.

Consistency with survey reporting is an issue, given the reporting error in SIPP, which most often results in a net underreporting bias. It would be incongruous to have the survey responses reflect biased reporting and the imputed values reflect unbiased reporting, yet it is not clear how to address this problem unless administrative records are also used to adjust the survey reports for net underreporting (or overreporting), as discussed in the next section.

Direct imputation must increase the risk of disclosure compared with the use of administrative records in an imputation model unless additional disclosure protection steps are taken. Not only would an intruder know that the imputed values are the administrative records values, but also

an intruder could be someone in the custodial agency with access to and knowledge of specific administrative records values. The increased risk would be lessened to the extent that directly imputed values are adjusted for consistency with the survey reporting, assuming that the adjustment is done stochastically and not in a manner that would be transparent to an intruder (e.g., a simple ratio adjustment). If adjustment for consistency with the survey reporting is not needed, then some kind of probabilistic perturbation of the directly imputed values would probably be required to provide sufficient confidentiality protection, in addition to agreements with the custodial agency that include penalties for a breach of confidentiality by that agency's employees similar to the penalties that are already included in Title 13 of the U.S. Code for Census Bureau staff (see http://uscode.house.gov/download/pls/13C1.txt).

Adjusting Survey Responses

The evidence of net underreporting of participation and benefit amounts in SIPP (and other surveys) for most income sources suggests that it could be desirable to adjust the survey responses for groups of respondents with similar characteristics so that estimates for the total population and population groups approximate estimates from administrative records. Major microsimulation models that federal agencies use for tax and transfer program policy analysis regularly simulate program eligibility, participation, and benefits on such surveys as the CPS and SIPP. The estimates from the models are therefore much closer to administrative records aggregates than the unadjusted survey estimates. For example, Wheaton (2007) reports on work to compare reporting of food stamps, Medicaid, SSI, and TANF in the CPS with estimates from the Transfer Income Model, version 3 (TRIM3), which produces an adjusted CPS.
The TRIM3 estimates show a much greater effect of food stamps, SSI, and TANF in lifting program recipients out of poverty compared with the survey estimates (Wheaton, 2007:Tables 4-5). Because SIPP achieves more complete reporting of SSI and food stamps than the CPS (Wheaton, 2007:Table 1), the effects would not be as pronounced for a comparison based on SIPP. Nonetheless, they could still be significant overall and for particular programs, such as TANF, for which estimates from both surveys fall markedly short of administrative records.

The approach the Census Bureau would use to adjust survey reports might not be that used by a microsimulation model such as TRIM3. These models not only adjust reported program benefit amounts for individuals who report recipiency on the survey, but also "create" new recipient units and associated benefits from households simulated to be eligible that did not report participation, in order to better approximate administrative

aggregates for program caseloads and benefit dollars. The Census Bureau might not want to alter the SIPP records to that extent. As an alternative, the Census Bureau could follow a three-step process to achieve the same effect:

1. The first step would be to implement model-based imputations of the type described above, in which the model predicts administrative records values for respondents with missing data on program participation and benefits.

2. The second step would be to develop adjustment factors to bring the benefit amounts for respondents who report participating in a particular program up to the same percentage of total dollars as the percentage that reporters make up of the total caseload (from administrative records).

3. To account for the remaining underreporting, the third step would be to adjust the survey weights—increasing the weights of respondents who report or are imputed participation and decreasing the weights of other respondents with incomes below a specified threshold that approximates the threshold for program eligibility. A threshold is used so that higher income respondents are not downweighted. Given multiple program participation, there could be a need to adjust the weights for participants in a single program separately from those in multiple programs.

Before implementing such an approach, it would need to be carefully evaluated, in general and for particular programs and combinations of programs, taking account of the effects on other possibly related variables.

Methodological Considerations

The use of administrative records to adjust survey reports in the manner described requires a high degree of accuracy in achieving comparability of both sources with regard to the population covered and the definition of participant units and income and benefits.
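Steps 2 and 3 of the three-step process can be sketched in code. This is a minimal illustration, not the Census Bureau's method: the function names, the input fields, and the proportional downweighting rule are our own simplifying assumptions.

```python
# Sketch of steps 2 and 3. All inputs are hypothetical: weighted survey
# counts/dollars for reporters, and administrative caseload/dollar totals.

def benefit_adjustment_factor(survey_units, survey_dollars,
                              admin_caseload, admin_dollars):
    """Step 2: scale reporters' benefits so that their share of total
    dollars equals their share of the administrative caseload."""
    share_of_caseload = survey_units / admin_caseload
    target_dollars = share_of_caseload * admin_dollars
    return target_dollars / survey_dollars

def reweight(records, admin_caseload, income_threshold):
    """Step 3: upweight (reported or imputed) participants to the
    administrative caseload; downweight low-income nonparticipants so the
    total weighted population is unchanged. Assumes the low-income pool
    is large enough to absorb the offset."""
    part_w = sum(r["weight"] for r in records if r["participant"])
    up = admin_caseload / part_w
    pool_w = sum(r["weight"] for r in records
                 if not r["participant"] and r["income"] < income_threshold)
    down = (pool_w - (up - 1.0) * part_w) / pool_w
    adjusted = []
    for r in records:
        if r["participant"]:
            f = up
        elif r["income"] < income_threshold:
            f = down
        else:
            f = 1.0  # higher-income respondents are not downweighted
        adjusted.append({**r, "weight": r["weight"] * f})
    return adjusted
```

For example, with 90 weighted reporters against an administrative caseload of 100, and $900 in reported benefits against $1,200 in administrative dollars, the step 2 factor is (0.9 x 1,200) / 900 = 1.2.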
With regard to timeliness, it would be important to have as up-to-date administrative information as possible for programs and income sources for which participation is growing (or decreasing) rapidly. For programs and income sources for which growth is more predictable, it could be possible to use a simple time trend factor to update older administrative data for use in adjusting more recent survey reports.

For programs for which the Census Bureau has access to the administrative records, adjustments could be made for finely stratified groups. For

programs for which the Census Bureau has access only to aggregate statistics, the adjustments would necessarily have to be made for broad groupings. For state-administered programs for which the Census Bureau has access to records for some but not all states, a combination of records and aggregate statistics by state could be used to compute adjustment factors.

With regard to disclosure risk, the development of adjustment factors to achieve approximate agreement with administrative records should not pose any increased threats to confidentiality beyond those described above in the discussion of using administrative records in model-based imputations. The adjustment factors would pertain to groups and not to individuals.

Strategic Considerations

Adjusting survey responses for net reporting error would lead the Census Bureau in a direction that it is not accustomed to taking for household statistics—namely, producing a set of best estimates by combining sources of information, in contrast to producing the data as reported from a survey. The Census Bureau produces a small number of model-based estimates in its SAIPE and SAHIE programs that use both survey and administrative data, but, in each case, the variable predicted is an estimate from a survey, such as the ACS estimate of poor school-age children. Coming closest to the idea of producing best estimates is the Census Bureau's regular practice of adjusting survey weights so that the survey estimates agree with independent population control totals, which themselves are developed from the previous census updated with administrative records on births, deaths, and net international migration.
Another example is the Census Bureau's seasonal adjustment models (see http://www.census.gov/srd/www/x12a/), which it uses to adjust economic time series from business data, and which the Bureau of Labor Statistics uses to adjust monthly unemployment rates from the CPS.

A second consideration that could give the Census Bureau pause about the wisdom of adjusting survey reports is the sheer complexity of the adjustment process, as outlined above, for the large number of programs and other sources of income, such as earnings, dividends, and interest, that could be candidates for its use. As we recommend throughout this report, the Census Bureau, with input from the user community, would need to take a strategic approach in moving toward a goal of adjusting survey responses. It would need to decide which income sources would be feasible to adjust and which would be most important to adjust in terms of the potential effect on research and policy analysis results. Undoubtedly, it would make sense to proceed step by step and with complete transparency. Thus, instead of providing only adjusted values or weights on SIPP public-use files, it would

probably be better to provide the reported values and unadjusted weights with separate fields containing the adjustment factors. Users could then make their own evaluations and decisions as to which set of values to use. For example, researchers modeling behavioral responses to tax and transfer policies may prefer to use reported amounts rather than adjusted amounts because respondents' behavior may be affected more by their belief about the size of a payment than by the actual size of a payment.

Arguing in favor of proceeding down this road of adjustment is the checkered history of work to improve survey response. Census Bureau survey researchers and others have made major efforts since the days of the Income Survey Development Program to develop the best possible questionnaire design and interviewer training to elicit accurate reports of income and program participation from survey respondents. Yet these efforts have met with mixed success. While reporting of many types of income and program participation in SIPP is better than in other surveys, SIPP still exhibits significant net reporting errors for key programs, and the quality of reporting for some programs has declined rather than improved over time. Moreover, SIPP captures only about 80 percent as much aggregate wages as the CPS, and, given that wages are about 78 percent of total household income, the SIPP estimate of total income suffers significantly as a result.

Given users' needs for data that are as accurate as possible and the seeming inability to obtain better reporting through survey instrumentation alone, we encourage the Census Bureau to actively explore the production of SIPP public-use microdata files that include adjustment factors for income sources and program participation to produce agreement with the best independent estimates.
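The magnitude implied by the wage figures cited above (SIPP capturing about 80 percent of CPS aggregate wages, with wages about 78 percent of total household income) can be worked out with a back-of-the-envelope calculation, assuming for illustration that nonwage income is fully captured:

```python
# Back-of-the-envelope: implied shortfall in SIPP total income, assuming
# (for illustration only) that nonwage income is fully captured.
wage_capture = 0.80   # SIPP aggregate wages as a share of the CPS aggregate
wage_share = 0.78     # wages as a share of total household income

income_ratio = wage_capture * wage_share + (1.0 - wage_share)
shortfall = 1.0 - income_ratio
print(f"SIPP total income = {income_ratio:.1%} of benchmark; "
      f"shortfall = {shortfall:.1%}")
# -> SIPP total income = 84.4% of benchmark; shortfall = 15.6%
```

That is, the wage shortfall alone would depress the SIPP total income estimate by roughly 15 percent relative to a CPS-like benchmark.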
A prime target of opportunity could be the use of state records of employment and earnings that are provided to the LEHD Program to adjust reported values and in other ways improve the quality of employment and earnings data in SIPP. Alternatively, the National Directory of New Hires may prove to be a feasible source of such data for use in SIPP.

Replacing Survey Questions

Given the complexity of developing imputation models and adjustment factors as described above, it might seem preferable to simply use administrative records values, when available, for all survey respondents and to drop the particular items from the questionnaire. In fact, when SSA researchers receive matched files of SIPP and SSA records from the Census Bureau for analysis and simulation modeling of their programs, they routinely replace the survey values with administrative records values for Social Security and SSI benefits (personal communication from Bernard Wixon, Office of

Research, Evaluation, and Statistics, Social Security Administration, to the panel, January 8, 2009). Depending on the legal authority of the custodial agency, however, there could be high hurdles to obtaining permission for a direct use of administrative records to take the place of survey questions. There would also be issues of timeliness, informed consent, and increased disclosure risk, and some records would not be suitable for this use because of conceptual inconsistencies with the desired survey responses.

Disclosure risk would be greater with using administrative records values instead of asking survey questions, even though, functionally, the data are equivalent in that the survey questions are trying to elicit responses that equal the administrative records values for an individual. In practice, as we have seen, survey reports are often erroneous to a greater or lesser extent, which affords added confidentiality protection compared with the actual administrative records values. Moreover, some of the custodial agency's employees have access to individual administrative records, which could enable them to identify particular people in the survey and inadvertently (or advertently) make this known. To respond to these concerns, some kind of probabilistic perturbation of the administrative records values that are used in place of asking survey questions would be required to provide sufficient confidentiality protection, in addition to agreements with the custodial agency that included penalties for a breach of confidentiality by that agency's employees.

The risks from direct replacement of survey values arise when the substituted administrative records values make the record unusual on quasi-identifiers.12 In general, this is more likely to occur when substituting several items per record rather than one item.
For example, substituting program participation status but not benefit amounts is less risky than substituting both items, and substituting benefit amounts for 1 month is less risky than substituting benefit amounts for 12 or more months, because many records have the same status in any 1 month but fewer have the same annual or multiyear history. The Census Bureau can gauge the severity of disclosure risks from intruders who do not have access to the custodial agency's records by performing experiments that attempt to link SIPP records containing more or fewer substituted items from administrative records with externally available sources. The bureau could also determine the risks from intruders who do have access to the agency records.

12 Quasi-identifiers—as distinct from name, Social Security number, and similar unique identifiers—are combinations of variables, such as gender, birth date, and zip code, that can make it possible to identify all or most individuals in a data set through matching against external sources.
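The kind of probabilistic perturbation discussed above might look like the following sketch. The Gaussian multiplier and the 5 percent noise level are hypothetical choices of ours; an operational scheme would be calibrated against measured reidentification risk.

```python
import random

def perturb(values, sd=0.05, seed=12345):
    """Multiply each substituted administrative value by an independent
    noise factor centered on 1. The 5 percent noise level and the simple
    Gaussian multiplier are illustrative assumptions, not an operational
    disclosure-limitation method."""
    rng = random.Random(seed)
    return [v * max(0.0, 1.0 + rng.gauss(0.0, sd)) for v in values]
```

Perturbing monthly benefit amounts this way breaks exact matches against the source records while leaving group-level aggregates approximately unchanged.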

Nonetheless, the use of administrative records to replace survey values for one or more variables in a SIPP panel, when feasible, would have the benefits of reducing respondent burden and improving data quality. We note, in this regard, that Title 13, Section 6, of the U.S. Code, which pertains to the Census Bureau, authorizes the secretary of commerce as follows:

a. The Secretary, whenever he considers it advisable, may call upon any other department, agency, or establishment of the Federal Government, or of the government of the District of Columbia, for information pertinent to the work provided for in this title.

b. The Secretary may acquire, by purchase or otherwise, from States, counties, cities, or other units of government, or their instrumentalities, or from private persons and agencies, such copies of records, reports, and other material as may be required for the efficient and economical conduct of the censuses and surveys provided for in this title.

c. To the maximum extent possible and consistent with the kind, timeliness, quality and scope of the statistics required, the Secretary shall acquire and use information available from any source referred to in subsection (a) or (b) of this section instead of conducting direct inquiries [emphasis added].

As an example of the benefits from direct substitution of administrative records values for survey questions, consider Social Security and SSI benefits. They are among the best reported income sources in SIPP (and other surveys), with 90-91 percent of aggregate benefits typically reported and even higher percentages of participation reported in the aggregate (Meyer, Mok, and Sullivan, 2009). Yet Huynh, Rupp, and Sears (2001), summarized above, identified individual reporting errors for these programs based on a matched SSA-SIPP file.
Moreover, Social Security benefits are such an important component of income for the elderly population that adding even as little as 8-10 percent more benefit dollars to SIPP through replacing survey reports with values from SSA records could make a significant difference in the poverty status for this group.

Adding New Variables

A third direct use of administrative records is to add variables to a survey that are not and have never been included in the questionnaire but that could be useful to append for policy analysis and research purposes. The SIPP gold standard project described above is an example. This project involved augmenting SIPP data files with exactly matched administrative

records on earnings histories and Social Security benefits. In addition, because the gold standard file can be used only at the Census Bureau, the project is intended to find a way, through state-of-the-art synthesizing techniques, to deliver a useful public-use microdata file for retirement policy analysis that contains the linked survey reports, longitudinal earnings records, and Social Security benefit records. (See "Confidentiality Protection and Data Access" below for a discussion of synthesizing techniques and alternative modes of data access.)

This work has involved dedicated effort and leading-edge thinking by Census Bureau staff and academic researchers, but the results to date are mixed. Early, limited analysis by Abowd (2008) found that the synthesized public-use version of the gold standard file adequately represented the patterns of earnings histories in the data for some demographic groups but not others and underestimated early retirement and retirement at age 65. The more detailed evaluation commissioned by SSA (Urban Institute/NORC Evaluation Team, 2009) found that many univariate distributions were accurately represented in the synthetic file, but that the results for regression analyses and policy simulations were more mixed. There were many differences in simulation results that would have led researchers to erroneous conclusions by using the synthetic file. Another important problem in the synthesized file was an overestimation of the duration of marriage, which has implications for analysis of retirement and income security. Other problems found in the synthetic file were present in the gold standard file itself and not produced by the synthesization. Overall, the evaluation team concluded (pp.
1-5) that "the effort to synthesize on such a large scale was a 'bridge too far,' given how early the whole profession is in creating and using synthetic data" but that the work is promising, particularly if undertaken on a smaller scale.

In general, synthetic public-use data present the problem that the synthesized data are not likely to preserve relationships among variables that are not the focus of the synthesizing effort—for example, the relationship of immigrant parental income and children's educational attainment in SIPP. Furthermore, like all statistical models, synthesis models are approximations of reality, so they may not accurately capture some distributional features in the original data. Consequently, some SIPP data users may not find a fully, or almost fully, synthesized public-use file, such as the gold standard file, useful for their needs and, to work with the actual linked data, would have to go to an RDC. Some users are averse to RDCs, which can involve extensive and long approval processes (see National Research Council, 2005).

We encourage the Census Bureau to consider carefully the benefits and costs of appending administrative records data to SIPP files for public use. When new variables are appended, particularly detailed longitudinal

histories, such as longitudinal earnings records, the increase in disclosure risk is likely to be substantial, even when an intruder does not have access to the custodial agency's records. Alternative approaches are possible, however. One approach is to transform the appended data into categorical instead of continuous variables. In the case of earnings histories, for example, categorical variables could represent different patterns of earnings histories (number of lifetime jobs, number of periods out of the labor force, etc.) rather than the detailed histories. Another approach (which could be used in combination with categorization of selected variables) is to use partial synthesis of a much smaller set of selected values. Such partial synthesization could provide reliable information with satisfactory confidentiality protection, as we discuss below. In any event, the need for appending additional variables to SIPP should be carefully vetted with data users because of the implications for confidentiality protection and data access.

CONFIDENTIALITY PROTECTION AND DATA ACCESS

As summarized in Box 3-1, the Census Bureau, like other data disseminators that collect individual information under a pledge of confidentiality, strives to release data files that are not only safe from illicit efforts to obtain respondents' identities or sensitive attributes, but also useful for analysis. In general, strategies for optimizing the risk-utility trade-off fit into two broad categories. Restricted access strategies allow only select analysts to use the data, for example, via licensing or by requiring analysts to work in secure data enclaves. Restricted data strategies allow analysts to use altered versions of the data, for example, by deleting variables from the file, aggregating categories, or perturbing data values (see National Research Council, 2005). The Census Bureau has extensive experience in applying both of these methods.
For example, currently, standard public-use files of SIPP data (not linked with administrative records) can be downloaded from the SIPP website, and a version of SIPP data for specific panels linked with earnings histories and Social Security benefits can be used in the RDCs (the gold standard project). Both restricted data and restricted access strategies are likely to be useful for a reengineered SIPP, as described below.

Restricted Data for SIPP

The Census Bureau releases public-use microdata samples for many of its products, including SIPP, usually with some values altered to protect confidentiality. Typical alterations include

• recoding variables, such as releasing ages or geographical variables in aggregated categories;

• reporting exact values only above or below certain thresholds, for example, reporting all incomes above $100,000 as "$100,000 or more";

• swapping data values for selected records, for example, switching the quasi-identifiers for at-risk records with those for other records to discourage users from matching, since matches may be based on incorrect data; and

• adding noise to numerical data values to reduce the possibilities of exact matching on key variables or to distort the values of sensitive variables.

These methods can be applied with varying intensities. Generally, increasing the amount of alteration decreases the risks of disclosure, but it also decreases the accuracy of inferences obtained from the released data, since these methods distort relationships among the variables. For example, aggregation makes analyses at finer levels impossible and can create ecological inference problems, and intensive data swapping severely attenuates correlations between the swapped and unswapped variables. It is difficult—and for some analyses impossible—for data users to determine how much their particular estimation has been compromised by the data alteration, in part because disseminators rarely release detailed information about the disclosure limitation strategy. Even when such information is available, adjusting for the data alteration to obtain valid inferences may be beyond some users' statistical knowledge. For example, to properly analyze data that include additive random noise, users should apply measurement error models (Fuller, 1993) or the likelihood-based approach of Little (1993), which are difficult to use for nonstandard estimands.13 Nonetheless, when the amount of alteration is very small, the negative impacts of traditional disclosure limitation methods on data utility could be minor compared with the overall error in the data caused by nonresponse and measurement errors.
The current SIPP public-use files (without linked administrative records values) are protected mainly by top-coding monetary variables and age and by suppressing geographic detail in areas with fewer than 250,000 people. In addition, some individuals in metropolitan areas are recoded to be in nonmetropolitan areas with too few people in the sample. This can invalidate estimates of characteristics in nonmetropolitan areas.

13 Estimands are types of estimates, such as means, ranges, percentiles, and regression coefficients.
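Two of the alterations described above (top-coding and data swapping) can be illustrated with a toy sketch. The field names, thresholds, and the random-partner swapping rule are our own illustrative assumptions, not the Census Bureau's procedures.

```python
import random

def top_code(values, cap=100000):
    """Report exact values only below a threshold; everything above is
    collapsed to the cap (the '$100,000 or more' style of recode)."""
    return [min(v, cap) for v in values]

def swap_quasi_identifiers(records, at_risk_idx,
                           keys=("gender", "birth_year", "zip3"), seed=7):
    """Toy data swap: exchange the quasi-identifier fields of each at-risk
    record with a randomly chosen partner (which may occasionally be the
    record itself). Field names are hypothetical."""
    rng = random.Random(seed)
    recs = [dict(r) for r in records]
    for i in at_risk_idx:
        j = rng.randrange(len(recs))
        for k in keys:
            recs[i][k], recs[j][k] = recs[j][k], recs[i][k]
    return recs
```

Note that swapping leaves the marginal distribution of each quasi-identifier unchanged while breaking its link to the unswapped variables on the record, which is precisely why intensive swapping attenuates correlations.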

Protecting Files with Linked SIPP and Administrative Records Data

If values available in administrative data are included in SIPP public-use files, top-coding and geographic aggregation may not offer sufficient protection. The Census Bureau probably would need to alter the administrative variables to prevent exact linking, especially if multiple variables for the same person are culled from an administrative database to create a SIPP record. Additional aggregation, such as rounding monetary values, may offer sufficient protection without impairing data utility. Alteration with high intensity, however, such as intense swapping or noise addition, will attenuate relationships and distort distributions so that the released data are no longer useful. If heavy substitution of administrative values is planned, one option is to create multiply imputed, partially synthetic data. These data comprise the units originally surveyed with only some collected values replaced with multiple imputations. For example, the Census Bureau could simulate sensitive variables or quasi-identifiers for individuals in the sample with rare combinations of quasi-identifiers, and it might synthesize those values that are available and potentially linkable in external databases.

Partial Synthesis

To illustrate how partially synthetic data might work in practice, we modify the setting described by Reiter (2004). Suppose a statistical agency has collected data on a random sample of 10,000 people. The data comprise each person's race, gender, income, and years of education. Suppose the agency wants to replace race and gender for all people in the sample—or possibly just for a subset, such as all people whose income is below $5,000—to disguise their identities. The agency could generate values of race and gender for these people by randomly simulating values from the joint distribution of race and gender, conditional on their education and income values.
These distributions would be estimated using the collected data and possibly other relevant information. The result would be a partially synthetic data set. The agency would repeat this process, say, 10 times, and these 10 data sets would be released to the public. The analyst would estimate parameters and their variances in each of the synthetic data sets and combine the results using the methods of Reiter (2003).

Several statisticians in the Statistical Research Division of the Census Bureau and in academia are working to develop partially synthetic, public-use data for Census Bureau products. These products include the Longitudinal Business Database, the Longitudinal Employer-Household Dynamics data sets, the ACS group quarters, veterans, and full sample data, and the SIPP linked with Social Security benefit information.
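A toy version of this synthesize-and-combine workflow is sketched below under our own simplifying assumptions: an empirical donor pool stands in for a fitted conditional model, only one variable is synthesized, and the combining function implements the Reiter (2003) rules for partially synthetic data (point estimate equal to the average across data sets; variance equal to the average within-synthesis variance plus the between-synthesis variance divided by m).

```python
import random
from statistics import mean, variance

def synthesize(data, m=10, seed=1):
    """Toy partial synthesis: replace 'gender' for low-income records by
    drawing from the empirical distribution among low-income records (a
    crude stand-in for a fitted conditional model), repeated m times."""
    rng = random.Random(seed)
    donors = [r["gender"] for r in data if r["income"] < 5000]
    synthetic_sets = []
    for _ in range(m):
        copy = [dict(r) for r in data]
        for r in copy:
            if r["income"] < 5000:
                r["gender"] = rng.choice(donors)
        synthetic_sets.append(copy)
    return synthetic_sets

def combine(estimates, variances):
    """Reiter (2003) combining rules for partially synthetic data:
    point estimate q-bar; variance T = v-bar + b/m."""
    m = len(estimates)
    qbar = mean(estimates)
    b = variance(estimates)   # between-synthesis variance (m - 1 denominator)
    vbar = mean(variances)    # average within-synthesis variance
    return qbar, vbar + b / m
```

An analyst with only the 10 released data sets would compute an estimate and its variance in each one and feed the two lists to combine() to obtain a single inference.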

Partially synthetic data sets can have positive features for data utility. When the synthetic data are simulated from distributions that reflect the distributions of the collected data, valid inferences can frequently be obtained for wide classes of estimands (e.g., means, ranges, percentile distributions). This is true even for high fractions of replacement, whereas swapping high percentages of values or adding noise with large variance produces worthless data. The inferences are determined by combining standard likelihood-based or survey-weighted estimates; the analyst need not learn new statistical methods or software to adjust for the effects of the disclosure limitation. The released data can include simulated values in the tails of distributions so that no top-coding is needed. Finally, because many quasi-identifiers can be simulated, finer details of geography can be released, facilitating small-area estimation.

There is a cost to these benefits—the validity of synthetic data inferences depends on the validity of the models used to generate the synthetic data. The extent of this dependence is driven by the nature of the synthesis and the question asked. For example, when all of race and gender are synthesized, analyses involving those variables would reflect only the relationships included in the data generation models. When the models fail to reflect certain relationships accurately, analysts' inferences also would not reflect those relationships. Similarly, incorrect distributional assumptions built into the models would be passed on to the users' analyses. However, when replacing only a select fraction of race and gender and leaving many original values on the file, inferences may be relatively insensitive to the assumptions of the synthetic data models.
In practice, this model dependence means that agencies should release metadata that help analysts decide whether or not the synthetic data are reliable for their analyses. For example, agencies might include the code used to generate the synthetic values as attachments to public releases of data. Or they might include generic statements that describe the imputation models, such as "main effects and interactions for income, education, and gender are included in the imputation models for race." Analysts who desire finer detail than afforded by the imputations may have to apply for restricted access to the collected data.

Even with such metadata, secondary data analysts would be greatly helped if the Census Bureau provided some way for them to learn in real time about the quality of inferences based on the synthetic data (or any masked version of SIPP). Ideally, the quality measures provided would be specific to particular inferential quantities rather than broad measures. For example, reporting comparisons of means, variances, and correlations in the observed and synthetic data does little to help analysts estimating complex models. One approach is for the Census Bureau to develop a verification server

(Reiter, Oganian, and Karr, 2009). This server, located at the Census Bureau, would store the original and synthetic (or otherwise masked) data sets. Analysts, who have only the synthetic data, would submit queries to the server for measures of data quality for certain estimands. The server would run the analysis on both the original and synthetic data and report back to the analyst a measure of data quality that compares the inferences obtained from both sources. The server could also serve as a feedback mechanism for the agency, capturing what quantities analysts care most about. Agencies might be able to use this information to improve the quality of future data releases. There may be additional disclosure risks of releasing the utility measures; research would be needed to gauge these risks and, more broadly, to develop and fully test the functionality and usability of a verification server.

Synthesizing SIPP Data

The synthesis of the SIPP gold standard file, which contains linked SIPP, SSA, and IRS data, is very intense: only a handful of some 600 variables remain unsynthesized. Practically all variables are synthesized to ensure a small chance of linking the synthesized records to the existing SIPP public-use records. With the reengineered SIPP, such heavy synthesis may not be necessary. If the released data do not include such detailed administrative information as longitudinal earnings histories, the Census Bureau can synthesize only the values of quasi-identifiers for at-risk records and the linkable values available in administrative sources. It may not even be necessary to synthesize entire variables to achieve adequate protection. For example, synthetic values could replace top-coded monetary and age values and aggregated geographies. The benefits of synthesis over top-coding are illustrated by An and Little (2007); more research is needed on methods for simulating geographies.
Providing information in the tails and finer geographies would improve on the current SIPP public-use product without necessarily increasing disclosure risks. Methods of gauging the risks inherent in partially synthetic data with only some values synthesized are described in Reiter and Mitra (2009).

If the released data do contain detailed administrative data, similar to the gold standard file, the Census Bureau has several options. It can proceed as with the current SIPP, releasing a file without linked data and a highly synthesized version of the linked data. Or it can try to reach new memoranda of understanding with SSA and IRS that make it possible to do less synthesizing. For example, it may be possible to synthesize earnings and benefits histories, leaving the other variables on SIPP as is. Regardless of the path chosen, the Census Bureau should recognize that most SIPP users are not likely to support the release of a file with linked administrative records

if the time required to create the file and evaluate its risks and utility delays its release in comparison to a standard SIPP public-use file.

Restricted Access for SIPP

In addition to public-use microdata files, the Census Bureau makes more detailed data from SIPP and other surveys available via a restricted access mode, which permits use of the data in any of the nine RDCs operated by the bureau (see http://www.ces.census.gov/index.php/ces/cmshome). The files available in the RDCs are stripped of obvious identifiers, such as name and address, but do not contain recodes or other modifications that blur the underlying data in the public-use versions.14

The RDC restricted access mode, however, has limitations. Analysts who do not live near a secure data enclave, or who do not have the resources to relocate temporarily to be near one, are shut out from RDCs. Gaining restricted access generally requires months of proposal preparation and background checks; analysts cannot simply walk into any secure data enclave and immediately start working with the data. As recommended by a previous National Research Council report (2005), the Census Bureau should continue to pursue ways to speed up the project approval process in the RDCs.

Another restricted access approach is to establish a remote access system for SIPP data. When queried by analysts, these systems provide output from statistical models without revealing the data that generated the output. Such servers are in the testing stage at the Census Bureau. If they are found useful, they would provide an excellent resource for certain analyses on the genuine data without having to go to an RDC. However, remote access systems are not immune from disclosure risks. Clever queries can reveal individual data values.
For example, asking for a regression model that includes an indicator variable that equals 1 for a unique value of some predictor and 0 for all other records enables the analyst to predict the outcome variable perfectly (Gomatam et al., 2005). These types of intrusions could be especially problematic if a public-use data set is provided and the remote access system is open to all users. For example, an ill-intentioned user could inspect a continuous, unaltered variable to find unique values, then submit regression queries with indicator variables to learn about those records' other variables.

[14] To date, SIPP files that have been linked to administrative records are not available in the RDCs outside the Census Bureau.

The Census Bureau can limit the risks of such problems by restricting access to the server. For example, users of the server could be required to go through a licensing procedure. In addition, the server could keep track of and audit requests,

so that any ill-intentioned intruder who sneaks through the licensing might be identified and punished.

CONCLUSIONS AND RECOMMENDATIONS

The Role of Administrative Records in a Reengineered SIPP

Conclusion 3-1: In reengineering the Survey of Income and Program Participation (SIPP) to provide policy-relevant information on the short-run dynamics of economic well-being for families and households, the Census Bureau must continue to use survey interviews as the primary data collection vehicle. Administrative records from federal and state agencies cannot replace SIPP, primarily because they do not provide information on people who are eligible for—but do not participate in—government assistance programs and, more generally, because they do not provide all of the detail that is needed for SIPP to serve its primary goal. Many records are also difficult to acquire and use because of legal restrictions on data sharing, and some of the information they contain may be erroneous. Nonetheless, information from administrative records that is relevant to SIPP and likely to improve the quality of SIPP reports of program participation and income receipt in particular can and should be used in a reengineered SIPP.

Conclusion 3-2: The Census Bureau has made excellent progress with the Statistical Administrative Records System and related systems, such as the person validation system, in building the infrastructure to support widespread use of administrative records in its household survey programs. The bureau's administrative records program, both now and in the future as it adds new sets of records and analysis capabilities, will be an important resource for applications of administrative records in a reengineered Survey of Income and Program Participation.
Acquisition of Records

Conclusion 3-3: Many relevant federal administrative records are readily available to the Census Bureau for use in a reengineered Survey of Income and Program Participation (SIPP). However, most state administrative data are not available for use in a reengineered SIPP at this time and could be difficult to obtain.

Recommendation 3-1: The Census Bureau should seek to acquire additional federal records that are relevant to the Survey of Income and Program Participation, which could include records from the U.S. Department of Veterans Affairs and the Office of Child Support Enforcement.

Recommendation 3-2: The Census Bureau, in close consultation with users, should develop a strategy for acquiring selected state administrative records, recognizing that it will be costly and probably unfeasible to acquire all relevant records from all or even most states. The bureau's acquisition strategy should be guided by such criteria as the importance of the income source for lower income households, particularly in times of economic distress, and the relative ease of acquiring the records. Unemployment insurance benefit records should be a high priority for the Census Bureau to acquire on both of these counts, and the bureau should investigate whether it is possible to acquire these records from the National Directory of New Hires, which would eliminate the need to negotiate with individual states.

Indirect Uses of Records

Conclusion 3-4: Indirect uses of administrative records are those uses, such as evaluation of data quality and improvement of imputation models for missing data, in which the administrative data are never recorded on survey records. They are advantageous for a reengineered Survey of Income and Program Participation (SIPP) in that they should have little or no adverse effects on timeliness or the needed level of confidentiality protection of SIPP data products.

Recommendation 3-3: The Census Bureau, in close cooperation with knowledgeable staff from program agencies, should conduct regular, frequent assessments of Survey of Income and Program Participation (SIPP) data quality by comparison with aggregate counts of recipients and income and benefit amounts from appropriate administrative records. When feasible, the bureau should also evaluate reporting errors for income sources—both underreporting and overreporting—by exact-match studies that link SIPP records with the corresponding administrative records.
The Census Bureau should use the results of aggregate and individual-level comparisons to identify priority areas for improving SIPP data quality.

Recommendation 3-4: The Census Bureau should move to replace hot-deck imputation routines for missing data in the Survey of Income and Program Participation with modern model-based imputations, implemented multiple times to permit estimating the variability due to imputation. Imputation models for program participation and benefits should make use of program eligibility criteria and characteristics of beneficiaries from administrative records so that the imputed values reflect as closely as possible what is known about the beneficiary population. Before implementation, new imputation models should be evaluated to establish their superiority to the imputation routines they are to replace.
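The mechanics of model-based multiple imputation can be sketched briefly. The example below is illustrative only: it uses simulated data and a simple regression model as a stand-in for the richer models, built from eligibility criteria and administrative data, that the recommendation envisions. Each missing value is drawn from a predictive distribution rather than filled with a point estimate, and the between-imputation spread across the completed data sets feeds Rubin's rules for total variance.

```python
# Hedged sketch (not the Census Bureau's method): multiple imputation of a
# missing income variable, with Rubin's rules for the variance of the mean.
import numpy as np

rng = np.random.default_rng(1)
n, m = 500, 5                              # records, number of imputations
age = rng.uniform(20, 65, n)
income = 500 + 30 * age + rng.normal(0, 200, n)
missing = rng.random(n) < 0.2              # ~20% of incomes unobserved
obs = ~missing

means, variances = [], []
for _ in range(m):
    # Fit the imputation model on observed cases.
    X = np.column_stack([np.ones(obs.sum()), age[obs]])
    beta, res, *_ = np.linalg.lstsq(X, income[obs], rcond=None)
    sigma = np.sqrt(res[0] / (obs.sum() - 2))
    # Draw imputations from the predictive distribution, not the point fit,
    # so repeated imputations vary and reflect model uncertainty.
    y = income.copy()
    y[missing] = (beta[0] + beta[1] * age[missing]
                  + rng.normal(0, sigma, missing.sum()))
    means.append(y.mean())                 # estimate from this completed set
    variances.append(y.var(ddof=1) / n)    # its within-imputation variance

# Rubin's rules: total variance = within + (1 + 1/m) * between.
qbar = np.mean(means)
within = np.mean(variances)
between = np.var(means, ddof=1)
total_var = within + (1 + 1 / m) * between
```

The `(1 + 1/m) * between` term is what a single (hot-deck or otherwise) imputation cannot supply: it is the added uncertainty due to the values being imputed rather than observed.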

Recommendation 3-5: The Census Bureau should request the Statistical and Science Policy Office in the U.S. Office of Management and Budget to establish an interagency working group on uses of administrative records in the Survey of Income and Program Participation (SIPP).[15] The group would include technical staff from relevant agencies who have deep knowledge of assistance programs and income sources along with Census Bureau SIPP staff. The group would facilitate regular comparisons of SIPP data with administrative records counts of income recipients and amounts (see Recommendation 3-3) and advise the Census Bureau on priorities for acquiring additional federal and selected state administrative records, how best to tailor imputation models for different sources of income and program benefits, and other matters related to the most effective ways to use administrative records in SIPP. The Census Bureau should regularly report on its progress in implementing priority actions identified by the group.

Direct Uses of Records

Conclusion 3-5: Direct uses of administrative records in a reengineered Survey of Income and Program Participation (SIPP), which include substituting administrative values for missing survey responses, adjusting survey responses for net underreporting, using administrative values instead of asking survey questions, and appending additional administrative data, potentially offer significant improvements in the quality of SIPP data on income and program participation. They also raise significant concerns about increased risks of disclosure and delays in the release of SIPP data products.

Recommendation 3-6: In the near term, the Census Bureau should give priority to indirect uses of administrative records in a reengineered Survey of Income and Program Participation (SIPP).
At the same time and working closely with data users and agencies with custody of relevant administrative records, the bureau should identify feasible direct uses of administrative records in SIPP to be implemented in the medium and longer terms. Social Security and Supplemental Security Income benefit records, which are available to the Census Bureau on a timely basis, are prime candidates for research and development on ways to use the administrative values directly—either to adjust survey responses for categories of beneficiaries or to replace survey questions (which would reduce respondent burden)—in ways that protect confidentiality.

[15] See Recommendation 4-5 regarding an advisory group of outside researchers and policy analysts.

Recommendation 3-7: When considering the addition to the Survey of Income and Program Participation (SIPP) of administrative records values for variables that have never been ascertained in the survey itself, the Census Bureau should ensure that the benefits from the added variables are worth the costs, such as additional steps to protect confidentiality. The bureau should consult closely with users to be sure that the added variables are central to SIPP's purpose to provide information on the short-run dynamics of economic well-being and that their inclusion does not compromise the ability to release public-use microdata files that accurately represent the survey data.

Confidentiality Protection and Data Access

Conclusion 3-6: Multiple strategies for confidentiality protection and data access are necessary for a survey as rich in data as the Survey of Income and Program Participation. Public-use microdata files, which are available on a timely basis and in which confidentiality protection techniques do not unduly distort the relationships in the data, are the preferred mode of data release. Some uses may require access to confidential data that at present can be provided only at one of the Census Bureau's Research Data Centers.

Recommendation 3-8: The Census Bureau should develop confidentiality protection techniques and restricted access modes for the Survey of Income and Program Participation (SIPP) that are as user-friendly as possible, consistent with the bureau's duty to minimize disclosure risk. In this regard, the bureau should develop partial synthesis techniques for SIPP public-use microdata files that, based on evaluation results, are found to preserve the research utility of the information.
For SIPP data that cannot be publicly released, the Census Bureau should give high priority to developing a secure remote access system that does not require visiting a Research Data Center to use the information. The bureau should also deposit SIPP files of linked survey and administrative records data (with identifiers removed) at all Research Data Centers in order to expand the opportunities for research that contributes to scientific knowledge and informed public policy.
