Suggested Citation:"Part II: Commentary." National Research Council. 2009. Principles and Practices for a Federal Statistical Agency: Fourth Edition. Washington, DC: The National Academies Press. doi: 10.17226/12564.


Part II: Commentary

This section comments on most of the topics in the principles and practices; the comments are offered to explain, illustrate, or further define the statement of principle in Part I.

DEFINITION OF A FEDERAL STATISTICAL AGENCY

A federal statistical agency is a unit of the federal government whose principal function is the compilation and analysis of data and the dissemination of information for statistical purposes. A statistical agency may be labeled a bureau, center, division, or office, or carry a similar title, so long as it is recognized as a distinct entity. Statistical agencies have been established for several reasons: (1) to develop new information for an area of public concern (e.g., the Bureau of Labor Statistics, the National Center for Health Statistics); (2) to conduct large statistical collection and dissemination operations specified by law (e.g., the U.S. Census Bureau); (3) to compile and analyze statistics from sets of administrative records for policy purposes and public use (e.g., the Statistics of Income Division in the Internal Revenue Service); and (4) to develop broad and consistent estimates from a variety of statistical and administrative sources in accordance with a prespecified conceptual framework (e.g., the Bureau of Economic Analysis in the U.S. Department of Commerce). Once established, many statistical agencies engage in all of these functions to varying degrees.

This definition of a federal statistical agency does not include many statistical activities of the federal government because they are not performed by distinct units, or because they do not result in the dissemination of statistics to others (for example, statistics compiled by the U.S. Postal Service to set rates or by the U.S. Department of Defense to test weapons; see National Research Council, 1998b, 2002b, 2003b, 2006d, on statistics and testing for defense acquisition). Nor does it include agencies whose primary functions are the conduct or support of problem-oriented research, although their research may be based on information gathered by statistical means, and they may also sponsor important surveys, as do, for example, the National Institutes of Health, the Agency for Healthcare Research and Quality, and other agencies in the U.S. Department of Health and Human Services. Finally, this definition of a statistical agency does not usually include agencies whose primary function is policy analysis and planning (e.g., the Office of Tax Analysis in the U.S. Department of the Treasury, the Office of the Assistant Secretary for Planning and Evaluation in the U.S. Department of Health and Human Services). Such agencies may collect and analyze statistical information, and statistical agencies, in turn, may perform some policy-related analysis (e.g., produce reports on trends in after-tax income or child care arrangements of families). However, to maintain credibility as an objective source of accurate, useful information, statistical agencies must be separate from units that are involved in developing policy and assessing policy alternatives.

The work of federal statistical agencies is coordinated through the Interagency Council on Statistical Policy (ICSP), created by the U.S. Office of Management and Budget (OMB) in the 1980s and authorized in statute in the 1995 reauthorization of the Paperwork Reduction Act.
The ICSP is chaired by OMB and currently includes representation from a total of fourteen agencies and units, which are housed in nine cabinet departments and three independent agencies (see Appendix A):

• Bureau of Economic Analysis (Commerce Department)
• Bureau of Justice Statistics (Justice Department)
• Bureau of Labor Statistics (Labor Department)
• Bureau of Transportation Statistics (Transportation Department)
• Census Bureau (Commerce Department)
• Economic Research Service (Agriculture Department)
• Energy Information Administration (Energy Department)
• National Agricultural Statistics Service (Agriculture Department)
• National Center for Education Statistics (Education Department)
• National Center for Health Statistics (Health and Human Services Department)
• Office of Environmental Information (Environmental Protection Agency)
• Office of Research, Evaluation, and Statistics (Social Security Administration)
• Science Resources Statistics Division (National Science Foundation)
• Statistics of Income Division (Treasury Department)

Throughout the federal government, OMB recognizes more than 80 units and agencies that are not statistical agencies but that have annual budgets of $500,000 or more for statistical activities (U.S. Office of Management and Budget, 2008c:Table 1). The principles for federal statistical agencies presented here should apply to other federal agencies that carry out statistical activities, and they may find many of the detailed practices pertinent as well. Similarly, the principles and practices may be relevant to statistical units in state and local government agencies, and international audiences may also find them useful.

ESTABLISHMENT OF A FEDERAL STATISTICAL AGENCY

One of the most important reasons for establishing a statistical agency is to provide information that will allow for an informed citizenry. A democracy depends on an informed electorate. A citizen has a right to information that comes from a trustworthy, credible source and is relevant, accurate, and timely. Timely information of high quality is also critical to policy analysts and decision makers in both the public and private sectors. (For more information on the purposes of official statistics, see the Fundamental Principles of Official Statistics of the United Nations Statistical Commission in Appendix C; see also U.N. Economic Commission for Europe, 2003; U.N. Statistical Commission, 2003.)
Federal statistical agencies serve the key functions of providing a broad array of information to the public and policy makers and of ensuring the necessary quality and credibility of the data.

Commercial, nonprofit, and academic organizations in the private sector also provide useful statistical information, including data they collect themselves and data they acquire from government agencies and other data collectors to which they add value. However, because the benefits of statistical information are shared widely throughout society and because it is often difficult to collect payments for these benefits, private markets are not likely to provide all of the data that are needed for public and private decision making or to make data as widely available as needed for important public purposes. Government statistical agencies are established to ensure that a broad range of information is publicly available. (See National Research Council, 1999b and 2005b, for a discussion of the governmental role in providing public goods, or near public goods, such as research and data.)

The United States government collected and published statistics long before any distinct federal statistical agency was formed (see Duncan and Shelton, 1978; Norwood, 1995). The U.S. Constitution mandated the conduct of a decennial census of population beginning in 1790, and the census enumeration was originally conducted by U.S. marshals as one of their many duties. Legislation providing for the compilation of statistics on agriculture, education, and income was enacted by Congress in the 1860s. The Bureau of Labor (forerunner of the Bureau of Labor Statistics) was established by law in 1884 as a separate agency with a general mandate to respond to widespread public demand for information on the conditions of industrial workers. The Census Bureau was established as a permanent agency in 1902 to conduct the decennial census and related statistical activities.

Many federal statistical agencies that can trace their roots back to the 19th or early 20th century, such as the National Center for Education Statistics and the National Center for Health Statistics, were organized in their current form following World War II.
Several relatively new agencies have since been established, including the Energy Information Administration, the Bureau of Justice Statistics, and the Bureau of Transportation Statistics. In every case, the agency itself, in consultation with users of its information, has major responsibility for determining its specific statistical programs and for setting priorities. Initially, many of these agencies also had responsibilities for certain policy analysis functions for their department heads. More recently, policy analysis has generally been located in separate units that are not themselves considered to be statistical agencies, a separation that helps establish and maintain the credibility of statistical agencies as providers of data and analyses that are not designed for particular policy alternatives.

A statistical agency has at least two roles: (1) provider of the statistical information and analysis needed to inform policy making and program assessment by its own department, and (2) source of national statistics for the public in its area of concern. It is sometimes difficult to keep these two roles distinct on policy-relevant statistics. An effective statistical agency, nevertheless, will frequently play a creative, not just reactive, role in the development of data needed for policy analysis. Sometimes federal statistical agencies play additional roles, such as monitor and consultant on statistical matters to other units within the same department (see, e.g., National Research Council, 1985a) and collector of data on a reimbursable basis for other agencies.

There is no set rule or guideline for when it is appropriate to establish a separate federal statistical agency, carry on statistical activities within the operating units of departments and independent agencies, or contract for statistical services from existing federal statistical agencies or other organizations. Establishment of a federal statistical agency should be considered when one or more of the following conditions prevail:

• There is a need for information on an ongoing basis beyond the capacity of existing operating units, possibly involving other departments and agencies. Such needs may require coordinating data from various sources, initiating new data collection programs to fill gaps, or developing regularly updated time series of estimates.
• There is a need, as a matter of credibility, to ensure that major data series are independent of policy makers’ control.
• There is a need to establish the functional separation of data on individuals and organizations that are collected for statistical purposes from data on individuals and organizations that may be used for administrative, regulatory, or law enforcement uses. Such separation, recommended by the Privacy Protection Study Commission (1977), bolsters a culture and practice of respect for privacy and protection of confidentiality.
Functional separation is easier to maintain when the data to be used for statistical purposes are compiled and controlled by a unit that is separate from operating units or department-wide data centers. The Confidential Information Protection and Statistical Efficiency Act of 2002 (CIPSEA) extended legal confidentiality protection to statistical data collections that may be carried out by any federal agency, whether a statistical agency or other type of agency (see Appendix B). Nonetheless, functional separation of statistical data from other kinds of data is important because it makes promises of confidentiality protection more credible.
• There is a need to emphasize the principles and practices of an effective statistical agency, for example, professional practice, openness about the data provided, and wide dissemination of data.
• There is a need to encourage research and development of a broad range of statistics in a particular area of public interest or of government activity or responsibility.
• There is a need to consolidate compilation, analysis, and dissemination of statistics in one unit to encourage high-quality performance, eliminate duplication, and streamline operations.

[Footnote: The National Research Council (2001b:Ch. 6) cited a number of these reasons in recommending to the U.S. Department of Health and Human Services that it establish or identify a statistical unit to be assigned responsibility and authority for carrying out statistical functions and data collection for social welfare programs and the populations they serve; see also National Research Council and Institute of Medicine (2004).]

PRINCIPLES FOR A FEDERAL STATISTICAL AGENCY

Principle 1: A federal statistical agency must be in a position to provide objective information that is relevant to issues of public policy.

A statistical agency supplies information not only for the use of managers and policy makers in the executive branch and for legislative designers and overseers in Congress, but also for all those who require objective statistical information on public issues, whether the information is needed for purposes of production, trade, consumption, or participation in civic affairs. Just as a free enterprise economic system depends on the availability of economic information to all participants, a democratic political system depends on, and has a fundamental duty to provide, wide access to information on education, health, transportation, the economy, the environment, criminal justice, and other social concerns. Federal statistical agencies are responsible for providing statistics on conditions in a variety of areas.
The resulting information is used both inside and outside the government not only to delineate problems and sometimes to suggest courses of action, but also to evaluate the results of government activity or lack of activity. The statistics provide much of the basis on which the government itself is judged. This role places a heavy responsibility on federal statistical agencies for impartiality and objectivity.

[Footnote: Under the guidance issued for CIPSEA (see Appendix B), OMB in 2007 recognized two new statistical units: the Office of Applied Studies within the Substance Abuse and Mental Health Services Administration of the U.S. Department of Health and Human Services, and the Microeconomic Surveys Section of the Federal Reserve Board of Governors.]

In order to provide information that is relevant to public issues, statistical agencies need to reach out to users of the data. Federal statistical agencies usually are in touch with the primary users in their own departments. Considerable energy and initiative are required to open avenues of communication more broadly to other current and potential users, including analysts and policy makers in other federal departments, state and local government agencies, academic researchers, private-sector organizations, organized constituent groups, the media, and Congress. Advisory committees are recommended as a means to obtain the views of users, as well as people with relevant technical expertise (see, e.g., National Research Council, 1993a). Many agencies obtain advice from committees that are chartered under the Federal Advisory Committee Act; examples include the Advisory Committee on Agriculture Statistics for the National Agricultural Statistics Service, the Board of Scientific Counselors for the National Center for Health Statistics, and the Census Advisory Committee of Professional Associations for the Census Bureau. The Federal Economic Statistics Advisory Committee (FESAC), chartered in November 1999, provides substantive and technical advice to three agencies (the Bureau of Economic Analysis, the Bureau of Labor Statistics, and the Census Bureau), thereby providing an important cross-cutting perspective on major economic statistics programs (see http://www.bls.gov/bls/fesac.htm [December 2008]).
Some agencies obtain advice from committees and working groups that are organized by an independent association, such as the American Statistical Association’s Committee on Energy Statistics for the Energy Information Administration.

One frequently recommended method for alerting statistical agencies to emerging statistical information needs is for the agency’s own staff to engage in analysis of its data (Martin, 1981; Norwood, 1995; Triplett, 1991). For example, relevant analysis may use the agency’s data to examine correlates of key social or economic phenomena or to study the statistical error properties of the data. Such in-house analysis can lead to improvements in the quality of the statistics, to identification of new needs, to a reordering of priorities, and to closer cooperation and mutual understanding with policy analysis units. In its work for a policy analysis unit, a statistical agency describes conditions and possibly measures progress toward some previously identified goal, but it refrains from making policy recommendations. The distinction between statistical analysis and policy analysis is not always clear, and a statistical agency will need to consider carefully the extent of policy-related activities that are appropriate for it to undertake.

Principle 2: A federal statistical agency must have credibility with those who use its data and information.

Users of a statistical agency’s data must be able to trust that the data were collected and analyzed in an objective, impartial manner and that they are as accurate and timely as the agency can make them. An agency should make every effort to provide accurate and credible statistics that will permit policy debates to be concerned about policy, not about the credibility of the data. Credibility is enhanced when an agency fully informs users of the strengths and weaknesses of the data, makes data available widely, and consults with users about priorities for data collection and analysis. When it does so, an agency is perceived to be working in the national interest, not the interest of a particular administration (Ryten, 1990).

Principle 3: A federal statistical agency must have the trust of those whose information it obtains.

The statistics programs of the federal government rely in large part on information supplied by individuals and by organizations outside the federal government, such as state and local governments, businesses, and other organizations. Some of this information is required by law or regulation (such as employers’ wage reports), some of it is related to administration of government programs (such as information provided by benefit recipients), but much of it is obtained through the voluntary cooperation of respondents in statistical surveys. Even when response is mandatory, the cooperation of respondents reduces costs and likely promotes accuracy (see National Research Council, 1995b, 2004e).
Important elements in encouraging such cooperation are that respondents believe that the data requested are important and legitimate for the government to collect, that the data are being collected in an impartial, competent manner, and that the confidentiality of their responses will be protected.

In brief, trust in a statistical agency must be maintained, and the agency must not be perceived as being swayed by political considerations. Respondent trust also depends on providing respondents with realistic promises of confidentiality that the agency can reasonably expect to honor and then scrupulously honoring those promises. Finally, respondent trust depends on adopting practices that respect personal privacy, such as taking steps to minimize the intrusiveness of questions and the time and effort required to participate in a survey.

Principle 4: A federal statistical agency must have a strong position of independence within the government.

A statistical agency must be able to provide credible information that may be used to evaluate the programs and policies of its own department or the government as a whole. More broadly, a statistical agency must be a trustworthy source of objective, accurate information for decision makers, analysts, and others inside and outside the government who want to use statistics to understand present conditions, draw comparisons with the past, and help guide plans for the future. For these purposes, a strong position of independence for a statistical agency is essential.

Statistical agency independence must be exercised in a broad framework. Legislative authority usually gives ultimate responsibility to the secretary of the department rather than the statistical agency head. In addition, an agency is subject to the normal budgetary processes and to various coordinating and review functions of OMB, as well as the legislative mandates, oversight, and informal guidance of Congress.

Within this broad framework, a statistical agency must work to maintain its credibility as an impartial purveyor of information. In the long run, the effectiveness of an agency depends on its maintaining a reputation for impartiality; thus, an agency must be continually alert to possible infringements on its credibility and be prepared to argue strenuously against such infringements.
An agency head’s independence can be strengthened by being appointed for a fixed term by the President, with approval by the Senate, as is the case with the heads of the Bureau of Labor Statistics and the National Center for Education Statistics. It is desirable that a fixed term not coincide with the presidential term, so that professional considerations are more likely to be paramount in the appointment process. In contrast, the heads of the Bureau of Justice Statistics, the Census Bureau, and the Energy Information Administration are presidential appointees, but their terms are not fixed and usually end with a change of administration. In some instances, heads of statistical agencies are career senior executives.

[Footnote: See the Fundamental Principles of Official Statistics of the United Nations Statistical Commission in Appendix C.]

It is also desirable that a statistical agency head have direct access to the secretary of the department or the head of the independent agency in which the statistical agency is located. Such access allows the head to inform new secretaries about the appropriate role of a statistical agency and present the case for new statistical initiatives to the secretary directly. Among the agency heads with presidential appointments, such direct access currently is provided by legislation only for the Bureau of Labor Statistics and the Energy Information Administration.

It is desirable for a statistical agency to have its own funding appropriation from Congress and not be dependent on allocations from the budget of its parent department or agency, which may be subject to reallocation.

These organizational aspects (appointment of the agency head by the President, with approval by the Senate, for a fixed term not coincident with that of the administration; direct access to the secretary of the agency’s department; and separate budgetary authority) are neither necessary nor sufficient for a strong position of independence for a statistical agency, but they facilitate such independence. In contrast, some agencies are under several layers of supervision within their departments (see Appendix A). Other characteristics related to independence are that a statistical agency has the following:

• Authority for professional decisions over the scope, content, and frequency of data compiled, analyzed, or published within the framework set by its authorizing legislation. Most statistical agencies have such broad authority, limited by budgetary constraints, departmental requirements, OMB review, and congressional mandates.
• Authority for selection and promotion of professional, technical, and operational staff.
• Recognition by policy officials outside the statistical agency of its authority to release statistical information, including accompanying press releases and documentation, without prior clearance.
• Authority to control information technology systems for data processing and analysis in order to securely maintain the integrity and confidentiality of data and reliably support timely and accurate production of key statistics.
• Authority for the statistical agency head and qualified staff to speak about the agency’s statistics before Congress, with congressional staff, and before public bodies.
• Recognition by policy officials outside the statistical agency of its authority to release statistical information, including accompanying press releases and documentation, without prior clearance.

• Authority to control information technology systems for data processing and analysis in order to securely maintain the integrity and confidentiality of data and reliably support timely and accurate production of key statistics.

• Authority for the statistical agency head and qualified staff to speak about the agency’s statistics before Congress, with congressional staff, and before public bodies.

• Adherence to fixed schedules in public release of important statistical indicators to prevent even the appearance of manipulation of release dates for political purposes.

• Maintenance of a clear distinction between statistical information and policy interpretations of such information by the president, the secretary of the department, or others in the executive branch.

• Dissemination policies that foster regular, frequent release of major findings from an agency’s statistical programs to the public via the media, the Internet, and other means.

Control over personnel actions, especially the selection and appointment of qualified professional staff, including senior executive career staff, is an important aspect of independence. Agency staff reporting directly to the agency head should have formal education and deep experience in the substantive, methodological, operational, or management issues facing the agency as appropriate for their positions. In addition, professional qualifications are of the utmost importance for statistical agency heads, whether the profession is that of statistician or the subject-matter field of the statistical agency (National Research Council, 1997b). Relevant professional associations can be a source of valuable input on suitable candidates.

The authority to ensure that information technology systems fulfill the specialized needs of the statistical agency is another important aspect of independence. A statistical agency must be able to vouch for the integrity, confidentiality, and impartiality of the information collected and maintained under its authority so that it retains the trust of its data providers and data users.
Such trust is fostered when a statistical agency has control over its information technology resources, and there is no opportunity or perception that policy, program, or regulatory agencies could gain access to records of individual respondents. A statistical agency also needs control over its information technology resources to support timely and accurate release of official statistics, which are often produced under stringent deadlines.

Authority to decide the scope and specific content of the data collected or compiled and to make decisions about technical aspects of data collection programs is yet another important element of independence, although such authority can never be without limits. Congress frequently specifies particular data that it wishes to be collected (e.g., data on job openings and labor turnover by the Bureau of Labor Statistics, data on family farms by the Economic Research Service and National Agricultural Statistics Service) and, in the case of the decennial census, requires an opportunity to review

the proposed questions. The OMB Office of Information and Regulatory Affairs, under the Paperwork Reduction Act (and under the preceding Federal Reports Act), has the responsibility for designating a single data collection instrument for information wanted by two or more agencies. It also has the responsibility under the same act for reviewing all questionnaires and other instruments for the collection of data from 10 or more respondents (see Appendix B). In addition, the courts sometimes become involved in interpreting laws and regulations that affect statistical agencies, as in a number of issues concerning data confidentiality and Freedom of Information Act requests and in the use of sampling in the population census.

The budgetary constraints on statistical agencies and OMB review of data collections are ongoing; other pressures depend, in part at least, on the relations between a statistical agency and those who have supervisory or oversight functions. Agencies need to develop skills in communicating to oversight groups the need for statistical series and credibility in assessing the costs of statistical work. In turn, although it is standard practice for the secretary of a department or the head of an independent agency to have ultimate responsibility for all matters within the department or agency, the head of a statistical agency, for credibility, should be allowed full authority in professional and technical matters. For example, decisions to revise the methodology for calculating the consumer price index (CPI) or the gross domestic product (GDP) have been and are properly made by the relevant statistical agency heads.

Other aspects of independence that underscore a statistical agency’s credibility are important as well.
Authority to release statistical information and accompanying materials (including press releases) without prior clearance by department policy officials is important so that there is no opportunity for or perception of political manipulation of any of the information. Authority for the statistical agency head and qualified staff to speak about the agency’s statistics before Congress, with congressional staff, and before public bodies is also important to bolster the agency’s standing.

When a statistical agency releases information publicly, a clear distinction should be made between the statistical information and any policy interpretations of such information. Not even the appearance of manipulation for political purposes should be allowed. This is one reason that statistical agencies are required by Statistical Policy Directive Number 3 (U.S. Office of Management and Budget, 1985) to adhere to predetermined schedules for the public release of key economic indicators and take steps to ensure that no person outside the agency can gain access to such indicators before the

official release time. Statistical Policy Directive Number 4 (U.S. Office of Management and Budget, 2008b) requires agencies to develop and publish schedules for release of other important social and economic indicators as well (see Appendix B). When an agency modifies a customary release schedule for statistical purposes, it should announce and explain the change as far in advance as possible.

PRACTICES FOR A FEDERAL STATISTICAL AGENCY

Practice 1: A Clearly Defined and Well-Accepted Mission

A clear understanding of the mission of an agency, the scope of its statistical programs, and its authority and responsibilities is basic to planning and evaluating its programs and to maintaining credibility and independence from political control (National Research Council, 1986, 1997b). Some agency missions are clearly spelled out in legislation; other agencies have only very general legislative authority. On occasion, very specific requirements may be set by legislation or regulation.

Agencies should communicate their mission clearly to others. The use of the Internet is one means to publicize an agency’s mission to a broad audience and to provide related information, including enabling legislation, the scope of the agency’s statistical program, confidentiality provisions, operating procedures, and data quality guidelines.

An agency’s mission should focus on the compilation, evaluation, analysis, and dissemination of statistical information. In addition, considerable and formal attention must be paid to setting statistical priorities (National Research Council, 1976). Advice from outside groups should be sought on the agency’s statistical program, on setting statistical priorities, on the statistical methods used, and on data products and services.
Such advice may be sought in a variety of formal and informal ways, but it should be obtained from data users and providers as well as professional or technical experts in the subject-matter area and in statistical methods and procedures. A strong research program in the agency’s subject-matter field can assist in setting priorities and identifying ways to improve an agency’s statistical programs (Triplett, 1991).

Practice 2: Continual Development of More Useful Data

Federal statistical agencies cannot be static. To provide information of continued relevance for public and policy use, they must continually

anticipate data needs for future policy considerations and look for ways to develop data systems that can serve broad purposes. To improve the quality and timeliness of their information, they must keep abreast of methodological and technological advances and be prepared to implement new procedures in a timely manner. They must also continually seek ways to make their operations more efficient. Preparing for the future requires that agencies reevaluate existing data series, plan new data series as required, and be innovative and open in their consideration of ways to improve their programs. Because of the decentralized nature of the federal statistical system, innovation often requires cross-agency collaboration. Innovation also implies a willingness to implement different kinds of data collection efforts to answer different needs.

Integration of Data Sources

One way to increase the usefulness of survey data is to integrate them with data from other surveys or with data from administrative records, such as social program records. Such integration typically requires that several agencies work together.

For example, in the area of health care provider statistics, a study by a panel of the Committee on National Statistics (CNSTAT) concluded that no single survey was likely ever to meet all the criteria, address all the technical problems, or meet all users’ needs for data. In order to provide adequate information on the availability, financing, and quality of health care, a coordinated and integrated system of data collection activities involving several organizational entities was required (National Research Council and Institute of Medicine, 1992).
Similarly, a CNSTAT study on retirement income statistics concluded that some of the information that is essential for analysis of savings and retirement decisions and the effect of medical care use and expenditures on retirement income security is most efficiently and accurately obtained from existing administrative records (National Research Council, 1997a). To be useful for estimation, this information (e.g., Social Security earnings histories, Medicare and Medicaid benefits) must be linked to individual data that are available from such panel surveys as the Health and Retirement Study sponsored by the National Institute on Aging, the National Longitudinal Surveys sponsored by the Bureau of Labor Statistics, and the Census Bureau’s Survey of Income and Program Participation. Similarly, linkage of employer and employment survey data with administrative records can

provide enhanced analysis and modeling capability: a good example is the Census Bureau’s Longitudinal Employer-Household Dynamics program (see http://lehd.did.census.gov/led [December 2008]; see also National Research Council, 2007a).

Challenges to cost-effective data collection from households and individuals because of declining survey response (see, e.g., de Leeuw and de Heer, 2002) make it more important than ever to consider ways in which administrative records can be used to bolster the completeness and quality of estimates from statistical agency programs while containing costs. One or a combination of the following four approaches could be used: evaluate survey data against administrative data (taking cognizance of differences that could affect the comparisons and of sources of error in both sets of records); improve the methods used to impute values to survey nonrespondents on the basis of patterns in administrative data; substitute administrative data for survey data; and combine survey and administrative data in statistical models for specific estimates (see National Research Council, 2000c, 2000d).

In most uses of administrative data, not only must consideration be given to upfront investments to facilitate the most effective approach to their use, but also careful attention must be paid to the means by which the confidentiality of linked or augmented data files can be protected while allowing access for research purposes (National Research Council, 2005b). Care must also be taken to ensure that extracts of data from administrative records were prepared correctly according to the specifications provided by the statistical agency.

Sharing of Microdata

Another way to improve data quality and develop new kinds of information is for statistical agencies that collect similar information to share microdata records.
For example, the sharing of business data would make it possible to evaluate reporting errors and the completeness of coverage of business firms in different surveys. Such sharing would also make it possible to develop more useful and accurate statistics on the nation’s economy while decreasing the reporting burden on business data providers (National Research Council, 2006b).

[Footnote: Lower response rates reduce the effective sample size and increase the sampling error of estimates from surveys; lower rates also increase response bias in survey estimates to the extent that nonrespondents differ from respondents in ways that affect analysis and are not addressed by weighting and imputation procedures.]

Subtitle B of CIPSEA, for the first time in the nation’s history, authorizes the sharing of business data among the three principal statistical agencies that produce the nation’s key economic statistics—the Bureau of Economic Analysis (BEA), the Bureau of Labor Statistics (BLS), and the U.S. Census Bureau. The first formal proposal for data sharing under CIPSEA involved matching data from BEA’s international investment surveys with data from the Census Bureau’s Survey of Industrial Research and Development conducted for the National Science Foundation. The results helped BEA improve its survey sample frames and enabled the Census Bureau to identify companies that were not previously known to engage in research and development activities (U.S. Office of Management and Budget, 2004b:44-45).

Longitudinal Data

The need to understand temporal changes in important social or economic events may call for the development of longitudinal surveys that track people, institutions, or firms over time. Developing longitudinal data (and general purpose repeated cross-sectional data, as well) usually requires much coordination with policy research agencies, other statistical agencies, and academic researchers. Longitudinal data may require more sophisticated methods for collection and analysis than data from repeated or one-time cross-sectional surveys. In addition, considerable time may be needed to produce useful data products for analyzing transitions and other dynamic characteristics of longitudinal samples (although production of cross-sectional products from longitudinal surveys need not take long).
Yet data from longitudinal surveys are potentially very useful—sometimes, they are the only means to answer important policy questions (see, e.g., National Research Council, 1997a, on data needs to inform retirement income policy, and National Research Council, 2001b, on data needs to evaluate the effects of the 1996 welfare reform legislation).

Historically, because statistical agencies are oriented toward the mission of their particular department, the longitudinal surveys they developed (and cross-sectional data activities as well) typically focused on subject matter and population groups (or other entities) that the department serves. For example, separate data sets are available on health characteristics of infants and children, educational characteristics for children and teenagers, and work force characteristics for adults. Increasingly, however, agencies have considered surveys that follow individuals across such key transitions as from early childhood to school and from school to the labor force (National Research Council, 1998a; National Research Council and Institute of Medicine, 2004).

[Footnote: The Census Bureau cannot share with BEA or BLS any tax information of businesses or individuals that it has permission to acquire from the Internal Revenue Service for statistical purposes without revision of Title 26 of the U.S. Code.]

Examples of statistical agency surveys that are designed for analysis of some kinds of transitions include the Early Childhood Longitudinal Study (ECLS), sponsored by the National Center for Education Statistics in collaboration with other agencies, and the National Longitudinal Surveys of Youth (NLSY79, NLSY97), sponsored by the Bureau of Labor Statistics. The ECLS includes two cohorts of children, one of kindergartners in 1998 who were followed through eighth grade and another of babies born in 2001 who were followed through kindergarten (http://www.nces.edu.gov/ecls [December 2008]). A new cohort of kindergartners will be sampled in fall 2010 and followed through fifth grade. The NLSY includes two cohorts of young people, one of people ages 14-22 in 1979, who are being interviewed every other year, and the other of people ages 12-17 in 1997, who are being interviewed annually (http://www.bls.gov/nls/home.htm [December 2008]).

Other important longitudinal surveys are sponsored by research agencies—for example, the National Institute on Aging sponsors the Health and Retirement Study (HRS), and the National Institute of Child Health and Human Development sponsors the new National Children’s Study (NCS) (see National Research Council and Institute of Medicine, 2008).
The HRS, which began in 1992, includes people aged 50 and older, who are interviewed every 2 years, with a new cohort introduced every 6 years (http://hrsonline.isr.umich.edu [December 2008]). The NCS will include 100,000 children and follow them and their families from before birth through age 21, with enrollment at the first sites in January 2009 (http://www.nationalchildrensstudy.gov [December 2008]).

Operational Methods

It is important for statistical agencies to be innovative in the methods used for data collection, processing, estimation, analysis, and dissemination. Agencies need to investigate new or modified methods that have the potential to improve the accuracy and timeliness of their data and the efficiency

of their operations. Careful evaluation of new methods is required to assess their benefits and costs in comparison with current methods and to determine effective implementation strategies, including the development of methods for bridging time series before and after a change in procedures.

For example, experience with the use of computer-assisted interviewing techniques, which many agencies have adopted for data collection, has identified benefits. It has also identified challenges for the timely provision of data and documentation that require continued research to develop solutions that maximize the gains from these techniques (see National Research Council, 2003e).

Statistical agencies have turned to the Internet as a standard vehicle for data dissemination and are increasingly using it as a means of data collection. Internet dissemination facilitates the timely availability of data to a broad audience and provides a valuable tool for users to learn of related data sets from other agencies. However, it poses challenges in several areas, such as how best to provide information on data quality and appropriate use of the data to an audience that spans a wide range of analytical skills and understanding.

Internet data collection poses new challenges in such areas as sample design, questionnaire design, and protecting data confidentiality. Yet even as work is ongoing on meeting these challenges, population censuses around the world, federal business surveys, and other surveys are using the Internet as one data collection mode to reduce costs and facilitate response (see National Research Council, 2008a, on Internet use in population censuses). The use of the Internet also requires careful evaluation of the effects on the quality of responses in comparison with traditional data collection modes (telephone, mail, personal interview).
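A footnote in this practice observes that lower response rates reduce the effective sample size and increase sampling error. The arithmetic behind that observation can be made concrete with a minimal sketch, assuming simple random sampling and a worst-case proportion of 0.5; the designed sample size and response rates below are illustrative only, not drawn from any agency survey, and the sketch shows only the variance effect, not nonresponse bias.

```python
import math

def standard_error(p: float, n_effective: int) -> float:
    """Sampling standard error of an estimated proportion p from a
    simple random sample with n_effective respondents."""
    return math.sqrt(p * (1 - p) / n_effective)

invited = 10_000   # hypothetical designed sample size
p = 0.5            # worst-case proportion for illustration

# As the response rate falls, the effective sample size shrinks
# and the standard error of the estimate grows.
for response_rate in (0.9, 0.7, 0.5):
    n_eff = int(invited * response_rate)
    se = standard_error(p, n_eff)
    print(f"response rate {response_rate:.0%}: n = {n_eff}, SE = {se:.4f}")
```

Because the standard error scales as 1 over the square root of the effective sample size, a drop in response from 90 percent to 50 percent inflates the sampling error by roughly a third even before any nonresponse bias is considered.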
Practice 3: Openness About Sources and Limitations of the Data Provided

A critically important means to instill credibility and trust among data users and data providers is for an agency to operate in an open and fully transparent manner with regard to the sources and the limitations of its data. Openness requires that an agency provide a detailed description of its data with acknowledgment of any uncertainty and a description of the methods used and assumptions made. Agencies should provide to users reliable indications of the kinds and amounts of statistical error to which the data are subject (see Brackstone, 1999; Federal Committee on Statistical

Methodology, 2001a; see also President’s Commission on Federal Statistics, 1971). Some statistical agencies have developed detailed quality profiles for some of their major series, such as those developed for the American Housing Survey (Chakrabarty, 1996), the Residential Energy Consumption Survey (Energy Information Administration, 1996), the Schools and Staffing Survey (Kalton et al., 2000), and the Survey of Income and Program Participation (U.S. Census Bureau, 1998). Earlier, the Federal Committee on Statistical Methodology (1978c) developed a quality profile for employment as measured in the Current Population Survey. These profiles have proved helpful to experienced users and agency personnel responsible for the design and operation of major surveys and data series (see National Research Council, 1993a, 2007b).

Openness about data limitations requires much more than providing estimates of sampling error. In addition to a discussion of aspects that statisticians characterize as nonsampling errors, such as coverage errors, nonresponse, measurement errors, and processing errors, a description of the concepts used and how they relate to the major uses of the data is desirable. Descriptions of the shortcomings of and problems with the data should be provided in sufficient detail to permit the user to take them into account in analysis and interpretation. Descriptions of how the data relate to similar data collected by other agencies should also be provided, particularly when the estimates from two or more series differ significantly in ways that may have policy implications.

Openness means that a statistical agency should describe how decisions on methods and procedures were made for a data collection program. It is important to be open about research conducted on methods and data and other factors that were weighed in a decision.
Openness also means that, when mistakes are discovered after statistics are released, the agency has an obligation to issue corrections publicly and in a timely manner. The agency should not only use the same dissemination vehicles to announce corrections that it used to release the original statistics, but also use additional vehicles, as appropriate, to alert the widest possible audience of current and future users to the corrections.

In summary, agencies should make an effort to provide information on the quality, limitations, and appropriate use of their data that is as frank and complete as possible. Such information, which is sometimes termed “metadata,” should be made available in ways that are easy for users to access and understand, recognizing that users differ in their level of understanding of statistical data (see National Research Council, 1993a, 1997b,

2007b). Agencies need to work to educate users that all data contain some uncertainty and error, which does not mean the data are wrong but that they must be used with care.

The Information Quality Act of 2000 stimulated all federal agencies to develop written guidelines for maintaining and documenting the quality of their information programs and activities. Using a framework developed collaboratively by the members of the Interagency Council on Statistical Policy (U.S. Departments of Agriculture et al., 2002), individual statistical agencies have developed quality guidelines for their own data collection programs, which are available on the Internet (see Practice 7 and Appendix B).

Practice 4: Wide Dissemination of Data

A statistical agency must have vigorous and well-planned dissemination programs to get information into the hands of users who need it on a timely basis. Planning should be undertaken from the viewpoint that the public has contributed the data elements, has paid for the data collection and processing, and should in return have the information accessible in ways that make it as useful as possible to the largest number of users.

A good dissemination program provides data to users in forms that are suited to their needs. Data release may take the form of regularly updated time series, cross-tabulations of aggregate characteristics of respondents, and analytical reports that are made available in printed publications, on computer-readable media (e.g., CD-ROM), and on the Internet (see Appendix D). Yet another form of dissemination involves access to microdata files, which make it possible to conduct in-depth research in ways that are not possible with aggregate data. Public-use microdata files may be developed for general release.
Such files contain data for individual respondents that have been processed to protect confidentiality by deleting, aggregating, or modifying any information that might permit individual identification. Alternatively, an agency may provide or arrange for a facility on the Internet to allow users to aggregate individual microdata to suit their purposes, with safeguards so that the data cannot be retabulated in ways that could identify individual respondents. Another alternative is to grant a license to individual researchers to analyze restricted microdata (that is, data that have not been processed for general release) at their own sites by agreeing to follow strict procedures for protecting confidentiality and accepting liability for penalties

if confidentiality is breached. A fourth alternative is to allow researchers to analyze restricted microdata at secure sites maintained by a statistical agency, such as one of the Census Bureau’s Research Data Centers located at several universities and research organizations around the country or the National Center for Health Statistics’ Research Data Center at its headquarters (see Doyle et al., 2001; National Research Council, 2005b). Agencies should consider all forms of dissemination in order to gain the most use of their data consistent with protecting the confidentiality of responses.

The stunning improvements over the past two decades in computing speed, power, and storage capacity, the growing availability of information from a wide range of public and private sources on the Internet, and the increasing richness of statistical agency data collections have increased the risk that individually identifiable information can be obtained (see National Research Council, 2003d:Ch. 5, 2005b). Statistical agencies must be vigilant in their efforts to protect against the increased threats to disclosure from their summary data and microdata products while honoring their obligation to be proactive in seeking ways to provide data to users. When statistical data are not disseminated in useful forms, there is a loss to the public, not only of wasted taxpayer dollars, but also of research findings that could have informed public policy and served other important societal purposes.

A good dissemination program for statistical data uses a variety of channels to inform the broadest possible audience of potential users about available data products and how to obtain them.
Such channels may include providing direct access to data on the Internet, depositing data products in libraries, establishing a network of data centers (such as the Census Bureau’s state data centers and the National Agricultural Statistics Service’s field offices), holding exhibits and making presentations at conferences, and maintaining lists of individuals and organizations to notify of new data. Agencies should also arrange for archiving of data with the National Archives and Records Administration (NARA) and other data archives, as appropriate, so that data are available for historical research in future years with suitable protections for confidentiality.

An effective dissemination program provides not only the data, but also information about the strengths and weaknesses of the data in ways that can be comprehended by diverse audiences. Information about the limitations of the data should be included in every form of data release, whether in a printed report, on a computer-readable data file, or on the Internet.

On occasion, the objective of presenting the most accurate data possible may require more time than is consistent with the needs of users for the

information. The tension between frequency and promptness of release on one hand and accuracy on the other should be explicitly considered. When concerns for timeliness prompt the release of preliminary estimates (as in some economic indicators), consideration should be given to the frequency of revisions and the mode of presentation of revised figures from the point of view of the users as well as the issuers of the data. Agencies that release preliminary estimates must educate the public about differences among preliminary, revised, and final estimates.

Practice 5: Cooperation with Data Users

Users of federal statistical data span a broad spectrum of interests and needs. They include policy makers, planners, administrators, and researchers in federal agencies, state and local governments, the business sector, and academia. They also include activists, citizens, students, and media representatives. An effective statistical agency endeavors to learn about its data users and to obtain input from them on the agency’s statistical programs.

The needs of users can be explored by forming advisory committees, holding focus groups, analyzing requests and Internet activity, or undertaking formal surveys of users. The task requires continual alertness to the changing composition and needs of users and the existence of potential users. An agency should cooperate with professional associations, institutes, universities, and scholars in the relevant fields to determine the needs of the research community and obtain their insight on potential uses. An agency should also work with relevant associations and other organizations to determine the needs of business and industry for its data.

Within the limitations of its confidentiality procedures as noted above, an agency should seek to provide maximum access to its data, including making the data available to external researchers for secondary analysis (National Research Council, 1985c, 2005b).
Having data accessible for a wide range of analyses increases the return on the investment in data collection and provides support for an agency’s program. Once statistical data are made public, they may be used in numerous ways not originally envisaged. An agency should attempt to monitor the major uses of its data as part of its efforts to keep abreast of user needs. In 2002 OMB introduced requirements for performance assessment of federal agencies; for statistical agencies, the requirements emphasize assessment of how well the agency understands and serves its users (see Appendix B).

Researchers and other users of data frequently request data from statistical agencies for specific purposes. The agency should have procedures in place for referring users to professionals within the agency who can comprehend the user’s purposes and needs and who have a thorough knowledge of the agency’s data. Statistical agencies should view these services as a part of their dissemination activities.

Ensuring equal access requires avoiding release of data to selected individuals or organizations in advance of other users. Agencies that prepare special tabulations of their data on request for external groups must be alert to the proposed uses. If the data are to be used in court cases, administrative proceedings, or collective bargaining negotiations, it is wise to have a known policy ensuring that all sides may receive the special tabulations, regardless of which side requested them or paid the cost of the tabulation.

Practice 6: Fair Treatment of Data Providers

Clear policies and effective procedures for protecting data confidentiality, respecting the privacy of respondents, and, more broadly, protecting the rights of human research participants are critical to maintaining the quality and comprehensiveness of the data that federal statistical agencies provide to policy makers and the public. Part of the challenge for statistical agencies is to develop effective means of communicating not only the agency’s protection procedures and policies, but also the importance of the data being collected for the public good.

Protecting Confidentiality

Data providers must believe that the data they give to a statistical agency will not be used by the agency to harm them. For statistical data collection programs, protecting the confidentiality of individual responses is considered essential to encourage high response rates and accuracy of response.
(For reviews of research on the relationship of concerns about confidentiality protection to response rates, see Hillygus et al., 2006; National Research Council, 2004e:Ch. 4.) Furthermore, if participants have been assured of confidentiality, then under federal policy for the protection of human subjects, disclosure of identifiable information about them would violate the principle of respect for persons even if the information is not sensitive and would not result in any social, economic, legal, or other harm (National Research Council, 2003d:Ch. 5).

Historically, some agencies had legislative mandates supporting promises of confidentiality (e.g., for the U.S. Census Bureau, Title 13 of the U.S. Code, first enacted in 1929, and for the National Agricultural Statistics Service, various provisions in Title 7 of the U.S. Code); other agencies (e.g., the Bureau of Labor Statistics) relied on strong statements of policy, legal precedents in court cases, or custom (see Gates, 2000; Norwood, 1995). The latter agencies risked having their policies overturned by judicial interpretations of legislation or executive decisions that might have required the agency to disclose identifiable data collected under a pledge of confidentiality (for an example involving the Energy Information Administration, see National Research Council, 1993b:185-186). To give additional weight and stature to policies that statistical agencies had pursued for decades, OMB issued a Federal Statistical Confidentiality Order on June 27, 1997. This order assured respondents who provided statistical information to specified agencies that their responses would be held in confidence and would not be used against them in any government action, “unless otherwise compelled by law” (U.S. Office of Management and Budget, 1997; see also Appendix B).

CIPSEA became law in 2002, as Title V of the E-Government Act of 2002. Subtitle A of CIPSEA provides a statutory basis for protecting the confidentiality of all federal data collected for statistical purposes under a confidentiality pledge, including but not limited to data collected by statistical agencies. Subtitle A places strict limits on the disclosure of individually identified information collected with a pledge of confidentiality; such disclosure to persons other than the employees or agents of the agency collecting the data can occur only with the informed consent of the respondent and the authorization of the agency head and only when the disclosure is not prohibited by any other law (e.g., Title 13).
It also provides penalties for employees or agents who knowingly or willfully disclose statistical information (up to 5 years in prison, up to $250,000 in fines, or both). OMB issued guidance in 2007 to assist agencies in implementing Subtitle A of CIPSEA (U.S. Office of Management and Budget, 2007; see also Appendix B).

Although confidentiality protection for statistical data is now on a much firmer legal footing across the federal government than prior to CIPSEA, there is an exception for some data from the National Center for Education Statistics (NCES) that could have an adverse effect on survey response. The USA PATRIOT Act of 2001, Section 508, amended the National Center for Education Statistics Act of 1994 to allow the U.S. Attorney General (or an assistant attorney general) to apply to a court to obtain any “reports, records, and information (including individually identifiable information) in the possession” of NCES that are considered relevant to an authorized investigation or prosecution of domestic or international terrorism. Section 508 also removed the penalties for NCES employees who furnish individual records under this section.

Statistical agencies continually strive to avoid inadvertent disclosure of confidential information in disseminating data. Recently, the widespread dissemination of statistical data via the Internet has heightened attention by agencies to ensuring that effective safeguards to protect confidential information are in place. Risks are increased when data for small groups are tabulated, when the same data are tabulated in a variety of ways, or when public-use microdata files (samples of records for unidentified individuals or units) are released with highly detailed content. Longitudinal surveys, for example, particularly newer ones, typically have richly detailed content for multiple domains (e.g., health, education, labor force participation) or multiple respondents (e.g., parents, students, teachers) or both. Risks may also be increased when surveys include linked administrative data or collect biomarkers from blood samples or other physiological measures (National Research Council, 2001a).

Because of the disclosure risks associated with detailed tabulations and rich public-use microdata files, there is always a tension between the desire to safeguard confidentiality and the desire to provide public access to data. This dilemma is an important one to federal statistical agencies, and it has stimulated ongoing efforts to develop new statistical and administrative procedures to safeguard confidentiality while permitting more extensive access. An effective federal statistical agency will exercise judgment in determining which of these procedures are best suited to its requirements to serve data users while protecting confidentiality.
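One of the simplest of these disclosure-limitation procedures is a primary suppression rule that withholds tabulation cells whose counts fall below a minimum threshold. The sketch below illustrates the idea only; the threshold of 3, the table contents, and the function name are hypothetical, and production disclosure-avoidance systems add complementary suppression, rounding, or noise infusion chosen to suit an agency’s own risk assessment.

```python
# Illustrative primary cell suppression using a minimum-count threshold.
# The threshold value and the example table are hypothetical.

THRESHOLD = 3  # minimum cell count considered safe to publish

def suppress_small_cells(table, threshold=THRESHOLD):
    """Replace counts below the threshold with None (suppressed)."""
    return {cell: (count if count >= threshold else None)
            for cell, count in table.items()}

counts = {
    ("county A", "age 65+"): 2,
    ("county A", "age <65"): 41,
    ("county B", "age 65+"): 17,
    ("county B", "age <65"): 96,
}
published = suppress_small_cells(counts)
# The ("county A", "age 65+") cell is withheld because its count is below 3.
```

In practice a rule this simple is only a starting point: once one cell in a row is suppressed, a second (complementary) cell must usually be suppressed as well so that the withheld value cannot be recovered from the row total.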
(Several Committee on National Statistics study panels have discussed these issues and alternative procedures for providing data access while maintaining confidentiality protection; see National Research Council, 1993b, 2000a, 2003d, 2005b.)

Respecting Privacy

To promote trust and encourage accurate response from data providers, it is important that statistical agencies respect their privacy. When data providers are asked to participate in a survey, they should be told whether the survey is mandatory or voluntary, how the data will be used, and who will have access to the data. In the case of voluntary surveys, information on these matters is necessary in order for data providers to give their informed consent to participate (see National Research Council, 2003d, on regulations and procedures for informed consent).

Respondents invest time and effort in replying to surveys. The amount of effort or burden varies considerably from survey to survey, depending on such factors as the complexity of the information that is requested. Statistical agencies should attempt to minimize such effort, to the extent possible, by using concepts and definitions that fit respondents’ common understanding; by simplifying questionnaires; by allowing alternative modes of response (e.g., via the Internet) when appropriate; and by using administrative records or other data sources, if they are sufficiently complete and accurate to provide some or all of the needed information. In surveys of businesses or other institutions, agencies should seek innovative ways to obtain information from the institution’s records and minimize the need for respondents to reprocess and reclassify information. It is also the responsibility of agencies to use qualified, well-trained interviewers. Respondents should be informed of the likely duration of a survey interview and, if the survey involves more than one interview, how many times they will be contacted over the life of the survey. This information is particularly important when respondents are asked to cooperate in extensive interviews, search for records, or participate in longitudinal surveys.

Ways in which participation in surveys can be made easier for respondents and result in more accurate data can be explored by such means as focus group discussions or surveys. Many agencies apply the principles of cognitive psychology to questionnaire design, not only to make the resulting data more accurate, but also to make more efficient use of respondents’ time and effort (National Research Council, 1984).
Some agencies thank respondents for their cooperation by providing them with brief summaries of the information after the survey is compiled.

Increasing privacy concerns may contribute to observed declines in survey response rates. In a time when individuals are inundated with requests for information from public and private sources, when there are documented instances of identity theft and other abuses of confidential information on the Internet, when individual information is being used for terrorism-related investigatory or law enforcement purposes, it may not be surprising that individuals object to responding to censuses and surveys, even when the questions appear noninvasive and the data are collected for statistical purposes under a pledge of confidentiality. (See National Research Council, 2008b, for a literature review of public opinion on privacy in the wake of the September 11, 2001, terrorist attacks [Appendix M], and for a conclusion [p. 84] that “census and survey data collected by the federal statistical agencies are not useful for terrorism prevention.”)

The E-Government Act of 2002 requires agencies to develop privacy impact assessments (PIAs) whenever “. . . initiating a new collection of information . . . in an identifiable form. . . .” The purpose of a privacy impact assessment is to ensure there is no collection, storage, access, use, or dissemination of identifiable information that is not both needed and permitted. In response, statistical agencies have begun conducting and releasing PIAs for statistical programs and, in the process, rethinking how to respect individual privacy in order to maintain trust with data providers (see Appendix B).

Statistical agencies should devote resources to understanding the privacy and confidentiality concerns of individuals (and organizations). They should also devote resources to devising effective strategies for communicating privacy and confidentiality policies and practices to respondents. Such strategies appear to be more necessary—and more challenging—than ever before.

Finally, a reason that respondents reply to statistical surveys is that they believe that their answers will be useful to the government or to society generally. Statistical agencies should respect this contribution by compiling the data and making them accessible to users in convenient forms. A statistical agency has an obligation to publish statistical information from the data it has collected unless it finds the results invalid.

Protecting Human Research Participants

Collecting data from individuals as part of a research study or a statistical information program is a form of research involving human participants, for which the federal government has developed regulations, principles, and best practices over a period of 50 years (National Research Council, 2003d).
The pertinent regulations, which have been adopted by 10 departments and 7 agencies, are known as the “Common Rule” (45 CFR §46). The Common Rule regulations require that researchers protect the privacy of human participants and maintain the confidentiality of data collected from them, minimize the risks to participants from the data collection and analysis, select participants equitably with regard to the benefits and risks of the research, and seek informed consent from participants. Under the regulations, most federally funded research involving human participants must be reviewed by an independent institutional review board (IRB) to determine that the design meets the ethical requirements for protection. (For information about the Common Rule and procedures for the certification of IRBs by the Office for Human Research Protections in the U.S. Department of Health and Human Services, see http://www.hhs.gov/ohrp [December 2008].)

Data collections of federal statistical agencies are subject to IRB review within some departments. The Census Bureau, citing the confidentiality provisions in its own enabling legislation (13 USC §9), has maintained an exemption from IRB review for its data collection programs under 45 CFR §46.101(b.3), which permits exemption if “federal statute(s) require(s) without exception that the confidentiality of the personally identifiable information will be maintained throughout the research and thereafter.”

Whether or not a statistical agency is subject to formal IRB review, it should strive to incorporate the spirit of the Common Rule regulations in the design and operation of its data collection programs. An agency that is required to obtain IRB approval for data collection should work proactively with the IRB to determine how best to apply the regulations in ways that do not unnecessarily inhibit response. For example, signed written consent is not necessary for mail surveys and is hardly ever necessary for telephone surveys of the general population; such documentation does not provide any added protection to the respondent, and it is likely to reduce participation. As noted above, an effective statistical agency will seek ways—such as sending an advance letter—to furnish information to potential respondents that will help them make an informed decision about whether to participate. Such information should include the planned uses of the data and their benefits to individuals and the public.
Practice 7: Commitment to Quality and Professional Standards of Practice

The best guarantee of high-quality data is a strong professional staff that includes experts in the subject-matter fields covered by the agency’s program, experts in statistical methods and techniques, and experts in data collection, processing, and other operations. A major function of an agency’s leadership is to strike a balance among these groups and promote working relationships that make the agency’s program as productive as possible, with each group of experts contributing to the work of the others.

An effective statistical agency devotes resources to developing, implementing, and inculcating standards for data quality and professional practice. Although a long-standing culture of data quality contributes to professional practice, an agency should also seek to develop and document standards through an explicit process. The existence of explicit standards and guidelines, regularly reviewed and updated, facilitates training of new in-house staff and contractors’ staffs. The OMB document, Standards and Guidelines for Statistical Surveys (U.S. Office of Management and Budget, 2006b), is helpful in that it covers every aspect of a survey from planning through data release (see also U.S. Office of Management and Budget, 2006a, and Appendix B). It recommends that agencies develop additional, more detailed standards that focus on their specific statistical activities (see, e.g., the Statistical Standards of the National Center for Education Statistics, available at http://nces.ed.gov/statprog/2002/stdtoc.asp [December 2008]; and the Energy Information Administration’s Standards Manual, available at http://www.eia.doe.gov/smg/Standards.html [December 2008]).

An effective statistical agency keeps up to date on developments in theory and practice that may be relevant to its program, such as new techniques for imputing missing data (see, e.g., National Research Council, 2004e:App. F). An effective agency is also alert to changes in the economy or in society that may call for changes in the concepts or methods used in particular data sets. Yet the need for change often conflicts with the need for comparability with past data series, and this issue can easily dominate consideration of proposals for change. Agencies have the responsibility to manage this conflict by initiating more relevant data series or revising existing series to improve quality while providing information to compare old and new series, such as was done when the BLS revised the treatment of owner-occupied housing in the CPI.
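The imputation techniques mentioned above range from simple to highly model-based. As a minimal, purely illustrative sketch of one of the simplest, within-class mean imputation, consider the following; the class definitions, field names, and data are hypothetical, and production systems typically use richer methods such as hot-deck donor imputation or multiple imputation.

```python
# Minimal sketch of within-class mean imputation: a missing income value
# is filled with the mean of reported incomes in the same adjustment
# class (here, an age group). All names and data are hypothetical.
from collections import defaultdict

def impute_class_means(records, class_key, value_key):
    sums = defaultdict(float)
    counts = defaultdict(int)
    for r in records:                      # first pass: class means
        if r[value_key] is not None:
            sums[r[class_key]] += r[value_key]
            counts[r[class_key]] += 1
    means = {c: sums[c] / counts[c] for c in counts}
    for r in records:                      # second pass: fill gaps
        if r[value_key] is None:
            r[value_key] = means[r[class_key]]
            r["imputed"] = True            # flag filled values for analysts
    return records

sample = [
    {"age_group": "18-34", "income": 30000},
    {"age_group": "18-34", "income": None},
    {"age_group": "18-34", "income": 50000},
]
impute_class_means(sample, "age_group", "income")
# The missing record receives the class mean, 40000.0, and an "imputed" flag.
```

Flagging imputed values, as the sketch does, matters in practice: it lets secondary analysts distinguish reported from filled data and assess the sensitivity of their results to the imputation.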
To ensure the quality of its data collection programs and reports, an effective statistical agency has mechanisms and processes for obtaining both inside and outside review of such aspects as the soundness of the data collection and estimation methods and the completeness of the documentation of the methods used and the error properties of the data. For individual publications and reports, formal processes are needed that incorporate review by agency technical experts and, as appropriate, by technical experts in other agencies and outside the government. (See Appendix B for a description of recent OMB guidelines for peer review of scientific information; reviews at a program or agency-wide level are considered under Practice 10.)

The data quality guidelines of statistical agencies in other countries are also helpful; for Canada, see http://www.statcan.ca [December 2008]; for Great Britain, see http://statistics.gov.uk [December 2008].

Reviews of concepts underlying important statistical data series include: National Research Council (1995a and 2005c) on concepts of poverty; National Research Council (2002a) on cost-of-living concepts; National Research Council (2005a) on “satellite” accounts for nonmarket activities, such as home production, volunteerism, and human capital investment; National Research Council (2006a) on concepts of food insecurity and hunger; and National Research Council (2006c) on concepts of residence for the U.S. census and the American Community Survey.

Practice 8: An Active Research Program

Substantive Research and Analysis

A statistical agency should include staff with responsibility for conducting objective substantive analyses of the data that the agency compiles, such as analyses that assess trends over time or compare population groups:

• Agency analysts are in a position to understand the need for and purposes of the data from a survey or other data collection program and know how the statistics will be used. Such information must be available to the agency and understood thoroughly if the survey design is to produce the data required.

• Those involved in analysis can best articulate the concepts that should form the basic framework of a statistical series. Agency analysts are well situated to understand and transmit the views of external users and researchers; at the same time, close working relationships between analysts and data producers are needed for the translation of the conceptual framework into the design and operation of the survey or other data collection program.

• Agency analysts have access to the complete microdata and so are in a better position than analysts outside the agency to understand and describe the limitations of the data for analysis purposes and to identify errors or shortcomings in the data that can lead to subsequent improvements.
• Substantive research by analysts on an agency’s staff will have credibility because of the agency’s commitment to openness about the data provided and maintaining independence from political control.

• Substantive research by analysts on an agency’s staff can assist in formulating the agency’s data program, suggesting changes in priorities, concepts, and needs for new data or discontinuance of outmoded or little-used series.

As with descriptive analyses provided by the agency, substantive analyses should be designed to be relevant to policy by addressing topics of public interest and concern. However, such analyses should not include positions on policy options or be designed to reflect any particular policy agenda. These issues are discussed in Martin (1981), Norwood (1975), and Triplett (1991).

Research on Methodology and Operations

For statistical agencies to be innovative in methods for data collection, analysis, and dissemination, research on methodology and operational procedures must be ongoing. Methodological research may be directed toward improving survey design, measuring error and, when possible, reducing it from such sources as nonresponse and reporting errors, reducing the time and effort asked of respondents, evaluating the best mix of interview modes (e.g., mail, telephone, personal interview) to cope with increasing nonresponse rates due to such phenomena as cell-phone-only households, developing new and improved summary measures and estimation techniques, and developing innovative statistical methods for confidentiality protection. Research on operational procedures may be directed toward facilitating data collection in the field, improving the efficiency and reproducibility of data capture and processing, and enhancing the usability of Internet-based data dissemination systems.

Much of current practice in statistical agencies was developed through research they conducted or obtained from other agencies. Federal statistical agencies, frequently in partnership with academic researchers, pioneered the applications of statistical probability sampling, the national economic accounts, input-output models, and other analytic methods. The U.S.
Census Bureau pioneered the use of computers for processing the census, and research on data collection, processing, and dissemination operations continues to lead to creative uses of automated procedures and equipment in these areas. Several federal statistical agencies sponsor research using academic principles of cognitive psychology to improve the design of questionnaires, the clarity of data presentation, and the ease of use of electronic data collection and dissemination tools such as the Internet. The history of the statistical agencies has shown repeatedly that methodological and operations research can lead to large productivity gains in statistical activities at relatively low cost.

An effective statistical agency actively partners with the academic community for methodological research. It also seeks out academic and industry expertise for improving data collection, processing, and dissemination operations. For example, a statistical agency can learn techniques and best practices for improving software development processes from computer scientists (see National Research Council, 2003e, 2004d).

Research on Policy Uses

Much more needs to be known about how statistics are actually used in the policy-making process, both inside and outside the government. Research about how the information produced by a statistical agency is used in practice should contribute to future improvements in design, concepts, and format of data products. For example, public-use files of statistical microdata were developed in response to the growing analytic needs of government and academic researchers. Gaining an understanding of the variety of uses and users of an agency’s data is only a first step. More in-depth research on the policy uses of an agency’s information might, for example, explore the use of data in microsimulation or other economic models, or go further to examine how the information from such models and other sources is used in decision making (see National Research Council, 1991a, 1991b, 1997a, 2000b, 2001b, 2003a).

Practice 9: Professional Advancement of Staff

An effective federal statistical agency has personnel policies that encourage the development and retention of a strong professional staff who are committed to the highest standards of quality work. There are several key elements of such a policy:

• The required levels of technical and professional qualifications for positions in the agency are identified, and the agency adheres to these requirements in recruitment and professional development of staff.
Position requirements take account of the different kinds of technical and other skills, such as supervisory skills, that are necessary for an agency to have a full range of qualified staff, including not only statisticians, but also experts in relevant subject-matter areas, data collection, processing, and dissemination processes, and management of complex, technical operations.

• Continuing technical education and training, appropriate to the needs of their positions, is provided to staff through in-house training programs and opportunities for external education and training.

• Position responsibilities are structured to ensure that staff have the opportunity to participate, in ways appropriate to their experience and expertise, in research and development activities to improve data quality and cost-effectiveness of agency operations.

• Professional activities, such as publishing in refereed journals and presentations at conferences, are encouraged and recognized, including presentations of technical work in progress with appropriate disclaimers. Participation in relevant statistical and other scientific associations is encouraged to promote interactions with researchers and methodologists in other organizations. Such participation is also a mechanism for openness about the data provided.

• Interaction with other professionals is increased through technical advisory committees, supervision of contract research and research consultants, fellowship programs of visiting researchers, exchange of staff with relevant statistical, policy, or research organizations, and opportunities for new assignments within the agency.

• Accomplishment is rewarded by appropriate recognition and by affording opportunity for further professional development. The prestige and credibility of a statistical agency is enhanced by the professional visibility of its staff, which may include establishing high-level nonmanagement positions for highly qualified technical experts.

An effective statistical agency considers carefully the costs and benefits—monetary and nonmonetary—of using contractor organizations, not only for data collection as most agencies do, but also to supplement in-house staff in other areas.
Outsourcing can have benefits, such as: providing experts in areas in which the agency is unlikely to be able to attract highly qualified in-house staff (e.g., some information technology functions), enabling an agency to handle an increase in its workload that is expected to be temporary or that requires specialized skills, and allowing an agency to learn from best industry practices. However, outsourcing can also have costs, including that agency staff become primarily contract managers and less qualified as technical experts and leaders in their fields. An effective statistical agency maintains and develops a sufficiently large number of in-house staff, including mathematical statisticians, who are qualified to analyze the agency’s data and to plan, design, carry out, and evaluate its core operations so that the agency maintains the integrity of its data and its credibility in planning and fulfilling its mission. Statistical agencies should also maintain and develop staff with the expertise necessary for effective management of contractor resources.

8 Only the Bureau of Labor Statistics and the Census Bureau maintain their own interviewing staff.

An effective statistical agency has policies and practices to instill the highest possible commitment to professional ethics among its staff, as well as procedures for monitoring contractor compliance with ethical standards. When an agency comes under pressure to act against its principles—for example, if it is asked to disclose confidential information for an enforcement purpose or to support an inaccurate interpretation of its data—it must be able to rely on its staff to resist such actions as contrary to the ethical principles of their profession. An effective agency refers its staff to such statements of professional practice as the guidelines published by the American Statistical Association (1999) and the International Statistical Institute (1985), as well as to the agency’s own statements about protection of confidentiality, respect for privacy, standards for data quality, and similar matters. It endeavors in other ways to ensure that its staff are fully cognizant of the ethics that must guide their actions in order for the agency to maintain its credibility as a source of objective, reliable information for use by all.
Practice 10: A Strong Internal and External Evaluation Program

Statistical agencies that fully follow such practices as continual development of more useful data, openness about sources and limitations of the data provided, wide dissemination of data, commitment to quality and professional standards of practice, and an active research program will likely be in a good position to make continuous assessments of and improvements in the relevance and quality of their data collection systems. Yet even the best functioning agencies will benefit from an explicit program of internal and independent external evaluations, which frequently offer fresh perspectives. Such evaluations need to address not only specific agency programs, but also the agency’s portfolio of programs considered as a whole.

Evaluating Quality

Evaluation of data quality for a continuing survey or other kind of data collection program begins with regular monitoring of quality indicators. For surveys, such monitoring includes unit and item response rates, population coverage rates, and information on sampling error, such as coefficients of variation. (The American Community Survey provides these indicators on its web page, http://www.census.gov/acs/www/ [December 2008].) In addition, in-depth assessment of quality on a wide range of dimensions—including sampling and nonsampling errors across time and among population groups and geographic areas—needs to be undertaken on a periodic basis (National Research Council, 2007b).

Research on methods to improve data quality may cover such areas as alternative methods for imputing values for missing data and alternative question designs, using cognitive methods, to reduce respondent reporting errors. Methods for such research may include the use of “methods panels” (small samples of respondents with whom experiments are conducted by using alternative procedures and questionnaires), matching with administrative records, simulations of sensitivity to alternative procedures, and the like. The goal of the research is the development of feasible, cost-effective improved procedures for implementation.

In ongoing programs for which it is disruptive to implement improvements on a continuing basis, a common practice is to undertake major research and development activities at intervals of, say, 5 or 10 years or longer. Agencies should ensure, however, that the intervals between major research and development activities do not become so long that data collection programs deteriorate in quality, relevance, and efficiency over time. Regular, well-designed program evaluations, with adequate budget support, are key to ensuring that data collection programs do not deteriorate.
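Two of the quality indicators named above, the unit response rate and the coefficient of variation, reduce to simple ratios. The sketch below illustrates them with hypothetical figures; for complex sample designs the standard error would in practice come from replication or Taylor-series variance estimation rather than appear as a given number.

```python
# Illustrative quality indicators for survey monitoring.
# The sample counts and estimates below are hypothetical.

def unit_response_rate(completed, eligible):
    """Completed interviews as a share of eligible sample units."""
    return completed / eligible

def coefficient_of_variation(estimate, standard_error):
    """Relative sampling error of an estimate (often reported in percent)."""
    return standard_error / estimate

rr = unit_response_rate(completed=1840, eligible=2300)
cv = coefficient_of_variation(estimate=52000, standard_error=1300)
# rr is 0.80 (an 80 percent unit response rate);
# cv is 0.025 (a 2.5 percent coefficient of variation).
```

Tracking such ratios over successive rounds of a continuing survey is what turns them into monitoring tools: a drifting response rate or a growing coefficient of variation signals that a deeper quality assessment is due.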
Having a set schedule for research and development efforts will enable data collection managers to ensure that the quality and usefulness of their data are maintained and help prevent the locking into place of increasingly less optimal procedures over time.

Evaluating Relevance

In addition to quality, it is important to assess the relevance of an agency's data collection programs. The question in this instance is whether the agency is "doing the right thing" in contrast to whether the agency is "doing things right." Relevance should be assessed not only for particular programs or closely related sets of programs, but also for an agency's complete portfolio to assist it in making the best choices among program priorities given the available resources.

Keeping in close touch with stakeholders and important user constituencies—through such means as regular meetings, workshops, conferences, and other activities—is important to ensuring relevance. Customer surveys can be helpful on some aspects of relevance, although they typically provide only gross indicators of customer satisfaction, usually with regard to timeliness and ease of use of data products. As discussed in the next section, including other federal statistical colleagues, both as users and as collaborators, in this communication can also be valuable.

Statistical agencies commonly find that it is difficult to discontinue or scale back a particular data series, even when it has largely outlived its usefulness relative to other series, because of objections by users who have become accustomed to it. In the face of limited resources, however, discontinuing a series is preferable to across-the-board cuts in all programs that reduce the accuracy and usefulness of the more relevant and less relevant data series alike. Regular internal and external reviews can help an agency not only reassess its priorities, but also develop the justification and support for changes to its portfolio.

Types of Reviews

Regular program reviews should include a mixture of internal and external evaluation. Agency staff should set goals and timetables for internal evaluations, which should involve staff who do not regularly work on the program under review.
Independent external evaluations should also be conducted on a regular basis, the frequency of which should depend on the importance of the data and on how quickly changes in such factors as respondent behavior and data collection technology may adversely affect a program. In a world in which people and organizations appear increasingly less willing to respond to surveys, it becomes urgent to monitor response continually and to evaluate more frequently than in a more stable environment. In addition to program evaluations, agencies should seek outside reviews to examine priorities and quality practices across the entire agency.

External reviews can take many forms. They may include recommendations from advisory committees that meet at regular intervals (typically every 6 months). However, advisory committees should never be the sole source of outside review because the members of such committees rarely have the opportunity to become deeply familiar with agency programs. Administrations often develop evaluation mechanisms (see Appendix B) that may be helpful to an agency. External reviews can also take the form of a "visiting committee" using the National Science Foundation model or academic models (see, e.g., http://www.nsf.gov/od/oia/activities/cov/covs.jsp [December 2008]); or a special committee established by a relevant professional association (see, e.g., American Statistical Association, 1984); or a study by a panel of experts (see, e.g., National Research Council, 1985a, 1985b, 1986, 1993a, 1997b, 2000b, 2000c, 2003c, 2004c, 2004d, 2008c, 2008d).

Practice 11: Coordination and Cooperation with Other Statistical Agencies

The U.S. federal statistical system consists of many agencies in different departments, each with its own mission. Nonetheless, statistical agencies do not and should not conduct their activities in isolation. An effective statistical agency actively explores ways to work with other agencies to meet current information needs, for example, by seeking ways to integrate the designs of existing data systems to provide new or more useful data than a single system can provide. An effective agency is also alert for occasions when it can provide technical assistance to other agencies—including not only other statistical agencies, but also program agencies in its department—as well as occasions when it can receive such assistance in turn. Efforts to standardize concepts and definitions, such as those for industries, occupations, and race and ethnicity, further contribute to effective coordination of statistical agency endeavors, as does the development of broad macro models, such as the system of national accounts (see, e.g., National Research Council, 2004a, 2004b; also see Appendix B).
Initiatives for sharing data among statistical agencies (including individual data and address lists when permitted by law and when sharing does not violate confidentiality promises) can be helpful for such purposes as achieving greater efficiency in drawing samples, evaluating completeness of population coverage, and reducing duplication among statistical programs, as well as reducing respondent burden.

The responsibility for coordinating statistical work in the federal government is specifically assigned to the Office of Information and Regulatory Affairs (OIRA) in OMB by the Paperwork Reduction Act (previously, by the Federal Reports Act and the Budget and Accounting Procedures Act—see Appendix B). The Statistical and Science Policy Office in OIRA, often working with the assistance of interagency committees, reviews concepts of interest to more than one agency; issues standard classification systems (of industries, metropolitan areas, etc.) and oversees their periodic revision; consults with other parts of OMB on statistical budgets; and, by reviewing statistical information collections as well as the statistical programs of the government as a whole, identifies gaps in statistical data, programs that may be duplicative, and areas in which interagency cooperation might lead to greater efficiency and added utility of data. The Statistical and Science Policy Office also is responsible for coordinating U.S. participation in international statistical activities.

The Statistical and Science Policy Office encourages the use of administrative data for statistical purposes, when feasible, and works to establish common goals and norms on major statistical issues, such as confidentiality. It sponsors and heads the interagency Federal Committee on Statistical Methodology (FCSM), which issues guidelines and recommendations on statistical issues common to a number of agencies (see Federal Committee on Statistical Methodology, 1978a-2005; for the papers from the FCSM 2007 research conference, see http://www.fcsm.gov/events/papers2007 [December 2008]). It encourages the Committee on National Statistics at the National Academies to serve as an independent adviser and reviewer of federal statistical activities. The 1995 reauthorization of the Paperwork Reduction Act created a statutory basis for the Interagency Council on Statistical Policy (ICSP), formalizing an arrangement whereby statistical agency heads participated with OMB in activities to coordinate federal statistical programs (see Appendixes A and B).

There are many forms of interagency cooperation and coordination. Some efforts are multilateral, some bilateral.
Many result from common interests in specific subject areas, such as economic statistics, statistics on people with disabilities, or statistics on children or the elderly. U.S. Office of Management and Budget (2008c:Ch. 3) describes several interagency collaborative efforts, such as joint support for research that fosters new and innovative approaches to surveys; expansion and improvement of the coverage and features of FedStats, which provides access to statistics from more than 100 government agencies at http://www.fedstats.gov [December 2008]; and implementation of comparable measures of disability on major household surveys. (The Statistical and Science Policy Office was renamed from the Statistical Policy Office to reflect added responsibilities with respect to the 2001 Information Quality Act standards and guidelines, OMB's guidance on peer review planning and implementation, and evaluations of science underlying proposed regulatory actions.)

A common type of bilateral arrangement is the agreement of a program agency to provide administrative data to a statistical agency to be used as a sampling frame, a source of classification information, or a summary compilation to check (and possibly revise) preliminary sample results. The Bureau of Labor Statistics, for example, benchmarks its monthly establishment employment reports to data supplied by state employment security agencies. Such practices improve statistical estimates, reduce costs, and eliminate duplicate requests for information from the same respondents. In other cases, federal statistical agencies engage in cooperative data collection with state counterparts to let one collection system satisfy the needs of both. A number of such joint systems have been developed, notably by the Bureau of Labor Statistics, the National Agricultural Statistics Service, the National Center for Education Statistics, and the National Center for Health Statistics.

Another example of a joint arrangement is the case in which one statistical agency contracts with another to conduct a survey, compile special tabulations, or develop models. Such arrangements make use of the special skills of the supplying agency and facilitate use of common concepts and methods. The Census Bureau conducts many surveys for other agencies, both the National Center for Health Statistics and the National Agricultural Statistics Service receive funding from other agencies in their departments to support their survey work, and the Division of Science Resources Statistics receives funding from agencies in other departments to support several of its surveys (see U.S. Office of Management and Budget, 2008c:Table 2).
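The benchmarking idea described above—using an administrative total to revise preliminary sample-based estimates—can be reduced, in its simplest form, to a ratio adjustment. The sketch below uses entirely hypothetical figures and industry labels; actual programs such as the BLS establishment-survey benchmark are far more elaborate, so this illustrates only the basic arithmetic.

```python
# Minimal ratio-benchmarking sketch (hypothetical data and cells).
# Preliminary estimates are scaled so their sum matches an
# administrative control total; real agency methods are more complex.

def benchmark(preliminary_estimates, admin_total):
    """Scale preliminary cell estimates so they sum to the admin total."""
    est_total = sum(preliminary_estimates.values())
    factor = admin_total / est_total
    return {cell: round(value * factor)
            for cell, value in preliminary_estimates.items()}

# Preliminary sample-based estimates sum to 50,000;
# administrative records show 51,000, so each cell is scaled by 1.02.
prelim = {"manufacturing": 12_000, "services": 30_000, "construction": 8_000}
revised = benchmark(prelim, admin_total=51_000)

print(revised)  # {'manufacturing': 12240, 'services': 30600, 'construction': 8160}
```

The adjustment preserves the relative shares across cells while anchoring the level to the more complete administrative source.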
The major federal statistical agencies are also concerned with international comparability of statistics. Under the leadership of OMB's Statistical and Science Policy Office, they contribute to the deliberations of the United Nations Statistical Commission, the Organisation for Economic Co-operation and Development, and other international organizations; participate in the development of international standard classifications and systems; and support educational activities that promote improved statistics in developing countries. Statistical agencies also learn from and contribute to the work of established statistical agencies in other countries in such areas as survey methodology, record linkage, confidentiality protection techniques, and data quality standards. Several statistical agencies run educational programs for government statisticians in developing countries.

Some statistical agencies have long-term cooperative relationships with international groups, for example, the Bureau of Labor Statistics with the International Labor Organization, the National Agricultural Statistics Service with the Food and Agriculture Organization, the National Center for Education Statistics with the International Indicators of Education Systems project of the Organisation for Economic Co-operation and Development, and the National Center for Health Statistics with the World Health Organization.

To be of most value, the efforts of statistical agencies to cooperate as partners with one another should involve the full range of their activities, including definitions, concepts, measurement methods, analytical tools, dissemination modes, and disclosure limitation techniques. Such efforts should also extend to policies and professional practices, so that agencies can respond effectively and with a coordinated voice to such government-wide initiatives as data quality guidelines, privacy impact assessments, performance rating criteria, institutional review board requirements, and others. Finally, coordination efforts should encompass the development of data, especially for emerging policy issues (National Research Council, 1999a).

In some cases, it may be not only more efficient, but also productive of needed new data for agencies to fully integrate the designs of existing data systems, such as when one survey provides the sampling frame for a related survey. In other instances, cooperative efforts may identify ways for agencies to improve their individual data systems so that they are more useful for a wide range of purposes. Two of the more effective continuing cooperative efforts in this regard have been the Federal Interagency Forum on Aging-Related Statistics and the Federal Interagency Forum on Child and Family Statistics.
The former was established in the mid-1980s by the National Institute on Aging, in cooperation with the National Center for Health Statistics and the Census Bureau. The forum's goals include coordinating the development and use of statistical data bases among federal agencies, identifying information gaps and data inconsistencies, and encouraging cross-national research and data collection for the aging population. The forum was reorganized in 1998 to include six new member agencies and has grown over the years to include 15 agencies. The forum develops a periodic indicators chart book, which was first published in 2000 and was most recently issued in 2008 (Federal Interagency Forum on Aging-Related Statistics, 2008).

The Federal Interagency Forum on Child and Family Statistics was formalized in a 1994 executive order to foster coordination and collaboration in the collection and reporting of federal data on children and families. Its membership currently includes 22 statistical and program agencies. The forum's reports (e.g., Federal Interagency Forum on Child and Family Statistics, 2007, 2008) describe the condition of America's children, including changing population and family characteristics, the environment in which children are living, and indicators of well-being in the areas of economic security, health, behavior, social environment, and education.

No single agency, whether a statistical or program agency, could have produced the forum reports alone. Working together in this way, federal statistical agencies contribute to presenting data in a form that is more relevant to policy concerns and to a stronger statistical system overall.
