3 Complexities in Counting

Understanding both how many students are currently dropping out and how dropout rates have changed over time is critical to discussions of policies and practices and their effects. Yet this information is not as simple to obtain as one might expect. Because rates of school completion or dropping out are counted in a variety of ways, it is difficult both to compare rates for different groups and to be precise in tracking change and identifying correlations. Statistical overviews, such as the reports from the National Center for Education Statistics (NCES), provide several kinds of information and note that the results vary depending on what is measured.

COUNTING METHODS

Three rates are the most frequently used in discussions of school completion. One is the event dropout rate, the number of students in a particular category who were enrolled but left school without completing the requirements within a specified period of time. The second is the status dropout rate, which indicates the percentage of young people who are of an age to be enrolled in or to have completed school but are not attending and have not received a diploma. (The NCES report counts young people aged 16 to 24 for each year in calculating this rate.) The third is the high school completion rate, which indicates the proportion of students in a certain age category (such as 18 to 24) who have received a diploma or other credential (such as a GED diploma). Table 3-1 describes these and two other methods.

Although each rate is useful, the existence of these different ways of counting dropouts is a source of confusion.



TABLE 3-1 Methods of Counting High School Dropouts

Event Dropout Rate (annual)
Who is counted: Students in a given grade or in a given age span who were enrolled and failed to complete the year's requirements.
Comments: Difficulty of tracking whereabouts of students who leave affects count. May overcount dropouts if students who transfer to other jurisdictions or otherwise later complete school are counted. Results vary depending on which grades are included, time of year data are collected, etc.

School Completion Rate
Who is counted: Students who reach a particular age and have received a certificate.
Comments: Typically does not distinguish by type of credential. Selection of age can result in overcount or undercount for some purposes.

Status Dropout Rate
Who is counted: Students who reach a particular age without having received a certificate and are not enrolled in school.
Comments: Selection of age range yields differing results. Typically does not distinguish by type of certificate. Avoids difficulties caused by student transfers.

On-time Graduation Rate (longitudinal)
Who is counted: Students who graduate in a given year and were enrolled in ninth grade 3 years earlier.
Comments: Difficulty of tracking whereabouts of students who leave affects count. Difficult to account for students not counted as ninth graders, such as those enrolled in nongraded programs, those who dropped out earlier, immigrants, etc.

Attrition Rate (longitudinal)
Who is counted: Students who were enrolled in an earlier grade, usually ninth, and are no longer enrolled by twelfth grade.
Comments: Difficult to account for students enrolled in nongraded programs (i.e., not counted as ninth graders). Difficulty of tracking whereabouts of students who leave affects count.
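To make the distinctions among the three most commonly used rates concrete, the minimal sketch below computes them from a handful of invented student records. The record fields, sample values, and simplified definitions (for example, treating "enrolled last October, not enrolled now, no credential" as an event dropout) are illustrative assumptions, not NCES's exact procedures.

```python
# Illustrative sketch (not NCES's actual procedure): computing event, status,
# and completion rates from hypothetical student records.
from dataclasses import dataclass

@dataclass
class Person:
    age: int
    enrolled_last_oct: bool   # enrolled in high school last October (hypothetical field)
    enrolled_now: bool        # enrolled this October
    has_diploma: bool         # regular high school diploma
    has_alt_credential: bool  # GED or other alternative credential

people = [
    Person(17, True, True, False, False),
    Person(18, True, False, False, False),   # left school without a credential
    Person(19, False, False, True, False),
    Person(20, False, False, False, True),   # GED holder
    Person(22, False, False, False, False),  # status dropout
    Person(23, False, False, True, False),
]

def event_dropout_rate(pop):
    """Share of those enrolled last year who left without completing and are not enrolled now."""
    at_risk = [p for p in pop if p.enrolled_last_oct]
    dropped = [p for p in at_risk
               if not p.enrolled_now and not (p.has_diploma or p.has_alt_credential)]
    return len(dropped) / len(at_risk)

def status_dropout_rate(pop, lo=16, hi=24):
    """Share of 16- to 24-year-olds who are not enrolled and hold no credential."""
    cohort = [p for p in pop if lo <= p.age <= hi]
    dropouts = [p for p in cohort
                if not p.enrolled_now and not (p.has_diploma or p.has_alt_credential)]
    return len(dropouts) / len(cohort)

def completion_rate(pop, lo=18, hi=24, count_alternatives=True):
    """Share of 18- to 24-year-olds not enrolled in high school who hold a credential."""
    cohort = [p for p in pop if lo <= p.age <= hi and not p.enrolled_now]
    done = [p for p in cohort
            if p.has_diploma or (count_alternatives and p.has_alt_credential)]
    return len(done) / len(cohort)

print(f"event rate:      {event_dropout_rate(people):.0%}")
print(f"status rate:     {status_dropout_rate(people):.0%}")
print(f"completion rate: {completion_rate(people):.0%} (any credential)")
print(f"completion rate: {completion_rate(people, count_alternatives=False):.0%} (diploma only)")
```

In this toy example the same six records yield an event rate of 50 percent, a status rate of 33 percent, and a completion rate that falls from 60 to 40 percent when GED-type credentials are excluded, which is the kind of definitional sensitivity discussed throughout this chapter.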


Press coverage of dropout statistics rarely distinguishes among the different measures or clearly accounts for discrepancies in the numbers produced by different methods. Papers and reports produced in an academic context also sometimes refer to particular dropout rates without making clear exactly who was being counted.

The rate used is important for a number of reasons. First, as discussed below, the number of students counted as dropouts can vary quite significantly depending on which measure is used. Kaufman (2000) has reported, for example, that in 1999, 85.9 percent of 18- to 24-year-olds received some sort of certificate (including alternatives such as the GED and others). If only those who received traditional diplomas are counted, however, the figure is 76.8 percent. When dropout rates are used as indicators of the relative success of reforms or other programs, the discrepant numbers can lead to vastly different conclusions. Dropout rates are also an important means of gauging the outcomes for cohorts of students; the needs of students who are incorrectly classified as school completers are likely not to be met.

SOURCES OF DATA

The principal sources of national-level data about dropping out are the Current Population Survey (CPS), conducted by the U.S. Census Bureau, and two programs of the National Center for Education Statistics (NCES): the Common Core of Data and the Longitudinal Studies Program. The CPS is an annual survey of a nationally representative sample of U.S. households. An adult in each of the 50,000 sampled households is asked for information about many topics, including the schooling of household members over the age of three. The survey has been conducted for several decades and is, Kaufman explained, “the only source of long-term trends in dropout and completion rates” (Kaufman, 2000:4).

However, Kaufman also described several complications in the CPS that need to be kept in mind. First, as with any such questionnaire, the categories chosen can have major effects on the results. The CPS, for example, asks about school enrollment only for young people aged 15 to 24, so it does not specifically collect data about dropouts younger than 15. The survey also asks about the school completion rate for young people aged 18 to 24, and Kaufman pointed out that there are pros and cons to that choice. If a younger age span were selected, the survey could provide early warning about potential problems, though it might overestimate the noncompletion rate by including students who will subsequently graduate. An older age span (including people up to age 30, for example) could be chosen to avoid counting students who will eventually complete high school, but it would have other disadvantages, most notably that the outcomes for these students would reflect policies 10 or more years in the past. Most important, however, Kaufman explained, is the fact that changes made early in the 1990s in the wording of CPS survey questions have disrupted the trend line for much of the data produced by the survey.1

Kaufman also discussed several complications in the use of CPS data to report state-level results. First, since the CPS sampling procedures were designed for a national population, the sample sizes for some states are not large enough to yield stable results. Thus the margins of error for the state calculations are large, which means that comparisons among these results should be viewed with care. He noted, for example, that the apparently large difference between the rates for Mississippi (82 percent) and Nebraska (93 percent) is not actually statistically significant. Moreover, because the data reflect the status of 18- to 24-year-olds, they may offer limited useful information about schooling in a particular state. The young people whose status is counted may be out-of-state college students, migrant workers, or others who have not actually attended the state's public schools.
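A minimal sketch may help show how an apparently large gap can fall within the margin of error when state samples are small. The per-state sample sizes and the design effect below are hypothetical assumptions chosen only for illustration; they are not the actual CPS state sample sizes or variance adjustments.

```python
# Illustrative two-proportion z-test with a survey design effect.
# All sample sizes and the design effect are hypothetical assumptions.
from math import sqrt

def significant_difference(p1, n1, p2, n2, design_effect=1.0, z_crit=1.96):
    """Return (z, significant) for the difference between two proportions,
    deflating the nominal sample sizes by the design effect."""
    n1_eff = n1 / design_effect
    n2_eff = n2 / design_effect
    pooled = (p1 * n1_eff + p2 * n2_eff) / (n1_eff + n2_eff)
    se = sqrt(pooled * (1 - pooled) * (1 / n1_eff + 1 / n2_eff))
    z = abs(p1 - p2) / se
    return z, z > z_crit

# Hypothetical: 150 young adults sampled per state, design effect of 2.5.
z, sig = significant_difference(0.82, 150, 0.93, 150, design_effect=2.5)
print(f"small state samples: z = {z:.2f}, significant at .05? {sig}")

# The same 11-point gap with large samples would easily be significant.
z, sig = significant_difference(0.82, 5000, 0.93, 5000)
print(f"large samples:       z = {z:.2f}, significant at .05? {sig}")
```

With these invented inputs, an 11-point difference between states falls short of the conventional significance threshold, while the identical gap measured on large samples does not, which is why state-level CPS comparisons must be read with care.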

The NCES's Longitudinal Studies Program includes two studies, High School and Beyond and the National Education Longitudinal Study, that use surveys to track a variety of information about cohorts of students as they move through school. Both studies count as nondropouts students who were enrolled in “an alternative program leading to an equivalency certificate” and those who received an equivalency certificate, including a GED diploma (Kaufman, 2000). However, as noted above, it is important to be able to look separately at these different groups of students. The longitudinal data provided by these studies are nevertheless valuable, Kaufman explained, because they allow a closer look at developments such as exceptions, among low-income and low-achieving students, to the recent general decline in dropout rates.

1 There were two principal changes. The first (in 1992) related to the way in which respondents indicated the level of education they had completed and resulted in a decreased (and apparently more accurate) status dropout rate. In 1994, several changes in data collection methods (the use of computer-assisted telephone interviewing and adjustments for undercounts, for example) resulted in increased dropout rates, which may also reflect more accurate counts (Dynarski, 2000:8-9).


NCES also sponsors the Common Core of Data, a survey through which statistical information is collected from the departments of education in each state, the District of Columbia, and the U.S. territories. NCES is also working with states to develop uniform means of collecting data on dropouts. Twenty-six states are currently participating, and Kaufman explained that when all 50 are participating, the data will allow more precise state-by-state comparisons than are currently available from the CPS, as well as more precise national figures. Table 3-1 (above) shows the definitions of a dropout that have been developed for this data collection effort.

COMPLICATIONS

Clearly, a primary reason that counting dropouts is not straightforward is that defining them is not. If GED recipients are counted as school completers (nondropouts), for example, the resulting data will obscure the fact that part of the narrowing of the gap between black and white students' completion rates is attributable to the rise in GED certification among blacks. Since, as we have noted, GED certification has less value in the marketplace than a traditional diploma and may have other implications for life chances, this way of counting obscures an important difference. The existence of other alternatives to the traditional diploma, such as diplomas that represent a lesser degree of academic achievement, may further complicate matters. Alternative programs and certificates may play a valuable role for many students. Nevertheless, if these credentials also have less economic value than traditional diplomas, and possibly other negative implications for students' futures, the inability to distinguish the outcomes for students who receive these credentials from those for other students will be a significant impediment to understanding dropping out and school completion.

A further complication was pointed out by workshop discussant David Grissmer, who noted that the CPS does not collect data on those who join the military. He explained that since the 1960s, the policies for admission and recruitment into the military (including requirements regarding diplomas and GED certificates) have changed markedly, as has the composition of the entering population. If the effects of the military's policies and the changing proportions of dropouts and various population subgroups in its ranks were factored in, Grissmer suggested, some national trends might look noticeably different, though this analysis has not been done.

Moreover, dropout and school completion statistics now make no distinction among the reasons that students failed to complete school or receive a diploma. Fine (1987) and many others have identified categories of students who leave school not entirely of their own volition. Such students, often called “pushouts,” include students who have presented significant discipline problems, students who have been reassigned to special education programs (in some cases because they are discipline problems rather than because of a diagnosed disability), and students who are discouraged from continuing in school by formal policies or informal practices. The category of pushouts may also include students who are expelled or suspended. The relative dearth of data about these students is another piece of the puzzle observers face when they try to understand the problem of dropouts.

Recent controversies about the meaning of differing accounts of the dropout rate in Texas illustrate the nature of the problem well. The state calculated an annual dropout rate for all students of 1.6 percent for 1997-1998 (Smisko, 2000). The rate is based on reports from districts, which must explain why every student who left the district's schools did so; Box 3-1 lists the criteria by which students are classified as dropouts (an illustrative sketch of these rules follows the box). Smisko also reported that, as has been widely noted elsewhere, dropout rates in Texas have declined over the past decade or so, both for the student population as a whole and for African American and Hispanic students.

BOX 3-1 Criteria for Identifying Dropouts in the State of Texas

  • A student who is absent without approved excuse or documented transfer and does not return to school by the following year
  • A student who completes the school year but fails to reenroll the following year
  • A student who leaves to enter the military before graduation
  • A student from a special education, ungraded, or alternative education program who leaves school
  • A student who leaves school and enters a program not qualifying as an elementary/secondary school (e.g., cosmetology school)
  • A student enrolled as a migrant, whose whereabouts are unknown

SOURCE: Smisko, 2000:4.
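As one illustration of how criteria like those in Box 3-1 translate into counting rules, the sketch below encodes them as a simple classifier over hypothetical leaver records. The field names and record layout are assumptions for illustration; they are not the Texas Education Agency's actual data elements or reporting system.

```python
# Hypothetical classifier for school-leaver records, loosely following the
# criteria listed in Box 3-1. Field names are illustrative assumptions, not
# the Texas Education Agency's actual data elements.
from dataclasses import dataclass

@dataclass
class LeaverRecord:
    graduated: bool            # completed graduation requirements
    documented_transfer: bool  # approved excuse or documented transfer on file
    returned_next_year: bool   # re-enrolled by the following school year
    reason_left: str           # e.g., "military", "cosmetology school", "unknown (migrant)"

def is_dropout(record: LeaverRecord) -> bool:
    """A leaver counts as a dropout unless the record shows graduation,
    a documented transfer, or re-enrollment the following year.
    Under Box 3-1, leaving for the military, entering a program that does not
    qualify as an elementary/secondary school, or disappearing as a migrant
    student all still count as dropping out."""
    return not (record.graduated
                or record.documented_transfer
                or record.returned_next_year)

# A student who left to enter the military before graduation is a dropout.
print(is_dropout(LeaverRecord(False, False, False, "military")))  # True
```

The point of the sketch is simply that whether such cases are routed to "dropout" or to some other category is a policy choice embedded in the reporting rules, which is why different jurisdictions can produce very different rates from similar student behavior.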


Other observers have considered the available data about enrollment and school completion in Texas and come to quite different conclusions. For example, Haney (2001) has questioned the means by which Texas counts its dropouts, arguing that many school leavers who should be counted as dropouts are not. He has also done a number of calculations that yield a significantly more sobering picture of the rate of school completion in Texas, noting particularly that many minority students are not faring as well as the Texas reports indicate. Haney used data on enrollment in each grade to show the rate at which white, black, and Hispanic students progress from grade to grade. He also examined, for successive age cohorts by race, the proportion of students enrolled in the ninth grade who later graduated on time. Using this measure, Haney found not only that the proportion of students graduating on time had declined slightly for all groups, but also that the rates for black and Hispanic students (generally about 50 percent) are significantly lower than those for whites (generally about 70 percent) (Haney, 2000).
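The kind of calculation described here can be approximated from grade-level enrollment and graduation counts. The sketch below uses invented counts purely to show the arithmetic of an on-time graduation proxy (graduates in year t divided by ninth-grade enrollment in year t-3); it is not Haney's dataset or his exact method.

```python
# Illustrative on-time graduation proxy: graduates in year t divided by
# ninth-grade enrollment three years earlier. All counts are invented.
grade9_enrollment = {            # fall ninth-grade enrollment by cohort year
    ("white", 1995): 100_000,
    ("black", 1995): 40_000,
}
graduates = {                    # diplomas awarded three school years later
    ("white", 1998): 70_000,
    ("black", 1998): 20_000,
}

def on_time_rate(group: str, grad_year: int) -> float:
    """Share of the grade 9 cohort (grad_year - 3) that graduated in grad_year.
    Retention in grade 9, transfers in or out, and earlier dropouts all distort
    this proxy, which is exactly the limitation noted in Table 3-1."""
    cohort = grade9_enrollment[(group, grad_year - 3)]
    return graduates[(group, grad_year)] / cohort

print(f"white: {on_time_rate('white', 1998):.0%}")  # 70% with these invented counts
print(f"black: {on_time_rate('black', 1998):.0%}")  # 50% with these invented counts
```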

Haney further found that a significant change in the rate at which black and Hispanic students progress from grade to grade has occurred since the mid-1980s (at which time their rates were only slightly lower than those for white students). Specifically: “By the end of the 1990s 25-30% of Black and Hispanic students, as compared with only 10% of White students, were being retained to repeat grade 9, instead of being promoted to grade 10” (Haney, 2000:5).

Haney argues that official Texas dropout calculations exclude many students. While those who reach the twelfth grade on time are graduating in greater numbers, he argues, those who are retained or who leave the system earlier have become both less likely to graduate and more likely to be minorities (Haney, 2001:12-13). Haney does not attempt to account for all of the children who leave the system before grade 12 but believes that most are dropouts. The difficulty of accounting for students who leave a system means a lack of information not only about dropouts but also about students who complete their education elsewhere. Haney's work does, however, clearly demonstrate the importance of seemingly technical decisions about which students to count.

Still others have raised questions about Haney's methodology. For example, Carnoy et al. (2001) have disputed both Haney's claim that Texas's dropout rates have increased since 1990 and his claim that the Texas Assessment of Academic Skills is partly responsible. Smisko reported on changes Texas is making in the way the state keeps track of school leavers, changes designed to capture more detailed information. A recent report by the Texas Education Agency (2000) addresses many of the questions that have been raised about the data. It recommends, for example, that the state “add a Grade 9-12 Longitudinal Completion/Student Status Rate” (Texas Education Agency, 2000:2). The report also recommends other improvements to data collection designed to improve the state's success at keeping track of all students.

Kaufman provided another example of the difficulty in his discussion of similar disputes over dropout rates in California. Noting that differing calculations could rightly leave the public wondering whether the dropout rate was “12 percent and falling or 33 percent and rising,” he pointed out some of the practical difficulties that face states, apart from the definitional ones already discussed:

Resources are such that many schools cannot track all of their dropouts. While some schools may indeed engage in the “shell game” that their detractors accuse them of—moving dropouts to alternative programs and letting them slip away—many schools just did not know what happened to all of their “no-shows” [students who completed any of grades 7-11 but did not attend the following year] (Kaufman, 2000:31).

Kaufman's view is that no one statistical method can provide a full and accurate picture of the ways in which U.S. students move through and out of school. He believes that measures of on-time graduation, dropout, and eventual completion are all necessary, and the committee would expand his notion to include measures that provide greater detail about the different pathways students take. The larger point, however, is that it is very difficult for nonexperts to evaluate and compare rates that are calculated in different ways.

RECOMMENDATIONS

The committee concludes that current means of collecting state-, district-, and national-level data on students' progress through school and into the workforce, while valuable, are insufficient to inform policy makers and the public.

Recommendation 3: The committee recommends that policy makers, researchers, and funders of research consider the urgent need for the following kinds of additional data (disaggregated to allow monitoring of such populations as different minority groups, English-language learners, and students with disabilities):

  • data that allow valid comparisons across states and, possibly, across smaller jurisdictions;
  • longitudinal data that allow tracking of a greater diversity of student pathways, such as participation in alternatives to traditional secondary schooling and the earning of alternatives to the traditional diploma;
  • data that allow separate reporting on the progress of students who follow such alternate pathways, both while they are in school and after they leave school, whether they are employed, unemployed, or participating in postsecondary education;
  • data that allow improved tracking of students at risk for dropping out because of factors that may be apparent in elementary and middle school, such as temporary dropping out in early grades, absenteeism, retention in grade, and the like. Such data could assist jurisdictions in identifying populations of students in need of intervention and in evaluating the success of their efforts to intervene. Such data could also be used to improve public understanding of school completion and the demands on school systems.

Part of the difficulty with currently available data is that they are collected by a variety of entities for a variety of purposes at the state, district, and school levels, as well as the federal level. The adoption of a single measure that would allow comparisons across jurisdictions would address some of the difficulties in the current policy discussion, but it would have negative consequences as well. The multiple measures exist because of the complexities of what needs to be measured, and each provides valuable information. The committee concludes that more, not less, information about dropout behavior is needed, but it believes that greater clarity and coordination are needed as well.

Recommendation 4: The committee recommends that the U.S. Department of Education provide leadership and oversight to coordinate data collection and establish long-term objectives for collecting district, state, and national data on school completion. Data available from the U.S. Department of Labor should be considered as part of this effort.
