The committee’s intent is for Indicators 1-14 to form the core of a national program to monitor the health of the education system by gathering information from the national, state, and local levels. Although these indicators were developed in conjunction with a specific set of recommendations from a previous report, when considered together they have the potential to provide insights into key elements of the K-12 education system in STEM that are difficult to glean from current data collection systems. However, additional research and data beyond the indicators would be required to undertake a full-scale evaluation of the nation’s progress toward the recommendations in Successful K-12 STEM Education, and to link the inputs of the education system to outcomes.
First, the monitoring and reporting system would be more meaningful if it included measurements of progress toward the goals of increasing the number of underrepresented students who pursue science and engineering degrees and careers, expanding the STEM-capable workforce, and increasing science literacy, because those goals provide the context for the recommendations in Successful K-12 STEM Education. Although district and state data are valuable as system-level indicators, measuring a wide range of student-level outcomes in the following categories would make it possible to track progress toward the goals for STEM education and to provide a more comprehensive portrait of students’ experiences in STEM:
• K-12 academic achievement and participation in science and mathematics (e.g., conceptual understanding; proficiency with the practices of science, engineering, and mathematics; science and mathematics course-taking patterns; enrollment in technical training programs while in high school).
• Values, attitudes, and beliefs about STEM (e.g., students’ fascination with natural and physical phenomena, interest in and value of science, beliefs about their competence in science, identities as science learners).
• Access to and participation in STEM-related activities (e.g., authentic research experiences and internships; interactions with adult mentors; out-of-school time science activities such as science clubs and competitions).
• Postsecondary training and education in the STEM disciplines (e.g., intention to study STEM expressed in K-12; accumulation of college credit in STEM courses; degrees and certificates earned).
• Participation in STEM-related careers (e.g., intention to pursue STEM-related careers and STEM-related career counseling in K-12; eventual participation in STEM workforce).
Because access and participation are vital for students from groups that are underrepresented in STEM, it is important to track each of these outcomes by race, ethnicity, language status, and socioeconomic status.
Through various NCES surveys, NSF’s Science and Engineering Indicators, and the Bureau of Labor Statistics, national-level data are widely available on certain aspects of the outcomes in these categories—albeit to a lesser extent for measures of students’ values, attitudes, and beliefs. Further research would be needed to develop appropriate measures of affective indicators. Although previous efforts to develop such measures have faced challenges, promising efforts are currently under way (e.g., Hidi and Renninger, 2006; Dorph et al., 2012). In addition, some of the existing measures may require refinement to enable disaggregation of data for different student groups. They also might need refinement to reflect a broad definition of postsecondary education that includes technical training with or without certification, 2-year colleges, 4-year colleges, and postbaccalaureate study, and a broad definition of STEM-related careers that includes such occupations as health care and energy technicians and science and mathematics teachers, in addition to scientists, engineers, and mathematicians.
Second, the information that can be gleaned from any set of indicators is necessarily limited. The proposed indicators will yield data that enable counts, classification, and the tracking of trends on the national level. Other kinds of research are needed to explain those trends and to understand how the recommendations from Successful K-12 STEM Education are being implemented in various educational contexts. Balancing data from the indicators with richer, more localized sources of information such as case studies, interviews, and classroom observations would enable a more complete description of K-12 STEM education.
To generate relevant information that can be used for improvement, the monitoring system would be designed with the capability to:
• assess progress toward key improvements recommended in the 2011 National Research Council report Successful K-12 STEM Education;
• measure student knowledge, interest, and participation in the STEM disciplines and STEM-related activities;
• track financial, human capital, and material investments in K-12 STEM education at the federal, state, and local levels;
• provide information about the capabilities of the STEM-education workforce, including teachers and principals; and
• facilitate strategic planning for federal investments in STEM education and workforce development, when used with labor force projections.
If the monitoring system focuses on student subgroups as well as the nation as a whole, these capabilities can be met in a way that illuminates variations in access, opportunities to learn, and progress across different student populations and socioeconomic contexts.
Effective monitoring and reporting systems are dynamic and evolve over time. The committee developed its proposed indicators in the context of the adoption and implementation of the Common Core State Standards for Mathematics and the development, adoption, and eventual implementation of the Next Generation Science Standards based on A Framework for K-12 Science Education (National Research Council, 2012). Both developments will be driving U.S. mathematics and science education for the foreseeable future. Different indicators might be warranted as these reforms mature, and as additional research becomes available on such factors as school conditions, teacher quality, effective instructional practices, and student performance measures. As this research emerges along with data from the indicators, it will be important to periodically reassess the continuing relevance and utility of the data that are generated, and to refine and adjust the indicators as necessary.
The committee’s intent is for the proposed monitoring and reporting system to monitor progress in K-12 STEM education and promote the organizational practices of examining data and using the results of such examination to take the necessary actions to improve performance and attain the desired goals (U.S. Department of Education, 2010). Thus, the monitoring system should be designed to promote the practices of continuous quality improvement toward achieving the nation’s goals for education and workforce development in STEM.
The range of activities described below can be launched immediately to begin developing the full suite of proposed indicators. Many of these activities can be undertaken simultaneously. However, because it may not be possible to undertake work on all aspects of the proposed 14 indicators immediately, the committee identified 6 of them as the highest priorities (see Box 2).
IDENTIFY DATA COLLECTION MECHANISMS
For each of the 14 indicators presented in the preceding section, we identified available and potentially available sources of data and discussed some limitations of those data sources as they relate to specific indicators. On the whole, existing data systems do not provide all of the information needed for the proposed indicators (see Table 1). Currently, the full complement of data is available only for Indicators 10, 13, and 14. This scarcity reflects the fact that existing federal data collection systems primarily provide information about schools rather than schooling (the process by which students learn). Many national surveys ask about such characteristics as the number of teachers in a school in each subject, whether the school has a theme, and the number of students taking advanced courses.
BOX 2
The committee’s intent is for efforts to be undertaken now to establish a system for collecting information on all 14 indicators. However, if resources for the monitoring system are too limited to support full implementation, the committee has identified six indicators as being of the highest priority:
• Indicator 2. Time allocated to teach science in grades K-5.
• Indicator 4. Adoption of instructional materials in grades K-12 that embody the Common Core State Standards for Mathematics and A Framework for K-12 Science Education.
• Indicator 5. Classroom coverage of content and practices in the Common Core State Standards for Mathematics and A Framework for K-12 Science Education.
• Indicator 6. Teachers’ science and mathematics content knowledge for teaching.
• Indicator 9. Inclusion of science in federal and state accountability systems.
• Indicator 14. Federal funding for the research identified in Successful K-12 STEM Education.
The committee selected these six indicators in the belief that they represent the points of greatest leverage to improve the education system, student outcomes in the STEM disciplines, and progress toward the three goals of education in STEM. The first five priority indicators (2, 4, 5, 6, and 9) reflect conditions that are at the core of teaching and learning: time, materials, instruction, teacher knowledge, and accountability. The sixth priority indicator (14) calls for new research to fill critical gaps in knowledge about programs and practices that contribute to student learning and to the other goals of STEM education.
Of the priority indicators, 4, 5, and 6 will be the most resource intensive to develop, in part because they have never before been tracked on a large scale and existing measures do not take into account the current emphasis on the practices of science, mathematics, and engineering. Crafting a valid, independent procedure for analyzing instructional materials will constitute the bulk of the effort for Indicator 4. For Indicators 5 and 6, some relevant measures of classroom coverage and of teachers’ content knowledge for teaching exist, mostly in mathematics. These measures have not been taken to scale, however, and further work would be required to develop direct measures of teachers’ knowledge of the practices of science, mathematics, and engineering.
The effort required to develop these priority indicators is directly related to the current lack of useful data to understand and inform decision making about the issues that matter most to teaching and learning. Despite the effort involved, the committee deemed these indicators high priorities because, at the dawn of new reforms in science, mathematics, engineering, and technology education, there is great need for sound data on these phenomena.
In contrast to existing data collections, the committee’s proposed indicators address core elements of the learning process: the content and quality of the curriculum, the opportunities students have to learn that content, and teachers’ knowledge for teaching science, mathematics, and engineering. This focus represents a significant shift in emphasis for state and federal data systems, and it presents a dilemma: whether to use existing data sources or to create a new data collection vehicle dedicated to the proposed indicators.
Using existing data sources has advantages: existing data or reasonable modifications to scheduled national surveys could yield initial information for all 14 indicators; this approach is less expensive; some baseline data already exist; and the sampling frames enable comparisons across different subject areas. Relying on existing data sources also has disadvantages: no single survey or cluster of surveys provides all of the needed information; a staggered rotation schedule for the surveys means that a full complement of current data is never available at any given time; considerable work would be required to modify the surveys in ways that would yield the desired information; and the varied sampling frames limit the ability to draw inferences across different levels of the education system. Moreover, relying on external data sources means ceding control over the focus of the surveys and the questions used. Changes by the survey organization may mean that some variables are no longer available or are based on a different implementation of the questionnaire. Even seemingly small changes to the response format of a question can have profound consequences and change the results in unexpected ways.
FURTHER DEVELOP THE PROPOSED INDICATORS
This report presents a framework for developing an indicator system around key elements of K-12 education in STEM. Although it was beyond the scope of the study to provide detailed specifications of the ways in which these indicators should be implemented, the committee assumes that the implementers will undertake a rigorous process to more fully develop a set of valid and reliable indicators with strong associations to the desired outcomes. For each indicator, the development process would include defining the construct, identifying what is known about what constitutes quality (i.e., what predicts downstream impact), identifying the most appropriate sources of data for broad measurement of the indicator, and identifying the most important topics to study in more depth than is possible with large-scale surveys. As one example, the committee’s initial development of Indicators 1, 2, 3, 5, and 8 emphasizes quantity or prevalence; fully developed versions of these indicators also would measure quality. In most of those cases, additional research will be required to identify what constitutes quality and how to measure it.
The state of development of the proposed indicators varies, and the committee created four categories, listed after Table 1, to classify their relative states of development.
TABLE 1 Summary of Indicators and Their State of Development
Indicators are grouped under the recommendations from Successful K-12 STEM Education.

Access to Quality STEM Learning

Districts Should Consider All Three Models of STEM-Focused Schools
  1. Number of, and enrollment in, different types of STEM schools and programs in each district.a,b

Districts Should Devote Adequate Instructional Time and Resources to Science in Grades K-5
  2. Time allocated to teach science in grades K-5.a,b
  3. Science-related learning opportunities in elementary schools.a,b

Districts Should Ensure That Their STEM Curricula Are Focused on the Most Important Topics in Each Discipline, Are Rigorous, and Are Articulated as a Sequence of Topics and Performances
  4. Adoption of instructional materials in grades K-12 that embody the Common Core State Standards for Mathematics and A Framework for K-12 Science Education.a
  5. Classroom coverage of content and practices in the Common Core State Standards for Mathematics and A Framework for K-12 Science Education.a,b

Districts Need to Enhance the Capacity of K-12 Teachers
  6. Teachers’ science and mathematics content knowledge for teaching.a
  7. Teachers’ participation in STEM-specific professional development activities.a

Districts Should Provide Instructional Leaders with Professional Development That Helps Them to Create the School Conditions That Appear to Support Student Achievement
  8. Instructional leaders’ participation in professional development on creating conditions that support STEM learning.a,b

Policy and Funding Initiatives

Policy Makers at the National, State, and Local Levels Should Elevate Science to the Same Level of Importance as Reading and Mathematics
  9. Inclusion of science in federal and state accountability systems.
  10. Inclusion of science in major federal K-12 education initiatives.
  11. State and district staff dedicated to supporting science instruction.

States and National Organizations Should Develop Effective Systems of Assessment That Are Aligned with A Framework for K-12 Science Education and That Emphasize Science Practices Rather Than Mere Factual Recall
  12. States’ use of assessments that measure the core concepts and practices of science and mathematics disciplines.

National and State Policy Makers Should Invest in a Coherent, Focused, and Sustained Set of Supports for STEM Teachers
  13. State and federal expenditures dedicated to improving the K-12 STEM teaching workforce.

Federal Agencies Should Support Research That Disentangles the Effects of School Practice from Student Selection, Recognizes the Importance of Contextual Variables, and Allows for Longitudinal Assessments of Student Outcomes
  14. Federal funding for the research identified in Successful K-12 STEM Education.
• Type 1: At least some data currently are available through U.S. Department of Education surveys or other large-scale efforts. Ongoing development may be required to more fully develop the indicator.
• Type 2: Appropriate data can be collected by modifying existing U.S. Department of Education surveys. Conceptual or empirical work may be required to develop valid and reliable survey items.
• Type 3: New surveys might be required to collect appropriate data, which would involve conceptual and empirical development.
• Type 4: Conceptual and empirical development are required to begin specifying the indicator.
Each type corresponds to the level of resources required to fully develop the indicator, which increases from 1 to 4. For example, it is more costly and time consuming to fund a systematic research and development effort than it is to compile available data. Similarly, modifying questions on existing surveys is less resource intensive than creating entirely new surveys.
For Type 1 indicators that also fall into other categories in Table 1, existing data might provide partially useful information, but revising existing surveys would yield more relevant information about the indicator in question. In those cases, conceptual and empirical work is required to develop survey items or to more fully specify the indicator as initial data are collected. For indicators that span several types, modifications to existing surveys might be useful in the short term; in the longer term, other kinds of data collection might be more appropriate, and more work is required to fully develop the indicator.
COMPILE, ANALYZE, AND REPORT ON EXISTING DATA FOR TYPE 1 INDICATORS
As discussed above, existing national data systems provide at least partial coverage for the Type 1 indicators. Information on several other indicators exists in grant awards, agency budgets, legislative language, and state applications for flexibility under the No Child Left Behind Act, but the relevant documents would have to be gathered and analyzed to yield the data related to the indicators. The committee estimates that within 1 to 2 years, NCES and NSF could compile and analyze relevant data that have already been collected and use those analyses to produce an initial status report. Such a report might include a discussion of all available data on the Type 1 indicators (2, 5, 6, 7, 9, 10, 13, and 14), along with presentations of student-level data that illustrate the nation’s status relative to the three goals for education in STEM. The report also could analyze gaps in the available data and draw conclusions and recommendations about developing the full suite of indicators.
TABLE NOTES: The committee’s highest priorities are Indicators 2, 4, 5, 6, 9, and 14; these indicators are most proximal to the core of teaching and learning.
aData should be disaggregated to report on different groups of students and to facilitate analyses of how the indicators vary with the socioeconomic status of states or school districts.
bInitial development of this indicator emphasizes quantity; full development also should include quality.
MODIFY EXISTING SURVEYS OR DEVELOP NEW DATA COLLECTION MECHANISMS FOR TYPE 2 AND 3 INDICATORS
If the monitoring system relies on existing data sources, revising existing or past questions or adding new questions to existing NCES surveys could yield the needed information on some of the Type 2 indicators (notably, 2, 3, 9, and 11). In those cases, NCES could begin the modifications immediately so that they are complete by the next data collection for the relevant surveys: see the Appendix, which shows that the next data collection for most relevant surveys ranges from the 2012-2013 academic year to the 2015-2016 academic year.
Development of items for a dedicated survey or surveys also could begin immediately. Regardless of which data collection mechanism is chosen, careful attention should be given to developing and validating survey items so that they provide reliable measures of the specified indicator.
SUPPORT WORK THAT IS NEEDED TO FURTHER DEVELOP TYPE 4 INDICATORS
For the Type 4 indicators (1, 4, 6, 7, 8, and 12), research or conceptual development is required to identify what the indicator should measure and how it should be measured. For example, Indicators 4 and 12 require the development of new procedures to analyze the alignment of instructional materials and assessments with standards documents; the understanding of those indicators will remain limited until such procedures are developed. In a different vein, general information currently could be collected about instructional leaders’ participation in professional development (Indicator 8). However, the research bases on what constitutes effective leadership for STEM and on which kinds of professional development support that leadership are not yet robust enough to identify exactly what this indicator should measure. Further research on those topics is needed to modify existing surveys in a way that will generate useful information.
IES and NSF could support much of the needed research and development through existing grant programs. The research community also might be encouraged to pursue this work through new solicitations or Dear Colleague letters. Given the steps involved in soliciting, funding, and conducting the needed research, the committee estimates that acquiring the full suite of data on these indicators would take approximately 5-10 years from the time the effort is launched.
PRODUCE REGULAR REPORTS ON THE INDICATORS AND STEM EDUCATION GOALS
Despite the availability of some data that are relevant to the committee’s proposed indicators, these data are not regularly analyzed and compiled in a report that is focused on K-12 education in the STEM disciplines. As the summary in the Appendix shows, most NCES survey data are not regularly compiled for publication in reports. Instead, data tables from many of the surveys are publicly available. NCES annual reports (The Condition of Education, Digest of Education Statistics, High School Dropout and Completion Rates, and Indicators of School Crime and Safety) typically are broad in focus. Some spinoff reports from the Condition of Education have addressed the STEM disciplines (e.g., National Center for Education Statistics, 1997a, 1997b), and NSF’s annual Science and Engineering Indicators report includes a chapter on K-12 science and mathematics education. However, NCES and NSF do not regularly publish reports focused solely on K-12 education in the STEM disciplines. In the 1990s, NSF produced two biennial reports on K-12 science and mathematics education indicators apart from the Science and Engineering Indicators, but that reporting program was discontinued in 1995 (for the last of these reports, see National Science Foundation, 1996).
It would be valuable for NCES or NSF to use data from the proposed monitoring system to begin producing a biennial report that is unique and specific to the issue of K-12 education in the STEM disciplines (along the lines of reports requested by Congress on dropouts and the implementation of the Individuals with Disabilities Education Act). This report could present data that demonstrate the nation’s progress with respect to the proposed indicators and the goals for K-12 education in STEM. Consistent with the proposed capabilities of the monitoring system, the report might analyze the currently available data to provide usable information on student knowledge, interest, and participation in the STEM disciplines and STEM-related activities; federal, state, and local investments in K-12 education in STEM; and the capabilities of the STEM-education workforce.
SUPPORT EFFORTS TO ENGAGE STAKEHOLDERS
The U.S. Department of Education and NSF could play a key role in developing and sustaining the proposed monitoring system by engaging stakeholders in ongoing discussions about inputs to and outputs of the system. Such efforts could increase understanding of the monitoring system; inform the development, definition, and refinement of the indicators; and ensure that the data are used to support improvements. As an example, periodic conferences linked to the reports described above could address the nation’s progress toward its STEM education goals and the recommendations in Successful K-12 STEM Education. Such conferences could engage education leaders, policy makers, researchers, and science and engineering professional societies in general discussions of the current state of K-12 education in STEM; in focused discussions of data related to specific indicators; in discussions of proposed adjustments to the education system in light of the nation’s progress; and in critical examinations of the ongoing utility of the indicators.