Methods and Measures
In this chapter, the committee responds to the third element in our charge: to identify new methods or approaches for conducting research supported by the National Center for Education Research (NCER) and the National Center for Special Education Research (NCSER) of the Institute of Education Sciences (IES). We include both measures (a project type, or goal, in the IES matrix) and methods (a separate competition) because of the close links between the two. We placed this chapter here for the sake of narrative flow and will return to the second element in our charge—how best to organize NCER and NCSER’s request for application (RFA) process—in Chapter 8.
One of IES’s hallmarks since its inception has been its continuous investment in advancing education methods and measures. IES has adopted three primary strategies aimed at improving the quality of research methods in education: (1) funding basic research on methodological innovation and measurement, (2) prioritizing specific applied research methods in its RFAs, and (3) fostering a community of scholars with the necessary skills to make use of new and innovative methods and measures.
IES’s investments in methodological innovation have produced a wealth of knowledge in this arena. This investment flows both through field-generated research via grants from NCER and NCSER, and through IES-driven research focused on the What Works Clearinghouse (WWC) via contracts from the National Center for Education Evaluation and Regional Assistance. While the committee focused on the first of these types of research, given the statement of task, there are clear connections between the two.
These investments have produced core knowledge around estimating average treatment effects—in both randomized controlled trials (RCTs) and quasi-experimental designs (QEDs)—as well as models and data useful for planning studies with adequate power for hypothesis tests. This funding has also advanced research methods specifically appropriate for research on students with disabilities, including advances in statistical approaches to estimating effects in single-case designs.
IES has also invested in development of measures, largely through field-generated research funded through NCER and NCSER. They include new approaches for measuring student academic and behavioral outcomes in the context of research, as well as the expansion of available assessments for use in practice, including a number of universal screening and progress monitoring tools. There have also been advancements in the technologies of student assessments, including the use of adaptive testing.
IES has also established high standards, now widely adopted across the field, for how causal research is conducted. Through its RFAs and guidance to proposal reviewers—and in alignment with recommendations for internal validity through the WWC—IES encourages submitted studies to meet high technical standards. Examples include the requirement that Efficacy and Replication studies be adequately powered, that studies prioritize research designs aligned with causal inference (e.g., experimental designs, quasi-experimental designs, single-case designs), and, more recently, that Efficacy and Replication studies provide information on their generalizability and on the cost-effectiveness of the intervention being studied (IES, 2021).
In addition to these formal avenues for research on methods and measurement, NCER and NCSER have worked to establish a community of education research scholars focusing specifically on methodology. They have done so in large part through investment in methodological training opportunities, described in Chapter 7 of this report. IES also invested in the initial development and growth of the Society for Research on Educational Effectiveness, a research organization focused on increasing the field’s capacity to design and conduct causal investigations, which, in 2008, launched the Journal of Research on Educational Effectiveness, committed to publishing causal studies in education. Without such an investment, it is hard to imagine that causal studies in education would be anywhere close to where they are today.
Collectively, these three strategies converge to provide a roadmap for how IES can support the development of tools to conduct high-quality scientific research in education. But, as outlined across this report, as the educational landscape shifts, so too must IES’s investments in methods and measures research. A focus on treatment heterogeneity, implementation and
adaptation, knowledge mobilization, and equity means that IES will need to re-orient its investment in methods and measures.
We begin with underlying principles to guide our recommendations:
- IES’s charge as written into the Education Sciences Reform Act (ESRA) requires that the institute maintain its focus on causal research. IES is uniquely situated—among other federal agencies and private foundations—to develop and test interventions in education settings. This focus should certainly continue.
- Since causal questions are inherently comparative, descriptive work is also needed to conceptualize and describe current practices and the context of schools and districts. This means IES will need to invest in other approaches beyond causal designs (e.g., descriptive, qualitative, mixed methods).
- Questions of what works and how it works need to be pursued in concert. Only by pairing different methodologies can researchers answer not only what works for improving student outcomes, but also how to make something work, for whom, and under what conditions. The committee’s view is that each of these questions needs answering and each is necessary to inform the others.
- Theoretical frameworks play an essential role in connecting research questions across studies. The connections across causal and descriptive studies are strengthened when researchers are clear about the theoretical framework they are developing and testing.
THE FUTURE OF METHODS RESEARCH
Summary of Methods Research to Date
NCER and NCSER have invested in methodological innovation from their beginnings. This investment came first via unsolicited grants and later through a separate grant program, Statistical and Research Methodology in Education, that funded research relevant to both centers. From its beginning in 2002 through 2020, NCER awarded 93 grants to support methodological innovation in the education sciences. In an analysis of abstracts from these studies, Klager and Tipton (2021) found that funded studies were roughly evenly divided across four categories:
- Psychometrics (n = 28), including value-added models (n = 8).
- Statistical Models for Analysis (n = 23), including multilevel models and missing data (n = 13).
- Randomized Controlled Trial Designs (n = 28), including power analyses (n = 7), effect size computations and interpretations (n = 5), and single-case designs (n = 6).
- QED Designs (n = 14), including regression-discontinuity (n = 6) and comparative interrupted time series (n = 5).
Overall, these studies have addressed a variety of difficult problems that occur in applied research. Abstracts indicate that most of these studies (n = 48) mention the development and availability of free software tools for use by applied researchers, providing a mechanism to increase the likelihood that methodological innovations get taken up in future IES-funded work. Further seeding the potential for methodological uptake, many of the funded studies have resulted in methodological workshops delivered at national research conferences in education. The committee thinks that this approach used to generate knowledge and use of statistical methods has been one of NCER and NCSER’s considerable strengths.
Methods Research Moving Forward
In this report, we have argued that education research needs to focus on five crosscutting themes: the heterogeneity of contexts, experiences, and treatment effects; the adaptation of programs and policies to local contexts, leading to different degrees and types of implementation; the need to better understand and test new ways to support the development of knowledge that is useful for decision making; the continued need to take advantage of education technologies; and the need to focus directly on the goal of improving equity in educational experiences.
In this section, based upon what has been previously studied and these themes and goals, we propose areas that need new methodological development. Overall, each of these areas begins from the question: What methods are required for researchers developing and testing interventions to provide decision makers with the information they need regarding interventions?
Methods for Understanding Treatment Effect Heterogeneity
Current literature makes clear that there is no single effect of an intervention, and instead that effects likely vary across structures, contexts, cultures, and conditions (Joyce & Cartwright, 2020). As such, education research stands to benefit from studies that improve the ability to understand how treatment effects vary. Meeting this goal requires both quantitative methods and qualitative methods, as both are essential for developing theory and understanding mechanism.
IES is already a leader in building quantitative approaches to heterogeneity. Over the past decade, an increasing number of methods grants have focused on questions of treatment effect heterogeneity, understanding moderators of effects, and external validity (n = 14). These studies have provided methods for estimating and testing hypotheses about the degree of heterogeneity, as well as methods for improving generalizations from samples in studies to populations in need of evidence. This generalization literature, for example, has shown that if treatment effects vary, the average treatment effect estimated from a randomized trial in a sample of convenience can be as different from the true population average treatment effect as one estimated using a nonexperimental design. That is, external and internal validity biases can be of the same magnitude.
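The logic behind this generalization result can be illustrated with a minimal sketch. All strata, effect sizes, and shares below are invented for illustration; the point is only that when treatment effects vary across strata, a convenience sample's average effect can diverge substantially from the population average:

```python
def weighted_ate(stratum_effects, shares):
    """Average treatment effect as a share-weighted mix of stratum effects."""
    return sum(shares[s] * stratum_effects[s] for s in stratum_effects)

# Hypothetical: the intervention works better in urban schools, and the
# convenience sample over-represents them relative to the population.
effects = {"urban": 0.30, "rural": 0.05}
sample_shares = {"urban": 0.8, "rural": 0.2}       # who enrolled in the trial
population_shares = {"urban": 0.4, "rural": 0.6}   # who needs the evidence

naive_ate = weighted_ate(effects, sample_shares)            # 0.25
generalized_ate = weighted_ate(effects, population_shares)  # 0.15
```

Reweighting of this kind assumes the strata capture the moderators that actually drive effect variation, which is one reason theory development matters alongside the statistical machinery.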
To date, much of this research has focused on how to improve estimates of average treatment effects (what is called generalization). Repeatedly, however, decision makers call upon research to provide them not simply an estimate of the average treatment effect, but also a prediction regarding if the intervention will work in their school, district, or community. To date, only three of the methods grants have focused directly on the development and testing of methods for the prediction of local treatment effects. Predicting local effects with precision will require both new statistical methods for analysis, such as machine learning and Bayesian Additive Regression Trees, and more complex research designs, such as factorial, crossover (Bose & Dey, 2009), and stepped-wedge designs (Hussey & Hughes, 2007). As these methods are better understood, and fit to the realities of education contexts, they may provide important insights into how studies should be conducted in the field. For example, it is likely that studies focused on heterogeneity and prediction will require larger samples than are typical in studies of the average treatment effect. In order to know exactly how much larger and what other trade-offs might be included, however, methods for study design, including determining power and precision, will be needed.
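The sample-size implications can be made concrete with a back-of-the-envelope sketch under textbook assumptions (two balanced arms, normal approximation, outcome standard deviation of 1; the 0.25 SD effect is hypothetical). For a balanced binary moderator, the interaction estimator's variance is four times that of the main-effect estimator, so detecting an interaction as large as the main effect requires roughly four times the sample:

```python
import math
from statistics import NormalDist

def n_per_arm(delta, alpha=0.05, power=0.80):
    """Per-arm n to detect a standardized mean difference `delta`
    in a two-arm trial (normal approximation, sigma = 1)."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)
    z_b = z.inv_cdf(power)
    return math.ceil(2 * (z_a + z_b) ** 2 / delta ** 2)

main = n_per_arm(0.25)   # 252 per arm for a 0.25 SD average effect
# A same-size interaction with a balanced binary moderator has an
# estimator variance four times larger, so it needs ~4x the sample.
interaction = 4 * main   # 1008 per arm
```

Real design work would also weigh clustering, covariates, and unequal moderator splits, which is precisely the methods research the committee has in mind.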
Finally, not all of the methods required are quantitative. In order to understand treatment effect heterogeneity—essential for the prediction of local causal effects—data are not sufficient on their own. Instead, the development and refinement of theory will be essential. Theory can help, for example, guide researchers in determining why treatment effects might vary, under what conditions interventions might be most useful, and the mechanism through which an intervention works. It is here that qualitative and mixed methods research especially offers promise.
Methods for Understanding Implementation and Adaptation
Tied to the concept of heterogeneity is the need to understand the implementation and adaptation of interventions. Decision makers need
to know what adaptations implementers make and why, which adaptations are productive and which adaptations go “too far,” and what kinds of supports are required to implement well. IES has shown interest in and encouraged methods development related to implementation, fidelity, and mediation. To date, six Statistical Models and Research Methods grants have focused on these topics. However, more methods are needed to study implementation and adaptations made as programs move across places and people (reconceived in Chapter 4 as Development and Adaptation grants).
There are several exciting possibilities for continued methods development in this burgeoning field. Methods for evaluating implementation build on many familiar designs for studying efficacy and effectiveness, while also expanding beyond them through a variety of randomized and nonrandomized designs (Brown et al., 2017). They include, but are not limited to, hybrid effectiveness-implementation designs (Curran et al., 2012), multiphase optimization strategy implementation trials (e.g., Collins, Murphy, & Stretcher, 2007), helix counterbalanced designs (Sarkies et al., 2019), and stepped-wedge trials (Brown & Lilford, 2006). Additional methods include survival analysis to evaluate sustainability (e.g., Brookman-Frazee et al., 2018) as well as system dynamics, network analysis, and agent-based modeling to assess diffusion and spread (Northridge & Metcalf, 2016; Burke et al., 2015; Mabry et al., 2008). Closely related to implementation research, a family of improvement approaches with roots in statistics, industry, and health care have migrated to education (Cohen-Vogel et al., 2018). Described by some as representing a fourth wave of implementation science, the approaches involve iterative tests of change in an increasingly larger number of classrooms, grades, and schools (e.g., Bryk, 2020; Bryk et al., 2015; Lewis, 2015). The approaches, which include but are not limited to improvement science, design-based implementation research, and design experimentation, share an emphasis on learning from adaptations that occur as programs are tested in an ever-growing number of settings as well as authentic collaborations between researchers and practicing educators that span innovation design, prototype testing, and implementation (e.g., Cohen-Vogel et al., 2015; Cobb et al., 2013; Donovan, 2013; Means & Harris, 2013; Anderson & Shattuck, 2012; Bryk, Gomez, & Grunow, 2011; Design-Based Research Collective, 2003). 
Methods for evaluating improvement projects include variants of trial designs, quasi-experimental designs, qualitative field techniques, and systematic reviews, as well as program, process, and economic evaluations (Portela et al., 2015).
Of particular interest for their rigor and sensitivity in detecting variation in a system are statistical process control methods, which distinguish between common-cause variation and special-cause variation to determine when changes are significant and when a process is out of control (see Provost & Murray, 2011; Deming, 1982; Juran, 1951; Shewhart, 1931,
and later in this chapter for a discussion of methods for learning from and about education technologies). Closely related to interrupted time series designs, statistical process control can detect variation across subgroups and sites, not just over time, and displays information more intuitively for real-time monitoring and decision making in practice (Fretheim & Tomic, 2015). These methods also are especially valuable for highlighting the distinction in framing between enumerative studies that describe the current state and analytical studies that make predictions about a future state (Provost, 2011).
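The core computation behind these charts can be sketched briefly. This is a minimal individuals (I) chart using the conventional moving-range estimate of sigma (d2 = 1.128 for subgroups of size 2); the attendance-rate data are invented:

```python
def individuals_chart(values):
    """Shewhart individuals chart: estimate sigma from the average
    moving range (d2 = 1.128 for subgroups of size 2) and flag points
    beyond the 3-sigma limits as special-cause variation."""
    center = sum(values) / len(values)
    mrs = [abs(b - a) for a, b in zip(values, values[1:])]
    sigma = (sum(mrs) / len(mrs)) / 1.128
    lcl, ucl = center - 3 * sigma, center + 3 * sigma
    flags = [not (lcl <= x <= ucl) for x in values]
    return center, lcl, ucl, flags

# Invented weekly attendance counts; the final week shows a real shift.
rates = [50, 52, 51, 49, 50, 51, 48, 50, 70]
center, lcl, ucl, flags = individuals_chart(rates)
# Only the final point falls outside the control limits.
```

Full statistical process control practice adds run rules and other signals beyond the 3-sigma test, but the common-cause versus special-cause distinction is already visible in this sketch.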
Finally, questions related to implementation and adaption are fundamentally questions of process, an area where qualitative and mixed methods excel. The power behind mixed methods research lies in integrating data from multiple sources. Qualitative methods can inform the development or refinement of quantitative instruments, for example, and quantitative data can inform sampling procedures for naturalistic observations, interviews, or case study (e.g., O’Cathain, Murphy, & Nicholl, 2010). Consequently, the committee believes that standards for the conduct and reporting of data from qualitative and mixed methods could be helpful for a future IES. The further development, testing, and refinement of these methods will enhance the ability of researchers to study implementation of evidence-based practices in education.
Methods for Knowledge Mobilization
As the committee noted in Chapter 1 of this report, if the research that NCER and NCSER fund is not useful to or used by its intended audience, it is not meeting the charge mandated under ESRA to effect change in student outcomes. In Chapter 4, we proposed the creation of a new project type focused on Knowledge Mobilization. The purpose of this project type is to continue to develop a science of decision making in education, in order to understand current practice and to develop and test new strategies for mobilizing knowledge produced from research so that it may be used to support improved practice in schools.
Studying knowledge mobilization can be difficult because it is a subtle and complex process, one that does not always lend itself to the kind of randomized controlled design common with other interventions (e.g., researchers do not have two sets of research-practice partnerships to test out one form of knowledge utilization in one group and a different form or control message in another group). Thus, it is necessary to continue to develop innovative methods to help make these kinds of comparisons and study strategies to mobilize knowledge. There are several opportunities for the development of methods (for a broader overview, see Gitomer & Crouse, 2019).
By far, the most common methods for studying knowledge mobilization in education to date are surveys and interviews (e.g., May et al., forthcoming; Penuel et al., 2017; Weiss & Bucuvalas, 1980). While these approaches have been useful for descriptive studies of research use in nationally representative samples of educators and education leaders, they fall prey to social desirability bias and retrospective smoothing. In response, there are new efforts aimed at studying decision making in real time using observational methods (e.g., Huguet et al., 2021). These methods are labor intensive and, to date, limited to small-N descriptive studies. However, there is great potential for adapting such methods for use in experimental designs of interventions to foster knowledge mobilization that include observation or, for example, video analyses of nationally representative samples of school board meetings (see Box 6-1 for an additional need in the knowledge mobilization space).
Another key development in research on knowledge mobilization has been the use of social network methods to map the relationships between producers and consumers of research and the intermediaries who knit them together (Frank et al., 2020; Gitomer & Crouse, 2019; Finnigan, Daly, & Che, 2013). This approach allows researchers to identify who the powerful
actors are and how information flows across systems. Outside of education, there are researchers who have used natural language processing and other strategies to track the uptake of research studies or ideas in legislation or policies (Weber & Yanovitsky, 2021; Yanovitsky & Weber, 2020; Weber, 2018), an approach that could profitably be adapted for scholarly studies of knowledge mobilization in education. Network methods and natural language processing methodologies applied to knowledge mobilization face a number of challenges, some that are general to network methods, such as sampling concerns, and some that are distinctive to knowledge mobilization, such as adequately capturing information flows (Gitomer & Crouse, 2019). IES investments in network methods and natural language processing for knowledge mobilization studies could fuel important advances in this area.
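The kind of brokerage question these network methods answer can be illustrated with a small sketch. The advice-seeking ties and node names below are invented, and the measure used is a deliberately simple local one: the number of neighbor pairs that a node alone connects:

```python
from itertools import combinations

# Invented advice-seeking ties among researchers (R1, R2), an
# intermediary organization (I1), and district leaders (D1, D2).
edges = [("R1", "I1"), ("R2", "I1"), ("I1", "D1"), ("I1", "D2"), ("D1", "D2")]

adj = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

def brokerage(node):
    """Count neighbor pairs connected only through `node` -- a simple
    local indicator of how much a node bridges otherwise-separate actors."""
    return sum(1 for u, v in combinations(adj[node], 2) if v not in adj[u])

scores = {n: brokerage(n) for n in adj}
# The intermediary I1 brokers five otherwise-unconnected pairs.
```

Research-grade analyses would use global measures such as betweenness centrality and address the sampling concerns noted above, but the intuition of identifying intermediaries who knit producers and consumers together is the same.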
Additionally, one of the arguments the committee makes in Chapter 4 of this report is that “connectors” between project types are needed to help surface promising findings and interventions. This suggests that one area of growth will be the need for methods for systematic review and meta-analytic studies. Given the scope of the WWC, it is perhaps surprising that, outside of single-case designs, there has been only one Statistical and Research Methods grant focused on the development of meta-analytic methods. Many possible types of syntheses—and thus methods—are necessary. Perhaps the most obvious is the need for methods for synthesizing findings from impact studies; this includes methods for very small meta-analyses (as found in the WWC) and for very large meta-analyses focused on understanding variation (including 50 or more studies). Given the growing trends toward open data and data science, integrated data analysis and other data harmonization methods (Kumar et al., 2021, 2020; Musci et al., 2020) may be particularly valuable for synthesizing findings across disparate studies. Less obvious, but equally important, is the need for methods for synthesizing descriptive studies (Discovery and Needs Assessment) and for surfacing promising interventions (Development and Adaptation).
Supporting all of these is the need for methods research that informs various aspects of the meta-analysis process, including, for example, methods for efficiently and systematically searching the literature (e.g., using machine learning algorithms), efficient and standardized coding and reporting, presenting and conveying the results to nonexperts, and measuring knowledge mobilization and research use. It is likely, for example, that the best syntheses do not focus solely on quantitative summaries of the field, but also provide rich examples and information on the intervention mechanisms and components—again, a combination of both quantitative and qualitative methods.
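For the quantitative core of such syntheses, the standard pooling step can be sketched as follows. The effect sizes and variances are invented, and the DerSimonian-Laird estimator used here is one common choice among several for between-study variance:

```python
def random_effects_meta(effects, variances):
    """Inverse-variance random-effects pooling, with the DerSimonian-Laird
    estimate of between-study variance (tau^2)."""
    w = [1 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    w_re = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = (1 / sum(w_re)) ** 0.5
    return pooled, se, tau2

# Invented standardized mean differences and sampling variances
# from three hypothetical impact studies.
pooled, se, tau2 = random_effects_meta([0.10, 0.30, 0.50], [0.01, 0.02, 0.01])
```

A nonzero tau² here is itself substantively interesting: it quantifies the treatment effect heterogeneity that the committee argues syntheses should explain, not merely average over.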
Finally, the importance of studying knowledge mobilization motivates strengthening participatory research methods, which highlight the value
of including the voices, perspectives, and questions originating from those who are intended to benefit from the research. Examples include participatory design, action research, youth participatory action research, and community-based participatory research (Stringer & Aragón, 2020; Balazs & Morello-Frosch, 2013; Robertson & Simonsen, 2012; Cammarota & Fine, 2010; Shalowitz et al., 2009). How best to engage with the range of stakeholders when discovering, innovating, adapting, or evaluating a new educational experience may vary by research goal, emphasizing the importance of considering these perspectives throughout the research, not merely at its “end.” Yet such methods may carry significant time and resource costs, not just for researchers but also for practitioners and community members. Refining these methods will help clarify when and how to engage in coproduction in a manner that is not only beneficial, but also ethical and equitable.
Methods for Learning from and about Education Technologies
Since the founding of IES, determining how, when, and under what conditions education technology can improve student outcomes has been at the fore. It is perhaps surprising, then, that to date no IES methods grant has explicitly focused on methods for working with data from education technologies. This is not to say that IES has not invested here, however. For example, NCER recently awarded five grants under the Digital Learning Platforms to Enable Efficient Education Research Network that will redesign existing digital learning platforms to support research.
Education technology data differ from typical data in randomized trials in that they include a vast amount of process data. For example, in addition to a pre-test and a post-test, an education technology product may also collect “click” data regarding every single item, the pathway taken through the intervention, and even data on attention. These new data bring new opportunities for understanding student learning. The committee anticipates a continued need for learning analytic methods.
But education technology research is broader than simply studying how to use technology to deliver learning experiences to students. Here we also include the promise of new and emerging data sources, including big data. These sources include administrative data, as well as data scraped from the web and from learning platforms. They also include data not only about students, but also about teachers, schools, and communities. We anticipate that these data will become increasingly useful in all types of projects, from answering descriptive questions about how systems work (Discovery and Needs Assessment), to how students progress and learn (Development and Adaptation), to how to understand treatment effect heterogeneity and predict local treatment effects (Impact and Heterogeneity Analyses), to the networks through which teachers and leaders interact and share knowledge
(Knowledge Mobilization). We anticipate an ongoing need for methods development in all of these areas.
Methods for Centering Equity in Research
Throughout this report we have argued that equity should be front and center as the primary goal for research funded by IES. To date, this has not been an explicit focus of methods development grants at IES (though certainly questions of equity have motivated the development of many methods). Below we provide examples of several possible areas for methods development to support this work.
Interventions focused on small subgroups or communities, such as students with low-incidence disabilities (e.g., traumatic brain injury), are often hampered by the fact that recruiting large samples is simply not feasible. In these cases, the resulting studies will need to be smaller than usual and may have additional considerations for recruitment. The development and testing of new research designs and statistical analysis methods for conducting small causal studies, both randomized and quasi-experimental, are needed.
In Chapters 4 and 5, we argued that focusing on interventions that can be studied by randomized trials severely limits the type of interventions that IES-funded studies can focus upon and learn about. Some of the largest effects on student outcomes may, in fact, arise from structural changes that are difficult to randomize. To date, IES has invested heavily in the development of quasi-experimental methods (n = 20 grants to date), but several important questions remain. For example, this work might address the conditions under which common quasi-experimental methods, such as difference-in-differences, instrumental variables, and synthetic control groups, perform well and where they do not. This might also include methods not only for conducting quasi-experimental studies on existing data, but also for planning future quasi-experimental studies that involve collecting new data. Importantly, as with randomized trials, this next wave of methods development needs to focus both on estimating the average treatment effect using these designs and on methods for understanding heterogeneity and generalizability.
Generally, a methodological focus on equity can proceed in two ways: either via an examination of changes over time (or across treatment and control groups) in disparities between groups, such as the subgroups articulated in the No Child Left Behind Act and the Every Student Succeeds Act, or through a focus on creating conditions to enhance the performance of a traditionally underserved community, without explicitly measuring disparities but relying on the research literature to identify an underserved community, as expressed in President Biden’s Executive Order on Advancing Racial Equity. For example, Atteberry, Bischoff, and Owens (2021) have
developed statistical approaches for gauging progress toward racial and ethnic achievement equity in U.S. school districts, focusing both on performance relative to other groups within the same district and in comparison to statewide averages.
Finally, schools have increasingly begun to rely upon education technology products to diagnose, assess, and place students (at all age levels). Here there is the opportunity for algorithmic biases to enter these systems. This creates an increased need for methods and approaches to study and improve these algorithms, including the data on which these systems are developed, and to ensure that methods that perform well in the sample in which they were developed also perform well, and without bias, in new samples that might be quite different.
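One concrete starting point for such audits, sketched here with invented records, is simply to compare a screener's error rates across the groups it will be used with; disparities in false-positive rates are a common first signal of algorithmic bias even when overall accuracy looks equal:

```python
def subgroup_rates(records):
    """Per-group accuracy and false-positive rate for a binary screener,
    given (group, true_label, predicted_label) records."""
    stats = {}
    for group, y, yhat in records:
        s = stats.setdefault(group, {"n": 0, "correct": 0, "neg": 0, "fp": 0})
        s["n"] += 1
        s["correct"] += int(y == yhat)
        if y == 0:
            s["neg"] += 1
            s["fp"] += int(yhat == 1)
    return {g: {"accuracy": s["correct"] / s["n"],
                "fpr": s["fp"] / s["neg"] if s["neg"] else None}
            for g, s in stats.items()}

# Invented records: (student group, truly needs support, flagged by screener).
records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 0, 1), ("A", 1, 1),
    ("B", 1, 0), ("B", 0, 0), ("B", 0, 0), ("B", 1, 1),
]
out = subgroup_rates(records)
# Equal accuracy (0.75 for both groups) masks unequal
# false-positive rates (0.5 for A vs. 0.0 for B).
```

Fuller fairness analyses weigh multiple, sometimes mutually incompatible criteria (e.g., calibration versus error-rate parity), which is part of what makes this an area needing methods development rather than a solved checklist.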
THE FUTURE OF MEASUREMENT RESEARCH
Summary of Measurement Research to Date
Studies that develop, evaluate, and scale measures are currently funded at NCER and NCSER within each topic area. Through 2020, the centers have funded 176 measurement studies.1 An analysis of the abstracts of these studies indicates that they can be categorized by their unit of analysis: students, teachers, or “other” (Table 6-1).2
Collectively, these studies have provided the field with a number of measures related to student outcomes and student characteristics. These measures have expanded the field’s understanding of the ways interventions impact students. At the same time, there is a need for further research
TABLE 6-1 Proportion of Measurement Grants Funded by NCER and NCSER, by Target
SOURCE: Klager & Tipton, 2021 [Commissioned Paper]. Data from https://ies.ed.gov/funding/grantsearch/.
1 This analysis is based on studies with GoalType = Measurement. This excludes investments via center grants, networks, or studies with multiple goals.
2 This sentence was modified after release of the report to IES to indicate that this tally of funded studies runs through 2020.
on measures related to education systems, education leaders, and teachers. Detailed information on students alone limits understanding of the mechanisms by which interventions lead to changes in student outcomes, as well as of whether specific school or teacher characteristics moderate the impact of interventions. As we lay out a measurement agenda moving forward, we give careful attention to measurement tools across the education system and identify where IES might want to consider additional work.
Methods and measures are closely linked. Often new methods require new measures, and sometimes new measures spur the creation of new methods or the improvement of existing methods. Therefore, many of the issues we point to throughout this report will also require the development of additional measures. While we do not offer specific recommendations on which measures to invest in, we acknowledge in Chapter 9 that IES will need to consider strategic investments in support of our other recommendations.
Emerging Needs in Measurement Research
As noted above, the committee sees a number of areas where the development of new measures would facilitate IES’s work as it continues to grow. In this section, we identify a few areas where we believe investment from IES could support emerging fields.
Expanding the Range of Student Outcome Measures
When it comes to measuring “what works,” IES has in the past 20 years emphasized a broad range of student outcomes beyond standardized test scores and grades alone. This is evidenced in the broad range of measurement studies focused on student outcomes. At the same time, IES-funded researchers still frequently use standardized test scores and grades as the primary outcomes of their studies. This focus is easy to understand, as these metrics are regularly collected by education institutions and agencies, relatively easy for researchers to access, and currently prioritized as outcomes in some education policies. Indeed, even research focused on social-emotional learning (SEL) often includes test scores or grades as the ultimate result or outcome in models and research designs. However, an overreliance on these narrow measures of learning makes it difficult to understand the mechanisms and processes by which interventions have impact. Moreover, grades and achievement are the tip of the iceberg when it comes to assessing student learning.
There is now a deep knowledge base on the links between “upstream” affective, psychological, and behavioral processes and the “downstream” distal achievement of students (NASEM, 2018).
Moreover, there are many more ways to measure learning, both inside and outside the classroom, than test scores and grades. SEL, motivation, and behavior (e.g., persistence, engagement, disciplinary behavior)—and the processes and moderators that shape these outcomes—are important to study in and of themselves.
Developing and Validating Measures beyond the Student Level
Measures of the structural and contextual factors that shape student outcomes.
It is important to measure the opportunities that education systems provide and the context in which learning occurs, in addition to how students perform. Rather than narrowly focusing on direct-to-student interventions (that often locate the problem within students), studies of the learning environment, systems, and contexts can also be valuable. Examples of such foci include federal, state, district, school, and classroom policies and practices that influence effective teaching and learning; school leaders and the educational opportunities they foster; and how the instructional environment and interactions between students, teachers, administrators, and staff shape students’ learning and experience.
Measures of the context in which children develop and in which students learn, from birth through college, would be valuable. Of the 176 measurement grants awarded by IES over the last 20 years, only four (2%) have focused on measuring qualities of schools as the primary question of interest. Studies that develop and validate structural and contextual measures that assess how these factors influence students’ SEL, engagement, motivation, behavior, and performance—and how these systemic and contextual factors may differentially impact students from structurally disadvantaged backgrounds—are especially needed.
Measures of teacher development, practice, and effectiveness.
Research on the measurement and assessment of teacher development, teacher practice, and teacher effectiveness in creating more equitable learning environments where all students are valued, engaged, and perform to their potential—regardless of their background and social identities—is important. The classroom climates that teachers create can predict students’ experiences and learning; moreover, teacher practice can be observed, measured, improved, and intervened upon in an iterative fashion over the course of terms and years.
Of the 176 measurement grants awarded by IES over the past 20 years, only 16 (9%) included measures of teachers or teacher practice. The vast majority of these grants (89%) focus almost exclusively on measurement of students and student-level characteristics.
To understand how students learn and develop in the American education system, it is essential to understand what goes on with schools and
teachers inside and outside the classroom. Research on how teachers create the learning environment of their classes has centered on three core aspects that many professional development efforts variously target: (1) teachers’ intentions to enact changes to their practice; (2) teachers’ implementation of those intended changes/practices in their classrooms; and (3) students’ perceptions and experiences of those enacted practices (e.g., Murphy et al., 2021). Implementation measurement is labor intensive and more work is needed to build on recent IES-supported advances in automated measurement of instructional practice (e.g., Kelly et al., 2018).
Measurement research focused on teachers’ practices is an important step in identifying which practices positively influence students’ SEL, engagement, motivation, behavior, and performance. In addition, it will be helpful to develop measures of teacher professional development (PD) in order to identify what kinds of PD are effective in creating changes and improvements to teachers’ intentions and implementation of policies, practices, interactions with students, interactions with parents, and other aspects that mitigate group-based experience and achievement gaps in their classrooms and support all students’ learning and development.
Measures of knowledge mobilization.
As discussed in Chapter 4, the committee identified knowledge mobilization as a project type. In the past, IES has funded efforts to measure knowledge use through the creation and support of two knowledge utilization centers. Work from these centers resulted in validated survey measures of instrumental, conceptual, and symbolic use (Penuel et al., 2016) and measures of depth of research use (May et al., 2021; May et al., forthcoming). The work also highlights the psychometric challenges of measuring practitioner knowledge of research quality (Hill & Briggs, 2020). These measures, developed for survey research, could be built upon and extended by developing measures that could be used in observational data (including longitudinal observational data, video data, and observation in the context of experiments) as well as in tracing the impact of research in policy and practice (e.g., Farrell et al., 2018; Huguet et al., 2017). In order to advance this work, IES will need to consider how to leverage existing work and what kinds of additional measures to support for a new knowledge mobilization project type.
Developing and Validating Measures of Equity and Inequity
Given the urgency of improving educational equity, the field needs more informative measures of the range of inequities in inputs, processes, and outcomes to help monitor and spur progress across all of these areas. How can researchers and practitioners know when systems, learning environments, and opportunities inside and outside the classroom (e.g., curricula, textbooks, instructional practices, teacher-student interactions) are equitable or inequitable? While
school systems are generally required to report student outcomes disaggregated by various demographic characteristics, measuring and comparing between-group gaps in experiences, achievement, and proficiency rates (and growth over time) face multiple challenges, due to small subgroup sizes, distortion in binary measures, lack of a clear criterion for comparison, and ambiguity in interpreting changes in absolute gaps (Ho, 2008). For example, structurally disadvantaged student populations often experience the classroom setting differently than their structurally advantaged peers; thus, should measures of equity in such student experiences always include an advantaged comparison group? Many quantitative critical race scholars argue that requiring White and other advantaged “quasi-control groups” or “comparison groups” is a racist practice that assumes that the experiences of advantaged groups serve as a normative standard by which to compare other groups (e.g., Flanagin et al., 2021; Sablan, 2018; Garcia et al., 2017). Other measures of gaps, disparate impact, and disproportionality exist (e.g., Reardon & Ho, 2014) but are not consistently used across the field, whether due to technical complexity or limitations in applicability. Developing clearer measures of differences would support more effective and transparent monitoring of equity in outcomes.
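The “distortion in binary measures” that Ho (2008) describes can be illustrated with a minimal sketch: two hypothetical groups whose underlying score distributions differ by a fixed standardized mean difference nonetheless show different proficiency-rate gaps depending on where the cut score is set. The group means, the cut scores, and the normality assumption below are illustrative choices, not results from any study.

```python
import math

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# Hypothetical groups with a fixed underlying achievement gap:
# group A ~ N(0, 1), group B ~ N(-0.5, 1), i.e., a standardized
# mean difference of 0.5 regardless of any cut score.
mean_a, mean_b = 0.0, -0.5

for cut in (-1.0, 0.0, 1.0):  # three possible "proficiency" cut scores
    rate_a = 1 - phi(cut - mean_a)  # share of group A at or above the cut
    rate_b = 1 - phi(cut - mean_b)
    print(f"cut={cut:+.1f}  proficiency-rate gap = {rate_a - rate_b:.3f}")
# The printed gaps differ (roughly 0.150, 0.191, 0.092) even though the
# underlying distributional gap never changes -- Ho's (2008) distortion.
```

Because the proficiency-rate gap moves with an arbitrary cut score while the underlying distributions do not, comparisons of “percent proficient” across jurisdictions or years can suggest widening or narrowing gaps that are artifacts of the metric rather than real change.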
A growing body of frameworks and tools has emerged for measuring equity in education, highlighting a range of dimensions and indicators for school systems to monitor (e.g., Hyler et al., 2021; Alliance for Resource Equity, 2020; NASEM, 2019). These include student, teacher, and staff inputs; funding and infrastructure; curricula; school climate; leadership; and teaching practices. Measurement along any single dimension could take the form of an accounting of strengths and needs, documentation of evidence on a checklist, comparison of group differences, or calculation of more complex metrics. For example, student composition may be measured in terms of its diversity (e.g., Keylock, 2005), its similarity to the broader population (e.g., Reardon & Firebaugh, 2002; Atkinson, 1970), or each group’s exposure to other groups (e.g., Massey & Denton, 1988). Examining the relationship between dimensions, such as between demographics and inputs, then allows for measuring the extent to which all groups have equal access to those resources and opportunities. This could be calculated as correlations or as probability distributions (e.g., Shannon, 1948). Assessing the distribution of individuals and resources across organizational structures, or the distribution of individuals’ participation in and experience of various interactional processes, could serve as measures of inclusion. Other challenges emerge when measuring growth and gaps.
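As a concrete (and hypothetical) illustration of the diversity measures cited above, the sketch below computes Shannon (1948) entropy and Simpson diversity, which Keylock (2005) treats as special cases of a generalized entropy, for two made-up four-group school compositions.

```python
import math

def shannon_entropy(proportions):
    """Shannon (1948) entropy of a group composition; higher = more diverse."""
    return -sum(p * math.log(p) for p in proportions if p > 0)

def simpson_diversity(proportions):
    """Simpson diversity: the probability that two randomly drawn
    students belong to different groups."""
    return 1 - sum(p * p for p in proportions)

# Hypothetical compositions of two four-group schools (illustrative only):
evenly_mixed = [0.25, 0.25, 0.25, 0.25]
one_dominant = [0.85, 0.05, 0.05, 0.05]

for label, comp in (("evenly mixed", evenly_mixed), ("one dominant", one_dominant)):
    print(f"{label}: entropy={shannon_entropy(comp):.3f}, "
          f"Simpson={simpson_diversity(comp):.3f}")
# evenly mixed: entropy=1.386, Simpson=0.750
# one dominant: entropy=0.588, Simpson=0.270
```

Both indices rank the evenly mixed school as more diverse; which index to report depends on whether one cares more about rare groups (entropy weights them more heavily) or pairwise contact (Simpson).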
Building on these measures of diversity, equality, and inclusion to assess equity requires tracking change over time. A key conceptual distinction between equality and equity is that while equality focuses only on the present, equity recognizes the influence of past experiences. Although the
above measurement approaches account for situational differences, they do not capture historical differences. Tracking past and future change is critical, both to account for compounding historical inequalities and to assess whether investments are successful in subsequently reducing gaps. Future projections are essential for anticipating what is needed to achieve more equitable outcomes. The field needs reliable and transparent measures of equity from birth to college, not only to make sense of the multiple dimensions and indicators that influence outcomes, but more importantly, to guide policy and practice in providing the resources, opportunities, and supports necessary for educational equity.
Using Technology to Develop New Approaches and Tools for Measurement
The field of education has benefited greatly from new and emerging technology that allows researchers and practitioners to understand the mechanisms that improve students’ learning and development. Education technology has the potential to be a powerful tool for measurement and assessment, allowing new insights into learning and teaching. For instance, data can shed light on the learning process (e.g., observational data such as classroom audio or video recordings, learning management system behavior, and analyses of electronic documents). Web-scraping tools, education data mining, and learning analytics, and the data that result from these approaches, also offer new opportunities for measurement research.
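As one illustration of what “learning analytics” on learning management system data might look like, the sketch below derives a simple behavioral-engagement measure (active days per student) from a hypothetical event log. The record format, field names, and metric are illustrative assumptions, not any particular platform’s schema.

```python
from collections import Counter
from datetime import datetime

# Hypothetical learning-management-system event log (illustrative only):
# each record is (student_id, event_type, ISO timestamp).
events = [
    ("s1", "video_view", "2024-03-01T09:00"),
    ("s1", "quiz_submit", "2024-03-01T09:30"),
    ("s2", "video_view", "2024-03-01T10:00"),
    ("s1", "forum_post", "2024-03-02T08:15"),
]

# A simple learning-analytics measure: count of distinct active days per
# student, a common proxy for behavioral engagement.
active_days = Counter()
seen = set()
for student, _, ts in events:
    day = datetime.fromisoformat(ts).date()
    if (student, day) not in seen:
        seen.add((student, day))
        active_days[student] += 1

print(dict(active_days))  # {'s1': 2, 's2': 1}
```

Even a crude count like this turns raw clickstream logs into a student-level measure that can be validated against surveys or outcomes; richer analytics follow the same log-to-measure pattern.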
Developing Common Measures
A major problem that the field of education encounters is a plethora of measures created by education researchers and practitioners. Understanding and effecting system-wide implementation and improvement demands a coherent set of measures that link processes and outcomes across levels. For example, measures that are calibrated across tests to a single scale of measurement support the same inferences about student performance from one locality to another and from one year to the next (National Research Council, 1999). Collectively, such measures could facilitate moving beyond simplistic deficit frames that attribute gaps to students, by revealing the opportunity gaps in what education systems provide. Systems of measures further enable researchers and practitioners to examine the relationships between processes across levels (Bryk et al., 2015; Provost & Murray, 2011).
At the same time, an overemphasis on common measures may force researchers to use measures that are not well suited to the outcomes they focus on and may limit creativity and development of innovative measures. For this reason, the committee concluded that encouraging, but not requiring, common measures is ideal and allows investigators to pursue innovative measures as called for by theory and the needs of particular studies.
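The idea of calibrating measures “across tests to a single scale of measurement” can be sketched with linear equating under the mean-sigma method, one standard linking approach; the two forms’ summary statistics below are hypothetical.

```python
def linear_equate(x, mean_x, sd_x, mean_y, sd_y):
    """Mean-sigma linear equating: map a score from form X onto form Y's
    scale so it sits at the same standardized position in both distributions."""
    return mean_y + (sd_y / sd_x) * (x - mean_x)

# Hypothetical summary statistics for two test forms (illustrative only):
# form X: mean 50, SD 10; form Y: mean 500, SD 100.
score_on_x = 60
print(linear_equate(score_on_x, 50, 10, 500, 100))  # 60 on X maps to 600.0 on Y
```

A score one standard deviation above the mean on form X is placed one standard deviation above the mean on form Y, supporting the same inference about student performance from either test. Real equating designs (common items, common examinees) add machinery, but this captures the core calibration idea.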
IES’s investments over the past two decades have led to substantial methodological advancements in education research, particularly with respect to how to conduct randomized controlled trials. To continue to set the standard for research and respond to the current needs of education writ large, IES will need to expand the range of research on methods it funds. The committee recognizes that ESRA calls for IES to maintain a focus on causal research. At the same time, descriptive research is needed to fully understand the context of interventions and the nuances of implementation. This means IES will need to invest in research on methods and approaches beyond causal designs (e.g., descriptive, qualitative, and mixed methods) that can help answer questions about how and why interventions work or do not work across varying contexts.
IES should develop competitive priorities for research on methods and designs in the following areas:
- Small causal studies
- Understanding implementation and adaptation
- Understanding knowledge mobilization
- Predicting causal effects in local contexts
- Utilizing big data
IES should convene a new competition and review panel for supporting qualitative and mixed-methods approaches to research design and methods.
In order to respond to the new study types and priority topics and to support the continued growth of methods, new measures and new approaches to measurement will be required. IES has funded numerous studies focused on the development of measures. These studies have provided the field with a number of measures related to student outcomes and student characteristics and have expanded the field’s understanding of the ways interventions impact students’ learning and achievement. At the same time, there is a need for research on measures of other student outcomes, such as motivation, behavior, and social-emotional development, as well as measures related to education systems, education leaders, and teachers. For this reason, we offer a recommendation for IES to consider related to measurement
research that will support continued growth in other parts of NCER and NCSER’s portfolio.
IES should develop a competitive priority for the following areas of measurement research:
- Expanding the range of student outcome measures
- Developing and validating measures beyond the student level (e.g., structural and contextual factors that shape student outcomes; teacher outcomes; knowledge mobilization)
- Developing and validating measures related to educational equity
- Using technology to develop new approaches and tools for measurement
Alliance for Resource Equity. (2020). Dimensions of Equity. https://www.educationresourceequity.org/dimensions.
Anderson, A., O’Rourke, E., Chin, M., Ponce, N., Bernheim, S., and Burstin, H. (2018). Promoting health equity and eliminating disparities through performance measurement and payment. Health Affairs, 37(3), 371–377. https://doi.org/10.1377/hlthaff.2017.1301.
Anderson, T., and Shattuck, J. (2012). Design-based research: A decade of progress in education research? Educational Researcher, 41(1), 16–25. https://doi.org/10.3102/0013189X11428813.
Atkinson, A.B. (1970). On the measurement of inequality. Journal of Economic Theory, 2(3), 244–263.
Atteberry, A., Bischoff, K., and Owens, A. (2021). Identifying progress toward ethnoracial achievement equity across U.S. school districts: A new approach. Journal for Research on Educational Effectiveness, 14, 410–441.
Balazs, C.L., and Morello-Frosch, R. (2013). The three R’s: How community based participatory research strengthens the rigor, relevance and reach of science. Environmental Justice (Print), 6(1). https://doi.org/10.1089/env.2012.0017.
Bose, M., and Dey, A. (2009). Optimal Crossover Designs. Hackensack, NJ: World Scientific Publishing.
Brookman-Frazee, L., Zhan, C., Stadnick, N., Sommerfeld, D., Roesch, S., Aarons, G.A., Innes-Gomberg, D., Bando, L., and Lau, A.S. (2018). Using survival analysis to understand patterns of sustainment within a system-driven implementation of multiple evidence-based practices for children’s mental health services. Frontiers in Public Health, 6, 54. https://doi.org/10.3389/fpubh.2018.00054.
Brown, C.A., and Lilford, R.J. (2006). The stepped wedge trial design: A systematic review. BMC Medical Research Methodology, 6, 54. https://doi.org/10.1186/1471-2288-6-54.
Brown, C.H., Curran, G., Palinkas, L.A., Aarons, G.A., Wells, K.B., Jones, L., Collins, L.M., Duan, N., Mittman, B.S., Wallace, A., Tabak, R.G., Ducharme, L., Chambers, D.A., Neta, G., Wiley, T., Landsverk, J., Cheung, K., and Cruden, G. (2017). An overview of research and evaluation designs for dissemination and implementation. Annual Review of Public Health, 38, 1–22. https://doi.org/10.1146/annurev-publhealth-031816-044215.
Bryk, A.S. (2020). Improvement in Action: Advancing Quality in America’s Schools. Stanford, CA: Carnegie Foundation for the Advancement of Teaching.
Bryk, A.S., Gomez, L.M., Grunow, A., and LeMahieu, P.G. (2015). Learning to Improve: How America’s Schools Can Get Better at Getting Better. Cambridge, MA: Harvard Education Press.
Bryk, A.S., Gomez, L., and Grunow, A. (2011). Getting Ideas into Action: Building Networked Improvement Communities in Education. Stanford, CA: Carnegie Foundation for the Advancement of Teaching.
Burke, J.G., Lich, K.H., Neal, J.W., Meissner, H.I., Yonas, M., and Mabry, P.L. (2015). Enhancing dissemination and implementation research using systems science methods. International Journal of Behavioral Medicine, 22(3), 283–291. https://doi.org/10.1007/s12529-014-9417-3.
Cammarota, J., and Fine, M. (2010). Revolutionizing Education: Youth Participatory Action Research in Motion. New York: Routledge.
Cobb, P., Jackson, K., Smith, T., Sorum, M., and Henrick, E. (2013). Design research with educational systems: Investigating and supporting improvements in the quality of mathematics teaching and learning at scale. National Society for the Study of Education, 112(2), 320–349.
Cohen-Vogel, L., Allen, D., Rutledge, S., Harrison, C., Cannata, M., and T. Smith. (2018). The dilemmas of research-practice partnerships: Implications for leading continuous improvement in education. Journal of Research on Organization in Education, 2(1).
Cohen-Vogel, L., Tichnor-Wagner, A., Allen, D., Harrison, C., Kainz, K., Socol, A.R., and Wang, Q. (2015). Implementing educational innovations at scale: Transforming researchers into continuous improvement scientists. Educational Policy, 29(1), 257–277.
Collins, L.M., Murphy, S.A., and Strecher, V. (2007). The multiphase optimization strategy (MOST) and the sequential multiple assignment randomized trial (SMART): New methods for more potent eHealth interventions. American Journal of Preventive Medicine, 32(5 Suppl), S112–S118. https://doi.org/10.1016/j.amepre.2007.01.022.
Curran, G.M., Bauer, M., Mittman, B., Pyne, J.M., and Stetler, C. (2012). Effectiveness-implementation hybrid designs: Combining elements of clinical effectiveness and implementation research to enhance public health impact. Medical Care, 50(3), 217–226. https://doi.org/10.1097/MLR.0b013e3182408812.
Deming, W.E. (1982). Quality, Productivity and Competitive Position. Cambridge, MA: MIT Press.
Design-Based Research Collective. (2003). Design-based research: An emerging paradigm for educational inquiry. Educational Researcher, 32(1), 5–8.
Donovan, M.S. (2013). Generating improvement through research and development in education systems. Science, 340(6130), 317–319.
Eberhard, K. (2021). The effects of visualization on judgment and decision-making: A systematic literature review. Management Review Quarterly. https://doi.org/10.1007/s11301-021-00235-8.
Finnigan, K.S., Daly, A.J., and Che, J. (2013). Systemwide reform in districts under pressure: The role of social networks in defining, acquiring, and using research evidence. Journal of Educational Administration, 51, 476–497.
Flanagin, A., Frey, T., Christiansen, S.L., for the AMA Manual of Style Committee. (2021). Updated guidance on the reporting of race and ethnicity in medical and science journals. JAMA, 326(7), 621–627. https://doi.org/10.1001/jama.2021.13304.
Frank, K., Kim, J., Salloum, S., Bieda, K., and Youngs, P. (2020). From interpretation to instructional practice: A network study of early career teachers’ sensemaking in the era of accountability pressures and Common Core State Standards. American Educational Research Journal, 57, 2293–2338.
Fretheim, A., and Tomic, O. (2015). Statistical process control and interrupted time series: A golden opportunity for impact evaluation in quality improvement. BMJ Quality & Safety, 24, 748–752.
Garcia, N.M., López, N., and Vélez, V.N. (2017). QuantCrit: Rectifying quantitative methods through critical race theory. Race Ethnicity and Education, 21(2), 149–157. https://doi.org/10.1080/13613324.2017.1377675.
Gitomer, D.H., and Crouse, K. (2019). Studying the Use of Research Evidence: A Review of Methods. New York: William T. Grant Foundation.
Hill, H.C., and Briggs, D.C. (2020). Education Leaders’ Knowledge of Causal Research Design: A Measurement Challenge (EdWorkingPaper 20-298). Annenberg Institute at Brown University. https://doi.org/10.26300/vxt5-ws91.
Ho, A.D. (2008). The problem with “proficiency”: Limitations of statistics and policy under No Child Left Behind. Educational Researcher, 37(6), 351–360.
Huguet, A., Coburn, C.E., Farrell, C.F., Kim, D.H., and Allen, A-R. (2021). Constraints, values, and information: How district leaders justify their positions during instructional deliberations. American Educational Research Journal. First published online February 20, 2021. https://doi.org/10.3102/0002831221993824.
Huguet, A., Allen, A-R., Coburn, C.E., Farrell, C.C., Kim, D.H., and Penuel, W.R. (2017). Locating data use in the microprocesses of district-level deliberations. Nordic Journal of Studies in Education, 3(1), 21–28.
Hussey, M.A., and Hughes, J.P. (2007). Design and analysis of stepped wedge cluster randomized trials. Contemporary Clinical Trials, 28(2), 182–191. https://doi.org/10.1016/j.cct.2006.05.007.
Hyler, M.E., Carver-Thomas, D., Wechsler, M., and Willis, L. (2021). Districts Advancing Racial Equity (DARE) Tool. Palo Alto, CA: Learning Policy Institute.
Institute of Education Sciences. (IES) (2021). About Standards for Excellence in Education Research. https://ies.ed.gov/seer/index.asp.
Joyce, K.E., and Cartwright, N. (2020). Bridging the gap between research and practice: Predicting what will work locally. American Educational Research Journal, 57, 1045–1082.
Juran, J.M. (1951). Juran’s Quality Control Handbook. New York: McGraw-Hill.
Kelly, S., Olney, A.M., Donnelly, P., Nystrand, M., and D’Mello, S.K. (2018). Automatically measuring question authenticity in real-world classrooms. Educational Researcher, 47, 451–464.
Keylock, C.J. (2005). Simpson diversity and the Shannon–Wiener index as special cases of a generalized entropy. Oikos, 109, 203–207. https://doi.org/10.1111/j.0030-1299.2005.13735.x.
Klager, C.R., and Tipton, E.L. (2021). Commissioned Paper on the Summary of IES Funded Topics. Paper prepared for the National Academies of Sciences, Engineering, and Medicine, Committee on the Future of Education Research at the Institute of Education Sciences in the U.S. Department of Education. https://nap.nationalacademies.org/resource/26428/READY-KlagerTipton_IES_Topic_Analysis_Jan2022v4.pdf.
Kumar, G., Basri, S., Imam, A.A., Khowaja, S.A., Capretz, L.F., and Balogun, A.O. (2021). Data harmonization for heterogeneous datasets: A systematic literature review. Applied Sciences, 11, 8275. https://doi.org/10.3390/app11178275.
Kumar, G., Basri, S., Imam, A.A., and Balogun, A.O. (2020). Data harmonization for heterogeneous datasets in big data—a conceptual model. In R. Silhavy, P. Silhavy, and Z. Prokopova (Eds.), Software Engineering Perspectives in Intelligent Systems. CoMeSySo 2020. Advances in Intelligent Systems and Computing (Vol. 1294). Cham, Switzerland: Springer. https://doi.org/10.1007/978-3-030-63322-6_61.
Lewis, C. (2015). What is improvement science? Do we need it in education? Educational Researcher, 44(1), 54–61.
Li, Q. (2020). Overview of Data Visualization. Embodying Data: Chinese Aesthetics, Interactive Visualization and Gaming Technologies, 17–47. https://doi.org/10.1007/978-981-15-5069-0_2.
Mabry, P.L., Olster, D.H., Morgan, G.D., and Abrams, D.B. (2008). Interdisciplinarity and systems science to improve population health: A view from the NIH Office of Behavioral and Social Sciences Research. American Journal of Preventive Medicine, 35(2 Suppl.), S211–S224. (Erratum in: American Journal of Preventive Medicine, 35, 611).
Massey, D.S., and Denton, N.A. (1988). The dimensions of residential segregation. Social Forces, 67(2), 281–315. https://doi.org/10.2307/2579183.
May, H., Farley-Ripple, E.N., Blackman, H., Wang, R., Shewchuk, S., Tilley, K., and Van Horne, S. (forthcoming). Survey of Evidence in Education for Schools (SEE-S) Technical Report. Center for Research Use in Education, University of Delaware, Newark.
May, H., Blackman, H., Wang, R., Tilley, K., and Farley-Ripple, E.N. (2021). Characterizing Schools’ Depth of Research Use. Paper presented at the Annual Meeting of the American Educational Research Association, April 2021.
Means, B., and Harris, C.J. (2013). Towards an evidence framework for design-based implementation research. In B.J. Fishman, W.R. Penuel, A.R. Allen, and B.H. Cheng (Eds.), Design Based Implementation Research: Theories, Methods, and Exemplars. National Society for the Study of Education Yearbook (Vol. 112, pp. 320–349). New York: Teachers College.
Murphy, M., Fryberg, S., Brady, L., Canning, E., and Hecht, C. (2021). Global Mindset Initiative Paper 1: Growth Mindset Cultures and Teacher Practices. https://ssrn.com/abstract=3911594 or http://dx.doi.org/10.2139/ssrn.3911594.
Musci, R.J. (2020). Integrated Data Analysis in Prevention Science. [PowerPoint Slides]. https://prevention.nih.gov/education-training/methods-mind-gap/integrated-data-analysis-prevention-science.
National Academies of Sciences, Engineering, and Medicine (NASEM). (2019). Monitoring Educational Equity. Washington, DC: The National Academies Press. https://doi.org/10.17226/25389.
———. (2018). How People Learn II: Learners, Contexts, and Cultures. Washington, DC: The National Academies Press. https://doi.org/10.17226/24783.
National Research Council (NRC). (2000). How People Learn: Brain, Mind, Experience, and School: Expanded Edition. Washington, DC: The National Academies Press. https://doi.org/10.17226/9853.
———. (1999). Embedding Questions: The Pursuit of a Common Measure in Uncommon Tests. Washington, DC: The National Academies Press. https://doi.org/10.17226/9683.
Northridge, M.E., and Metcalf, S.S. (2016). Enhancing implementation science by applying best principles of systems science. Health Research Policy and Systems, 14, 74. https://doi.org/10.1186/s12961-016-0146-8.
O’Cathain, A., Murphy, E., and Nicholl, J. (2010). Three techniques for integrating data in mixed methods studies. British Medical Journal, 341, c4587.
Padilla, L.M., Creem-Regehr, S.H., Hegarty, M., and Stefanucci, J.K. (2018). Decision making with visualizations: A cognitive framework across disciplines. Cognitive Research: Principles and Implications, 3, 29. https://doi.org/10.1186/s41235-018-0120-9.
Penuel, W.R., Briggs, D.C., Davidson, K.L., Herlihy, C., Sherer, D., Hill, H.C., Farrell, C.C., and Allen, A.-R. (2017). How school and district leaders access, perceive, and use research. AERA Open, 3(2), 1–17. https://doi.org/10.1177/2332858417705370.
Penuel, W.R., Briggs, D.C., Davidson, K.L., Herlihy, C., Sherer, D., Hill, H.C., Farrell, C.C., and Allen, A.-R. (2016). Findings from a National Study of Research Use among school and district leaders. Technical report No. 1. Boulder, CO: National Center for Research in Policy and Practice.
Portela, M.C., Pronovost, P.J., Woodcock, T., Carter, P., and Dixon-Woods, M. (2015). How to study improvement interventions: A brief overview of possible study types. BMJ Quality & Safety, 24(5), 325–336. https://doi.org/10.1136/bmjqs-2014-003620.
Provost, L.P. (2011). Analytical studies: A framework for quality improvement design and analysis. BMJ Quality & Safety, 20(Suppl. 1), i92–i96.
Provost, L.P., and Murray, S. (2011). The Health Care Data Guide: Learning from Data for Improvement. San Francisco: John Wiley & Sons.
Reardon, S.F., and Firebaugh, G. (2002). Measures of multigroup segregation. Sociological Methodology, 32, 33–67. https://doi.org/10.1111/1467-9531.00110.
Reardon, S.F., and Ho, A.D. (2014). Practical issues in estimating achievement gaps from coarsened data. Journal of Educational and Behavioral Statistics, 40(2), 158–189.
Robertson, T., and Simonsen, J. (2012). Challenges and opportunities in contemporary participatory design. Design Issues, 28. https://doi.org/10.1162/DESI_a_00157.
Sablan, J.R. (2018). Can you really measure that? Combining critical race theory and quantitative methods. American Educational Research Journal, 56(1), 178–203. https://doi.org/10.3102/0002831218798325.
Sarkies, M.N., Skinner, E.H., Bowles, K.A., Morris, M.E., Williams, C., O’Brien, L., Bardoel, A., Martin, J., Holland, A.E., Carey, L., White, J., Haines, T.P. (2019). A novel counterbalanced implementation study design: Methodological description and application to implementation research. Implementation Science, 14, 45. https://doi.org/10.1186/s13012-019-0896-0.
Shalowitz, M.U., Isacco, A., Barquin, N., Clark-Kauffman, E., Delger, P., Nelson, D., Quinn, A., and Wagenaar, K.A. (2009). Community-based participatory research: A review of the literature with strategies for community engagement. Journal of Developmental and Behavioral Pediatrics 30(4), 350–361. https://doi.org/10.1097/DBP.0b013e3181b0ef14.
Shannon, C.E. (1948). A mathematical theory of communication. Bell System Technical Journal, 27, 379–423. https://doi.org/10.1002/j.1538-7305.1948.tb01338.x.
Shewhart, W.A. (1931). Economic Control of Quality of Manufactured Products. New York; London: Van Nostrand; MacMillan.
Stringer, E.T., and Aragón, A.O. (2020). Action Research. Thousand Oaks, CA: SAGE Publications.
Weber, M.S. (2018). Methods and approaches to using web archives in computational communication research. Communication Methods and Measures, 12(2–3), 200–215. https://doi.org/10.1080/19312458.2018.1447657.
Weber, M.S., and Yanovitzky, I. (Eds.). (2021). Networks, Knowledge Brokers, and the Public Policymaking Process. New York: Palgrave.
Weiss, C.H., and Bucuvalas, M.J. (1980). Social Science Research and Decision Making. New York, NY: Columbia University Press.
Yanovitzky, I., and Weber, M. (2020). Analyzing use of evidence in public policymaking processes: A theory-grounded content analysis methodology. Evidence & Policy: A Journal of Research, Debate and Practice, 16(1), 65–82.