Behavioral Economics: Policy Impact and Future Directions (2023)

Chapter: 12 Conducting and Disseminating Behavioral Economics Research

Suggested Citation:"12 Conducting and Disseminating Behavioral Economics Research." National Academies of Sciences, Engineering, and Medicine. 2023. Behavioral Economics: Policy Impact and Future Directions. Washington, DC: The National Academies Press. doi: 10.17226/26874.

12

Conducting and Disseminating Behavioral Economics Research

Much progress has been made in establishing the theoretical foundations of behavioral economics and in fruitfully applying this research in many policy domains. In this chapter, we step back from the content of this work to consider how research in the field is conducted. Issues with the conduct and dissemination of research can limit its impact, and we make recommendations about ways to strengthen the field, with a focus on three issues:

  • Replicability: Can researchers replicate the results from a given study in a follow-up study, even using modestly different means of analyzing the data?
  • Generalizability: Can the results from a study be generalized to other settings, populations, and groups, even when relatively minor changes are made in the nature of the intervention being tested?
  • Publication bias: Does the scientific process by which research gets disseminated and published bias the conclusions that researchers and the policy community draw from research?

These issues are not unique to behavioral economics or other behavioral and social science fields. We focus on their particular relevance to behavioral economics because the results of interventions are usually nuanced and effects depend on precise wording, framing, and the definitions of the target population, which makes generalizing results especially important and challenging. Effect sizes are often small, which means that the results may be particularly sensitive to the analytic method used and to the generalizability of the results. Finally, these issues are important for both policy makers and academic researchers as they collaborate to identify the research that will be most useful for addressing specific issues, translating it for application in the design and development of interventions, and implementing those interventions at scale (discussed in Chapter 13).

REPLICABILITY

We look first at replicability in a narrow sense: whether the results of a particular study or intervention will be the same if retested in another study for the exact same policy or intervention, on the same population used in the original study, using the same data. The volume of behavioral economics research has grown dramatically over the last four decades, and because the demand increasingly is for evidence about policy applications (rather than theoretical exercises), the interest in replication has grown.

Publication in peer-reviewed journals is regarded as the best way to ensure that high-quality and reliable research becomes known. In the early 2000s, a paper published in the flagship journal of the American Economic Association, the American Economic Review, showed that results previously published in dozens of papers could not be replicated (McCullough & Vinod, 2003). In some cases, the authors had not kept the data or computer programs that had produced their results, but errors of computation were also identified. The American Economic Review subsequently began requiring authors to make their data and computer programs publicly available, and other journals in the field followed suit; since then, many studies have been successfully replicated. The American Economic Review has recently gone further, assigning a data editor to replicate, at least in primary substance, all papers accepted for publication. This process has not been adopted at most journals because it requires a major commitment of resources. Some have suggested that journals should go even further and ask the reviewers of submitted articles to replicate analyses themselves, but this is regarded as infeasible in all but the simplest cases (e.g., randomized controlled trials with small numbers of observations and few variables).

A somewhat broader issue with replicability is the robustness of research findings: whether seemingly minor changes in follow-up studies, such as in the exact sample used in the analysis, the covariates used, or other methods of handling the data, change the results. It is common for researchers conducting follow-up analyses to select a subsample of the larger, complete dataset used in the initial study, and that subsampling can change the findings. In multivariate analyses, minor decisions about the estimation method, the computation of standard errors, and the control variables can have significant effects on results. In addition, most researchers conduct what is known as specification searching, meaning that they search for results that accord with their a priori expectations. Reports produced on this basis will present only a selective set of results based on that searching.

The review process can reveal some of the effects of such a search process, as peer reviewers ask authors to conduct robustness checks in order to detect whether search choices affected the reported results. But such checks require that the researchers who conducted the study provide not only the exact dataset and variables used in the published results but also the larger dataset from which the published data were drawn, along with all variables used in the analysis. This requirement is not always met in economics journals today, which often require only the data and variables used in the published results. Another possibility is for project proposals to include plans for an external group to replicate the authors’ analysis, with funding and data sharing as needed.

Journals in other social sciences now often ask authors to post their programs and data, although this was not common until a few years ago, when attention to reliability and reproducibility gained significant public attention (see National Academies of Sciences, Engineering, and Medicine, 2019). In response, many journals began indicating whether the authors of each paper have posted their data and code and preregistered their study.1 In psychology, there has been a dramatic increase in the publication of replication papers. In one large-scale effort, a group of researchers selected 100 lab experiments published in psychology journals and attempted to reproduce the originally published results. Only 37 percent of the replication experiments had statistically significant results, and the effect sizes were less than half those in the published papers, suggesting that the original studies had overstated the magnitude of the findings (Open Science Collaboration, 2015).

A barrier to making the data used in published articles available to other researchers is that behavioral economics data (and, more generally, behavioral sciences data) often have restricted access to protect privacy or for other reasons. Many of the studies cited in Chapters 5–10 were based on confidential records from company pension plans, schools, health care providers, and welfare departments, none of which are available to other researchers without specific permission from those who provided the data. Researchers conducting randomized controlled trials are also governed by strict privacy and confidentiality rules that prohibit release of the data in any form that might reveal the identity of the people in the study.

In some cases, it may be possible to do indirect replicability analysis without the data from a particular study, if a near-exact duplicate study can be undertaken. In many lab experiments, for example, the nature of the study population is precisely defined, as are the instructions given to the treatment and control groups. In such cases, a new study can be conducted on the exact same population with the exact same instructions.

___________________

1 Preregistration is the practice of publicly sharing the intended research and analysis plan before a study is carried out. In addition to letting other researchers know about ongoing work, it helps address the problem of researchers simply not reporting their results and can confirm that the research was actually conducted.

GENERALIZABILITY

We have noted that few of the studies identified in the six domains we examined used data from large-scale natural experiments or from design-based interventions with broad samples of the population of interest. Many of the studies in our review were conducted in a single geographical area, and many covered populations that are not representative of the general population in terms of race, gender, or socioeconomic status. Without tests of interventions and programs in different settings and different groups, it is difficult to know whether a particular policy tested in one setting and on one group would have the same results in a different setting and a different group.2

A related but conceptually distinct issue is whether minor variations in the treatment itself generate significant differences in findings. This issue is particularly important in behavioral approaches, especially nudges, where the framing (including the language used, the imagery chosen, and representation of choices) is carefully chosen by the study researchers. But whether differences in those framing issues and other minor variations would lead to different results cannot be known without testing those alternatives. Understanding these variations is important, and so too is testing in different settings and with different groups, to determine whether any treatment variations are different in only some settings or for only some groups.

In an ideal world, researchers would provide a complete mapping of how findings differ across a large set of minor variations in a treatment and across a wide variety of settings and groups, but there are many barriers to such an ideal. One is that academic research offers more professional incentives for testing new interventions than for retesting old ones: novelty is often of more interest than replication of an existing treatment in a different area or with a different population. Funders of research are likewise often more interested in testing new interventions than in learning about the generalizability of existing ones. Another barrier is that most interventions require the cooperation of an institution (a school, a private company, a health care provider), and such institutions are naturally interested primarily in tailoring an intervention to benefit the populations for which they are responsible, not in replicating evidence about what may have worked for other institutions in other settings.

___________________

2 Polman & Maglio (2022) suggest that some gains in generalizability can be made by making experimental settings more representative of the general population (e.g., not just students) and hence more realistic.

These barriers can constrain learning about who will most benefit from an intervention. As we emphasize throughout this report, a long-run goal in behavioral research is to learn which people respond to an intervention and under what circumstances. Achieving this goal may be impeded by an emphasis on either narrow populations that are not representative or broad study populations that produce average effects that dilute the findings that may apply to specific groups.

PUBLICATION BIAS

Publication bias is the term for the situation in which the research studies published in professional journals or other outlets, taken together, are not a reliable summary of all the findings in a field but rather a selective subset of them. While some degree of selectivity in publication is inevitable, it is a concern if the reporting favors only certain kinds of results. Publication bias is found in all social sciences, but its seriousness varies across disciplines and methods of study: the extent of publication bias is larger in sociology and political science than in economics, for example (Gerber & Malhotra, 2008a, 2008b; Brodeur et al., 2016; Camerer et al., 2016; Vivalt, 2019; Brodeur, Cook, & Heyes, 2020).

One kind of publication bias derives from conflict of interest: it occurs when an author interprets the evidence and reaches a particular conclusion because they will benefit financially. An example is a company funding research whose results may affect its business or profits. This type of conflict of interest has gained much attention in academic institutions in the last decade, though the issue is perhaps most acute in medical and engineering schools, where faculty may own firms or do extensive consulting work related to their research. Most universities now require strict acknowledgment of funding sources for all academic research, and many professional journals also require acknowledgment of funding sources and other possible conflicts of interest. Nonacademic institutions that publish research also generally report funding sources because those institutions almost exclusively conduct sponsored research. For example, a substantial percentage of randomized controlled studies of nudge interventions are funded by sponsors, and this is usually acknowledged in the published report.

A second type of publication bias occurs when certain kinds of results are less likely to be published than others. A finding that something did not work or perform as expected, such as a null or statistically insignificant result, is generally more difficult to publish than a finding of significance (e.g., Andrews & Kasy, 2019). In some cases, null results may not even be submitted for publication.

A third type of publication bias results from specification searching, noted above in connection with replicability. Researchers may tend to seek findings that meet their a priori expectations, and the difficulty in publishing null results and unexpected findings in professional peer-reviewed outlets reinforces this tendency. However, even projects that do not involve specification searching will be less likely to be published if the initial specification yields a null finding. Peer review may uncover some of these instances, as reviewers ask authors to test alternative specifications, but allowing independent researchers access to the full dataset on which the search took place is another method of addressing the bias.
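To see why specification searching inflates apparent significance, consider a minimal simulation (our illustration, with arbitrary parameters, not an analysis from this report). Even with pure noise, trying many subsample "specifications" of the same data raises the chance that at least one clears the conventional significance threshold:

```python
import random
import statistics

random.seed(0)

def t_stat(sample):
    """t-statistic for the sample mean against a true mean of zero."""
    n = len(sample)
    se = statistics.stdev(sample) / n ** 0.5
    return statistics.mean(sample) / se

def finds_significance(n_specs, n_obs=50, sub_size=30):
    """Draw pure-noise data, then try n_specs subsample
    'specifications'; report whether any yields |t| > 2."""
    data = [random.gauss(0, 1) for _ in range(n_obs)]
    return any(
        abs(t_stat(random.sample(data, sub_size))) > 2
        for _ in range(n_specs)
    )

trials = 2000
one_spec = sum(finds_significance(1) for _ in range(trials)) / trials
ten_specs = sum(finds_significance(10) for _ in range(trials)) / trials
print(f"false-positive rate with 1 specification:   {one_spec:.2f}")
print(f"false-positive rate with 10 specifications: {ten_specs:.2f}")
```

Each extra specification is another draw from the same noisy data, so the rate at which at least one "significant" result appears grows with the number of specifications tried, even though nothing real is being detected.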

Tests for the existence of publication bias typically rely on the classical theory of hypothesis testing in statistics. It is assumed that a given intervention has a true effect but that random samples from a population will produce a distribution of estimates centered on that true effect, sometimes higher and sometimes lower. This means that even if the true effect is, say, positive, a certain fraction of estimates should be zero or negative; if the published literature shows fewer zero or negative estimates than sampling variation would predict, that shortfall is one indicator of publication bias. Such tests are possible because applied researchers commonly rely on a conventional threshold for statistical significance, so selection on significance leaves a detectable imprint on the distribution of published estimates.3 Thus, when the distribution of published results is substantially different from what would be expected, publication bias is a likely culprit (Andrews & Kasy, 2019).
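The logic of such tests can be illustrated with a small simulation (ours, with illustrative parameter values). When studies of a small positive true effect are published only if statistically significant, negative estimates largely vanish from the published record and published magnitudes overstate the truth:

```python
import random

random.seed(1)

TRUE_EFFECT = 0.1   # small positive true effect (illustrative)
SE = 0.2            # sampling standard error of each study's estimate

estimates = [random.gauss(TRUE_EFFECT, SE) for _ in range(10_000)]

# With unbiased reporting, sampling noise alone guarantees a
# predictable share of negative estimates.
share_neg_all = sum(e < 0 for e in estimates) / len(estimates)

# Suppose journals publish only "significant" results (|z| > 1.96).
published = [e for e in estimates if abs(e / SE) > 1.96]
share_neg_pub = sum(e < 0 for e in published) / len(published)
mean_pub_pos = (
    sum(e for e in published if e > 0)
    / sum(e > 0 for e in published)
)

print(f"share of negative estimates, all studies: {share_neg_all:.2f}")
print(f"share of negative estimates, published:   {share_neg_pub:.2f}")
print(f"mean published positive estimate:         {mean_pub_pos:.2f} (true: 0.10)")
```

Too few published negative estimates relative to what sampling variation predicts, and published magnitudes several times the true effect, are the signatures such tests look for.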

RECOMMENDATION

Before offering possible solutions to these problems, we wish to emphasize that we do not suggest that the evidence reviewed in this report is unreliable. The results from some of the studies we analyzed have been replicated, and some of the interventions have been tested in multiple settings and found to have similar effects, as detailed in Chapters 5–10. In other cases, the evidence supports the robustness of the results, so publication bias is unlikely. However, addressing issues with replicability, generalizability, and publication bias would significantly increase the overall usefulness of behavioral economics research evidence.

___________________

3 The most common cutoff is a t-statistic of approximately two, which corresponds to a p-value of about 0.05, that is, statistical significance at the 5 percent level (a 10 percent level is sometimes used as well). The phenomenon of “p-hacking,” which has generated much discussion in recent years, is the practice of conducting specification searches until the p-value falls below the conventional threshold, equivalent to searching for specifications with a t-statistic of at least two.

Replicability

The most promising solution to the problem of replicability is for behavioral science researchers to make the data used in their analyses available so that other researchers can try to replicate their results (Simonsohn, 2013). Although there are barriers to this objective, discussed above, much more could be done to address this problem. There are still many journals that do not ask authors to provide their data and computer code for published articles, even when the data are in the public domain. Many university and for-profit academic publishers could provide more financial support for the editorial staff of professional journals to check computer code themselves. Proposals have also been made to set up research data centers or other secure computing facilities where proprietary data can be reanalyzed by licensed researchers who are subject to penalties for disclosure of private information. Funders—including the National Science Foundation and the National Institutes of Health—have already taken steps in this direction but could go further.

Generalizability

To address challenges with generalizability, researchers could test their interventions in a wide variety of settings, populations, and groups and test alternative forms of their interventions in those different settings. Another possibility is the creation of a database of studies, including ones that have not been published in selective journals, that would be accessible to all, so that null results and other informative findings are not overlooked. However, developing and maintaining such a database would be a significant and potentially expensive endeavor, and questions about who would do that work and oversee the inclusion of valid research would need to be addressed.

Publication Bias

One initial step toward acknowledging potential publication bias would be for those who conduct meta-analyses and literature summaries to include tests of publication bias.4 While there has been progress toward this goal, some meta-analyses still do not address such corrections or do so only cursorily. Since there are several methods of correcting for publication bias, the authors of a meta-analysis can present results under each of the leading corrections, much as researchers estimating event studies present point estimates from several leading estimators. Adopting such corrections is also critical for giving researchers, referees, and editors reasonable expectations about effect sizes, so that estimated effects can be judged large or small relative to those expectations.

___________________

4 Such tests include funnel plots and, ideally, imputations of the likely effect size if it were possible to observe the null-effect studies that are not published, for example, following the procedure in Andrews & Kasy (2019).
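As a concrete example of such a test, an Egger-style funnel-asymmetry regression checks whether imprecise studies systematically report larger standardized effects than precise ones, the pattern that selection on significance produces. The following sketch uses simulated studies and illustrative parameters; it is not a method prescribed in this report:

```python
import random

random.seed(2)

TRUE = 0.2  # common true effect across studies (illustrative)
studies = []
while len(studies) < 400:
    se = random.uniform(0.05, 0.5)        # heterogeneous study precision
    est = random.gauss(TRUE, se)
    # selective publication: significant results always appear,
    # insignificant ones only 10 percent of the time
    if abs(est / se) > 1.96 or random.random() < 0.10:
        studies.append((est, se))

# Egger-style regression of z = est/se on precision = 1/se.
# Absent selection the intercept is near zero; selection on
# significance pushes it upward (funnel asymmetry).
z = [e / s for e, s in studies]
prec = [1 / s for _, s in studies]
n = len(studies)
mz, mp = sum(z) / n, sum(prec) / n
slope = (
    sum((p - mp) * (v - mz) for p, v in zip(prec, z))
    / sum((p - mp) ** 2 for p in prec)
)
intercept = mz - slope * mp
print(f"Egger intercept: {intercept:.2f} (near 0 would suggest little asymmetry)")
```

A clearly positive intercept indicates that low-precision studies report inflated standardized effects, consistent with selective publication rather than sampling variation alone.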

Other possible steps could counter publication bias. One is for editors to promote topics that are less subject to publication bias and to establish a goal of publishing null findings. Researchers can contribute by conducting studies that predict the results of a future study, based on evidence to date and prior reasoning (DellaVigna, Pope, & Vivalt, 2019).5 Comparing these forecasted results with the actual result can reveal when a null result contrasts with the prior beliefs of the research community. Yet another suggestion is for researchers to record the designs of their analyses in advance and delineate the exact specifications and statistical methods to be used, which reduces the scope for specification searching. The American Economic Association maintains a registry of randomized controlled trials of this type. This suggestion could be taken further if studies could be accepted for publication on the basis of such preregistration, independent of the study’s subsequent findings.

Another possibility would be to create a more complete database of studies, one not limited by selective publication, which would also encourage the correct calculation of standard errors and support the study of small effect sizes. Such an effort would require substantial resources from sponsoring institutions, but government and nongovernment funders could suggest, or even require, that researchers report the results of all their analyses, including those with null results, and post them in a database that is searchable by others. This approach would make a larger set of studies available to other researchers. It would also be possible to take advantage of situations in which a large set of unpublished reports, not necessarily intended for formal publication, can nevertheless be made available to researchers. This is possible for randomized controlled trials conducted by nudge units and in cases where peer-reviewed study designs must be reported before being carried out.6

We note that many researchers do not take into account common features of data that can bias standard errors downward, such as autocorrelation, geographic correlation, and the multiple testing of hypotheses. Journal editors could require that appropriately adjusted calculations be provided to avoid understated standard errors and upward-biased findings of statistical significance.
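On the multiple-testing point, a standard remedy is a family-wise error correction. The sketch below shows a generic Holm-Bonferroni step-down procedure with made-up p-values (our illustration, not a method prescribed in this report):

```python
def holm_bonferroni(pvals, alpha=0.05):
    """Holm step-down correction: test hypotheses in order of
    increasing p-value against progressively looser thresholds,
    stopping at the first failure."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    rejected = [False] * m
    for rank, i in enumerate(order):
        if pvals[i] <= alpha / (m - rank):
            rejected[i] = True
        else:
            break  # all remaining (larger) p-values also fail
    return rejected

# Ten outcomes tested on the same data: four look "significant"
# at the naive 0.05 threshold, but only one survives correction.
pvals = [0.004, 0.03, 0.04, 0.20, 0.45, 0.07, 0.51, 0.33, 0.02, 0.09]
naive = sum(p < 0.05 for p in pvals)
corrected = holm_bonferroni(pvals)
print(f"naive rejections: {naive}, corrected rejections: {sum(corrected)}")
```

The gap between the naive and corrected counts is exactly the inflation of significance that unadjusted reporting of many hypothesis tests produces.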

___________________

5 Also see their Social Science Prediction Platform at https://socialscienceprediction.org

6 This was done for the Time Sharing Experiments in Social Science: see Franco, Malhotra, & Simonovits (2014).

In addition, many researchers do not select sample sizes large enough for their studies when the true effect size is relatively small, as is the case for many nudges. Encouraging researchers to design studies with appropriate sample sizes would lead to fewer chance findings.
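The sample-size point can be made concrete with a textbook power calculation (illustrative numbers, not from this report). Detecting a mean difference of 0.05 standard deviations at the 5 percent level with 80 percent power requires roughly a hundred times as many observations as detecting a 0.5 standard-deviation difference:

```python
import math
from statistics import NormalDist

def required_n_per_arm(effect, sd=1.0, alpha=0.05, power=0.80):
    """Per-arm sample size for a two-arm trial to detect a mean
    difference `effect` in an outcome with standard deviation `sd`,
    using a two-sided test (standard normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return math.ceil(2 * ((z_alpha + z_beta) * sd / effect) ** 2)

print(required_n_per_arm(0.5))    # a 'large' effect: 63 per arm
print(required_n_per_arm(0.05))   # a 'small' nudge-sized effect: 6280 per arm
```

An underpowered study that nonetheless reaches significance necessarily reports an exaggerated effect size, which compounds the publication-bias problems discussed above.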

As we acknowledge above, these issues are relevant across all social and behavioral science fields, but behavioral economics as a field is in a position to lead the way. By addressing these issues, behavioral economists and those who support their work can not only strengthen the field’s research and reputation but also set an example that will benefit other fields.

Recommendation 12-1: Researchers, funders of research, university leaders, and journal editors in behavioral economics should take steps to support the replicability and generalizability of behavioral economics research, more fully acknowledge publication bias and take steps to detect its presence, and counter publication bias using a variety of approaches.

Table 12-1 provides examples of the ways researchers, funders, journal editors, and university leaders could strengthen research in behavioral economics—and set an example for other fields in which similar problems arise.

TABLE 12-1 Examples of Ways to Strengthen Research

Goal Researchers Funders Journal Editors University Administrators
Encourage and reward the publication of null results X X X
Encourage the use of sample sizes sufficient to detect small effects X X
Conduct, encourage, and reward replication of results and research transparency X X X X
Commit to uncovering systematic use of p-hacking measures and funnel plots in meta-analyses and designs in order to enhance transparency X X X
Set standards for evidence gathering and evaluation X X
Develop a shared, searchable platform of studies that is maintained in perpetuity as a resource for future researchers, whether or not the studies’ results are published X X

REFERENCES

Andrews, I., & Kasy, M. (2019). Identification of and correction for publication bias. American Economic Review, 109(8), 2766–2794. https://doi.org/10.1257/aer.20180310

Brodeur, A., Lé, M., Sangnier, M., & Zylberberg, Y. (2016). Star wars: The empirics strike back. American Economic Journal: Applied Economics, 8(1), 1–32. https://doi.org/10.1257/app.20150044

Brodeur, A., Cook, N., & Heyes, A. (2020). Methods matter: P-hacking and publication bias in causal analysis in economics. American Economic Review, 110(11), 3634–3660. https://doi.org/10.1257/aer.20190687

Camerer, C. F., Dreber, A., Forsell, E., Ho, T. H., Huber, J., Johannesson, M., Kirchler, M., Almenberg, J., Altmejd, A., Chan, T., & Heikensten, E. (2016). Evaluating replicability of laboratory experiments in economics. Science, 351(6280), 1433–1436.

DellaVigna, S., Pope, D., & Vivalt, E. (2019). Predict science to improve science. Science, 366(6464), 428–429. https://doi.org/10.1126/science.aaz1704

Franco, A., Malhotra, N., & Simonovits, G. (2014). Publication bias in the social sciences: Unlocking the file drawer. Science, 345(6203), 1502–1505. https://doi.org/10.1126/science.1255484

Gerber, A. S., & Malhotra, N. (2008a). Publication bias in empirical sociological research: Do arbitrary significance levels distort published results? Sociological Methods & Research, 37(1), 3–30. https://doi.org/10.1177/0049124108318973

Gerber, A. S., & Malhotra, N. (2008b). Do statistical reporting standards affect what is published? Publication bias in two leading political science journals. Quarterly Journal of Political Science, 3(3), 313–326.

McCullough, B. D., & Vinod, H. D. (2003). Verifying the solution from a nonlinear solver: A case study. American Economic Review, 93(3), 873–892.

National Academies of Sciences, Engineering, and Medicine. (2019). Reproducibility and replicability in science. The National Academies Press.

Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716. https://doi.org/10.1126/science.aac4716

Polman, E., & Maglio, S. J. (2022). Improving the generalizability of behavioral science by using reality checks: A tool for assessing heterogeneity in participants’ consumership of study stimuli. Perspectives on Psychological Science. https://doi.org/10.1177/17456916221134575

Simonsohn, U. (2013). Just post it: The lesson from two cases of fabricated data detected by statistics alone. Psychological Science, 24(10), 1875–1888. https://doi.org/10.1177/0956797613480366

Vivalt, E. (2019). Specification searching and significance inflation across time, methods and disciplines. Oxford Bulletin of Economics and Statistics, 81(4), 797–816. https://doi.org/10.1111/obes.12289

Suggested Citation:"12 Conducting and Disseminating Behavioral Economics Research." National Academies of Sciences, Engineering, and Medicine. 2023. Behavioral Economics: Policy Impact and Future Directions. Washington, DC: The National Academies Press. doi: 10.17226/26874.
×

This page intentionally left blank.

Behavioral economics, a field grounded in collaboration between economists and psychologists, focuses on integrating a nuanced understanding of behavior into models of decision making. Since the mid-20th century, this growing field has produced research in numerous domains and has influenced policymaking, research, and marketing. However, little has been done to assess these contributions and review evidence of their use in the policy arena.

Behavioral Economics: Policy Impact and Future Directions examines the evidence for behavioral economics and its application in six public policy domains: health, retirement benefits, social safety net benefits, climate change, education, and criminal justice. The report concludes that the principles of behavioral economics are indispensable for the design of policy, and it recommends integrating behavioral specialists into policy development within government units. In addition, the report calls for strengthening research methodology and identifies research priorities for building on the accomplishments of the field to date.
