3
Lessons Learned and Future Pathways
INSIGHTS FROM THE ANALYTIC FRAMEWORK
Dylan Rebstock (program officer at the National Academies of Sciences, Engineering, and Medicine) invited workshop participants to share insights gleaned from their experiences with and discussions about the Analytic Framework.
Scott Page (expert contributor for the Analytic Framework and John Seely Brown Distinguished University Professor of Complexity, Social Science, and Management at the University of Michigan) said that although he does not work with surveys specifically, he thinks about the usefulness of different representations. He encouraged intelligence analysts to embrace the depth of the science, which includes a good statistical understanding of how to think about surveys; to understand the “art” of conducting surveys; and to emphasize the link between the survey and its purpose.
Courtney Kennedy (expert contributor for the Analytic Framework and director of survey research at Pew Research Center) reiterated that, if possible, intelligence analysts should carefully design approaches during periods of calm instead of waiting until “go-mode” to tackle difficult ethical issues and methodological challenges with new types of data. Page suggested that analysts ask themselves the following questions during those periods of calm: What is the set of questions that might be asked, and what is our cache of polling information? What information assets are available, how often are they needed, and how often are they rerun? Josh Pasek (associate professor of communication and media and political science, faculty associate in the Center for Political Studies, and core faculty for the Michigan Institute for Data Science at the University of Michigan) agreed that analysts should lay the groundwork for unexpected situations by developing a baseline of questions, which also makes it easier to recognize how old data relate to new data.
An intelligence analyst remarked that the depth of the workshop’s conversation about ethics was unexpected: although ethically challenging situations arise regularly, analysts rarely devote this much time to discussing them. Reflecting on the final scenario discussed during the workshop, he questioned whether obtaining some measurement, even if it is flawed and ethical boundaries have been pushed, is better than allowing a false public narrative to remain dominant. He urged the intelligence community (IC) to continue to think deeply about these ethical issues and the related issues of data quality.
GAPS IN THE ANALYTIC FRAMEWORK
Rebstock pointed out that the Analytic Framework could not cover all areas of interest. For example, Charles Lau (co-lead expert contributor for the Analytic Framework and director of the International Survey Research Program at RTI International) highlighted the lack of information on the relationship between attitudes and behavior in the Analytic Framework. Kennedy added that the academic literature more broadly is lacking in terms of how to deal with sensitive measurements; the scientific community has more work to do to provide better tools to collect those types of measurements.
An analyst asked whether useful information could be collected from a snowball poll of elites across several countries. Bautista suggested using the quality tool in the Analytic Framework to try to understand the research process and develop more confidence in the credibility of the information and the soundness of the conclusions that are drawn from it. An analyst would explore the following questions about the data quality before evaluating the conclusions: Who sponsored the study? Who was approached to participate in the study? Who conducted the study? Is the survey authentic? Were professional standards followed? The “beauty” of the Analytic Framework, he continued, is that it is a portable tool that intelligence analysts can apply to many types of scenarios.
Pasek added that it is important to consider strategies to approach a true black box. If the black box is worth exploring and no other path exists to obtain its data, analysts would need to determine whether they could validate parts of it in enough ways to trust the observations. Amaya noted that black boxes are usually a bad indicator, and black box processes are iterative: once an analyst determines how much validation is possible, the analytic plan might need to be revised. She cautioned against making comparisons across countries with a black box because it is impossible to know what has changed across the countries. She pointed out that black box scenarios and snowball samples (the latter of which is discussed in her chapter in the Analytic Framework) are often well suited for qualitative approaches—the data can reveal a theme, which is a “safer” way to use these data and build confidence in them. She suggested that intelligence analysts study everything that could go wrong in these situations, understand the details of the relevant sampling approaches, and use Bautista’s rating system to evaluate quality. Lau also emphasized the danger in drawing cross-country inferences using these types of data sources because each country likely has a different sampling approach, recruitment approach, and composition. He commented that these types of sources are better suited to reveal qualitative insights about what is happening within countries.
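The caution about quantitative inference from snowball samples can be illustrated with a small simulation. All numbers below are invented for illustration: the sketch approximates referral-chain recruitment with degree-weighted sampling, the standard simplification, to show how an attitude that correlates with connectivity gets overestimated.

```python
import random

random.seed(42)

# Hypothetical population of 10,000 people. "Degree" is the number of
# acquaintances; in this simulated population, the well connected are
# also more likely to hold the attitude being measured.
N = 10_000
people = []
for i in range(N):
    degree = random.choice([2, 2, 2, 5, 5, 20])  # a few highly connected hubs
    support = random.random() < (0.3 + 0.02 * degree)
    people.append({"id": i, "degree": degree, "support": support})

# Snowball-style recruitment, approximated: respondents are reached with
# probability proportional to their connectivity (sampling with replacement).
weights = [p["degree"] for p in people]
sample = random.choices(people, weights=weights, k=500)

pop_rate = sum(p["support"] for p in people) / N
sample_rate = sum(p["support"] for p in sample) / len(sample)
print(f"population support: {pop_rate:.2f}")
print(f"snowball estimate:  {sample_rate:.2f}")  # biased upward here
```

Because the sampling weights differ by country network structure, two such estimates are not comparable across countries, which is the point Amaya and Lau made about preferring qualitative use of these data.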
Because intelligence analysts might be unfamiliar with the approaches presented in the Analytic Framework, Bautista proposed that they practice, essentially training themselves until these strategies become part of their everyday thinking. Rebstock requested that the IC provide feedback in the coming months on whether and how the Analytic Framework is being used.
REFLECTIONS ON THE ANALYTIC FRAMEWORK PROJECT
Elizabeth Zechmeister (co-lead expert contributor for the Analytic Framework, and Cornelius Vanderbilt Professor of Political Science and director of the Latin American Public Opinion Project at Vanderbilt University) described the learning process required for academics to better understand the work of intelligence analysts. She remarked that academics tend to look critically at data and focus on the challenges that arise when trying to use a dataset or methodology, a mindset that contrasts with that of intelligence analysts, who have an immediate task to accomplish and prioritize accordingly. Thus, she said that analysts benefit from having quality checklists and menus of options for data collection and inference.
Despite the differences between the academic and policy arenas, Zechmeister continued, the workshop discussions revealed several commonalities between the two communities: (1) a shared commitment to scientific rigor in producing the best possible estimates of opinion, along with assessments of their certainty and stability; (2) agreement on the importance of normalizing the integration of ethics into empirical research with respect to human subjects, the research team, and the potential long-term consequences of research efforts; and (3) a mutual understanding of the challenge of translating survey science for educated nonexperts—more work could be done to identify language and frames that will increase the trust these stakeholders have in the work.
Zechmeister explained that the Analytic Framework is intended to be as broadly useful to the IC as possible, reflecting varied levels of dialogue to appeal to analysts with different types of experience and different responsibilities within the IC. She expressed her hope that the Analytic Framework provides helpful guidance and resources and encouraged the IC to explore its four commissioned papers in greater detail: (1) “Drawing Inferences from Public Opinion Surveys: Insights for Intelligence Reports,” written by Bautista, presents a two-fold rating system to evaluate the quality of a dataset; (2) “Alternatives to Probability-based Surveys Representative of the General Population for Measuring Attitudes,” written by Amaya, offers an inventory of different nonprobability sampling approaches as well as a discussion of the advantages and disadvantages of each; (3) “Ascertaining True Attitudes in Survey Research,” written by Kanisha Bond (assistant professor of political science at Binghamton University, State University of New York), provides examples of approaches to implicit attitudes and sensitive topics; and (4) “Integrating Data Across Sources,” written by Pasek and Sunghee Lee (research associate professor at the Institute for Social Research at the University of Michigan), presents an inventory of methods for data integration (NASEM, 2022). Zechmeister also highlighted several of the practical ideas that surfaced during the workshop, including the use of list experiments and issues of data quality in online research.
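One of the practical ideas mentioned above, the list experiment, can be sketched in a few lines. The data here are simulated, and the prevalence and item probabilities are invented for illustration: respondents report only a count of items, so no individual answer reveals the sensitive attitude, yet the treatment-minus-control difference in mean counts recovers its prevalence.

```python
import random
from statistics import mean

random.seed(0)

# Hypothetical list experiment: the control group counts how many of 3
# innocuous items apply to them; the treatment group sees the same list
# plus one sensitive item.
TRUE_PREVALENCE = 0.25  # assumed value, used only to simulate responses

def respond(treated: bool) -> int:
    count = sum(random.random() < 0.5 for _ in range(3))  # innocuous items
    if treated and random.random() < TRUE_PREVALENCE:
        count += 1  # sensitive item endorsed
    return count

control = [respond(False) for _ in range(2000)]
treatment = [respond(True) for _ in range(2000)]

# Difference-in-means estimator of the sensitive attitude's prevalence.
estimate = mean(treatment) - mean(control)
print(f"estimated prevalence: {estimate:.2f}")  # should be near TRUE_PREVALENCE
```

The design choice is the trade-off Bond’s paper addresses: anonymity for the respondent at the cost of a noisier estimate than a direct question would give.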
Zechmeister reviewed the key themes presented by the Analytic Framework, illuminating the importance of
- assessing quality across multiple dimensions;
- planning before conducting analyses (e.g., availability of documentation, design of analysis strategy, and articulation of expectations);
- elevating the role of humans in the research process;
- analyzing metadata to understand how a survey was conducted as part of the ethics and quality checks and to make inferences more sound;
- thinking about culture and its influence on survey response;
- keeping pace with the changing landscape of technology for data collection and analysis (e.g., rolling data collections and experiments, Short Message Service surveys, and online polls);
- pushing at the frontier as a community and continuing to learn from one another; and
- remaining aware of the differences across higher- and lower-capacity or income environments when applying cutting-edge technologies (e.g., the efficacy of a phone survey in a high-income, high-capacity context versus in a multilingual low-income, low-capacity context).
The workshop discussions revealed several challenges to implementing the guidance in the Analytic Framework. Zechmeister noted that intelligence analysts often respond to emerging policy questions in reactive situations, without the time or space to develop a research plan. Furthermore, their incentives are aligned with producing a recommendation for a policy maker who is a generalist. Thus, a dichotomy exists between this day-to-day reality in which the IC operates and the Analytic Framework, which presents a “deep dive” into the foundations of public opinion research. This reinforces the value of the IC’s taking time to have the types of discussions that occurred during the workshop, to research and prepare in advance of crisis situations, to develop checklists, and to expand its inventory of data analysis techniques. She applauded the IC for its exceptional work to make the best recommendations with imperfect data. She posited that the Analytic Framework is a step toward future action but not the last step, with space to build on its foundation and emphasize “next-generation questions.”
Lau expressed his appreciation for the “pressure-testing” of the Analytic Framework that occurred throughout the workshop’s scenario-based discussions and noted how much he learned about its application. He commended Zachariah Mampilly (expert contributor for the Analytic Framework and Marxe Endowed Chair of International Affairs at the Marxe School of Public and International Affairs, City University of New York) for raising questions about ethics early in the experts’ discussions and echoed Bond’s point that the ethical and technical aspects of survey measurement are intertwined. Lau reiterated that the work of intelligence analysts is very difficult—they are experts in their own right.
A representative from the IC conveyed her gratitude for all of the contributions to the Analytic Framework and said that it answered the IC’s three key questions: (1) What is the state of the art? (2) How can that knowledge be used in the IC’s decision-making processes? (3) How can those insights best be combined and communicated to the policy community? The four scenarios discussed during the workshop demonstrated that intelligence analysts can conduct high-quality analyses and continue to defend national security with the resources available in the Analytic Framework, which is multilayered, with ample citations to other relevant literature.
Rebstock and Barbara Wanchisen (senior advisor for the behavioral sciences in the Division of Behavioral and Social Sciences and Education within the National Academies) observed that the work of the Analytic Framework could also be useful for other government agencies. They thanked the IC for its vision and support of the project, as well as all staff, expert panelists, and authors who contributed to the success of the endeavor.