Analysis and Final Thoughts
Vision is not enough; it must be combined with venture. It is not enough to stare up the steps; we must step up the stairs.
Having outlined several frameworks that could be used to develop an operational version of an advanced forecasting system, the Committee on Forecasting Future Disruptive Technologies discussed further the challenges in building a next-generation forecasting system and what methods and actions would help ensure such a system’s success. To that end, this chapter discusses the following: whether a next-generation persistent disruptive technology forecasting system can be built using existing technologies and methods, the features and characteristics of a next-generation forecasting system, and laying the foundation for subsequent steps.
CAN A NEXT-GENERATION PERSISTENT DISRUPTIVE TECHNOLOGY FORECASTING SYSTEM BE BUILT USING EXISTING TECHNOLOGIES AND METHODS?
For this second of its two reports, the committee was originally asked to evaluate the outputs of Signtific (previously called X2), a forecasting platform under development in conjunction with the Defense Intelligence Agency (DIA) and the Office of the Director of Defense Research and Engineering (DDR&E). A change in task occurred after the committee had received the outputs of Signtific and found them of no use in producing a forecast of potentially disruptive technologies. Specifically, the data were not detailed enough to allow the committee to refute or confirm its hypothesis that input generated from a younger generation of researchers, technologists, and entrepreneurs would produce different results from a traditional, expert-based forecast. The region and culture from which data points originated were also not recorded, and therefore the data could not be used to determine if different regions and cultures would forecast different technologies with different impacts than those in forecasts produced by Western experts. Overall, the limited data produced by methodologies employed by the Signtific team did not produce enough signals and signposts to track potentially disruptive technologies successfully.
Although the experience with Signtific highlighted some of the challenges of building a robust data set from innovative methodologies, it did not prove that a next-generation forecasting system cannot be built. The Forecasting Future Disruptive Technologies Workshop convened by the committee on November 5, 2009, gathered experts in the fields of commercial forecasting, software design, graphic user interfaces, and social networks. At the beginning of the day, the participants were asked if they thought that it would be possible to build a forecasting system with the key design criteria set forth in the committee’s first report (NRC, 2010). The participants who commented all said that the managerial and technical obstacles to building such a system could be overcome. Even if faced
with a system model in need of substantial development or change to be of use, the committee agreed that it would be in the best interest of the sponsor to continue efforts to build a next-generation forecasting system.
Observation. The illustrative models developed at the workshop indicate that the design and building of a 1.0 version persistent forecasting system for disruptive technologies are possible using existing technologies and forecasting methods and can be achieved within a reasonable time frame using a modest level of human and financial resources.
FEATURES OF A NEXT-GENERATION SYSTEM
Six Functions of the Version 1.0 System
Independent of the forecasting model used, a version 1.0 system for forecasting disruptive technologies should provide stakeholders and decision makers with a current forecast of potential futures and the potential disruptive technologies and impacts that would be the drivers of those futures as the current forecast applies to the stakeholders’ and decision makers’ domain of interest. A 1.0 system should contain six important functions: (1) needs definition, (2) collecting and developing alternative futures, (3) roadmapping, (4) engagement, (5) tracking, and (6) feedback. All four 1.0 options described in Chapter 2 incorporate these six important functions in their various approaches.
Needs Definition

The 1.0 system should provide a mechanism to help stakeholders clearly define their needs in order to maximize the utility of the forecast. Generally, a technology forecast starts with one or more high-level questions. For example: What will the U.S. energy needs look like in 20 years? What sources of energy will the United States rely on and what technologies are needed to exploit those sources? The questions generally include a description of the community (the United States) that is being affected, a time frame (20 years), a domain of interest (energy), and technological impact (exploitation of sources of energy). These questions should then be approached with an awareness of the stakeholders’ perspective. For a persistent system, especially one that is used by more than one stakeholder, there is usually a method to collect “big, impactful” questions and a way for users, both experts and the crowd, to inspect and add to the collection. Sometimes these questions are categorized and ranked on the basis of a predetermined priority of needs or potential impact.
Collecting and Developing Alternative Futures
In a persistent system, forecasters, experts, and the crowd can hypothesize about alternative futures. An effective forecasting system should seek from these groups a broad range of alternative futures. This can be accomplished using traditional forecasting approaches (workshops, meetings, expert interviews, polling) as well as newer approaches (Web-based collections, crowdsourcing, data mining, gaming, simulation, and prediction markets). These alternative futures should describe what the impact of disruptive change on the selected community might be, what preconditions would be necessary for the disruption to occur, which technologies might contribute to the disruptions, and what might be the source of the disruptive technology. These alternative forecasts should stimulate discussion and debate in addition to providing new ideas for alternative futures. A useful persistent system will capture the dialogue and discussions generated around these alternative futures. In some systems, users rank the likelihood of each alternative future; the committee believes that it is as important (if not more so) to rank the importance and impact of each alternative future.
Roadmapping

A useful forecast should show how each alternative future can evolve from the present. This is done in a process called roadmapping, in which experts look at each alternative future that is considered important enough for analysis and develop a roadmap of events between the present and the future. This roadmap can be used for tracking events as they occur, and it can also provide insight into the necessary conditions and technologies that would lead to a specific future.
Engagement

The version 1.0 system should provide a variety of tools to engage stakeholders and decision makers. These tools should include dashboards, lists, narratives, reports, videos, simulations, and gaming to help communicate to stakeholders and decision makers the range of alternative futures (what could happen), their potential impact (who is affected by what, and how), the likelihood (probability of occurrence) of that future, and the path (roadmap) from today to the future. These tools should help stakeholders and decision makers better understand the possible futures and make actionable decisions (regarding resource allocations, policy investment strategies, organizational structure, goals and strategic priorities, and so on).
Tracking

A version 1.0 system should collect and track new signals and compare them to the roadmap to detect early-warning signs of disruption. In addition to tracking technological advances, it should examine other forms of disruptive change (e.g., financial, social, governmental, environmental, and scientific).
Feedback

A version 1.0 system should have a mechanism that provides feedback to the system development team, allowing the spiral development of the system as new knowledge is gained from operating the system and as new capabilities and requirements are added.
In addition to the forecasting system’s six primary functions, the skillful incorporation of narratives and the engagement of a broad audience through use of an open platform bring important enhancements to current data-gathering and analysis methods. These techniques will help to create a truly next-generation forecasting system.
The Use of Narrative to Initiate Analysis
In its first report, the committee used a traditional model of forecasting to postulate a process: “When forecasting a disruptive technology, forecasters should use reasoned analysis and seek expert advice to understand what are the required foundational technologies and tools to engineer a new innovation. Estimating the timing of disruptive technologies requires an understanding of the sequence and an estimation of the timing of the emergence of foundational technologies and enabling tools” (NRC, 2010, p. 15).
The participants of the November 2009 workshop commented that predicting the exact timing of the disruption was not critical. Instead, forecasters must understand the range of potential futures and paths that lead to predicted futures. One of the major observations of the workshop participants was the importance of looking not for technologies that would be disruptive but for compelling narratives of potential disruptive scenarios. After a scenario is defined, the technologies or other elements that need to converge to enable the disruption can be imagined. Importantly, scenarios do not have to be time-specific. They identify signposts to look for when gauging the likelihood of a particular disruption.
It is important that the narrative be emotionally powerful, projecting the extreme fears and aspirations of a society. To capture the aspirations of a society, it is important to have regional representatives participate in the creation of these scenarios and in the development of their corresponding narratives. The committee believes that those who are most likely to be affected by a disruption will write the most powerful scenarios and narratives. The narratives created by these scenarios should be moving enough to catalyze change in policies and resource allocation while describing the necessary technologies and applications that would enable the projected events. See Box 3-1.

BOX 3-1
Narrative as Input and as Output

The following comments were made by participants at the November 5, 2009, Persistent Forecasting of Disruptive Technologies Workshop. The unedited workshop transcripts from which these comments are extracted are provided in full in Appendixes D and E on the CD included inside the back cover of the report and in the PDF available at http://www.nap.edu/catalog.php?record_id=12834.
Observation. A disruptive technology forecasting system focuses on technological wildcards: innovations that have a low or unknown probability of development but, if developed, would have enormous impact.
Observation. Beginning the forecasting process with narratives of potential futures rather than starting with a list of potential technologies produces more useful insights into possible outcomes.
Recommendation 3-1. The 1.0 version of a forecasting system should begin developing a forecast of future events or conditions by constructing structured narratives describing disruptive impacts within a specific contextual framework related to particular technology use. It should then use backcasting to roadmap potentially disruptive technologies and the triggers that enable these technologies, and then iterate the mileposts for the narrative.
Narrative defines and constrains a problem. A customer might want to have an answer to a question like, What is the chance that in 10 or 15 years there will be a way to provide troop transport that does not depend on gasoline? The question sets a process in motion. Participants start thinking about potential futures, such as a future in which the Armed Forces are not dependent on petroleum-based fuels. The next step might be using backcasting to analyze which enablers would be used to reach this future. These enablers may or may not be related to technology. They might include, for example, a change in the use case for an established technology, the regulatory environment, or market conditions (e.g., the price of oil), or a shift in social attitudes.
The narrative idea initiates a dynamic flow. From that narrative idea, analysts or participants generate hypotheses, map and define potential scenarios of enabling technologies that could bring that future to pass, analyze scenarios and technologies, and then iterate narratives and hypotheses with additional data. Nothing is thrown away. Scenarios are kept and roadmapped with the necessary innovations, breakthroughs, and “miracles” that they would require. Enabling technologies are identified, and thresholds, signposts, and tipping points are marked for tracking. The system needs to mark these indicators and constantly scan for them. The threads that originate from the main narrative are the start of a broader, richer collection of variations of the narrative, all of which are added to the database and form part of the process. The richness of the ongoing story that unfolds defines the measurements, signposts, and tipping points to monitor and track, looking for a convergence of miracles—of technology, social change, or other factors. The emergent signals will dictate where the narrative goes. The participants of the workshop observed that there are currently no large databases in which such narratives can be stored, retrieved, and used. This type of functionality could increase the likelihood of building successful forecasts of disruptive technologies.
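The flow described above (narratives retained, roadmapped with their required enablers, and continuously scanned for signposts) can be sketched as a simple repository. The classes, field names, and matching logic below are purely illustrative assumptions about how such a store might be organized, not a description of any existing or planned system:

```python
from dataclasses import dataclass, field

@dataclass
class Signpost:
    """An observable indicator that a narrative is becoming more likely."""
    description: str
    observed: bool = False

@dataclass
class Narrative:
    """A scenario of a potential future, with the enablers it requires."""
    title: str
    enablers: list = field(default_factory=list)   # required "miracles"
    signposts: list = field(default_factory=list)  # indicators to track

class NarrativeRepository:
    """Stores every narrative; nothing is thrown away."""
    def __init__(self):
        self.narratives = []

    def add(self, narrative):
        self.narratives.append(narrative)

    def scan(self, signals):
        """Mark signposts matched by incoming signals; return the titles
        of narratives whose indicators have all been observed."""
        converging = []
        for n in self.narratives:
            for sp in n.signposts:
                if sp.description in signals:
                    sp.observed = True
            if n.signposts and all(sp.observed for sp in n.signposts):
                converging.append(n.title)
        return converging

# Usage: a narrative about petroleum-free troop transport
repo = NarrativeRepository()
repo.add(Narrative(
    title="Petroleum-free troop transport",
    enablers=["high-density batteries", "synthetic fuels"],
    signposts=[Signpost("battery cost below threshold"),
               Signpost("military electrification pilot")]))
print(repo.scan({"battery cost below threshold"}))    # prints []
print(repo.scan({"military electrification pilot"}))  # prints ['Petroleum-free troop transport']
```

Because observed signposts persist between scans, a narrative can converge gradually as signals arrive over time, which is the persistence property the committee emphasizes.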
Observation. Many factors affect alternative futures, and it is important to understand that more than just technologies need to be tracked.
Observation. There are no dedicated forecasting repositories that can be queried for data organized in narratives—potential future scenarios, impacts of a scenario, or implications of a scenario should it happen.
Recommendation 3-2. The responsible organization should develop a repository of narratives of potential futures, organized both globally and by region, that include potential economic, technological, and societal impacts.
In a persistent system, the narratives can be continually iterated and new data can be fed back into the narrative lines to inform and change them. Inputs generate outputs that become additional inputs as the storyline is furthered and refined or modified on the basis of new emerging signals. Each narrative describes a single potential future and can be used to generate a roadmap of possible events between the present and that future. The roadmaps are published and then refined using an iterative process of generating new narratives and generating updated roadmaps based on new signals and new scenarios. A useful output to the user would be to list the highest-impact narrative(s) along with descriptions of enabling technologies, and what conditions occur for the narrative to unfold. Integral to the impact of the technology is the context in which it is used and how it is used.
The use case for the technology is an important part of the narrative scenario. Unconventional uses of existing technology can provide disruptive effects as readily as new technology can. Use cases are a function of using technology to solve issues faced by society or by a particular group of people. The context of that group—its values, ideas, needs, pressures, worldview, economics, culture, and traditions—influences the uses to which it might apply a technology.

BOX 3-2
Gearing Up and Gearing Down

The following are examples of forecasting narratives that might be generated to either gear up or gear down technology:
Technology can be “geared up” or “geared down.” To gear down is to use lower-level or earlier technologies to solve problems (see Box 3-2). In the science-fiction series that starts with the book 1632 by Eric Flint (2000), a modern community in West Virginia is transferred (through a criminal act of artistic negligence by a futuristic society) to Germany in the year 1632, during the bloody Thirty Years War. To adapt, the community must gear down to technologies that can be supported in more primitive conditions.
Following is a real-world example of gearing down: The Irish Republican Army in the 1970s devised homemade bombs using agricultural fertilizer; Semtex, a plastic explosive; and “shipyard confetti” (metal waste found in the shipyards of Belfast) for shrapnel in guerilla warfare against the British Army. These improvised explosive devices (IEDs), also known as roadside bombs, typically consist of an explosive charge (potentially assisted by a booster charge), a detonator, and a mechanism that initiates the electrical charge that sets off the device. IED designs are very flexible, using a diverse set of available materials to devise initiators, detonators, penetrators, and explosive loads.
There is danger in the human psychological inability to deal with ambiguity and potentially shocking scenarios. The use of commercial airlines as weapons was contemplated by both the intelligence community and novelist Tom Clancy years before September 11, 2001, but no forecasting system was in place to track enabling factors or traffic that would have indicated activity along this narrative path (e.g., students taking flying lessons to learn how to take off but not to land an aircraft). A narrative incorporating a strong use case would be a valuable tool in convincing stakeholders of the possibility of the extreme scenarios that a disruptive forecasting system is designed to help foresee.
Using an Open Platform for the System
Another important element reinforced in the implementation exercise for this report was that workshop participants considered openness to be critical for obtaining a diversity of inputs. The success of the system in uncovering potentially disruptive technologies relies on the inclusion of participants with various levels of education and from various cultures, classes, races, and age groups. Making this system a more open platform is a fundamental shift from traditional Department of Defense (DoD) forecasting. Many of the participants believed that the system should be open in every way—that it should have open analysis, open participation, an open loop, and open platform
products that include live interactions. There could be parallel closed loops for different users. For example, if a forecast involved classified information about nuclear weapons development, a closed system could be run on a classified network with strong access controls for cleared personnel only. The beauty of open-platform design is that it is aligned with the explosion of Internet applications and social networking media development. It would be critical to have participation be international, sourced regionally in the local language. Asking a native Chinese-speaking participant a question in Chinese could elicit an answer different from that received when asking the question in English due to changes in the participant’s comfort level, perception of the question, understanding of the question’s meaning, or ability to use nuance. Fundamentally, soliciting participation across languages provides access to different points of view. For classified forecasts, the committee believes that a similar system could be built that could be used with a broad range of cleared participants.
Observation. It is critical to the success of any forecasting system to engage members of different cultures in their native languages and in a familiar environment in order to reduce bias.
The system could be used as the equivalent of a wiki (a collaborative Web-based database) offering a best-in-class representation of science and technology: a virtual portal of science and technology narratives of the future, openly contributed to, participated in, and drawn from. The membership would include those who are passionate about the future of science as an input to policy and postdoctoral students hungry for other venues in which to apply their skills. If the outputs of the system are truly useful, they will provide the incentive for participation. The users might include planning departments and venture capitalists.
Observation. A persistent forecasting system can be built to serve many different customers, providing a continuously active and current forecast for users.
Observation. An open-platform forecasting system could generate a great deal of interest from corporate and other international users.
Observation. A number of organizations are currently working on a next-generation forecasting system. Early efforts by corporations such as IBM, SunEdison, and Shell should be closely tracked for insights and possible partnerships.
To be successful, those setting up the persistent forecasting system would have to work hard to balance Western bias against a wider worldview. This is explored in more detail in Chapter 4 of the first report (NRC, 2010). The system would have to be cross-cultural, multidisciplinary, and multigenerational. It would have to reflect a wide range of viewpoints of people, including those on the fringes of their societies. A challenge would be to include regions and classes that have little or no Internet access but might well be important for establishing their values, needs, and unique applications of lower-level or older technologies. Those setting up the system would also have to work to balance any DoD biases. The committee believes that too much government control would impede the ability to get broad participation and sponsorship. Separation of the internal and external teams could help bring together the best talents and capabilities of government and industry (see Recommendation 3-8).
The operational challenges presented by an open forecasting system were discussed in depth by the participants of the workshop. Consideration was given to several options, including a crowdsourcing approach to generate and collect data, ideas, and hypotheses, combined with expert analysis of the information. Another option limited the level of openness by inviting a large group of experts to produce forecasts, but with a far broader range of expert participation than in a typical Delphi forecast. The evaluation of the various options converged on the concept of separate, interacting systems of open participation and closed use, with output fed back into the system to create a persistent loop. The participants of the workshop agreed that openness can be more easily incorporated using Web-based technologies and applications, but that Web-based technologies are not a total solution: a truly inclusive system would also engage people with little or no online access. Incentives to participate will also be an important part of setting up an open forecasting system.
CHARACTERISTICS OF A NEXT-GENERATION FORECASTING SYSTEM
While all of the models outlined in Chapter 2 had the essential elements discussed previously in this chapter, the desired characteristics for the next-generation forecasting system will need to be defined both for the 1.0 system and for future spirals of development of the persistent forecasting system. Suggested characteristics include mechanisms for continual learning, success metrics for participation, and success metrics for outputs.
Building Learning into the System
The forecasting system for disruptive technologies needs to be designed for ongoing evolution to improve methodologies in all areas of sourcing, analysis, and output production. While the frameworks outlined by the committee should work, metrics are needed to define success and guide forward progress and the direction of growth. A learning system involves analyzing outputs against metrics for success to see which elements are enhancing the system and what might be lacking. Data in the system would be segmented so that different parameters could be measured—for example, whether contributions from different regions added insights more predictive of future trends than did inputs from within the United States. The value of different inputs, contributions by different communities, or different methods can then be evaluated and the system adapted accordingly. If a model like interactive gaming proves effective at eliciting visualizations of the future, it could be used more extensively. If indicators show up in communities that are not currently participating in the crowdsourcing, those communities can be invited to participate. If new types of data are needed for analysis, ways to find or track the data can be devised or built.
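The segmentation idea above can be illustrated with a minimal sketch. The scoring scheme here is invented for illustration (a real system would define its own usefulness metric, such as the fraction of a contributor's signposts later confirmed by events), and the segment labels are hypothetical:

```python
from collections import defaultdict

def score_by_segment(contributions):
    """Average a simple usefulness score per segment (e.g., region,
    community, or collection method), so the system can learn which
    sources of input add the most predictive value.

    Each contribution is a (segment, score) pair, where score is an
    assumed 0.0-1.0 usefulness rating assigned after the fact.
    """
    totals = defaultdict(lambda: [0.0, 0])  # segment -> [sum, count]
    for segment, score in contributions:
        totals[segment][0] += score
        totals[segment][1] += 1
    return {seg: s / n for seg, (s, n) in totals.items()}

# Usage: compare contributions gathered from different regions
scores = score_by_segment([
    ("East Asia", 0.8), ("East Asia", 0.6),
    ("United States", 0.5), ("Western Europe", 0.7),
])
print(scores)  # per-segment averages, e.g. East Asia at roughly 0.7
```

A learning system would rerun such comparisons as forecasts are resolved, steering recruitment and method selection toward the segments whose averages are rising.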
Recommendation 3-3. The forecasting system for disruptive technologies needs to be a learning system in which midprocess system products are continually evaluated and used to refine concepts and methods, and final outputs can be collected and compared over the long term to evaluate system processes and build expertise among staff. The first version of the system should be thought of as a version 1.0, with the recognition that it may take successive phases of development to create a sustainable and useful platform.
Success Metrics for Participation
Early measures of success with respect to participation would include the establishment of a community that draws broad participation and attracts funding for the value of its outputs, which consist in part of the input of participants. One effect of engaged participation might be to train the next generation of forecasters and potential decision makers on a new way to produce and use technology forecasts. If corporations use the system, it is indicative of the value of the output. The success of the participation can be measured in terms of the following:
The quality and frequency of engagement, the quality of conversation or content, recurring subscribers;
Engagement with contrast, polarity, heat, conflict, and potential controversy;
Level of interest, community ranking;
Diversity of user population in terms of age groups, ethnicities, professions, and socioeconomic status;
External funding, receipt of grants;
The number of unsolicited narratives that meet criteria;
An improvement in the quality of forecasts over existing forecasts, unique and compelling forecasts that are truly disruptive narratives from the fringes of possibility;
The use of roadmaps that can be evaluated by users, rather than the use of predictions;
The education of policy makers to be more comfortable with how to navigate uncertainty and create advantages; and
The attraction of strategic partners.
The process is working when the system does the following:
Generates both scenarios and potential technologies that are different from the baseline;
Produces unusual insights and potential insights into the impact of the technology;
Anticipates new applications of technology;
Tracks signposts and signals as to whether they will make a difference;
Awards recognition to participants to drive more forecasts;
Produces novelty of narrative but also the breadth that encompasses silos, subjects, and disciplines that are affected by the proposed scenarios;
Identifies signals and signposts for tracking and putting data into context;
Identifies triggering or threshold data points to warn of potential conjunctions of events and to indicate that an event is becoming more probable; and
Produces actionable forecasts that support decision making, resource allocations, and scenario generation.
The more high-level topics that a narrative hits, the more interesting it is and the more likely it is that a conjunction of events will occur.
Success Metrics for Outputs
The forecasting system should produce high-quality information that includes high-impact scenarios, critical enabling criteria, scientific trends, trends in the signposts, and the representation of the environment of interest. If the forecasting system works, it should increase ambiguity and uncertainty and stimulate more questions and studies—one caveat being that ideas that challenge existing knowledge or touch on forbidden subjects may be uncomfortable to some audiences. People are more comfortable with known risks than with unknown risks. There needs to be some insulation between uninhibited inputs and analysis and the evaluation of the usefulness of the outcome. Every narrative generated needs to be plausible but not necessarily probable. To be useful, narratives of potentially disruptive events are more likely to be in the improbable category. They do not have to be right. They would be valuable for opening possibilities in people’s thinking, anticipating disruptive scenarios, and providing a useful framework for tracking the development of disruptive technologies.
Observation. Not every narrative needs to become a reality. In fact, it is an indicator of system failure if all narratives come to pass. Narratives must pass a minimal test of probability and plausibility, but otherwise it is essential that the collection of narratives push the edge of probability and believability. Focusing on narratives that are highly aspirational or horrifying could stimulate discussion of extreme scenarios that could have the greatest impact on the forecasting system.
Recommendation 3-4. Any forecasting system developed should be insulated to allow users to generate and investigate controversial or uncomfortable ideas. Participants and staff should identify the reasons that an idea is considered implausible and be able to understand what developments will be needed to arrive at that future. These developments should become signposts on the roadmap of the forecast.
Another measure of success would be the effect that public use of content from the system has on the real world, if a user or participant uses a scenario in the public domain to influence an action or decision. Longer-term success might be measured in terms of whether the outputs of the system affect policy, engagement with Congress, new technology concepts, or new applications of technology. Forecasting needs to change behavior to be successful. The outputs must be structured to communicate to the user that the narrative is possible and the forecast is
actionable. The information should be presented in a way that is compelling, inspires action before the fact, and convinces people of the usefulness of the forecast.
The forecast generated from the system must be usable and informative to the user. The output of the system is less about technology than about technology impact and use and about what applications could be enabled with the technology. The structure of the final narrative would describe the impact of a confluence of technologies, events, and creativity and describe how the scenario was arrived at. The presentation should make the case for the evidence that these events would be enabling and lay out the measurements of interest to monitor and the signpost and tipping points to watch for to indicate that a scenario might be coming true. What are the trends in the signposts?
Success measures for the output of the system include the following:
It demonstrates how a scenario affects people’s lives, how the scenario is a doable future;
It generates actionable outputs (the information is used);
It receives positive feedback from potential customers;
It generates value (might be intellectual, literary, as well as for future planning);
It has information incorporated into other organizations’ analyses, reverse citation;
It trains future policy leaders in the effective use of technology forecasts;
It causes new policies to be generated;
It survives: the system continues;
It improves the ability of decision makers to continuously ask the right questions; and
It reduces surprise.
Observation. Users of a forecast must have confidence in the quality of the underlying data and the analysis that led to the forecast. Measures that reinforce confidence include data transparency and the availability of multiple expert views. Success is measured not by how many accurate predictions are made but by the value of the insights and what actions were generated to reduce negative surprises.
With a persistent, open-source, narrative-driven system, it is possible to look at a broader picture of potential disruptions. With a repository of findings, possible scenarios, narratives, and every question asked of the system, it is possible to pose the right questions persistently until they are asked at the right time, to revisit scenarios with new data, to put pieces together differently, and to mark which scenarios keep coming up.
The system process is a broad radar. The narrative outputs can be more targeted for specific action and tracking. But the system could become more than just a forecasting system for future technologies. If successful, it could be an interactive platform that could be used to generate new concepts, a source that allowed people to flesh out and flush out ideas. It could provide data about how things impact other people around the world. How the system is set up will be critical for creating that dialogue and fueling narrative generation, some of which will become focused targets.
Recommendation 3-5. The forecasting teams should develop metrics of performance (i.e., for valuing and synthesizing) so that the process can be controlled, optimized, and improved.
LAYING A FOUNDATION FOR SUBSEQUENT STEPS
The day after the workshop, the committee met in private to discuss the results of the model-building exercises and discussions and to combine these results with the work of the entire project life span to produce a set of actionable recommendations that might benefit the sponsors. From the four proposed models, certain elements were distilled into specific guidelines, which are described in detail in this section.
First, the committee strongly agreed that human resources are key to making the proposed forecasting system work. Purpose, technology, participants, and resources will have to be carefully aligned to create an optimally successful system.
Recommendation 3-6. The Department of Defense and the intelligence community should begin the process of building a persistent forecasting system by selecting leadership and a small, independent development team. The team should be given seed-level funding to establish an organizational structure and business plan and to build a working 1.0 version of a disruptive technology forecasting system. The organization would then have to attract additional funds from domestic and foreign corporate, nonprofit, or government sources.
The range of options for the organizational structure presents a question of governance. Determining the governing structure is outside the scope of the committee’s task, but the committee looked at some of the questions that would need to be answered. A small, motivated, start-up group of people would have to be responsible for refining the methodology, determining who the participants are and how to provide incentives to motivate them, identifying what partners to seek out, and creating a business plan. The start-up group should consider asking itself some fundamental questions such as the following:
If there is an outside and an internal group, what is the synergism between the two?
What are the pros and cons of the various options and possible barriers?
What are examples of successful groups using various models (e.g., public-private partnerships such as Sematech and In-Q-Tel)?
How will the structure impact participation, governance, and funding?
How does an organization develop a persistent business model that matches its persistent forecasting mission?
Recommendation 3-7. The Department of Defense and the intelligence community should consider using a separate, independent, multinational, multidisciplinary nonprofit or dot-org group to run the crowdsourced platform. The organization should be structured correctly from the beginning to ensure trust and good working relationships among staff. The crowdsourced platform should have its own separate governance with leadership representing multiple ethnicities and disciplines.
As stated in Chapter 1 in this report, the workshop participants suggested that the organization of the open-platform system needs to be separate from the organization inside the DoD that would deal with evolving scenario information on a classified basis. This thought of two systems and the importance of not having bias was discussed in detail in the first report (NRC, 2010). The team inside the DoD would be independent but would collaborate with the external organization.
Recommendation 3-8. A forecasting system should have two separate teams, one team working on the open external forecasting platform and another team developing an internal forecasting platform that services specific needs of an organization. The external team should encourage broad and open participation and exchange of ideas and scenarios from a broad range of participants and experts. The internal forecasting platform should address scenarios that are specific to the organization and may involve sensitive, proprietary, or classified scenarios and data that it is only willing to share with trusted parties.
In the case of the Department of Defense, there are a number of possibilities for how to structure this arrangement: a joint venture between the government and a private entity, a joint project with another intelligence organization, partnership with an analytical institute, a contract with an existing forecasting group, a request to the National Science Foundation to sponsor it, a consortium, the creation of an independent nonprofit organization, the establishment of a research organization entirely outside of government such as a multidisciplinary university research initiative (MURI) at a university, a program sponsored by the Defense Advanced Research Projects Agency, or some unconventional approach. In the Institute for Analysis partnership with the DoD, the Institute is the external face, so it can do many things that the DoD cannot. Another possibility would be partnering with a
museum or network of museums that backcasts using science and technology. For the DoD user, the outside organization broadens its reach and vision. Connecting with academic, research, and commercial communities would be an early success for the DoD user that would improve on using stovepiped lists of emerging technologies. The DoD can subscribe as a shadow organization through the open platform.
The concept of a crowdsourced community for forecasting is itself disruptive, and it cuts two ways. On the one hand, some of the hypotheses being generated could describe how to disrupt the United States, or could concern the discovery of highly disruptive technologies that could radically change the world, and these hypotheses would be discussed in public forums. A private partnership might be a way to mitigate some of that effect. On the other hand, crowdsourcing is an opportunity to force a level of accountability on decision makers to deal with and prepare for highly disruptive scenarios.
There are several ways to get the system organized. One way would be to have a sponsor’s internal staff work with trusted external contractors to find the necessary people to form a start-up committee, similar to the way that In-Q-Tel was organized. A second way would be to find one person who would be the organizing chief and let that person find the start-up committee. Finally, a third way would be to put out a broad area request for proposals and then allow the winning proposal to form the start-up committee.
The start-up committee would have to define the structure for the persistent system, with recommendations for how to interface with whatever user organizations participated in the open platform and how those organizations might use the output for future planning. The relationship between organizations would need to be defined. Would staff of the open system simply maintain the system and allow the work product to be solely open-source-derived, or would the staff “add value” by doing analysis and generating internal narratives, hypotheses, needs, technology, and uses that run parallel to the crowdsourcing process? The participants of the workshop believe that the advantage of this crowdsourcing, persistent approach is the use of an iterative process in which new ideas and forecasts are generated through crowdsourcing and live data-gathering activities, followed by concept refinement performed by experts. This balance will have to be worked out. The strengths of other analytical methodologies can be used to complement the strengths of a crowdsource system.
There might be a need to define a particular structure or structural interfaces for user organizations. Demographic information might be required for participation. Regardless of who the members are, they will have their own internal mechanisms for reacting and responding to the open system. The outside groups might generate ideas, but the inside groups will have to decide what is relevant to them, redesign and interpret narratives for their own purposes, and put them back into the system as inputs. The start-up committee would need to build sustainability into the structure, so that a persistent system can keep going without being shut down by any one participant. The structure of the narratives has been referred to often in this report, and it might be useful to design a standard format for them. The start-up committee might also decide to backcast historical data in order to verify, validate, and refine the methodology.
The start-up committee would have to be clear about the following:
Who the initial sponsor(s) would be,
What the sponsor(s) want and how they will be educated about what they will get from the system,
The expected time frame to build version 1.0,
Measures of success for performance and metrics for signaling refresh, and
An investment plan and the discipline to carry it out: a large budget will be insufficient if not well implemented.
The initial development team should be small and carefully selected to ensure that the members work with the efficiency and flexibility needed to successfully develop a complex software system with limited resources. The team leader catalyzes both the forecasting process and the development of the system. He or she should head a core team of up to 12 subject-matter experts to guide analysis.
The required software applications will be defined, designed, and given cost estimates. The participants of the workshop and the committee members believe that the cost of the effort, including building the 1.0 version and the ongoing maintenance of the system, would be between $5 million and $10 million. The team would be responsible for managing the initial seed funding of $1 million to $2 million and for promoting the system to other potential users and investors to attract the additional funds needed for the long term. See Recommendation 3-6.
How the System Might Be Implemented
After completing the model-creation exercise and reviewing the work of the three workshop subgroups and Stan Vonog’s proposed fourth option, the committee discussed a common vision of what a forecasting system would entail and the experience that it would provide to the user.
A user can either post a narrative or select one to follow. The user can rate the hypothesis generated from the narrative (the equivalent of Facebook’s “likes this” rating, but with numbers) or join a discussion thread and contribute posts, which are flagged as technologies, uses, progression, or synthesis. This input helps flesh out the signals, signposts, enabling technologies, or whatever else is needed for a scenario to be realized, and helps identify potential intended or unintended “off label” uses and outcomes. The final step is writing, reading, rating, and commenting on output narratives and impacts.
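The interaction flow above (post or follow a narrative, rate its hypothesis numerically, contribute flagged posts) can be sketched as a minimal data model. This is an illustrative sketch only; all class names, field names, and the example narrative are hypothetical and not part of any specified design.

```python
from dataclasses import dataclass, field
from enum import Enum
from statistics import mean

class PostFlag(Enum):
    # The four contribution categories named in the workflow.
    TECHNOLOGY = "technology"
    USE = "use"
    PROGRESSION = "progression"
    SYNTHESIS = "synthesis"

@dataclass
class Post:
    author: str
    text: str
    flag: PostFlag

@dataclass
class Narrative:
    title: str
    hypothesis: str
    ratings: list = field(default_factory=list)   # numeric ratings, not just "likes"
    thread: list = field(default_factory=list)    # flagged discussion posts

    def rate(self, score: int) -> None:
        self.ratings.append(score)

    def mean_rating(self) -> float:
        return mean(self.ratings) if self.ratings else 0.0

    def posts_by_flag(self, flag: PostFlag) -> list:
        return [p for p in self.thread if p.flag is flag]

# Example: a user follows a narrative, rates it, and contributes a flagged post.
n = Narrative(title="Ubiquitous energy harvesting",
              hypothesis="Ambient-power devices displace batteries by 2030")
n.rate(4)
n.rate(5)
n.thread.append(Post("user17", "Requires advances in rectenna efficiency",
                     PostFlag.TECHNOLOGY))
```

Flagging each post at submission time is what later lets the analysis stage pull out, say, only the enabling-technology contributions for a given scenario.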
Identified needs, technologies, or narratives are evaluated during process analysis, both for their potential importance and for their “oddness,” or distance from mainstream sources of innovation. The desired focus is on the outliers in a normal distribution of likely emerging technologies, or on an abnormal distribution. Distribution analysis could be applied by demographic group, and concordance between groups could be estimated with confidence intervals; this supports the study of convergence and divergence and the identification of “heretics.” Once rated, entries of interest can be checked for uniqueness or for recurrence/convergence in the system using an algorithm similar to those employed by plagiarism-detection software, which would tabulate repeated ideas and the associated demographic data. The level of automation could be quite sophisticated, with high and low thresholds of perceived importance selecting the text to be run through the convergence/divergence engine. The system could also rerun previously identified topics periodically to see whether new inputs change the analysis of earlier inputs. This process could be mirrored by the organization’s internal staff as allowed by the organization’s structure and mandate.
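The recurrence/convergence check described above can be sketched in the style of plagiarism detectors: break each entry into word n-grams (“shingles”) and flag pairs whose Jaccard overlap exceeds a threshold. The shingle size, the threshold, and the sample entries below are arbitrary illustrations, not values from the report.

```python
def shingles(text: str, n: int = 2) -> set:
    """Lowercase word n-grams ("shingles"), the unit plagiarism detectors compare."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Overlap of two shingle sets: |A intersect B| / |A union B|."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def recurring_ideas(entries, threshold: float = 0.3):
    """Return index pairs of entries similar enough to count as one idea recurring."""
    sets = [shingles(e) for e in entries]
    pairs = []
    for i in range(len(sets)):
        for j in range(i + 1, len(sets)):
            if jaccard(sets[i], sets[j]) >= threshold:
                pairs.append((i, j))
    return pairs

entries = [
    "room temperature superconductors enable lossless power grids",
    "lossless power grids enabled by room temperature superconductors",
    "synthetic biology produces carbon negative cement",
]
print(recurring_ideas(entries))  # → [(0, 1)]: the first two entries converge
```

A production system would also tabulate the demographic data attached to each converging entry, as the text notes, and could rerun stored topics on a timer to catch convergence that emerges only after new inputs arrive.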
It is equally important to separate the geniuses and heretics from the charlatans and the “crazies.” This is especially true when a system relies on crowdsourced information. One approach is to use experts to roadmap scenarios through techniques such as backcasting to see how an alternative future could unfold. These roadmaps should be checked to ensure that they do not violate known laws of physics. Only scenarios that can be roadmapped should be considered actionable. Relevant scenarios should be reviewed again if a significant scientific breakthrough makes possible an outcome previously assumed to be impossible; those scenarios should then be roadmapped anew on the basis of the new knowledge.
As the committee discussed, a rating scheme built into the system would allow, at each stage of the process, peer evaluation that can be fed back into the system. The danger is a ranking or rating system that starts to bias the input, intimidate users, or alter the direction in which narratives are built. It might be possible to rank the value of a particular user’s contribution without creating a situation of “voting” on the most popular outcome, which would bias the output in the wrong way. It might be necessary to designate some information for internal use only, in order to avoid bias while still retaining the data needed for analysis.
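One hypothetical way to rank a user's contribution without turning the system into a popularity vote, as cautioned above, is to score each contribution by downstream reuse (how often later contributions build on it) rather than by raw vote counts. This sketch is an illustration of that idea only; the function and identifiers are invented, not taken from the report.

```python
from collections import Counter

def contribution_value(contribution_ids, reuse_links):
    """
    Score each contribution by how often later work builds on it,
    rather than by how many users "voted" for it.
    reuse_links: pairs (later_id, earlier_id) meaning the later
    contribution built on the earlier one.
    """
    reuse_counts = Counter(earlier for _, earlier in reuse_links)
    return {cid: reuse_counts.get(cid, 0) for cid in contribution_ids}

contribs = ["c1", "c2", "c3"]
links = [("c2", "c1"), ("c3", "c1"), ("c3", "c2")]
print(contribution_value(contribs, links))  # → {'c1': 2, 'c2': 1, 'c3': 0}
```

Because the score reflects what other contributors actually built on, an unpopular but generative "heretic" idea can still rank highly, and such scores could be kept internal-only to avoid intimidating users.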
To make the inclusion of a diverse user community quantifiable and usable for research, the system might collect demographic data when a user opens an account. The data should include country (and possibly country of birth or upbringing), age, economic level, educational level, field, and level of expertise in that field. Contributing groups might report a consensus level of expertise. If demographic data are collected, it will be possible to measure the difference in the value ranking of narratives from U.S. residents compared with non-U.S. residents, as well as to measure other important demographic distinctions. See Table 3-1.
TABLE 3-1 User Demographic Information: Examples of Useful Participant Demographic Data

Country: of origin, raised in, citizen of, current residence, secondary residence
Economic level (annual income range): keyed by country to match five income levels: low, low-middle, middle, middle-high, high
Educational level: no school, primary school, high school, years of college, advanced degrees, doctorate
Field and level of expertise: expert in field (field), generalist (fields), lay person (fields of interest); labeling could vary for different fields
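The demographic fields of Table 3-1 might be captured at account creation with a record along these lines. The field names, validation, and example values are hypothetical illustrations; only the income bands and expertise labels follow the table itself.

```python
from dataclasses import dataclass
from typing import Optional

# The five income bands and expertise labels named in Table 3-1.
INCOME_LEVELS = ("low", "low-middle", "middle", "middle-high", "high")
EXPERTISE = ("expert", "generalist", "lay person")

@dataclass
class ParticipantDemographics:
    country_of_origin: str
    country_raised_in: str
    country_of_citizenship: str
    current_residence: str
    secondary_residence: Optional[str]
    age: int
    income_level: str    # one of INCOME_LEVELS, keyed by country
    education: str       # e.g., "no school" ... "doctorate"
    field: str
    expertise: str       # one of EXPERTISE

    def __post_init__(self):
        # Reject values outside the controlled vocabularies at account creation.
        if self.income_level not in INCOME_LEVELS:
            raise ValueError(f"unknown income level: {self.income_level}")
        if self.expertise not in EXPERTISE:
            raise ValueError(f"unknown expertise label: {self.expertise}")

p = ParticipantDemographics(
    country_of_origin="Brazil", country_raised_in="Brazil",
    country_of_citizenship="Brazil", current_residence="Germany",
    secondary_residence=None, age=29, income_level="middle",
    education="doctorate", field="materials science", expertise="expert")
```

Keeping residence fields separate from origin and citizenship is what would later allow the U.S.-resident versus non-U.S.-resident comparison of narrative rankings that the text describes.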
Goals for Version 1.0
To ensure a robust foundation for a persistent forecasting system, version 1.0 should have seven fundamental goals:
Broad international and regional participation;
A broad range of future scenarios, including many improbable but possible alternative futures;
Narratives that tell compelling stories, highlighting the impact on society;
Backcasts developed by experts with credibility in their respective fields;
Robust and actionable roadmaps that illustrate how the present can develop into a potential future, indicating potential signposts, important observable signals, and tipping points;
The use of roadmaps for the ongoing tracking of disruptive technologies; and
The use of the forecasting platform by entities other than the U.S. federal government, including other governments, corporations, and organizations.
These goals should be reviewed regularly during both the development phase and deployment phase of version 1.0. The forecasting team should also develop midcourse evaluations and make midcourse corrections based on the ability of 1.0 to achieve these goals.
Forming a successful forecasting system for disruptive technologies is a task with several inherent challenges, challenges that are a direct result of the explosion of information exchange brought about by the ubiquity of the Internet, even as that same explosion suggests solutions. High-quality data must be collected in quantity, organized, and contextualized to be made meaningful. As demonstrated by the work performed at the Forecasting Future Disruptive Technologies Workshop and previously by the committee, many different strategies can be used to meet this goal. This hitherto mostly uncharted territory should be approached with an open mind, a willingness to adapt, and confidence.
Recommendation 3-9. A persistent disruptive forecasting system should be built to help the intelligence community reduce the risk of being blindsided by disruptive technologies.
Flint, Eric. 2000. 1632. Riverdale, N.Y.: Baen Books.
NRC (National Research Council). 2010. Persistent Forecasting of Disruptive Technologies. Washington, D.C.: The National Academies Press.