As it has become easier to place computers on cameras (or cameras on computers), a large industry has grown around enhancing or improving photographs, including work on high-dynamic-range images, red-eye removal, and face enhancement, and on improving camera auto-focus through such features as face tracking and predictive movement tracking. Such computational photography has roots in government-funded research into areas like image representation, intrinsic images, and optic flow. The social media and search industries have produced extensive research in face recognition and image tagging, which find wide application. These capabilities grew26 out of significant government-funded programs on face recognition, image databases, and digital libraries. The government is now a large customer for various surveillance methods rooted in this work. Today, a focus of attention for both the public and the research community is understanding and addressing algorithmic and training-data bias.27 Moreover, given the potential for misuse, much attention is being given to ethical and governance frameworks for the development and use of these technologies.
The automotive industry in general is a significant producer and consumer of computer vision methods, applied to such problems as pedestrian detection, automated parking, and autonomous vehicle navigation. In many routine image-screening tasks, such as mammography, face identification, fingerprint matching, and some kinds of object classification, one can now reasonably expect computer vision methods to outperform the average human operator. In recent years, nascent and promising applications of computer vision have been demonstrated in new realms, such as the recognition of pathophysiology in radiology and dermatology. Almost all of this success is rooted in government-funded research performed long before the discipline could deliver on its clear promise and current capabilities.
. . .
Chapter 5 turns to confluence, where the interweaving of information technology research and business transformation advances an industry sector, often through increased connectivity, scale, and optimization, and then, over time, leads the way to transforming that sector.
26 P. Viola and M. Jones, 2001, “Rapid Object Detection Using a Boosted Cascade of Simple Features,” pp. I-I in Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2001, doi:10.1109/CVPR.2001.990517.
27 J. Buolamwini and T. Gebru, 2018, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification,” pp. 77-91 in Conference on Fairness, Accountability and Transparency, http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf.
The economic impact of innovation in fundamental information technology (IT) research is illustrated by the uptake and integration of these discoveries into industry sectors such as health care, agriculture, and even sports. This impact becomes transformative when IT capabilities and the new business models they enable drive long-lasting, fundamental change. IT is frequently no longer just a means of business automation or a facilitator of new services; IT innovations themselves are now woven into almost every major industry sector.
This report refers to this interweaving pattern of IT research with industry transformation as confluence: when fundamental advances in information technology first lead to increased economic impact in an industry sector, often through increased connectivity, scale, and optimization, and then, over time, lead the way to transforming that sector. This transformative impact derives from fundamental qualities of information technologies and IT research leading to new ways of delivering products and services, new manufacturing processes, new tools for engaging customers, and even new ways of computational thinking and discovery.
Many mature IT innovations, such as the Internet itself, now profoundly drive economic returns and new innovations in industry sectors beyond IT; these effects are compounded by the enormous scale at which the Internet and the services connected to it are used today. Confluence can arise from the direct integration of IT research into new sectors, such as new machine learning or visual animation techniques, but it can also occur at greater scale through the adoption of new IT products and services, themselves the result of research. These interactions set up
a positive feedback loop in which new IT capabilities catalyze innovations in other fields and, in turn, spur further innovation in IT. The impact of IT confluence is now accelerating due to advances in computing power, the pervasiveness of networked and secure computing infrastructure, the scale and sophistication of data science and machine learning, and the increasing usability, appeal, and pervasiveness of computing interfaces, mobile devices, and social media.
This chapter presents examples of the confluence of IT research with timely needs and capabilities of other sectors, leading to synergistic innovation and transformative economic impact. Each story and timeline is unique, but there are common patterns in the path from IT innovation and adoption to IT-led transformation, and in the ever-richer connections between tracks of IT innovation and their increasing relevance and importance for major economic sectors. Each of these narratives could easily fuel its own study and report. This chapter provides a glimpse into the increasing impact of IT in key sectors of the U.S. economy, paying attention to the recurring pattern of confluence, the breadth of impact of these tracks of IT innovation, and the growing transformative role of these capabilities.
One long-standing industry sector that has been fueled by a steady stream of academic and industry research is entertainment media—spanning computer animation, special effects, and electronic games (see Box 5.1). These building blocks have fueled decades of creativity in immersive and engaging media content and experience. This chapter provides several examples of the confluence of IT research with other sectors leading to transformative economic impact.
- E-commerce. The impact of Internet-driven innovation is exemplified in unpacking a “simple” e-commerce transaction, which draws on Internet connectivity (which led to the creation of the World Wide Web); computer architecture, programming languages, and databases (which led to cloud computing); algorithmic advances (which drive confidential and secure transactions); and artificial intelligence (AI) techniques (which, coupled with vast amounts of data, power search and recommendations). Taken together, the simple act of searching for and purchasing an item online now fuels about $600 billion annually in U.S. retail e-commerce sales,1 and every element of that action draws from a sustained portfolio of federally funded IT research woven together with industry innovation over a period of 50 years.
1 Total of U.S. Census Bureau quarterly figures for 2019. See U.S. Census Bureau, “Quarterly E-Commerce Report Historical Data,” https://www.census.gov/retail/ecommerce/historic_releases.html, accessed July 1, 2020.
- Health care. Providing an interesting juxtaposition of delayed adoption of modern IT (e.g., the resilient fax machine) with radical innovation in data-driven drug design, health care is both a leading indicator for IT-driven transformation and an example of how industry barriers such as regulation can stymie change. Even today, the necessity of transforming day-to-day health care amid an unprecedented pandemic response is paving and accelerating the path for telemedicine as an effective alternative for many health care consultations. Looking ahead, health care also provides numerous opportunities for joint multidisciplinary research, such as the challenge of designing AI for clinical decision support that integrates the breadth of knowledge and the management of risk inherent in medical care. Similarly, AI and mobile computing will play a critical role in the delivery of care outside of a traditional clinical setting through engaging home technologies and AI virtual assistants, while advances in sensors and wearables will provide new ways to capture physiological information in those settings.
- Automotive. The automotive industry exemplifies how IT first penetrated an industry to augment and scale the decades-old mechanical design and manufacturing processes that were the hallmark of the industrial revolution. Over time, the automobile itself, manufacturing facilities, and even the act of selecting and purchasing a car have been deeply infused with digital technologies, robotics, and Internet-driven e-commerce. More fundamentally, the processes of designing and testing automobiles have changed radically as computation and mixed-reality experiences allow designers to marshal an almost infinite set of options, culminating in increasingly complex, efficient, and safer designs, with manufacturing and supply-chain innovations to match.
- Sports. Often outside the view of the mainstream IT sector and computing research community, the performance, logistics, and fan experience of sports in the United States have been transformed by innovations in IT. The now pervasive virtual first-down line is just the beginning of computer-generated augmentations to viewing sports play. Fueled by a plethora of wearable and fixed sensors that monitor athletes’ performance, the sports industry is data driven from recruiting, to training, to coaching, to injury prevention and rehabilitation, and back to supplying information for fans. Finally, the rapid uptake of esports further blurs the line between virtual experience and physical play.
- Agriculture. On the surface, an industry sector such as agriculture seems far removed from IT, and yet farmers and the rest of the agriculture sector are increasingly using robots, sensors, machine learning, and novel communications technologies to address food and sustainability challenges across the globe.
Although each confluence narrative is unique, several common factors have contributed to more than one of these advances. While many of these factors also play a role in resurgence patterns, their impact on confluence is most often seen in lowering barriers to IT innovation in other industry sectors. These factors include the following:
- Computing power. As also seen in the resurgence case studies, the surge of available computing power across industries over the past decade has moved the role of IT from industrial research into mainstream business across many sectors due to sheer scale and speed, coupled with miniaturization and decreased cost.
- Connectivity. Coupled with computing power, networked connectivity, including its pervasiveness, speed, scale, and security, has enabled large-scale industries to move to electronic transactions, collaboration across the world, and new forms of centralization and decentralization driven by business needs.
- Data. Whether driven by the increasing pervasiveness of sensors or by the wealth of information transmitted over networked computers, data as the “new oil” fuels many of the fundamental industry transformations depicted in this report.
- Cyber-physical integration and autonomy. Many of these applications take advantage of improving capabilities for integrating the cyber and physical components of systems and of the increasing ability of those systems to perceive the current state of their world, plan a course of action, and execute the plan.
- Computational thinking. Algorithmic thinking, coupled with data-driven processes, is the hallmark of computational thinking.2 Creating new framings and abstractions of the world as a system of information processes, alongside new algorithmic approaches, has shaped many industry advances.
- Human experience. Once relegated to highly trained users, the human experience of computing has blossomed through innovations such as the World Wide Web, graphical interfaces, mobile devices, social media, and more. The sheer scale of workers and consumers connected through modern interfaces has enabled new businesses and business models to emerge and older industries to transform. While the design of user interfaces for desktop and mobile devices drove mainstream adoption, concurrent advances in the accessibility of interfaces for people with visual, motor, and cognitive impairments also resulted in widespread benefits in the marketplace.
These capabilities draw from decades of federally funded research and sustained work in universities and industry laboratories. As described in Chapter 2, there are important key mechanisms that help propel innovation from research and into commercial sectors. For example, open-source software plays an increasing role in combining and coordinating shared interests and investments in common computational infrastructure and services.
Looking to the future, IT research innovations will intertwine more deeply with the businesses that make up the U.S. economy. For example, the confluence of robotics and automated driving, combined with e-commerce and services such as Uber and Lyft, is changing fundamental cultural assumptions regarding car ownership and transportation. Data-driven innovations will continue to provide increasingly precise treatments and capabilities across health care and agriculture. Finally, the education and training of our future workforce will rely more deeply on
2 J.M. Wing, 2006, “Computational Thinking,” Communications of the ACM 49(3): 33-35.
access to and interaction with IT systems, a process that has been accelerated by the COVID-19 pandemic.
It is imperative to continue investing in the future through funding for academic research. A key opportunity is creating research programs that expose the challenges and opportunities of industry sectors, such as health care, agriculture, education, and transportation, to multidisciplinary research and innovation. New models regularly emerge, both in the United States and abroad, and warrant study to gauge their effectiveness. As previously described in Chapter 2, the relationship between invention and use-inspired research is bi-directional: ideas and problems flow in both directions between IT-driven research and use-inspired research, and these exchanges take time and dedication. As described in the following narratives, many of the seemingly rapid jumps in business innovation have long roots in decades of research.
Consider a simple scenario. Interested in what might be available in air quality monitoring for your home, you present the simple phrase “air quality monitoring for home” to your smartphone or laptop and are immediately presented with a variety of available options from around the world—products, comparative reviews, and digests of scientific information that may be useful to you. You click on one, say, Amazon, and are presented with several “Best Selling Indoor Air Quality Meters.” You click on the picture of one that particularly appeals to you and are presented with a wealth of information about it, as well as what other items customers buy after viewing this one. Following one of these recommendations, you find illuminating feedback from people who have experience with it. Satisfied with your informed choice, you purchase it with your credit card, seamlessly using a secure communications channel, and are pleasantly surprised to learn that it will arrive at your door later the same day, and you can track its approach. Looking forward to the comfort of knowing about the health of your environment, you tell your smart speaker to play an upbeat tune. It selects a new song that is reminiscent of one of your old favorites (see Box 5.2).
This is no longer futuristic fantasy, but routine everyday experience. Your purchase is but one of nearly $600 billion in U.S. retail e-commerce sales in 2019, which accounted for over 11 percent of total sales,3 expected to grow to over $740 billion
3 U.S. Department of Commerce, 2020, U.S. Census Bureau News, August 18, https://www.census.gov/retail/mrts/www/data/pdf/ec_current.pdf.
annually by 2023.4 And every element of this experience draws on a fabric of federally funded IT research innovations, weaving back and forth between academia and industry over a period of 50 years, during most of which this now-routine experience seemed fantastical (see Figure 5.1). You are but one of 4.5 billion Internet users worldwide (59 percent of the global population),5 performing one of 4 billion Google searches that day6 and one of 200 million who visited Amazon.com that month.
Tracking IT Innovation in E-Commerce
The Internet that connects you to the world began with the U.S. Department of Defense’s (DoD’s) Advanced Research Projects Agency Network (ARPANET) project—connecting three universities and a nonprofit research company in 1969
4 J. Clement, 2020, “Retail E-Commerce Sales in the United States from 2017 to 2024 (in Million U.S. Dollars),” Statista, https://www.statista.com/statistics/272391/us-retail-e-commerce-sales-forecast, accessed July 1, 2020.
to allow remote access to computers and to distribute software quickly—at the University of California, Santa Barbara, early systems for interactive computational mathematics; at the University of Utah, the early stages of computer graphics; at Stanford Research Institute, the early aspects of personal computing and connected documents; and at the University of California, Los Angeles, network measurement, utilizing hardware developed for the purpose (interface message processors) at Bolt Beranek and Newman (BBN). The ARPANET experiment led to a 50-year explosion of innovation and deployment of computer networks. The Internet grew a billion-fold over 50 years, with the U.S. Department of Energy (DOE) connecting researchers to its supercomputer centers in the late 1970s, the National Science Foundation (NSF) funding CSNET to connect universities to the ARPANET in 1981 and forming its supercomputer centers in 1982, and the standardization of the TCP/IP protocols in 1983 for reliable data transfer over the (radical for its day) best-effort communications substrate. In 1986, NSF created NSFNET, promoting advanced research and education networking across the United States. Concerns about critical “congestion collapse” events prompted the development of essential protocol algorithms that permitted vast, scalable expansion. In 1989, NSF began the process of transferring the backbone of the still-emerging Internet to commercial Internet Service Providers (ISPs), a transition accelerated by the High Performance Computing and Communication Act of 1991 (P.L. 102-194). Along with
the many networking research developments, many new uses emerged, including electronic mail (email) that could be exchanged among people using different computers. In 1993, at one of the NSF centers, a pioneering web browser (Mosaic) was created, utilizing the protocols and formats developed by researchers collaborating on high-energy physics to integrate text and images in remotely accessed digital documents. It almost immediately spawned numerous industrial successors, including Netscape Navigator, Microsoft Internet Explorer, and AOL (formerly America Online). By 1995, commercial Internet traffic was allowed and encouraged; the transfer of the Internet from federal research agencies and universities to industry service providers enabled this continuous stream of federal research investments to scale to today’s ubiquity.7
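The congestion-control algorithms mentioned above rest on a simple principle, additive increase/multiplicative decrease (AIMD): each sender probes for bandwidth gradually and backs off sharply when loss is detected. A minimal sketch, with illustrative parameters and a hypothetical loss rule rather than any real protocol stack's behavior, is the following:

```python
# Toy simulation of additive-increase/multiplicative-decrease (AIMD),
# the core idea behind TCP congestion control. The capacity value and
# the loss rule below are illustrative, not from any real network stack.

def aimd(rounds, capacity, increase=1.0, decrease=0.5):
    """Evolve a congestion window: grow linearly until the (hypothetical)
    link capacity is exceeded, then cut the window multiplicatively."""
    window = 1.0
    history = []
    for _ in range(rounds):
        history.append(window)
        if window > capacity:      # stand-in for detecting packet loss
            window *= decrease     # multiplicative decrease (back off)
        else:
            window += increase     # additive increase (probe for bandwidth)
    return history

trace = aimd(rounds=20, capacity=10)
# The window repeatedly climbs toward capacity and backs off -- the
# "sawtooth" that lets many senders share a link without collapse.
print([round(w, 2) for w in trace])
```

Because every sender backs off multiplicatively on loss but probes only additively, aggregate demand stays near capacity instead of spiraling into collapse.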
And when your smartphone or laptop browser connects to Amazon.com, what you see on the screen is being delivered from one of dozens of massive data centers, each containing hundreds of thousands of networked servers. This cloud of computers is but another illustration of the rich, multifaceted infusion of federally funded IT research into the broader economy.
Although computers and operating systems had been developed and distributed by commercial vendors since the late 1950s, beginning in the early 1960s universities developed research systems, notably Multics, that were transferred whole or in part to industry for development, distribution, and support. Drawing on the development of a version of Unix on minicomputers, involving both industry and university research, in 1979 the Defense Advanced Research Projects Agency (DARPA) took the unusual step of funding the creation and development of a Unix offering providing a large virtual memory on the newly released 32-bit VAX minicomputer, with the goal of utilizing the resulting system in its various VLSI research projects. The Berkeley Software Distribution8 (BSD) program would later become one of the earliest significant pieces of open-source software. In 1983, this effort incorporated TCP/IP networking and later implemented Jacobson congestion control, paving the way for the growth of the Internet, while creating the primary basis for modern file systems, schedulers, and networking and initiating a steady flow of innovations that directly gave rise to the workstation market (Sun Solaris, IBM AIX, HP-UX) and created the enterprise server.9 The VLSI research investment gave rise to the birth of the RISC microprocessor, used throughout these markets, and with the 1991 High Performance Computing Act came the
7 B.M. Leiner, V.G. Cerf, D.D. Clark, R.E. Kahn, L. Kleinrock, D.C. Lynch, J. Postel, L.G. Roberts, and S. Wolff, 1997, “Brief History of the Internet,” Internet Society, https://www.internetsociety.org/internet/history-internet/brief-history-internet.
8 Techopedia, “Berkeley Software Distribution (BSD),” https://www.techopedia.com/definition/6276/berkeley-software-distribution-bsd, last updated July 1, 2020.
9 Techopedia, “Berkeley Software Distribution (BSD),” https://www.techopedia.com/definition/6276/berkeley-software-distribution-bsd, last updated July 1, 2020.
development of large multiprocessor systems, with the commercial sector harvesting broad academic research of the 1980s.
In the early 1990s, NSF, DARPA, and DOE also invested in research into clusters, very large, scalable systems built out of complete commercial systems. With the emergence of the Web shortly thereafter, these became the dominant approach for global-scale search engines, while research into scalable Internet services transferred rapidly to industry, leading eventually to the current cloud. Perhaps as important as these specific technical advances was the emergence of the open-source paradigm of software development, starting with BSD and followed by the Free Software Foundation, Apache, and many others. In the open-source model, commercial offerings can directly incorporate the output of large open-source development communities, which often encompass both academia and industry. For example, each of the millions of servers in the Amazon cloud is running large bodies of open-source software (e.g., Apache, Linux) such that on a click, thousands of servers can be brought to bear in responding to you and figuring out what you might be looking for.
Ultimately, all of the information about those nearly 1 billion customers, millions of suppliers, thousands of transportation and logistics providers, billions of products, and trillions of financial transactions is stored in various databases in the cloud and accessed transparently as you browse and shop. While databases were mostly commercially developed in the early years of computing, in 1973 IBM Research began investigating the relational model, in which the user expresses declaratively what they want and the database software determines how to perform the query efficiently. DoD agencies, along with NSF, invested in contemporaneous research at the University of California, Berkeley. These initial research projects spawned decades of research and industrial innovation in databases and storage systems.
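The declarative style of the relational model can be illustrated with a short example. The table, columns, and data here are invented for illustration, and SQLite (available in the Python standard library) stands in for the commercial engines the text describes:

```python
# A small illustration of the declarative relational model: the query
# states *what* rows are wanted; the engine decides *how* to fetch them.
# Table and column names are invented for the example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, product TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [("ada", "monitor", 89.0), ("ada", "filter", 12.5), ("bob", "monitor", 89.0)],
)

# Declarative: no loops, no access paths -- the query planner chooses
# indexes, join order, and scan strategy on the user's behalf.
rows = conn.execute(
    "SELECT customer, SUM(amount) FROM orders GROUP BY customer ORDER BY customer"
).fetchall()
print(rows)  # [('ada', 101.5), ('bob', 89.0)]
```

The same query keeps working as the engine adds indexes or reorders its execution plan, which is precisely the separation of "what" from "how" that made decades of database optimization research commercially transferable.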
While all aspects of this infrastructure are invaluable to the progress of science, education, engineering, media, and more, e-commerce would not be viable without the additional fundamental research that underlies the little lock icon in the browser window, which lets you know that you are in fact providing your identity and credit card to Amazon and doing so securely. The origins of the modern public key infrastructure derive from theoretical computer science research. Where mathematics and logic had long been concerned with which mathematical functions are computable, complexity theory was concerned with how hard they are to compute. Not only does your e-commerce experience rely on very fast algorithms, including highly parallel ones; its security is based on the concept that certain mathematical functions are extremely hard to compute—that is, the code cannot be broken any faster than by testing all possibilities, which requires an amount of computing beyond the resources of an attacker. Confidentiality relies on encryption so that even if someone observed
the communication it would be meaningless, and for centuries encryption was performed using a secret key that the two end parties shared. But this approach is not sufficient for e-commerce, where, for example, a site purporting to be Amazon is accessed by millions rather than through a symmetric one-on-one relationship. In 1976, federally funded theorists formulated an asymmetric approach in which one of the keys is private and held in secret and the other is public and shared with the world.10 The key idea was that determining the private key from the public one would be tantamount to finding a very fast solution to a mathematical problem believed to be very hard. A year later, a specific set of such problems was identified, and commercial public key infrastructures began to be built.11 Here again, an ongoing sequence of advances occurred in the back and forth between research and commercialization. A key advance was the concept of zero-knowledge proofs,12 which allow a party to prove that it possesses a certain secret (say, its key) without giving away anything about that secret. These techniques are seamlessly utilized as your browser verifies that the server claiming to be Amazon.com is in fact what it claims to be and then establishes a confidential flow of communication between you and it, over which your personal information and your credit card number may pass securely. Today’s Transport Layer Security (TLS) reflects years of iterative advances, an interweaving of industry, academia, open-source, and proprietary efforts.
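The asymmetric key agreement formulated in 1976 can be sketched with a toy example. The prime below is far too small for real security (deployed systems use keys of thousands of bits, or elliptic curves); it serves only to illustrate how both parties derive the same key without ever transmitting it:

```python
# Toy Diffie-Hellman key agreement with a deliberately tiny prime.
# Illustrative only: the parameters here offer no real security.
import secrets

P = 2_147_483_647   # small public prime (2**31 - 1); far too small for real use
G = 7               # public base

alice_secret = secrets.randbelow(P - 2) + 1   # private key, never shared
bob_secret = secrets.randbelow(P - 2) + 1

alice_public = pow(G, alice_secret, P)        # safe to publish: recovering the
bob_public = pow(G, bob_secret, P)            # secret requires a discrete log

# Each side combines its own secret with the other's public value...
alice_shared = pow(bob_public, alice_secret, P)
bob_shared = pow(alice_public, bob_secret, P)

# ...and both arrive at the same shared key, which never crossed the wire.
assert alice_shared == bob_shared
```

An eavesdropper sees only P, G, and the two public values; computing either secret from them is the discrete-logarithm problem believed to be intractable at realistic key sizes.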
Those search results and recommendations you are provided with, that rapid delivery, and that conversational exchange with your smart speaker, not to mention the advertisements and products that you see, are a few illustrations of the many ways in which AI and statistical machine learning are utilized throughout your e-commerce experience. The recommendations are drawn from an immense body of observations of customer buying histories, browsing patterns, and profiles, as well as how you have responded to what you have been presented with in the past, all fed into the algorithmic process of predicting what you will find most beneficial to see. Of course, there is a blurry line between providing you with what is most useful and with what will induce you into certain actions—for example, making a particular purchase. Your item can arrive so quickly because it had been predicted that a certain number of people in your geographic area would be asking for certain products at about this time, and as a result, a certain number of the products were
10 See W. Diffie and M.E. Hellman, 1976, “New Directions in Cryptography,” IEEE Transactions on Information Theory IT-22(6), https://ee.stanford.edu/~hellman/publications/24.pdf. See also R.L. Rivest, MIT Laboratory for Computer Science, 2002, “The Early Days of RSA—History and Lessons,” 2002 ACM Turing Award Lecture, https://people.csail.mit.edu/rivest/pubs/ARS03.rivest-slides.pdf.
12 S. Goldwasser, S. Micali, and C. Rackoff, 1989, “The Knowledge Complexity of Interactive Proof Systems,” SIAM Journal on Computing 18(1): 187-208.
manufactured in time and shipped to your area in advance, leaving you merely to satisfy a prediction from limited local inventory. These capabilities draw on decades of federally funded research in optimization and operations research, steadily advancing since World War II and deeply connected to current issues in theoretical computer science, such as approximability (whether good approximate solutions to a hard problem can be computed efficiently). Federally funded research built the domain of AI starting from the early days of computing. Many of the techniques used today were well established in research by the 1980s, including recommendation, speech recognition, natural language processing, neural networks, and reinforcement learning, but these algorithms could not achieve the effectiveness, ease, and interactivity needed for a satisfying experience until we were able to train on the immense data assets of our connected, digitized world using the vast computing, communications, and storage of the cloud. This is the self-reinforcing nature of our connected world today—our natural speech interfaces, maps and vehicle navigation, predictive manufacturing and logistics, and so on all work only because they are extremely widely used.
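One simple form of the recommendation techniques described above is item-based collaborative filtering over purchase histories: items bought by similar sets of customers are judged similar. The data and scoring below are invented for illustration and are not any production system's method:

```python
# Minimal item-based recommendation sketch: items bought by overlapping
# sets of customers get high cosine similarity. Data is hypothetical.
from math import sqrt

# item -> set of customers who bought it (invented example data)
purchases = {
    "air monitor": {"ann", "bo", "cy", "dee"},
    "hepa filter": {"ann", "bo", "cy"},
    "desk lamp":   {"ann", "zed"},
}

def cosine(a, b):
    """Cosine similarity between two customer sets (binary vectors)."""
    return len(a & b) / (sqrt(len(a)) * sqrt(len(b)))

def recommend(item):
    """Rank the other items by how similar their buyer sets are to item's."""
    buyers = purchases[item]
    scores = {
        other: cosine(buyers, other_buyers)
        for other, other_buyers in purchases.items()
        if other != item
    }
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("air monitor"))  # the filter ranks first: more shared buyers
```

Production systems replace the sets with weighted interaction histories across hundreds of millions of users, which is why, as the text notes, these methods only became effective at the data scale of the modern Internet.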
The COVID-19 pandemic has further exposed the importance of e-commerce as well as its logistical, privacy, and economic vulnerabilities. These forces will surely drive innovation further, but IT innovation by itself cannot address all concerns. Policies that structure market forces and the role of large-scale platforms are needed, for example, to address the economic benefits of small businesses in local communities. As more business moves online, growing expectations for privacy will present new research opportunities. As commerce and social media become more intertwined online, the dangers of misinformation will likewise increase. We expect these drivers, from online privacy and security to deeper personalized and localized experiences, to foster IT innovation across the board.
From enabling fundamental discoveries to transforming service delivery, health care is being revolutionized through the confluence of IT with long-standing practices. Historically, computing and IT have long provided the tools to support the digitization of the health-care system, such as for scheduling,13 operations and
13 National Academies of Sciences, Engineering, and Medicine, 2019, Key Operational Characteristics and Functionalities of a State-of-the-Art Patient Scheduling System: Proceedings of a Workshop—in Brief, The National Academies Press, Washington, DC, https://doi.org/10.17226/25556.
billing, electronic health records, medical supply and logistics systems,14 and pharmaceutical delivery infrastructure. Every facet of the health-care system relies on IT infrastructure for the facilitation of care.
Now, on top of this systemic IT infrastructure, advances in computing are directly transforming health care. Many of the breakthroughs in clinical science, improved surgical procedures, the management of disease, therapeutics, and diagnostics have been enabled by major investments and advances in computing. For example, we now have safer surgeries through innovations in surgical robotics that date back to early research investments in the late 1990s, and advanced diagnostic tools such as ultrasound enabled through fundamental research in automated imaging analysis, developments that have given rise to additional $25 billion industries in health care.
Information technology is now an instigator of change in health care through the introduction of IT-based approaches. Recent advances in machine learning and AI have enabled an unprecedented ability to predict and diagnose disease, discover new individualized therapies, and predict outcomes, giving rise to the notion of “data as a drug.” Mobile technologies, sensors, and health informatics are transforming remote monitoring, telemedicine, self-care, and the management of chronic diseases. These technologies are helping to challenge the fundamental bottleneck in the current system: the implicit assumption that health care can be performed only by clinicians in a face-to-face clinical facility. Always-available remote physiologic monitoring brings the transformational possibility of health care bound by neither time nor space. Thus, much of the growth and transformation of the $6 trillion health-care industry is toward these IT-based approaches.
The COVID-19 pandemic has shown the importance of telemedicine and remote care monitoring, which is now leading to a fundamental change in how we deliver health care moving forward. Mobile technologies have been utilized extensively in contact tracing efforts, vaccine studies, and screening procedures. Similarly, AI and data science have been used in drug discovery and spread modeling during this unprecedented pandemic.
Tracking IT Innovation in Health Care
Almost all aspects of core computing and IT have had an impact on health care (see Figure 5.2), including the following areas: empowering providers and clinicians, empowering patients, and new diagnostics and therapeutics.
14 J.B. Perlin, D.B. Baker, D.J. Brailer, D.B. Fridsma, M.E. Frisse, J.D. Halamka, J. Levi, K.D. Mandl, J.M. Marchibroda, R. Platt, and P.C. Tang, 2016, “Information Technology Interoperability and Use for Better Care and Evidence: A Vital Direction for Health and Health Care,” discussion paper, National Academy of Medicine, https://nam.edu/information-technology-interoperability-and-use-for-better-care-and-evidence-a-vital-direction-for-health-and-health-care.
Empowering Providers and Clinicians
Cyber-physical systems and robotics have long had an impact on health care, allowing clinicians to perform much more complex procedures using surgical robotics and navigation while making surgeries safer, more effective, and less invasive.15 Now widespread,16 robot-assisted surgical systems build on a long history of federally funded research on mobile manipulation. Looking forward, AI is helping clinicians better interpret medical scans and patient reports to improve diagnostics. For example, recent advances in AI have shown better accuracy than human readers in interpreting mammograms.17 Advances in natural language and audio processing make it possible to automatically transcribe patient notes during a clinical visit, reducing errors in note taking and physician burnout.18,19 Automated analyses of
15 U. Mezger, C. Jendrewski, and M. Bartels, 2013, Navigation in surgery, Langenbeck’s Archives of Surgery 398(4): 501-514, doi:10.1007/s00423-013-1059-4.
16 As of December 31, 2019, Intuitive Surgical had an installed base of 5,582 da Vinci Surgical Systems, including 3,531 in the United States, 977 in Europe, 780 in Asia, and 294 in the rest of the world.
17 S.M. McKinney, M. Sieniek, V. Godbole, J. Godwin, N. Antropova, H. Ashrafian, T. Back, et al., 2020, International evaluation of an AI system for breast cancer screening, Nature 577: 89-94, https://doi.org/10.1038/s41586-019-1799-6.
Empowering Patients
Research investments in health informatics and mobile computing, coupled with advances in sensors and wearables, are enabling remote monitoring and management of disease outside the clinic in ways that were previously impossible. For example, wearable and network-enabled glucose meters have improved the management of diabetes. Human-computer interaction and health informatics have brought new methods for preventative medicine, self-care, and behavior change.23 Mobile phones connect patients with peer groups and medical professionals, in addition to providing a platform for screening and diagnostics outside the clinic. Telehealth, which is fueled primarily by IT, was already used by 25 percent of U.S. consumers in 2019 and has a compound annual growth rate of 19 percent.24 Such advances can extend better care to a broader segment of the population that might not have convenient access to a health facility.
New Diagnostics and Therapeutics
Computing plays a central role in genomics, from the sequencing and assembly of DNA to the analysis of genomes to locate genes and patterns that inform new therapies. Similarly, data analysis is central to precision medicine, which applies the most effective therapies to specific groups of individuals. In addition, the ability to capture longitudinal physiological and behavioral data through consumer devices and technology provides new opportunities for early
20 N. Tomašev, X. Glorot, J.W. Rae, M. Zielinski, H. Askham, A. Saraiva, A. Mottram, et al., 2019, A clinically applicable approach to continuous prediction of future acute kidney injury, Nature 572: 116-119, https://doi.org/10.1038/s41586-019-1390-1.
22 On predicting risk of C. Difficile infections in hospitalized patients, see J. Wiens, J. Guttag, and E. Horvitz, 2016, Patient risk stratification with time-varying parameters: A multitask learning approach, Journal of Machine Learning Research 17(79): 1-23.
23 While these advances connect to substantial federal research, notably led by the National Institutes of Health (NIH), in the past decade attention has turned to addressing the gaps between science-driven technology research and mission-driven health research, resulting in programs such as the Smart and Connected Health research program shared by NSF and NIH.
24 J. Harpaz, 2020, “5 Reasons Why Telehealth Is Here to Stay (COVID-19 and Beyond),” Forbes, May 4, https://www.forbes.com/sites/joeharpaz/2020/05/04/5-reasons-why-telehealth-here-to-stay-covid19/#656186c253fb.
diagnostics and screening,25 such as the use of social media to identify depression.26 Continuous, real-time sensors in the home, whether wearables or consumer electronics, now provide an entirely new source of personal health data. For example, disease could be diagnosed far more effectively if clinicians understood a patient's baseline metrics, could examine trends, and could observe the patient cheaply and unobtrusively over long periods, with the ultimate goal of tailoring therapeutics to the specific patient and context.
There is still enormous potential for further breakthroughs in health care, thus arguing for continuing major investments in computing and research. We are just scratching the surface with emerging technologies. While the COVID-19 pandemic has spurred greater use of telemedicine, there are ample opportunities to improve these connections through intelligent agents, as well as ample needs to safeguard patient privacy.
Applying machine learning and inference to new sources of data opens up new questions about safety and regulation. Recent guidance on Software as a Medical Device (SaMD) is moving toward regulating medical software systems as such. However, much of the process still takes a static, device-oriented approach. New advances in computing offer the ability to frequently train new models, run tests in real time, and process large amounts of personal health data, which in turn requires new research into how we build safe and fair systems when some approaches may not be fully explainable.
Our digital footprints can tell us a lot about our health and well-being. Thus, we need to anticipate other sources of data that might not be directly observable by a biosensor and consider those sources from a data science and privacy perspective. Much of the personal health record collected outside of clinical encounters will need to be interpreted and translated so that it can be meaningfully integrated into an individual's medical care.
As we look forward, we must rethink how we fund and invest in innovations in health care. Medicine is increasingly becoming more reliant on advances in computer
25 E. Wang, W. Li, D. Hawkins, T. Gernsheimer, C. Norby-Slycord, and S.N. Patel, 2016, “HemaApp: Noninvasive blood screening of hemoglobin using smartphone cameras,” pp. 593-604 in Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UBICOMP ’16), doi:10.1145/2971648.2971653.
26 M. De Choudhury, M. Gamon, S. Counts, and E. Horvitz, 2013, "Predicting Depression via Social Media," Proceedings of the 7th International AAAI Conference on Weblogs and Social Media, https://www.aaai.org/Library/ICWSM/icwsm13contents.php.
and information technologies, and thus becoming interwoven with cutting-edge IT capabilities. Additionally, more health-care delivery is moving outside the clinical setting, drawing on growing sources of health data that rely heavily on health informatics and human-centered approaches. Interdisciplinary efforts like the current NSF-NIH Smart Health and Wellbeing program exemplify the kinds of programs that incentivize the computing and health communities to work together directly.
The automotive industry remains a quintessentially American invention. Over the past five decades, innovations in IT have transformed the automotive industry, leading to the creation of more complex, sophisticated, and safer vehicles. Advances across IT have fundamentally changed all aspects of the automotive industry.
Just as the Ford assembly lines revolutionized the scale and production of automobiles, the growth of IT capabilities over the past decades has been absorbed into the automotive industry, from the technology integrated into automobiles, from braking to automated parking, to the business of assembling and selling automobiles across global supply chains. Over time, mechanical processes have been completely replaced by digital counterparts, both under the hood and inside the design, testing, and production teams. A car today relies on over 150 processors running 10 million to 100 million lines of code, and traditional drawing boards and physical models are few and far between.27
Over time, the role of IT has changed from a supportive role in a stable yet globally competitive industry to creating transformative opportunities and sparking major disruptions in a sector that has witnessed massive change in the past decade. Computational tools for design, modeling, simulation, and testing have enabled tremendous improvements in the complexity, capabilities, and safety of modern automobiles. Inside the car itself are cutting-edge sensors operating over multiple networks and powered by AI, offering phenomenal improvements in performance, safety, and efficiency. This virtuous cycle between computing driven by modeling and design and computing driven by performance has made it possible to build vehicles that bear more resemblance to science fiction than to the Model T. IT has also disrupted the traditional creation and use of automobiles. Manufacturing is now dominated by robotic assembly lines, creating new
27 B. O’Donnell, 2016, “Your Average Car Is a Lot More Code-Driven than You Think,” USA Today, June 28, https://www.usatoday.com/story/tech/columnist/2016/06/28/your-average-car-lot-more-code-driven-than-you-think/86437052.
economic opportunities, displacing some jobs while also changing the balance of offshore manufacturing and global supply networks. Even the cultural norms around car ownership and use are being renegotiated as consumers adopt ride sharing and contemplate autonomous vehicles.
Tracking IT Innovation in the Automotive Industry
Multiple tracks of IT innovation have played a critical role, as depicted in Figure 5.3.
Not surprisingly, innovations in data center technologies and cloud platforms have been swiftly adopted by these global companies. “What we see in the data center is Moore’s law writ large in terms of compute capacity and storage capacity, centralized computing to distributed technology. All of these things have fundamentally changed the face of the automotive industry.”28 Increased network capacity and advances in virtualization have allowed the consolidation of global data centers and the effective use of high-speed computing across the enterprise.
The design of vehicles is one of the most expensive and time-consuming processes in car manufacturing. Design has migrated from paper-based drafting and clay models to full digitization and greater use of immersive virtual reality (VR) simulation. Moving the design process to the use of computer-aided design (CAD)
28 Nick Bell, Chief Information Officer, General Motors (retired), presentation to the committee on October 3, 2019.
and now VR cuts project time and cost and allows a broader, more diverse set of designers to collaborate across the world. The automotive industry is a key adopter of computer-generated imagery, VR, and other advanced design tools that have a long history of pivotal federally funded research. While many consumer offerings for these tools are driven by the entertainment industry, those commercial offerings are insufficient for the design demands of envisioning physical dynamic characteristics. In response, automotive companies acquire in-house talent to adapt these tools to their high-performance requirements (e.g., field of view, color fidelity, resolution, refresh rate). This acquisition of needed talent also draws from the research ecosystem, either through directly hiring talent or by acquiring recent start-ups. Although the transition from paper to CAD took longer than anticipated, there have not been "any drawing boards for the past 15 years."29 Moving to digital design tools has made the design process more efficient, and the more recent adoption of VR technologies has further decreased time and increased quality by allowing the design team to work with models at scale, better understand the interactions between different components, and drastically reduce the number of prototypes needed for vehicle development. For example, in 2018 one automotive firm reported reducing by 50 percent the number of prototypes it needed to produce before launching a new car model.30 In another 2018 report,31 Renault reported saving 2 million per year by using VR and reducing the design time of its cars by 20 percent. Likewise, Jaguar Land Rover reported that VR helped it save 4 million in only 5 weeks. The time savings and management of complex designs will have an even greater impact as automakers resume operations following COVID shutdowns, adjust summer retooling of manufacturing plants, and delay new product launches.
IT also has significant impacts on automotive marketing. Commercials for new cars are created using high-definition models and animation rather than filming physical cars. Marketing and sales have been transformed on the heels of e-commerce and the availability of in-house data analytics capabilities. Dealerships now offer immersive VR experiences32 to advance sales while allowing customers to
29 Nick Bell, Chief Information Officer, General Motors (retired), presentation to the committee on October 3, 2019.
30 improovr3, 2018, “The Use Cases and Benefits of VR In the Automotive Industry,” https://www.improovr.com/blog/the-use-cases-and-benefits-of-vr-in-the-automotive-industry.
31 Capgemini, 2018, “Immersive Technology Has Arrived: AR and VR Set to Become Mainstream in Business Operations in the Next 3 Years,” September 7, https://www.capgemini.com/news/ar-and-vr-in-operations.
32 S. Killian, 2017, “Audi Launches Virtual Reality Technology in Dealerships,” press release, August 30, https://www.audi-mediacenter.com/en/press-releases/audi-launches-virtual-reality-technology-in-dealerships-9270.
individually customize their cars and experience their customized vehicle in different light, sound, and seasonal conditions.33
The transition from physical to digital modeling has led the way for advanced simulation and testing via computer-aided engineering and high-performance computing. These investments have tracked Moore's law for the acquisition of teraflops of computing, gigabytes of storage, and high-speed connectivity across the globe. Integrated, full-vehicle solid models allow for full crash simulation, including mixed metals, adhesives, and polycarbonates. Full-scale simulation, modeling, and virtual crash testing have enabled dramatic increases in vehicle safety. "You can't build enough physical cars in real life for crash testing"34 compared to what is available through software simulation. Design through simulation proceeds faster and handles more complexity than was feasible through physical modeling and traditional testing.
Improvements in automotive safety do not end with virtual crash testing but extend to the safety and quality assurance of car components, safety features integrated into vehicle performance, and the usability and safety of the driving experience. Key safety features powered by IT include electronic stability control, adaptive cruise control, automatic emergency braking, and a growing number of sensors to help detect and avoid collisions. These advances are commonly rooted in sustained federal science and defense research. Collectively, there were 7,700 fewer driver deaths in the United States in 2012 than there would have been had vehicles remained the same since 1985.35 While these components address the safety of the driving experience, innovations in IT are also helping assess the probability and consequences of failure for all vehicle systems, from the infotainment unit to braking, and manufacturers are shifting to open standards to improve quality assurance while maximizing flexibility in their supply chains. Finally, digital design tools, including VR, are enabling a more robust design and evaluation process that assesses possible driver errors across a potentially infinite set of testing conditions.
From the industrialization and optimization of the factory floor for the production of the Model T to today's robot-driven assembly operations, the construction of vehicles has followed new technical capabilities. An integrated supply chain and just-in-time logistics make it possible to build a wide variety of product
33 M. Carlsson and T. Sonesson, 2017, “Using Virtual Reality in an Automotive User Experience Development Process,” https://pdfs.semanticscholar.org/9e06/899d42d13dc7f257b09e82ff92c689b16de2.pdf.
34 Nick Bell, Chief Information Officer, General Motors (retired), presentation to the committee on October 3, 2019.
35 S. Karush, ed., and K. Stewart, 2015, “Saving Lives: Improved Vehicle Designs Bring Down Death Rates,” Insurance Institute for Highway Safety Status Report Newsletter, Vol. 50, No. 1, https://www.iihs.org/api/datastoredocument/status-report/pdf/50/1.
variants to order. As in the airline industry, predictive analytics fueled by real-time data capture have led to major cost savings in preventive maintenance and process control. Indeed, the assembly plants themselves have been redesigned based on virtual design simulations to optimize ergonomics and material handling. As depicted in Appendix B, research innovations in robotics and predictive analytics have found their home in the modern automobile. With the majority of assembly done by robots, lighting and layout are optimized for machine operation and human controllers. Moreover, these plants use complex network topologies and depend on sophisticated cybersecurity defenses to prevent network and physical breaches.
Vehicles themselves have shifted from complex mechanical inventions to even more complex cyber-physical systems; each vehicle now includes tens of electronic control units, over 100 sensors, and 100 million lines of code, with 25 GB of data per hour streaming across multiple networks. Most of the added value of a vehicle today comes from electronics; the mechanical parts are a diminishing percentage. Going forward, advanced driver assistance integrates GPS and HD mapping, real-time map management, and optical, radar, sonar, and lidar data in real time. Power management and the requirement for robust technologies that work at extreme temperatures (e.g., a parked car's interior can surpass 150 degrees in the summer) create challenges that surpass the capabilities of consumer technologies such as smartphones.
The innovations in IT that have driven these remarkable transformations have been spurred by a wide portfolio of federal research funding—ranging from robotics, AI and sensing, to VR and human-computer interaction. In fact, the resurgence of research and innovation depicted elsewhere in this report (virtualization, VR, and AI) have all contributed to the speed and impact of change in the automotive industry. Additionally, the students trained in these fields have filled a growing demand for IT professionals.
The confluence of these IT capabilities is generating another major point of change for the automotive industry. The ease of access and affordability of these platforms (data centers, virtual design, automated manufacturing) has spurred the entry of automotive start-ups.36 While Tesla dominates the field today, new players such as Wheego, Coda Automotive, Fisker Automotive, and Tango point to increased innovation and disruption, especially in electric vehicles and "mobility as a service" (MaaS) capabilities.
36 D. Carney, 2013, “10 New Car Companies Aiming for the Big Leagues,” NBC News, http://www.nbcnews.com/id/40887273/ns/business-autos/t/new-car-companies-aiming-big-leagues#.X427VtBKh9A.
New safety features will continue to improve safety and lower the human and economic costs of averted and actual collisions. Image recognition will play a greater role in recognizing dozing drivers, lane drift, and nearby obstacles, including pedestrians and animals. Further scaling of autonomous fleets will generate petabytes of data and require advanced analytics to optimize transport and supply chains. With continued research investment, the dynamic push of innovations in sensing, analytics, and security systems will continue to couple with the pull of demand for increasingly safe and autonomous vehicles. Image processing and object recognition capabilities are leading the way to fully autonomous consumer vehicles, but innovation gaps remain, from power management to human factors and interaction design.
“Wait! When you watched football as a kid, you couldn’t see the line of scrimmage or first down line on the field?”
Watching professional sports on television is a fundamentally different experience today from what it was before real-time image processing made it possible to project computer-generated images on top of a live television feed. We take for granted that the image we see on television clearly shows the line of scrimmage and first down line in football, the line indicating an offside player in soccer, or the strike zone and the placement of the baseball. Technology has slowly crept into every aspect of the sports world. From cloud-based apps for fan engagement to wearable sensors that monitor athlete performance, and from graphics and augmented reality that bring fans closer to the game to data analytics that inform recruiting, training, and coaching, IT has transformed "real" sports and ushered in a new era of fantasy and esports.
IT is now transforming the business and experience of sports. Esports offers a ready example of computing, in the form of electronic games, both adopting and upending the traditional practices of sports and fan engagement. While fans have grown to expect the real-time image processing that creates the dynamic first down line, these capabilities are also transforming officiating and player performance. In fact, technological umpires, rather than human ones, were slated to start calling balls and strikes in some minor league baseball parks in 2020.
Tracking IT Innovation in Sports
This section examines computing innovations in several aspects of the sports industry—the fan experience, recruiting, coaching and training, the weekend warrior, fantasy sports, and esports (see Figure 5.4).
The Fan Experience
Few fans can remember a time when television coverage of sporting events was not accompanied by instant replay, graphical overlays, and customized advertisements projected on green screens behind home plate. Fans are also likely unaware that these techniques connect back to decades of research in computer graphics and VR, with the first mixed-reality overlays appearing in 1992 as projections of sensory information onto a workspace to improve human productivity.37
But these augmentations are only the tip of the iceberg. The same technology that brought apps to mobile phones gives fans round-the-clock access to their favorite sports teams, with apps for both individual teams and entire leagues. Similar technology provides fans in-venue apps as well, offering services ranging from
37 M. Pesce, 2019, “Augmented Reality—The Past, The Present and The Future,” https://www.interaction-design.org/literature/article/augmented-reality-the-past-the-present-and-the-future.
food delivery to your seat, to ticket sales, to game-day plans. Research in virtual and augmented reality gives fans a chance to compete with other fans in friendly competition or don virtual face paint in support of their favorite team. Practically every professional sporting organization has adopted some form of IT to augment its television and live-audience experiences. Breakthroughs in machine learning and real-time analytics let fans follow their favorite players in ways that go far beyond traditional player statistics.
Recruiting, Coaching, and Training: Welcome to the Age of Data Analytics
The publication of Michael Lewis's Moneyball: The Art of Winning an Unfair Game38 was a watershed moment in professional sports. It documented Billy Beane's application of quantitative models to recruit players for the Oakland Athletics (A's) in the early 2000s. The A's essentially fielded a top team for about one-third the cost of other teams. Much of the data on which Moneyball is based was collected manually. However, as analytics has crept into football, basketball, soccer, and virtually every other professional sport, IT such as wearable sensors, image and video processing, and GPS technology has transformed how professional athletes are recruited, trained, and coached. Football shoulder pads have embedded chips; race car drivers wear biometric gloves; soccer players wear clothing with embedded sensors;39 even weekend warriors track their athletic pursuits with meticulous regularity thanks to consumer-grade devices such as Fitbits, Apple Watches, and smartphones with apps such as Strava. This virtuous cycle of wearable sensors and data analytics drove research ranging from spaceflight to the future of fabrics and fashion. The hugely successful Fitbit broke into the commercial scene in late 2010.
Coaches and trainers have access to more data than ever before, thanks to these myriad monitoring devices. However, much work remains to be done. Predictive modeling is useful, but it does not reveal the underlying physiological phenomena that coaches, trainers, and therapists need to understand to keep athletes in peak form. Thus, data scientists work hand in hand with coaches and trainers to transform the new modalities of data into actionable information that helps athletes stay healthy, recover from injury, and perform better.
38 M. Lewis, 2003, Moneyball: The Art of Winning an Unfair Game, W.W. Norton & Company, New York.
Logistics and Security
A long history of research in optimization has found a home in the IT infrastructure and techniques employed by sports teams managing travel and stadiums.40
Stadiums are ad hoc cities: the largest in the United States, Michigan Stadium, holds over 100,000 people, and the top 100 U.S. stadiums each hold over 50,000. Safely moving that many people in and out of a stadium over the course of a few hours requires technology infrastructure akin to that of airports and other transportation hubs. Major sports teams utilize a host of information technologies to quickly screen attendees while camera-based systems scan for known offenders. Now, in the era of safe distancing due to COVID-19, many new layers of monitoring will be needed to help ensure player and fan safety.
Road games41 present the most complex logistical problem that every sports franchise faces during the course of a season: an entire team's training, operations, and coaching setup has to be assembled, transported, and supported on the road, then disassembled and transported again a couple of days later. Advances in transportation and supply-chain technology have been addressing, and continue to innovate to address, these problems.
The Weekend Warrior
The same technology used to quantify and improve elite athlete performance has become available to even the most casual athlete. Consumers use wearable sensors to track the number of steps they take each day, how far they move, how frequently they move, and how well they sleep. These devices become more capable each year, blurring the line between wearable fitness tracking devices and medical tracking devices. AI algorithms have allowed us to glean activity insights from even simple and low-cost sensors like accelerometers that are becoming integrated into commodity devices like watches and shoes.
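The kind of activity insight described above can be illustrated with a minimal sketch: a naive step counter that runs peak detection over the accelerometer magnitude signal. The threshold and debounce values below are illustrative assumptions, not parameters of any commercial device; real trackers use considerably more sophisticated signal processing.

```python
import math

def count_steps(ax, ay, az, threshold=1.2, min_gap=3):
    """Count steps with simple peak detection on accelerometer magnitude.

    ax, ay, az: lists of acceleration samples in g units.
    threshold: magnitude (in g) a peak must exceed (hypothetical value).
    min_gap: minimum number of samples between counted peaks (debounce).
    """
    # Combine the three axes into an orientation-independent magnitude.
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in zip(ax, ay, az)]
    steps, last_peak = 0, -min_gap
    for i in range(1, len(mags) - 1):
        # A step candidate is a local maximum above the threshold.
        is_peak = (mags[i] > threshold
                   and mags[i] >= mags[i - 1]
                   and mags[i] >= mags[i + 1])
        if is_peak and i - last_peak >= min_gap:
            steps += 1
            last_peak = i
    return steps
```

For example, a flat 1.0 g signal (a device at rest) yields zero steps, while a signal with periodic spikes above the threshold counts one step per spike.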
The combination of wearable sensors with social networking has produced virtual communities of casual athletes who can compete with each other or select the same physical locations for their workouts. Alternatively, thanks to telepresence, athletes can engage in group fitness classes without ever leaving the privacy of their own homes.
40 D.M. Herold, T. Breitbarth, N. Schulenkorf, S. Kummer, 2019, Sport logistics research: Reviewing and line marking of a new field, International Journal of Logistics Management 31(2): 357-379, accessed prior to publication at https://pdfs.semanticscholar.org/a584/22582eb010e8e182f52a292b07db23bc5996.pdf.
41 G. Wollenhaupt, 2017, “Baseball Logistics: Hitting a Home Run Every Trip,” Inbound Logistics, https://www.inboundlogistics.com/cms/article/baseball-logistics-hitting-a-home-run-every-trip.
Technology has evolved to provide immersive, real-time group training for sports enthusiasts that builds community and fitness at the same time. Peloton leads the way in this regard, but the methodologies are being adopted for other training regimens as well, such as CrossFit.
Fantasy Sports
The professional sports world and the nonprofessional world collide in the area of fantasy sports, where anyone can field a professional sports team. As early as the 1960s, sports fanatics gathered to create fantasy baseball teams composed of players from the professional leagues. Fantasy players, acting as team managers, would draft real players at the beginning of a season and trade them throughout. They manually translated game-day box scores into fantasy team performance to determine winners and losers. Once pursued only by the most hardcore fans, this quaint hobby has become a big business thanks to the introduction of IT.
Every sport these days has myriad online fantasy leagues with varied formats, where you can create your own league with friends or join one with strangers, for fun or for money. Some formats mimic the original season-long ones, whereas others operate on a game-by-game42 or weekly basis. For example, each week participants choose players constrained by a budget, and winners and losers are determined by that week's actual games. IT allows team scores to update in real time, so fantasy managers frequently watch games while refreshing the statistics to see, for example, how many more catches their wide receiver needs for them to beat their opponent. Just as real managers, coaches, and trainers use analytics, the predictive analytics market for fantasy sports, that is, forecasting which players will score the most fantasy points, is now a billion-dollar industry. Online fantasy gambling has also become a billion-dollar industry. Broadcasters have embraced this model and cater to it, at very high return on investment, with offerings such as RedZone and Sunday Ticket that provide highlights of live games with fantasy score tracking.
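The weekly budget-constrained format described above is, at heart, a small selection problem. The sketch below shows the idea with a brute-force search over a hypothetical player pool (the names, salaries, and point projections are invented for illustration); real platforms and analytics vendors use far more sophisticated optimizers and projections.

```python
from itertools import combinations

def best_lineup(players, budget, size):
    """Pick the `size`-player lineup with the highest projected points
    whose total salary fits under `budget`.

    Brute force over all combinations; fine for a small illustrative
    pool, though real contests require proper optimization.
    """
    best, best_pts = None, float("-inf")
    for combo in combinations(players, size):
        cost = sum(p["salary"] for p in combo)
        pts = sum(p["proj"] for p in combo)
        if cost <= budget and pts > best_pts:
            best, best_pts = combo, pts
    return best, best_pts

# Hypothetical weekly pool: salary in fantasy dollars, proj = projected points.
pool = [
    {"name": "A", "salary": 9000, "proj": 20.0},
    {"name": "B", "salary": 7000, "proj": 15.0},
    {"name": "C", "salary": 5000, "proj": 12.0},
    {"name": "D", "salary": 4000, "proj": 9.0},
]
lineup, pts = best_lineup(pool, budget=12000, size=2)
```

Here the highest-scoring pair that fits the 12,000 budget is B and C, for 27 projected points; the better players A and B together would exceed the cap.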
Expanding the definition of sports just a bit further takes us to the world of esports, which is the professional version of video gaming. Esports have become a big
42 Daily fantasy sports (DFS) are an accelerated variant of traditional fantasy sports that are conducted over short-term periods, such as a week or single day of competition. The U.S. daily fantasy sports industry is currently dominated by FanDuel and DraftKings. As of September 2015, both companies had estimated values of at least $1 billion and together controlled 95 percent of the U.S. DFS market.
business, recently reaching the $1 billion mark. Esports also draw a huge fan base—the 2017 League of Legends World Championship attracted 80 million viewers, almost two-thirds of the largest Super Bowl viewing audience ever. Professional esports organizations recruit and compete for top players, and over 50 colleges have varsity esports teams.
The entire esports world would not exist without IT. The games themselves are virtual, developed for the computer and online world. Competitions can take place either in large arenas, as with conventional sports, where fans watch co-located teams spar in virtual worlds, or with remote contestants across the globe. Fans watch training and matches online, tuning in to a livestream, and also pack large arenas to watch on jumbo screens. Top players attract a fan base by regularly streaming themselves playing, and millions of fans tune in to watch their favorite players in action. A game's success is a function of its creative content as well as its visual appeal and realistic feel; the best games provide an immersive experience that borders on VR. Advances in computer hardware and software, for example, high-end graphics processing units (GPUs), graphics, VR, and human-computer interaction, come together to produce compelling game experiences. And, as with practically all other areas of sports, the collection and availability of detailed play-by-play data have produced an industry in esports analytics.
As technology evolves and is further embedded into sports, we expect to see rapid innovation in sports technology in player development and training and, above all, in the fan experience. Ubiquitous communication technologies, leveraged in ever more ways, will change how fans consume and experience sports. The stadium ambience will be replicated in the home environment and pushed onto mobile devices. Capture technology and production methodologies will evolve and adapt to growing consumer demand to further these experiences.
Sports teams and the major leagues have already sought to investigate these experiences, partnering with solutions providers in collaboration with research universities. As content providers evolve from the TV model to more of an on-demand streaming model, they too benefit from these collaborations, and we can foresee these providers leading the charge in funding and executing on these solutions.
These anticipated economic returns continue to tap into long-standing, federally funded academic research programs. Looking forward, advances in player safety, from detecting concussions on the field to advanced analytics to predict injury and recovery, will be driven by computing research that also informs health care and
military operations. Likewise, ensuring fan safety in increasingly monitored stadiums and compelling fan experiences at home will tap into research to ensure safety in public settings and mixed-reality research that connects to many markets in training, education, and entertainment.
On the surface, an industry such as agriculture seems far removed from IT, and yet farmers and the rest of the agriculture sector are increasingly using robots, sensors, machine learning, and other IT to address food and sustainability challenges across the globe (see Figure 5.5).
Farm equipment vendors sell equipment that automates the performance of tasks such as plowing, weeding, and harvesting by leveraging advances in areas such as machine vision, robotics, and autonomous vehicles, following a long history of scientific and defense-funded research. Although the absence of other vehicular traffic in some ways makes it easier to autonomously control a tractor than an automobile, there are additional challenges of operating in a farm field. For example, the terrain is highly irregular, visibility is obscured by crops, and the environment is more
difficult to map because it is constantly changing. The capabilities now common in farm equipment frequently are rooted in an innovation chain that spans from basic research to applied research in robotics and automobiles—creating customer demand in more conservative markets such as agriculture. Much of the human factors research43 that historically brought together psychology and computer science to fuel early work in human-computer interaction has also continued to inform the design of usable and safe farm equipment. “A combine is basically a factory on wheels, with the cockpit of a plane.”44
Likewise, these robotic systems are driven by data from sensors, drones, satellites, and the Global Positioning System, coupling mechanical automation with dynamic surveillance data. Complementing these advances in automation, IT companies45 are developing analytics and decision-making services that help manage farms. Long-standing agriculture suppliers are also introducing advanced sensors and data analytics capabilities to their increasingly automated tractors and combines. This data-driven or precision agriculture has been found to improve yields, reduce costs, and ensure sustainability by reducing water and pesticide use. However, precision agriculture does not come without its challenges. One barrier to widespread adoption is the cost of instrumenting farms with enough sensors to achieve the spatial resolution of soil and moisture data that farmers need. To address these gaps, companies use a combination of satellite and drone imagery, coupled with sampled sensors and machine learning models, to generate precision maps of soil parameters such as moisture level and pH that guide the best possible treatment. Multiple data sources are used along with weather predictions and models of best practice to generate crop management recommendations for farmers.
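The map-filling step, estimating soil conditions everywhere from a handful of sensors, can be illustrated with simple inverse-distance weighting. This is a stand-in for the satellite, drone, and machine learning fusion described above, not any vendor's actual method; coordinates and readings are illustrative:

```python
# Sketch: estimate soil moisture across a field from sparse sensors
# using inverse-distance weighting (nearer sensors count more).
def estimate(x, y, sensors, power=2):
    """Estimate moisture at (x, y) from (sx, sy, reading) sensor tuples."""
    num = den = 0.0
    for sx, sy, reading in sensors:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0:
            return reading            # exactly on a sensor: use its reading
        w = 1.0 / d2 ** (power / 2)   # weight falls off with distance
        num += w * reading
        den += w
    return num / den

# Three sensors (x, y in meters; moisture as a fraction).
sensors = [(0, 0, 0.22), (100, 0, 0.31), (50, 80, 0.27)]
# Fill in a coarse grid over the field from those three points.
grid = [[estimate(x, y, sensors) for x in range(0, 101, 25)]
        for y in range(0, 81, 20)]
```

Production systems replace this simple interpolation with learned models that also ingest imagery and terrain, but the goal is the same: dense maps from sparse, expensive measurements.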
Such data fusion and modeling can be used, for example, for micro-climate forecasting, providing farmers with another tool for managing their crops. In one scenario, the regional weather forecast called for a temperature of 41 degrees Fahrenheit, but a microclimate forecast based on locally collected data predicted 31 degrees in the lower areas of a field. When the farmer checked the area in question, the actual temperature turned out to be 30 degrees, which stopped the farmer from spraying grass herbicide too early and avoided unnecessary crop damage.
A fundamental challenge in precision agriculture is the limited communications infrastructure typical in rural areas. As a result, the network connection between
43 S. Reid, 2017, Down on the farm: Human factors psychologist Margaux Ascherl optimizes technology to make farming more efficient, American Psychological Association 48(11), https://www.apa.org/monitor/2017/12/job-ascherl.
44 S. Reid, 2017, Down on the farm: Human factors psychologist Margaux Ascherl optimizes technology to make farming more efficient, American Psychological Association 48(11), p. 66, https://www.apa.org/monitor/2017/12/job-ascherl.
45 See, for example, Microsoft Research’s FarmBeats project.
a farmer’s home or office has limited bandwidth and is prone to outages. Edge46 computing technology can be used to create an on-farm private cloud infrastructure linked to and remotely managed by a public cloud service. This configuration allows local processing when communications are limited, enables use of greater computational capability in the cloud when desired and available, and eliminates the need for local IT support staff. Corresponding innovations in “WhiteFi” networking allow the repurposing of analog TV broadcast frequencies for wireless networks that span miles, allowing for more cost-effective installations. These innovations illustrate a virtuous cycle in which long-standing research in networking, computer systems, and architecture meets the specific requirements and demands of rural settings, driving new conceptualizations of networking and cloud computing.
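The edge-first pattern at the heart of this configuration is straightforward: process each reading locally so the farm keeps working during an outage, and queue data for upload whenever the link to the public cloud comes back. A minimal sketch, with a hypothetical `EdgeNode` class and a placeholder upload call (no real cloud API is implied):

```python
import queue

class EdgeNode:
    """Sketch of an on-farm edge node: local processing, best-effort sync."""

    def __init__(self, cloud_is_up):
        self.cloud_is_up = cloud_is_up   # callable probing connectivity
        self.backlog = queue.Queue()     # readings awaiting upload

    def ingest(self, reading):
        result = self.process_locally(reading)  # works even when offline
        self.backlog.put(reading)               # remember for later sync
        self.flush()                            # upload if the link is up
        return result

    def process_locally(self, reading):
        # Illustrative local decision: flag dry soil immediately.
        return {"soil_moisture_ok": reading["moisture"] >= 0.20}

    def flush(self):
        while self.cloud_is_up() and not self.backlog.empty():
            self.upload(self.backlog.get())

    def upload(self, reading):
        pass  # placeholder for a real cloud API call

offline = EdgeNode(cloud_is_up=lambda: False)
print(offline.ingest({"moisture": 0.18}))  # local alert despite no link
```

The farmer gets an immediate answer either way; only the richer cloud-side analytics wait for connectivity.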
Another application of IT to farming is illustrated by a modern, high-tech dairy farm. For decades, livestock have been tagged with RFID (radio frequency identification) units. Created in the 1970s at Cornell University, these “electronic cow tags”47 were an early civilian use of the “friend or foe” radio transponder technology developed during World War II. Now in a much smaller and more cost-effective passive form, pervasive RFID tags help track individual cows as they move through the milking parlor and record key attributes associated with each animal in a database. Newer robotic systems read the tags to determine which cow they are about to milk and use a combination of machine vision and laser scanners to automatically find, clean, and connect milk cups to a cow’s teats. The robot also tracks critical features of the milk, such as fat content, and records the information in the database so that farmers can adjust feed to maximize productivity. Such automation has many advantages, including the ability to milk more frequently and a perception of a lower-stress environment for the cattle. Perhaps just as important, the automation helps fill a labor48 need that is increasingly difficult to satisfy.
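The per-cow record-keeping loop can be sketched in a few lines: key each milking record by the tag ID the robot reads, then derive a feed recommendation from the accumulated milk data. The tag IDs, record fields, and feed rule below are illustrative assumptions, not any vendor's actual logic:

```python
# Sketch of robotic-milking record keeping, keyed by RFID tag.
herd_db = {}  # tag_id -> list of milking records

def record_milking(tag_id, volume_liters, fat_pct):
    """Log one robotic milking session for the cow with this tag."""
    herd_db.setdefault(tag_id, []).append(
        {"volume": volume_liters, "fat_pct": fat_pct})

def feed_adjustment(tag_id, target_fat=3.8):
    """Suggest a feed change from the cow's average milk-fat content."""
    records = herd_db[tag_id]
    avg_fat = sum(r["fat_pct"] for r in records) / len(records)
    if avg_fat < target_fat:
        return "increase energy-dense feed"
    return "maintain current ration"

record_milking("cow-1041", 12.5, 3.6)
record_milking("cow-1041", 11.8, 3.7)
print(feed_adjustment("cow-1041"))  # average fat 3.65 < 3.8 target
```

The value is in the closed loop: each milking updates the database, and the database in turn tunes each cow's feed.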
Although IT has played a pivotal role in transforming agriculture, including enabling the shift to large-scale agribusiness, researchers are now considering how IT can help the sector respond to emerging challenges. The modern food supply chain is creating food waste, food deserts, hidden costs, and health and economic risks.
46 University of California, Santa Barbara, “SmartFarm,” https://sites.cs.ucsb.edu/~ckrintz/projects/index.html, accessed July 1, 2020.
47 National Museum of American History, “Electronic Cow Tag,” catalog number 2013.0026.11, https://americanhistory.si.edu/collections/search/object/nmah_1437998, accessed July 1, 2020.
48 L. Grooms, 2019, “Robotic-Milking Systems Reaching Tipping Point,” Agri-View, January 17, https://www.agupdate.com/agriview/news/dairy/robotic-milking-systems-reaching-tipping-point/article_ec029e90-c741-5b23-9053-105df54e304b.html.