4 AI Resurgence
Pages 47-61



From page 47...
... Formal checking technologies are now used to improve the quality of hardware and software systems, but they are not yet used routinely, nor do they cover all possible ways that computer systems can fail. Greater use will come as better tools are developed that integrate more smoothly into programming languages and software development environments.
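As a rough, hedged illustration of what such formal checking tools do, the sketch below uses the Z3 SMT solver's Python bindings (an assumed dependency, not a tool named in the text) to check a tiny invented program property for all possible inputs rather than for a handful of test cases.

    # Minimal sketch of mechanized formal checking, assuming the z3-solver package.
    # The "program" (max of two integers) and its specification are illustrative.
    from z3 import And, If, Int, Not, Or, Solver, unsat

    a, b = Int("a"), Int("b")
    mx = If(a >= b, a, b)            # the program under check

    # Specification: the result is at least as large as each input and equals one of them.
    spec = And(mx >= a, mx >= b, Or(mx == a, mx == b))

    s = Solver()
    s.add(Not(spec))                 # ask the solver for any violating input
    if s.check() == unsat:
        print("property holds for every pair of integers")
    else:
        print("counterexample:", s.model())

Unlike testing, the solver's "unsat" answer rules out failures over the entire input space, which is the kind of guarantee the text refers to.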
From page 48...
... It traces salient intellectual threads in these five areas and provides examples of federal research programs and individual grants that propelled these advances.
MACHINE LEARNING AND NEURAL NETWORKS
Machine learning is the subfield of AI that studies how computing systems can automatically improve through experience.
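A minimal sketch of that definition in code, using plain NumPy on synthetic data invented here: a linear predictor whose error shrinks as it is shown more examples, which is the sense in which a system "automatically improves through experience."

    # Toy "learning from experience": a linear model trained one example at a
    # time by gradient descent; its error drops as it sees more data (synthetic).
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, size=200)
    y = 3.0 * x + 0.5 + rng.normal(scale=0.1, size=200)   # hidden target relationship

    w, b, lr = 0.0, 0.0, 0.1
    for step, (xi, yi) in enumerate(zip(x, y), start=1):
        err = (w * xi + b) - yi
        w -= lr * err * xi                                  # gradient step for the weight
        b -= lr * err                                       # gradient step for the bias
        if step % 50 == 0:
            mse = float(np.mean((w * x + b - y) ** 2))
            print(f"after {step} examples: mean squared error = {mse:.4f}")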
From page 49...
... A variety of machine learning approaches were developed, including decision trees and support vector machines.3 There was a burst of related work in the mid-1990s on probabilistic approaches to time-series prediction and control problems that drew on earlier frameworks such as adaptive control models, Kalman filters, and Markov decision processes. Applications of machine learning, and its impact on the economy, have grown steadily since then, including marketing and advertising, recognition of addresses and automatic sorting of mail, spam detection and email prioritization, online recommendation engines, medical diagnosis, and many more.
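To make the two named methods concrete, the hedged sketch below fits a decision tree and a support vector machine with scikit-learn (an assumed dependency) on a bundled toy dataset rather than any application from the text.

    # Decision tree and support vector machine, assuming scikit-learn is installed.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    for model in (DecisionTreeClassifier(max_depth=3), SVC(kernel="rbf")):
        model.fit(X_train, y_train)
        print(type(model).__name__, "test accuracy:", model.score(X_test, y_test))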
From page 50...
... Machine learning is now fundamental to the operations of large cloud providers, Internet search companies, and social network companies, as well as to customer modeling, robotic applications in factories and on farms, and the analysis of data in basic sciences from biology to physics. Machine learning lies at the heart of many AI applications that rely on computer vision, speech recognition, natural language processing, chatbots, and conversational interfaces to computers.
From page 51...
... Reasoning methods in AI include computational procedures for applying sets of inferential rules, such as rules of logical reasoning (for example, to prove theorems from sets of axioms) or rules of statistical or decision-theoretic inference used to identify patterns, compute likelihoods of different outcomes, or determine the best actions given multiple pieces of evidence. Reasoning can also be performed heuristically or approximately, for example, by reasoning qualitatively about knowledge of causes and effects and of chains of causation.
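A minimal sketch of rule application in the logical style described above, with rules and facts invented for illustration: forward chaining repeatedly applies if-then rules to known facts until nothing new can be derived.

    # Forward-chaining inference over hand-written if-then rules (illustrative).
    rules = [
        ({"bird"}, "has_feathers"),
        ({"bird", "healthy"}, "can_fly"),
        ({"can_fly", "migratory"}, "flies_south"),
    ]
    facts = {"bird", "healthy", "migratory"}

    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)      # the rule fires; record its conclusion
                changed = True

    print(sorted(facts))   # includes the derived facts "can_fly" and "flies_south"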
From page 52...
... The models could be constructed by assessing distinctions and probabilistic relationships from experts as well as directly from data. Researchers developed families of Bayesian inference algorithms for performing coherent probabilistic reasoning within Bayesian networks.
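As a hedged illustration of coherent probabilistic reasoning, the sketch below encodes a two-node network (Disease -> Test, with made-up probabilities) and computes a posterior by direct enumeration; real Bayesian-network tools use more sophisticated algorithms, but the arithmetic is the same in spirit.

    # Two-node Bayesian network with invented numbers; posterior by enumeration.
    p_disease = 0.01
    p_pos_given_disease = 0.95      # test sensitivity
    p_pos_given_healthy = 0.05      # false-positive rate

    # P(disease | positive) via Bayes' rule over the joint distribution.
    p_pos = (p_pos_given_disease * p_disease
             + p_pos_given_healthy * (1 - p_disease))
    p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
    print(f"P(disease | positive) = {p_disease_given_pos:.3f}")   # about 0.161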
From page 53...
... In the 1990s, learning procedures were developed to learn the structure as well as the parameters of Bayesian networks directly from data. Prominent applications of Bayesian networks include medical diagnosis, machine troubleshooting, traffic prediction and routing, document analysis, user-preference modeling, financial modeling, and pattern recognition, such as identifying junk email.7 On the latter, spam filters demonstrated the power and value of probabilistic methods in daily life.
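A hedged sketch of the spam-filtering idea mentioned here, on a tiny corpus invented for illustration: a word-level naive Bayes model learns word frequencies per class and then scores a new message with Bayes' rule.

    # Word-level naive Bayes spam filter on an invented six-message corpus.
    import math
    from collections import Counter

    spam = ["win money now", "cheap money offer", "win a prize now"]
    ham = ["meeting at noon", "project update attached", "lunch at noon"]

    def counts(msgs):
        return Counter(w for m in msgs for w in m.lower().split())

    spam_counts, ham_counts = counts(spam), counts(ham)
    vocab = set(spam_counts) | set(ham_counts)

    def log_score(message, word_counts, prior):
        total = sum(word_counts.values())
        score = math.log(prior)
        for w in message.lower().split():
            # Laplace smoothing so unseen words do not zero out the probability.
            score += math.log((word_counts[w] + 1) / (total + len(vocab)))
        return score

    msg = "win cheap money now"
    spammy = log_score(msg, spam_counts, 0.5) > log_score(msg, ham_counts, 0.5)
    print("spam" if spammy else "ham")   # prints "spam" for this message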
From page 54...
... van den Driessche, J. Schrittwieser, et al., 2016, Mastering the game of Go with deep neural networks and tree search, Nature 529: 484-489, https://doi.
From page 55...
... Government funding for natural language research started as early as the 1950s with early efforts to translate Russian into English, and continued in the 1960s with support for two distinct threads of research -- one on speech recognition and the application of speech-to-text systems, and a second on text analysis. Research on speech-to-text transcription was consistently funded by a number of federal agencies (especially the Defense Advanced Research Projects Agency (DARPA)
From page 56...
... . Over the past decade, the adoption of deep neural network approaches to text analysis has yielded a significant improvement in accuracy for many language processing tasks, including information extraction and summarization, sentiment analysis, and machine translation.
From page 57...
... , and since 2018, the dissemination of deep neural networks (e.g., ELMo, BERT, GPT, and Turing) that have been pre-trained on billions of words of text data and that serve as a general substrate for developing more targeted natural language applications.
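A hedged sketch of using such a pre-trained model as a substrate for a narrower task, assuming the Hugging Face transformers package is installed; the pipeline downloads a default pre-trained sentiment model (a distilled BERT variant chosen by the library, not a model named in the text) and applies it with no additional training.

    # Applying a pre-trained language model to a targeted task via the
    # `transformers` pipeline API (assumed dependency; downloads a default model).
    from transformers import pipeline

    classifier = pipeline("sentiment-analysis")
    print(classifier("The grant program produced remarkable advances."))
    # e.g., [{'label': 'POSITIVE', 'score': 0.99...}]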
From page 58...
... And the technology research firm International Data Corporation estimates that the robot market was worth $135 billion in 2019.14 As the research investment pays off, and robots move from our imaginations into our homes, offices, and factory floors, they will become the partners that help us do so much more than we can do alone. Many of the advancements in autonomous robotics can be traced15 back to initial robotics research projects established in the 1960s and 1970s with growth supported by sustained federal funding.
From page 59...
... Further, with the rise of human-robot interaction and co-robotics research, socially assistive robotics offers the promise of new therapies for improved treatment of social disorders, such as autism, and for physical rehabilitation. Over the past decade, through major cross-agency support such as the National Robotics Initiative, these advances have enabled robotics to begin automating tasks in the physical world, much as computing has automated information tasks in the past.
From page 60...
... COMPUTER VISION
Computer vision22 is the science of understanding images and videos. Computer vision methods seek to recover models or descriptions of objects and scenes (e.g., garden versus kitchen versus parking lot)
From page 61...
... Its conception and development were supported by NSF25 and multiple companies and other institutions. Although neural networks had been around for decades, researchers at the University of Toronto changed the field of computer vision when, using deep convolutional neural networks, they obtained dramatically better results in the contest than had been possible with prior methods.
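For a sense of the ingredients involved, the hedged sketch below defines a small deep convolutional network in PyTorch (an assumed dependency); it is far smaller than the ImageNet-winning models but is built from the same stacked convolution, nonlinearity, and pooling layers topped by a classifier.

    # Minimal deep convolutional network: conv + ReLU + pooling stacks feeding a
    # linear classifier, run here on a random batch standing in for photographs.
    import torch
    from torch import nn

    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(64 * 28 * 28, 1000),   # e.g., 1,000 output classes as in ImageNet
    )

    images = torch.randn(4, 3, 224, 224)  # four random 224x224 RGB "images"
    logits = model(images)
    print(logits.shape)                   # torch.Size([4, 1000])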

