
Artificial Intelligence and Machine Learning to Accelerate Translational Research: Proceedings of a Workshop - in Brief
Pages 1-9

The Chapter Skim interface presents what we've algorithmically identified as the most significant single chunk of text within every page in the chapter.


From page 1...
... This shift has created new research opportunities in predictive analytics, precision medicine, virtual diagnosis, patient monitoring, and drug discovery and delivery, which has garnered the interest of government, academic, and industry researchers alike and is already putting new tools in the hands of practitioners. This boom in digital health opportunities has also raised numerous questions concerning the future of biomedical research and healthcare practices.
From page 2...
... According to Rajkomar, "Teaching computers to interpret medical images, involving human experts interacting with a computer model, is not a trivial matter, but medical communities are already adopting the practice. In diabetic retinopathy -- one of the most rapidly growing causes of blindness -- there are not enough trained ophthalmologists to diagnose all of the cases, especially in developing countries.
From page 3...
... The researchers showed that if this is done in the context of active learning -- an iterative process in which the deep learning algorithm proposes a segmentation and then a human expert corrects that segmentation, updating the algorithm -- it is only necessary to use about one-sixth of the images that you would need otherwise to train the algorithm to do the segmentation effectively." In the pharmaceutical space, especially in drug discovery, researchers have used machine learning and deep learning approaches to learn mappings between a chemical structure and some physical property, such as water solubility. An advantage of using machine learning and deep learning in pharmaceuticals is that researchers are often dealing with large, homogeneous data sets, so it is easy to generate large amounts of uniformly labeled data useful for training a model.
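The human-in-the-loop cycle described here follows the standard active learning pattern: score the unlabeled pool, have the model propose a segmentation for the most informative image, let the expert correct it, and retrain. The Python sketch below is only a minimal illustration of that loop under toy assumptions; the thresholding "model", the uncertainty-based query strategy, and names such as propose_segmentation and expert_correction are placeholders introduced for illustration, not the presenters' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in pool of unlabeled "images"; in practice these would be medical scans.
pool = [rng.random((32, 32)) for _ in range(120)]
model = {"threshold": 0.3}   # toy segmentation "model": threshold the pixel values
corrected = []               # (image, expert-corrected mask) pairs labeled so far


def propose_segmentation(model, image):
    """The model's proposed binary mask for one image."""
    return (image > model["threshold"]).astype(np.uint8)


def expert_correction(image, proposal):
    """Stand-in for the human expert who reviews and fixes the proposed mask.

    Here we simply return a pretend ground-truth mask; in reality a clinician
    would edit `proposal` by hand.
    """
    return (image > 0.5).astype(np.uint8)


def uncertainty(model, image):
    """Query strategy: favor images with many pixels near the decision boundary."""
    return float(np.mean(np.abs(image - model["threshold"]) < 0.05))


def update_model(model, corrected):
    """Refit the toy model on every expert-corrected mask seen so far."""
    candidates = np.linspace(0.1, 0.9, 41)

    def error(t):
        return sum(np.mean((img > t).astype(np.uint8) != mask)
                   for img, mask in corrected)

    model["threshold"] = float(min(candidates, key=error))


# Active learning loop: only about one-sixth of the pool ever gets labeled.
budget = len(pool) // 6
remaining = list(range(len(pool)))
for _ in range(budget):
    idx = max(remaining, key=lambda i: uncertainty(model, pool[i]))
    remaining.remove(idx)
    image = pool[idx]
    proposal = propose_segmentation(model, image)   # 1. model proposes a segmentation
    mask = expert_correction(image, proposal)       # 2. expert corrects it
    corrected.append((image, mask))
    update_model(model, corrected)                  # 3. model is updated

print(f"Labeled {len(corrected)} of {len(pool)} images; "
      f"learned threshold = {model['threshold']:.2f}")
```

The point of the uncertainty-driven query step is that expert time goes only to the images the current model finds most ambiguous, which is what allows training on a fraction (here, one-sixth) of the pool rather than labeling everything up front.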
From page 4...
... Figure 1 compares the effectiveness of three virtual screening models at identifying "hit" compounds as a function of how much of the experimental space each model covered. The active learning model was able to identify 57 percent of the active compounds while covering only 2.5 percent of the matrix.
From page 5...
... Research on deep learning and neural networks -- extremely effective pattern recognition techniques based on efforts to make computers function like the human brain -- has been under way since the 1970s, but according to Hodas this research fell out of favor in the 1990s and early 2000s because of its computational intensity. Eventually, larger amounts of data, better computing, and better software, coupled with the preservation of research from those early efforts, have led to a resurgence in neural networks.
From page 6...
... SOURCE: David Gunning, Defense Advanced Research Projects Agency. One noteworthy DARPA project is called Big Mechanism, which is developing an AI system that can read research on cell biology, extract enough information out of it to construct a candidate model of cancer pathways in the cell, interact with a human biologist to try to refine the model, and develop a hypothesis and a causal model of what is happening.
From page 7...
... that says that although a computer program might someday produce an output so divorced from the original program that it raises the question of whether the human creator or user could be regarded as the author, that issue has not arisen yet and therefore current outputs should be copyrightable. "The more powerful rights, particularly for life sciences inventions, lie in patent law, and there the terrain is decidedly less hospitable," Feldman explained.
From page 8...
... "Employers started to understand the workforce implications of changing AI capabilities before workers and policymakers, because the strategic intent of all of this AI is focused on innovation and growth." Greer also warned against the inequity of perpetuating current data usage practices at the national level without thinking about its impact on citizens, and noted that some are considering a universal basic income to alleviate the economic pressures of job automation while reimbursing citizens for sharing their data. He concluded, "If you do not have data, it is harder to participate in the new AI economy, and this is driving inequity that will require systemic changes to resolve." 8
From page 9...
... PLANNING COMMITTEE: Jeffrey Welser, IBM Research-Almaden; Taylor Gilliland, National Center for Advancing Translational Sciences at the National Institutes of Health; Avery Sen, Toffler Associates. STAFF: Susan Sauer Sloan, Director, GUIRR; Megan Nicholson, Program Officer; Claudette Baylor-Fleming, Administrative Coordinator; Cynthia Getner, Financial Associate.

