Task Group Summary 1--How would you design the acquisition and organization of the data required to completely model human biology?
Pages 5-12



From page 5...
... This information, although generally perceived as highly accurate, is extremely hard to extract in reliable ways. On the other hand, high-throughput systematic biological datasets tend to be widely accessible, but they are currently perceived as lower-quality information.
From page 6...
... TASK GROUP MEMBERS
• Ananth Annapragada, University of Texas Houston
• James Glazier, Indiana University
• Amy Herr, University of California, Berkeley
• Barbara Jasny, Science/AAAS
• Paul Laibinis, Vanderbilt University
• Suzanne Scarlata, Stony Brook University
• Gustavo Stolovitzky, IBM Research
• Eric Schwartz, Boston University
From page 7...
... The Initial Plan The group considered many options for collating and organizing data. It decided that one of the most important first steps is to determine what empirical data compilations already exist and to organize them according to some basic principle, so as to avoid duplicating work already done, across scales ranging from the molecular and protein levels to the cellular, organ, and whole-organism levels.
From page 8...
... The group decided that the most important issue facing it was the many gaps in its knowledge. The databases currently in existence are not standardized, and there is neither a consensus ontology nor a unified set of computational tools for working with the data already compiled.
From page 9...
... The Five Year Plan The group resolved to create a list of goals that could be achieved within five years, should sufficient resources be applied to the work of a complete simulation of human biology -- "Google Human." First, they wanted to create an inventory of all the data currently available and a preliminary inventory of all the missing data. Once these inventories have been compiled, a quality-control check of all the data will be necessary to make sure that the data are correct and to put them into a consistent format for computer analysis.
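As a rough illustration of that plan (an inventory of available and missing data, a quality-control pass, and a single consistent format), the following Python sketch shows one way such an inventory might be organized. It is a minimal sketch under assumed conventions: the record fields, scale names, and checks are illustrative inventions, not structures defined by the task group.

```python
"""Illustrative sketch only: a data inventory with a simple quality-control pass.
All field names, scales, and checks are assumptions, not report specifications."""
from dataclasses import dataclass, field

# Biological scales mentioned in the summary, used here to organize the inventory.
SCALES = ("molecular", "protein", "cellular", "organ", "organism")

@dataclass
class DatasetRecord:
    """One entry in the inventory of available (or known-missing) data."""
    name: str
    scale: str                 # one of SCALES
    source: str                # e.g. literature, high-throughput assay
    available: bool = True     # False marks a known gap (missing data)
    metadata: dict = field(default_factory=dict)

def quality_check(record: DatasetRecord) -> list[str]:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    if record.scale not in SCALES:
        problems.append(f"unknown scale: {record.scale!r}")
    if record.available and not record.metadata:
        problems.append("available dataset has no metadata")
    return problems

def build_inventory(records: list[DatasetRecord]) -> dict[str, list[DatasetRecord]]:
    """Group records by scale into a consistent, machine-readable inventory."""
    inventory = {scale: [] for scale in SCALES}
    for record in records:
        if not quality_check(record):      # keep only records that pass QC
            inventory[record.scale].append(record)
    return inventory

if __name__ == "__main__":
    records = [
        DatasetRecord("protein-interaction map", "protein",
                      "high-throughput assay", metadata={"units": "binary"}),
        DatasetRecord("organ-level perfusion data", "organ",
                      "literature", available=False),   # a known gap
    ]
    inventory = build_inventory(records)
    print({scale: len(items) for scale, items in inventory.items()})
```

In practice the same idea could be carried by any shared schema (a relational database, a structured file format with controlled vocabularies); the point of the sketch is only that available data, known gaps, and metadata sit in one consistent, machine-checkable structure.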
From page 10...
... [Text recovered from a figure: a diagram of the group's discussion topics, including multi-length-scale and multi-time-scale models of subsystems and their connections; five basic databases (parameters for base cases and perturbations, experimental data plus metadata, connections, and templates of appropriate subsystem choices); problem categories; data acquisition (in vitro and in vivo data, variability, measurement error, documentation); security, privacy, and accessibility of results; the need for open-source data and models; and the question of who curates the data.]
From page 11...
... [Text recovered from Figure 1: a diagram of key components and priorities, including goals (do not replace existing networks; outreach, education, and presentation tools; applications; data-representation tools) and a model-building workflow: define the problem, use templates to build the model/simulation structure, populate the templates and parameters, validate the model, apply it predictively with perturbations, and perform sensitivity analysis. The figure's type is too small (4.64 pt) to be fully legible.]
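The workflow labels recovered from Figure 1 read as a simple pipeline. The sketch below is only one reading of those labels: every function name and data structure is an assumption made for illustration, not an implementation described in the report.

```python
"""Illustrative sketch of the model-building workflow named in Figure 1.
The interfaces here are placeholders; the report does not define them."""

def define_problem(description: str) -> dict:
    """Capture the biological question as a structured problem statement."""
    return {"description": description, "subsystems": []}

def build_from_templates(problem: dict, templates: list[str]) -> dict:
    """Use templates of appropriate subsystem choices to structure the model."""
    problem["subsystems"] = [{"template": t, "parameters": {}} for t in templates]
    return problem

def populate_parameters(model: dict, parameters: dict) -> dict:
    """Fill each subsystem with parameters drawn from a parameter database."""
    for subsystem in model["subsystems"]:
        subsystem["parameters"] = parameters.get(subsystem["template"], {})
    return model

def validate(model: dict) -> bool:
    """Stub check: every subsystem must have parameters before the model is used."""
    return all(subsystem["parameters"] for subsystem in model["subsystems"])

def apply_perturbation(model: dict, perturbation: str) -> dict:
    """Apply a perturbation and return a (stubbed) predicted outcome."""
    return {"perturbation": perturbation, "prediction": "not computed in this sketch"}

if __name__ == "__main__":
    model = define_problem("glucose regulation across cell and organ scales")
    model = build_from_templates(model, ["cell-signaling", "organ-transport"])
    model = populate_parameters(model, {"cell-signaling": {"k_on": 1e5}})
    if validate(model):
        print(apply_perturbation(model, "insulin dose"))
    else:
        print("model incomplete: some subsystems lack parameters")
```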

