IDR Team Summary 1
Develop a method to integrate neuroimaging technologies at different length and time scales.
The neurosciences and medical imaging have produced a diverse array of technologies that measure neural structures and signals. These methods acquire information over a wide range of length and temporal scales, ranging from magnetic resonance (MR) and electroencephalogram (EEG) data in the intact human brain (at the scale of centimeters) to electron microscopy and two-photon imaging at the sub-micron scale. Each of these imaging technologies contributes different but ultimately related understanding of the brain’s neural circuitry. There is fertile ground for the application of integration techniques; however, currently there is risk of dividing the data acquired using these different modalities into segregated fields. The challenge is to integrate the measurements obtained using these different technologies at different length and time scales. This must be possible because, in the end, all of these measurements provide information about the same basic neural circuitry. Combining the data across the variety of imaging technologies requires individuals and tools that are capable of understanding the neural circuitry and signaling; we need to develop a model that can integrate the data and the implications of these different measurements into a coherent whole.
Following are several examples of how progress might be made. First, it would be important to understand and quantify the relationship among key elements of neural signaling—such as resetting ion channel potentials, transmitter recycling, action potentials, sub-threshold synaptic potentials, glial signaling—and global signals such as fMRI (functional magnetic
resonance imaging), EEG (electroencephalogram) and MEG (magnetoencephalography). Second, it would be important to understand the implications of the dendritic and axonal arbors for the mean electrical field and its several frequency components (gamma band, alpha band, and so forth) as measured in clinical and scientific studies in EEG and MEG. Third, it would be important to understand the relationship between neurotransmitter concentrations, such as γ-aminobutyric acid (GABA) density measured using MR spectroscopy, and circuit properties, such as the peak oscillation and coherence bands. Finally, it would be important to have the ability to generate a computational model of a circuit with specific anatomy so that the simultaneous prediction of the fMRI signal, the EEG signal, and the two-photon calcium images from this same circuit is possible given a particular input.
To systematically understand the relationship of data at different scales, it is necessary to establish theories and mathematical models to link the data and to validate these models with experimental data from in vitro settings and in vivo settings with animal models and human subjects. For applications to disease, it is also necessary to include pathological alterations of these models. Although there have been ad hoc efforts to combine data from different modalities, a systematic approach—which may lead to groundbreaking methodologies and science—is lacking.
How do we establish a common computational language that might be used by investigators using these diverse technologies to measure neural circuitry and neural signals?
Can we identify some key model systems that would serve as a fruitful environment for combining these techniques? Can these be human, or does the basic work have to be done in animal systems?
How do we educate investigators who are principally involved in one technology—say fMRI or two-photon calcium imaging—in the biophysics and modeling techniques that would allow them to understand the related fields and contribute to the complete modeling effort?
Because of the popularity of this topic, two groups explored this subject. Please be sure to review the second write-up, which immediately follows this one.
IDR TEAM MEMBERS—GROUP A
Richard A. Baird, National Institutes of Health
Randy A. Bartels, Colorado State University
DuBois Bowman, Emory University
Joseph E. Burns, University of California, Irvine
J. Lawrence Marsh, University of California, Irvine
Gregory M. Palmer, Duke University
Steven G. Potkin, University of California, Irvine
Suzanne Scarlata, Stony Brook University
Mercedes Talley, W. M. Keck Foundation
Paul Vaska, Brookhaven National Laboratory
Lihong V. Wang, Washington University in St. Louis
Jordan Calmes, Massachusetts Institute of Technology
IDR TEAM SUMMARY—GROUP A
Jordan Calmes, NAKFI Science Writing Scholar, Massachusetts Institute of Technology
The current state of neuroimaging is reminiscent of the classic story of six blind men describing an elephant. One of the men has access to the elephant’s tusk, and concludes that an elephant is like a spear. The man standing right next to him, touching the trunk instead, decides the elephant must be like a snake. Each of the blind men has a detailed but limited view of their subject, and although each of them has access to factual information, none of them can claim a complete knowledge of the elephant.
The blind men are lucky, in that they only want to describe the outside of the elephant, whereas neuroscientists have to work from the systems level all the way down to the cellular level. A researcher looking at a magnetic resonance image (MRI) of a complete brain (at a centimeter scale) and a researcher looking at single-cell connections within that brain (at a submicron scale) have a gigantic barrier to overcome if they hope to collaborate.
As neuroimaging techniques have improved, there has been movement toward integrating various techniques so that one will reveal a more complete picture of the brain. Although this seems like a huge task, it is not impossible. Each technique runs at a different spatial and time scale, but they all measure the same basic circuitry.
Combining data from different technologies will require researchers and tools capable of understanding that basic neural circuitry in great depth so that they can create a model that can integrate the various measurements into data that makes sense to the investigator. First, the researchers will need an in-depth understanding of the relationships between key elements of neural signaling, processes like action potentials and glial signals, and techniques like functional magnetic resonance imaging (fMRI) or electroencephalography (EEG). Second, the team will need to understand the effects of signals from different types of nerve cells on the brain’s electric field. Third, the team will need to understand the relationship between the properties of neural circuits and the concentrations of different chemicals in the brain. Finally, someone must be able to generate a computational model of a circuit that can predict fMRI signals and EEG signals at the same time.
To achieve these tasks, researchers would first need to establish a common computational language. Then, they would have to identify human or animal model systems that could be used for experimentation. Finally, someone would have to develop a program to educate experts who work with one technology on the other applications. These were the major questions that Interdisciplinary Research team 1A explored during the conference.
Defining the Challenge
Team 1A first worked to outline the advantages of integrating neuroimaging techniques. They all agreed that integrating the technologies would lead to a whole that was greater than the sum of the parts. Integrating imaging technologies across spatial and temporal scales should result in something that a simple ping-pong from one modality to another could not achieve. Already, there is quite a bit of interaction between people studying the brain at different scales. People use microscale techniques to develop new macroscale techniques in animals, which are then used to develop new techniques for use in humans, which lead to new questions that feed back into microscale research on animals. None of that is new.
The team had more trouble deciding how to integrate the technologies. Should the data be collected at the same time? Could data be better integrated with diagnosis?
More importantly, why should anyone go to the trouble? What is it that we could learn about the brain by integrating neuroimaging techniques?
Most methods of brain imaging use indirect contrasts. Often, scientists are not sure exactly what their tools are measuring within the brain. Linking modalities with indirect contrasts to those with direct contrasts, fMRI with EEG for example, could help improve our understanding of what the indirect contrasts are measuring.
Microscopic imaging can enable optimization of macroscopic imaging. Macroscopic imaging can identify regions of interest for microscopic imaging.
The team decided that by initially looking at a single disease, they would be able to see where the gaps existed between different technologies. They agreed to use Alzheimer’s disease as a model disease for the challenge, knowing that if they designed a good set of experiments, the procedure could be applied to other protein-folding diseases, and perhaps to the challenge of integration in general as well.
The Alzheimer’s Disease Neuroimaging Initiative (ADNI-2) currently under way includes an extensive neuroimaging battery, but no EEG. The group believed that the existence of the program demonstrated the desire
for more information on the effects of Alzheimer’s disease on the brain, but that the study could be greatly improved upon.
Developing “Ideal” Experiments
Most microscopic imaging techniques cannot be used on living human subjects. The adult human skull is currently too thick for optical microscopy or photoacoustic tomography (PAT) to penetrate. The group concluded that their studies would have to begin with an animal model. Both animal and human model systems would need to be developed in order to match up cognitive degeneration with brain images.
The overall goal of the animal experiments would be to identify imaging correlates of cognitive dysfunction and progression. Because several transgenic mouse models already exist for Alzheimer’s disease, the team would select one of those animals for use in its experiment. They would monitor cognitive impairment in the animal and conduct a battery of macroscale imaging techniques, including positron emission tomography (PET) to determine the time course of plaque formation and metabolic change, PAT of the hemodynamics, diffusion tensor imaging (DTI), EEG, and MRI. At the same time, they would conduct in vivo microscopic imaging experiments, including dual-labeled PET/PAT of different stages of protein aggregation. As PAT is capable of both microscopic and macroscopic imaging based on the same contrast, it has the potential to bridge the gap between images acquired at vastly different length scales. At different stages of cognitive impairment, some of the study animals would be used for ex vivo and post mortem microscopic imaging to determine the intracellular and extracellular localization of aggregates and to confirm the pathology via identification of plaques and tangles. Finally, the team would conduct proteomics experiments. The data from invasive or post mortem microscopy techniques in animals could be integrated with the data from the macroscopic techniques and help improve those non-invasive techniques so that, when the non-invasive techniques are used in humans, researchers can extract more information from them.
After the experiments with the mice were finished, the researchers would move on to human subjects, following many of the same procedures, but using the results from their earlier work to limit the number of imaging techniques used on the human subjects. PAT has not been used in humans before, so experiments on the brains of infants and the retinas of adults may be necessary before the technique would be useful in studying an adult
brain. However, the thinness of the adult human cribriform plate could permit direct physiological measures at both microscopic and macroscopic scale in a deep cortical structure for the first time. These determinations would provide “ground truth” measures that can serve to meaningfully integrate across other imaging methods. The cribriform plate lies just below the orbital frontal lobe, which modulates reward and punishment processes.
After the imaging battery was completed, the team would be able to confirm the imaging correlates of cognitive dysfunction and disease progression. If the experiment led to unexpected findings, the data then would feed back into the animal model for further investigation. Finally, the procedure would require statistical methods for the multiscale integration and confirmation of high-dimensional data.
At one point, the group made a wishlist of all the technological features they wanted on the imaging modalities they currently use. They wanted a way to measure bioelectricity at high resolution, as well as GPS-style scalability, with which they could use landmarks to identify an area studied in a microscale technique and also study it with a macroscale technique (or vice versa). Finally, they wanted a dye sensitive to depolarization in neural cells, which would allow for imaging of the early signature of the disease. The dye would be particularly important, because it would be necessary for the PAT/PET experiments, the crucial link between microscale and macroscale data.
Neuroimaging is expensive, and even while creating a wishlist of new technologies and talking about developing extensive batteries of tests for early disease detection, the team suggested that one long-term goal of the project should be to reduce the amount of imaging needed to diagnose Alzheimer’s disease.
In their concluding presentation, the team remarked on the need for a “two way street” between microscopic techniques and macroscopic techniques. “It’s a cycle of going back and forth, which we think is a solution,” the presenter said. “When you’re using one technique at one scale, you have to have the other techniques in mind.” The ability to work with multiple techniques will help researchers compare and contrast imaging data by concurrently collecting datasets in animal models and humans.
IDR TEAM MEMBERS—GROUP B
Chandrajit L. Bajaj, University of Texas at Austin
Robert J. Barretto, Columbia University
Graham P. Collins, Freelance Science Writer/Editor
Richard S. Conroy, National Institutes of Health
Scott T. Grafton, Director, University of California, Santa Barbara
Daniel P. Holschneider, University of Southern California
Andreas Jeromin, Banyan Biomarkers, Inc.
Allen W. Song, Duke University Medical Center
Kamil Ugurbil (IOM), University of Minnesota
Gordon X. Wang, Stanford University
Keith Rozendal, University of California, Santa Cruz
IDR TEAM SUMMARY—GROUP B
Keith Rozendal, NAKFI Science Writing Scholar, University of California, Santa Cruz
In 1990, President George H.W. Bush proclaimed the decade beginning January 1, 1990, to be the Decade of the Brain, pointing to “advances in brain imaging devices … giving physicians and scientists ever greater insight.” Twenty years later, further advances in neuroimagery continue to emerge at a rapidly accelerating rate, producing new challenges to realizing the benefits of brain research.
Neuroimaging techniques capture detail at sizes ranging from the atomic to the whole brain. Beyond the views produced by methods keyed to specific size scales, different imagery methods also track the nervous system over different time scales—from mere milliseconds to measurements taken across minutes-long experimental tasks or development courses that can span much of the lifetime of an organism.
Humpty Dumpty Has Fallen
As each method develops its own technology, training, literature, and theoretical paradigm, a real danger of fragmentation emerges. A global, comprehensive, and integrative perspective on the brain and nervous system may be more difficult to produce as more new imaging techniques emerge. A flourishing of new methods and technologies providing distinct insights into neural systems produced this situation. But it is hoped that the technological and methodological ferment may also hold keys to developing a coherent brain science.
The challenge before neuroimaging can be addressed by locating points on the horizon where the possibility of integration dawns. An Interdisciplinary Research team (IDR 1B) tackled this challenge during the 2010 National Academies Keck Futures Initiative Conference on Imaging Science. Their discussion of integration strategies followed a series of key questions posed by the steering committee that shaped the conference agenda and assembled the teams.
An interesting provocation at the beginning of this team’s work helped to spark some creative tension that drove much of the early discussion. With a grin and perhaps a wink, one team member introduced himself as a serious skeptic of neuroimaging’s value. This group member asked: “For all of the government and private foundation investment in new neuroimaging technologies and studies—perhaps hundreds of millions of dollars—what has that bought society?” He argued that such a major research initiative should have long ago produced abundant evidence that it promotes quality of life, medical successes, and other broad social benefits. Such a skeptical perspective would therefore add the corollary “Why?” to each of the questions posed to this IDR team.
Can All the King’s Horses and Men Put It Together Again?
The team recognized aspects of the ancient debate on reductionism within the first of these challenge questions: “Can we establish a common language that unifies the data across all of the different levels of neuroimaging?”
Scientists have long recognized that reductionism, a powerful means of analysis, produces trade-offs with systems-level understandings. In the most successful cases, one can start from fundamental physical processes, like the kinetic energy of atoms in a gas, and fully reconcile this model with a larger scale model or measurement like air temperature, and beyond that to local air pressure, microclimate models, and on up. Could neuroimaging data be used to similarly integrate our understanding of the brain from the bottom up?
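The kinetic-theory example can be made concrete with a short sketch. The code below is purely illustrative (the per-atom energies are simulated, not measured); it recovers a macro-scale quantity, temperature, from micro-scale quantities, the kinetic energies of individual atoms, using the monatomic ideal-gas relation that mean kinetic energy equals (3/2)·k_B·T.

```python
# Illustrative sketch: recovering a macro-scale measurement (temperature)
# from a micro-scale model (per-atom kinetic energies).
# Assumes a monatomic ideal gas, where <E_kin> = (3/2) * k_B * T.
import random

K_B = 1.380649e-23  # Boltzmann constant, J/K


def temperature_from_energies(energies_joules):
    """Macro-scale temperature implied by a list of per-atom kinetic energies."""
    mean_energy = sum(energies_joules) / len(energies_joules)
    return 2.0 * mean_energy / (3.0 * K_B)


# Simulate atoms whose mean kinetic energy corresponds to roughly 300 K.
random.seed(0)
target = 1.5 * K_B * 300.0
energies = [random.uniform(0.5, 1.5) * target for _ in range(100_000)]

print(f"implied temperature: {temperature_from_energies(energies):.1f} K")  # close to 300 K
```

The same micro-to-macro bookkeeping is what a bottom-up neuroimaging model would need, only with far messier physics and biology at each level.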
This would require the integration of models explaining ion channel processes, neurotransmitter actions, and single-neuron biology, genetics, and signaling. These units in turn compose circuits and networks of neurons, cortical column organization, and regions and lobes of the brain. The Human Connectome Initiative aims to map how every brain cell connects to the others, producing a comprehensive map of all of the potential circuits in the brain. Combined with functional data, the active circuits within these connections could be determined. Because the team included people studying the brain at a wide range of scales, proposals for top-down approaches and questions about the wisdom of pursuing bottom-up integration repeatedly emerged.
The framework the team adopted assumed that the ultimate goal was to put the pieces together again, but how? And with what tools? And, of course, Why? Shouldn’t the effort produce new discoveries, critical studies settling debates in the field, and the like? The team wanted to get more out of an integrated approach to neuroimaging data than could be produced by retaining the fragmented status quo.
Seeking Out the Right Glue
A recurring discussion point emerged concerning whether the integration should be a structural description of the brain or instead a functional or computational model. And wouldn't each type of model, within each stratum of detail and also globally, need to integrate and constrain the others?
Team members repeatedly related these questions to the need for a “gold standard” or fundamental element of the brain around which integration can be built. The team insisted that this gold standard needs to incorporate both structural and functional aspects. Some of the fundamental units proposed included the electrophysiology of a signaling neuron, the connections and neurochemical specialization of receptors and nerve cells, small circuits of neurons connected together, and the mini-columns found to be core structures organizing the cortex and often serving specific functions.
A physical glue?
The team raised an issue with an assumption within their challenge questions: that the ultimate “common language” needed to unify the diverse data and subfields in neuroimaging will be computational. Applications of mathematics to this challenge did attract significant discussion, but the team spent some time discussing a perspective that instead sought out a physical property that could tie together the diverse methods of neuroimaging.
Because each imaging method exploits its own physical indicator, often revealing a distinct underlying process, the diversity of signals itself can become a divisive force. The team sought out a signal that could be used for neuroimagery across wide time
and space scales. Progress on this front would facilitate integrating datasets because it would maximize the overlapping physical processes across the imaging modalities. An example of the difficulties that arise when linking incompatible signals can be seen in efforts to relate the BOLD signal of fMRI, primarily revealing metabolic processes, to neural signals, produced by electrochemical processes.
The team proposed focusing on using the electromagnetic fields produced within and between neurons as a unifying physical process to bridge the strata of measurement. There are static (field potential) and dynamic measures (spikes or EEG) of neural signaling at nearly every level of space and time resolution. The electromagnetic character of activities from the atomic to the tissue level should by necessity relate to one another according to well-understood physical laws. And this should help the integration process.
However, the team was wary of being seduced by the fact that current neuroimaging methods are heavily biased toward detecting signals in the electromagnetic spectrum. The historical success of electrophysiology methods in neuroscience may have led to this bias. Non-electrical physical processes also may hold some promise as a standard evident at every level of neural function. Some of these strata-spanning methods could be focused on the dynamics of chemicals within and between neurons or genetic inhibition and expression.
A computational glue?
Regardless of the physics of the signal, the team also pursued a potential common computational approach for mapping and integrating neuroimaging data between the different scales.
Here the team focused on the future promise of applying graph theory and other means of representing data in a common framework, abstracted from the underlying physical reality. Once neuroimaging data can be represented in the language of nodes and links, connections between levels of space or time become mathematically tractable. For instance, a graph-based model of several neural circuits could be used hierarchically with a higher level graph representing networks of circuits in a small volume of the brain. The lower-level model serves as an input influencing the state of a single node of the higher-level model. In this way, if all of the links between layers can be determined, the comprehensive model will unify the spatial levels.
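A minimal sketch of this hierarchy, with invented circuit weights and region names (none of these numbers come from real data), might look like the following: a micro-scale circuit graph is evaluated first, and its output then sets the state of one node in a macro-scale graph.

```python
# Hypothetical two-level graph model: a micro-scale circuit drives one node
# of a macro-scale network. Weights, node names, and states are all invented.

def step(adjacency, state):
    """One synchronous update: each node becomes the weighted sum of its inputs."""
    return {node: sum(w * state[src] for src, w in inputs.items())
            for node, inputs in adjacency.items()}

# Micro level: a three-neuron circuit, written as node -> {source: weight}.
micro = {"a": {}, "b": {"a": 0.5}, "c": {"a": 0.5, "b": 1.0}}
micro_state = step(micro, {"a": 1.0, "b": 0.0, "c": 0.0})
circuit_output = micro_state["c"]  # one summary number for the whole circuit

# Macro level: the entire micro circuit is collapsed into a single node "V1",
# whose state is supplied by the lower-level model.
macro = {"V1": {}, "V2": {"V1": 2.0}}
macro_state = step(macro, {"V1": circuit_output, "V2": 0.0})

print(circuit_output, macro_state["V2"])
```

If the mapping from micro-level outputs to macro-level node states can be determined at every boundary between scales, the stacked graphs form the kind of unified model described above.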
Such a unification could be useful for what the mapping functions can tell scientists about how smaller-scale processes produce effects at a
larger level, and how feedback flows down the levels to influence the more microscopic processes.
Other advantages that the team discussed for this approach were that these mathematical models could be produced directly from data or validated with real neuroimaging data and that the models easily incorporate dynamic or time-based variables, which better model the ever-active brain. Calculating correlations observed between real imaging data that bridge levels of time and space in this manner will help identify some of the coherence in the nervous system’s structural and functional organization.
Thus, the abstract representation of information and its flow that graph theory produces could serve as a gold standard unit that helps align data from different levels of brain imaging.
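As a toy illustration of such a cross-scale correlation (both signals below are synthetic stand-ins, not real recordings), a fast "neural" time series can be compared with a smoothed, lagged surrogate for a slow hemodynamic signal:

```python
# Illustrative only: Pearson correlation between a fast synthetic "neural"
# signal and a slower, smoothed surrogate for a hemodynamic signal.
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

neural = [math.sin(t / 5.0) for t in range(200)]
# Trailing moving average: a crude stand-in for slow hemodynamic filtering.
hemo = [sum(neural[max(0, t - 10):t + 1]) / (t + 1 - max(0, t - 10))
        for t in range(200)]

r = pearson(neural, hemo)
print(f"cross-scale correlation: r = {r:.2f}")
```

The correlation is positive but well below 1, reflecting the lag and smoothing the slower signal introduces; real cross-modality analyses face the same attenuation, compounded by noise.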
Reassembling the Puzzle, Seeking Pieces That Fit
The team adopted an ambitious goal in seeking to unify all the layers of neuroimaging in a common modeling approach, but in the end recommended less ambitious sub-goals. Low-hanging fruit remain in the orchard of neuroimaging techniques awaiting integration. The team tried to identify areas where the integration between space or time strata seemed most promising in the near future. Out of some of these small-scale bridging successes, some general strategies useful for the other gaps could emerge. The team suggested that neuroimaging scientists should seek out ways of incorporating data at one level above and one level below their current preferred neuroimaging tool. These nearby methods should be most likely to give them insight into their current research questions.
Absent a physics-based gold standard that can simultaneously signal both structural and functional aspects of the brain, can another pathway be pursued to best produce integration? Simultaneous measurements at roughly the same spatial scale, using pairings or triplings of methods, could help integrate across time scales as well as bridge the structure and function dichotomy. Here the discussion usually proposed solutions or discussed new developments related to combining a structural neuroimaging method like MRI with a functional method like EEG.
Several areas of fruitful convergence across one or two scales of time or space were discussed, including simultaneous EEG+MEG studies, fMRI and electrophysiology studies, studies of local field potentials as they relate to the BOLD signal used in fMRI, calcium fluorescence microscopy plus
electrophysiological measures, and two-photon calcium imaging combined with MRI.
Which Pieces Should Be Picked Up First?
Many of the neuroimaging techniques available require sacrificing the research subject, which obviously precludes all but the post mortem study of human beings. The team nevertheless wanted to push the limits of non-invasive techniques in order to use human subjects whenever practicable. An integration of the data between human and animal studies should be kept in mind, however. The choice of an animal model for the work demanding invasive techniques should be made for maximum compatibility with the research focus in humans. That research focus, moreover, should allow for imaging with as many methods as possible across the scales to be integrated.
Picking up the right brain building blocks
The team’s discussion of the most promising target systems for study in humans and nonhuman animals focused on smaller-scale neural systems, completely mapped and understood in terms of predictable outputs from known inputs. These would best support computational language development and testing. Sensory systems such as the visual cortex, the retina, or the olfactory system could fit the bill here. Many motor systems have the same detailed understanding already in place. Sensory systems also recommend themselves because previous studies have shown that they are organized both structurally and functionally in particular patterns, such as columns or bands of similar cells activated by similar stimuli. These may in fact prove to be organizational motifs in the nervous system replicated in other areas like the hippocampus. An integrated understanding of such a potential “building block,” and confirmation of its generality, could make rapid progress possible in other brain regions and systems.
The team shared a general consensus that the cortical-column level of detail represents a particularly “sweet spot” to target with multiple methods, mathematical modeling of the unit, and testing against empirical data. A fully integrated understanding of a cortical column as a target was described as “reachable.” Columns lie at the middle level of spatial scale, and models linking the column to smaller-scale structures seem to be on the verge of development. Functional MRI can resolve detail at the level of the cortical column now, and the volume of a column is not unthinkably large
for higher detail structural and functional mapping using existing methods. With the connections mapped completely within a column, some electrophysiological models will apply that will integrate spike train data and produce predictions about the overall electrical signal that may be detected above the column by EEG, for example.
Picking up the right animal model
The team then discussed studies of animals aimed at supporting a building block effort. Team members discussed the advantages of using animals that are traditionally studied within neuroscience, such as the worm C. elegans. This simple animal’s nervous system has been comprehensively mapped using electron microscopy, which produced a synapse-level connectome. Furthermore, the functional operations of the worm’s connected neurons, usually referred to as circuits, are also known. However, the circuit, which seems to work as a coherent unit or building block in the worm, may be different from the elements out of which human brains are assembled.
Zebrafish embryos, easy to study because of their transparency and rapid reproduction, are also promising organisms to study. Their nervous system has been studied in a way that helps us understand narcolepsy in humans. The gene disrupted in this disease regulates the development of a 10-neuron circuit that has been completely mapped with two-photon calcium microscopy. This reveals the circuit at the level of every connection, both internally and externally, supporting the modeling of inputs and outputs.
The team emphasized the necessity of checking emerging integrative models against empirical data, which would allow for hypothesis-driven experiments to further validate the emerging model. This would be the final goal of model testing—seeking to move from correlation-dependent models to those that can successfully predict the outcomes of studies designed for high internal validity.
Here, the team saw great utility in conducting perturbation-driven experiments in real tissue, comparing the observed effects of lesions, transcranial magnetic stimulation, optogenetic methods, and other means of selectively disabling key elements of the system. Parallel perturbations in the abstract integrative math model would also be pursued to validate the complete model. Such approaches are already widely used to test both structural and functional models in a wide variety of circumstances. These investigators are currently driving the development of perturbation technologies that are compatible with existing functional imaging modalities like MRI.
Plastic or fiber-optic instruments that can cause temporary disruptions while functional data are being collected via fMRI could drive great strides in integrative research.
What pieces are missing?
The team also spent some time “pushing the creative envelope,” seeking glimpses of blue-sky technologies and creative methods and having fun with the challenge. Still, they hoped to identify desirable new developments, however miraculous they may seem at this point. Foremost were neuroimaging technologies that allow imaging of natural behavior in awake animals. What possibilities exist for portable neuroimaging technology, either through miniaturizing existing technology or developing new means? Carbon nanotubes can be fashioned into highly portable recording electrodes that can be fixed in place, and some hope exists for building large-field MRI systems in which only the necessary elements, such as the field coil for functional imagery, are made portable. Other suggestions, based on currently emerging research, involve immobilizing animals while allowing them to navigate and receive feedback from virtual reality systems and from microelectrodes implanted in key sensory and motor nerves. A current mouse spherical treadmill and toroidal display system was discussed, as was a system for studying head-fixed zebrafish “swimming” in response to false visual feedback. Finally, tracking neurochemicals and brain metabolic processes in real time could prove quite useful to the team’s challenge. Microdialysis performed in a helmet that a rat could wear might be one way to achieve this goal.
Recruiting Women and Men to the King’s Army
The roles of scientific institutions must be addressed in meeting the challenge. What changes to science education and training should be implemented? What organizational, financial, and institutional developments will best serve progress toward integration?
The discussion here focused on clinical applications of neuroimaging technology, perhaps seeking more of an answer to the “why” challenge and less to the “how.” It was generally agreed that the integration of neuroimaging techniques could produce important gains in medicine, but it was noted that the costs of neuroimaging block its adoption in the clinic, even of single-modality imagery. PET and CT have a higher penetration rate in community hospitals, to the detriment of MRI usage. In this case the better method is not adopted, a fact attributed to differences in reimbursement rates and the cost of the equipment itself. However, the costs of MRI are falling. Neurosurgeons still rely primarily on electrodes and cortical stimulation mapping when operating on the brain, where fMRI might be applied; only in the placement of deep brain stimulation implants is this imaging technology the preferred choice. This may not be entirely due to cost: the technique that is used first, and that is more widely known and taught, has the advantage. To encourage the development and use of new imaging techniques, including future integrative technologies, the team wanted to focus on easing the dissemination of new technology by lowering its cost and increasing the ease with which it is adopted. This should influence the design of the technology offered to clinicians and the availability of training in medical schools and hospitals.
The team believed that adoption of new integrated methods by cognitive scientists will best be fostered by incentive-based approaches: scientists will be inspired by colleagues who have already adopted more complete modeling and imaging approaches, achieved breakthroughs, and attracted funding. The cognitive science field quickly adopted imaging techniques once applications within the field emerged; fMRI’s impact, for example, became significant once its successes were published and further studies were funded.
Early adopters of integrated methods could be recruited into training efforts, but the challenges of securing funding for training and of bringing together scientists with diverse backgrounds remain. The team suggested introducing single-methodology experts to one another at interdisciplinary conferences, targeting those pairs or triplets of technologies that show the most promise for integration. At this point, the work is a long way from creating a common language or model that integrates across all scales, but bridges and fusions across two or three levels are possible. Besides the salient example of this year’s Keck Futures Initiative Conference itself, the team noted that meetings and collaborations like those envisioned by the team are already occurring. As a research problem exhausts the utility of one imaging modality, investigators spontaneously seek out other methods at different time or space scales. The comprehensive modeling approach to neuroimaging should encourage and spur additional such activities, the team concluded.