Imaging science has the power to illuminate regions as remote as distant galaxies and as close to home as our own bodies. Fields from medicine to carbon sequestration stand to benefit from masses of new data, and researchers are struggling to make sense of it all and to communicate its meaning to other researchers. Many of the disciplines that can benefit from imaging share common technical problems. Yet researchers often develop ad hoc methods for solving individual tasks without building broader frameworks that could address many scientific problems.
At the 2010 National Academies Keck Futures Initiative Conference on Imaging Science, researchers were asked to find a common language and structure for developing new technologies, processing and recovering images, mining imaging data, and visualizing it effectively. A common theme emerged: how do you find what matters in a sea of information that is varied, incomplete, or simply monstrous in size and scope? This problem is particularly tricky because scientists may know the underlying truth they are seeking but are often unsure how it will look in a given imaging technique. For some, the task was picking out the dim light of a tiny planet obscured by a sun billions of times brighter. Others aimed to mine satellite images to track tiny specks of land that have been clear-cut in a Brazilian rainforest. Still others hoped to turn the power of imaging inward, to find hidden tumors or signs of Alzheimer’s disease decades before people show symptoms.
The Keck Futures Initiative highlighted Imaging Science to spur researchers working on similar problems across disciplines to create common solutions and a common language. It brought researchers from academia, industry, and government together into 14 Interdisciplinary Research (IDR) teams to develop creative thought outside the confines of any individual area of expertise.
IDR teams 1A&B grappled with how to integrate images of the brain produced by tools like MRI, PET, EEG, and microscopy, each of which operates on different time and length scales. Some can capture signaling molecules just a few hundred nanometers across, others map neurons that are tens of micrometers long, while still others track the electrical impulses coursing through our brains. But there is no framework for combining this grab bag of techniques to say how signaling molecules relate to gray matter, or how an MRI scan showing shrinkage in an Alzheimer’s disease–riddled brain corresponds to the lower oxygen usage shown on a PET scan. Some members quickly realized that to integrate data from the tiny to the large, you need to perform imaging with many devices at once. They proposed running a panel of imaging tests on animals and humans and developing models of how those images relate to brain function and to each other.
Teams answering challenge 2 discussed whether it was possible to create overall metrics to evaluate an imaging system’s performance. One team determined that no metric will be useful unless it can account for, and adjust to, the person interpreting an image. They developed the idea of creating a system tailored to an individual reader’s biases and preferences. They also emphasized that tasks like picking out a tiny tumor in an X-ray rely on key contextual information that isn’t available in the images themselves, and that good metrics need to account for this information. For instance, radiologists use context like the patient’s history and symptoms to home in on the areas to scan.
Researchers in team 3 aimed to detect meaningful changes between two images. Some tasks, like mapping deforestation, rely on grainy satellite images that are often altered by cloud cover, rainy days, or snow. Although there are many powerful algorithmic tools available, most researchers develop ad hoc solutions for these tasks and rarely share their approaches with others. One group decided that a web-based tutorial inspired by the much-loved Numerical Recipes textbook could be combined with a grand challenge competition to help standardize the toolsets researchers use in image processing. Another group decided that tracking a sequence of images over time, rather than just two images, would allow them to identify more meaningful trends in the data.
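The kind of two-image comparison the team described can be sketched in a few lines. This is a minimal illustration only: the function names, the pixel-wise absolute-difference rule, and the threshold value are assumptions for the sketch, not methods proposed at the conference, and real change detection must also handle co-registration, clouds, and sensor noise.

```python
# Minimal sketch of pixel-wise change detection between two co-registered
# grayscale images (given as nested lists of intensities, 0-255).
# The threshold suppresses small intensity differences due to noise.

def change_mask(before, after, threshold=30):
    """Return a binary mask marking pixels whose intensity changed
    by more than `threshold` between two same-sized images."""
    mask = []
    for row_b, row_a in zip(before, after):
        mask.append([abs(a - b) > threshold for b, a in zip(row_b, row_a)])
    return mask

def changed_fraction(mask):
    """Fraction of pixels flagged as changed, a crude proxy for, say,
    the share of a satellite tile that was clear-cut."""
    flat = [px for row in mask for px in row]
    return sum(flat) / len(flat)

# Example: one bright pixel appears between the two snapshots.
before = [[100, 100], [100, 100]]
after = [[100, 180], [100, 100]]
mask = change_mask(before, after)
print(changed_fraction(mask))  # one of four pixels changed: 0.25
```

Tracking a whole time series, as the second group suggested, would amount to applying a rule like this across consecutive image pairs and looking for persistent rather than one-off changes.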
IDR group 4 was charged with finding exoplanets circling distant suns. The physical devices needed to find these planets are already being developed, so the group focused on building image-processing algorithms. This task is difficult because most of the exoplanets found so far haven’t looked anything like the predictions, so astronomers aren’t quite sure what they should even be looking for. They noted that an algorithm should account for the disturbances in the image caused by filtering out the starlight, should distinguish the blue dot of an exoplanet from streaks of starlight, and should pick out the planet’s motion as it orbits its sun. They also hoped to adapt their observational methods so that, instead of spending a fixed amount of time monitoring each portion of the sky, they could spend more time gathering light from promising areas while quickly moving on from less promising ones.
Although adaptive optics has already revolutionized astronomy, team 5 aimed to extend the approach to other arenas. In the classic adaptive-optics setup, light is sent through a medium and the distorted wavefront is recorded; a deformable mirror can then correct for that aberration by changing its shape. The researchers decided that adaptive optics could be especially useful for peering inside the body. They envisioned expanding the technology from two dimensions to create volumetric imaging—looking at hearts, lungs, and brains in 3-D. They also thought the technique could be expanded to peer through tissue that usually scatters light waves, so that fuzzy objects inside cells could be seen more clearly.
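The correction loop described above, measure the distortion and then apply its opposite, can be sketched in a toy model. Treating the wavefront as a short list of phase values, and the function names here, are simplifying assumptions for illustration; a real adaptive-optics system works with wavefront sensors and continuous mirror surfaces, not lists.

```python
# Toy sketch of the adaptive-optics idea: a known flat probe wave passes
# through a medium, the distorted wavefront is measured, and a deformable
# element applies the opposite (conjugate) phase to cancel the aberration.

def measure_aberration(flat_wavefront, distorted_wavefront):
    """Phase error the medium introduced at each sample point."""
    return [d - f for f, d in zip(flat_wavefront, distorted_wavefront)]

def apply_correction(wavefront, aberration):
    """Deformable-mirror step: subtract the measured phase error."""
    return [w - a for w, a in zip(wavefront, aberration)]

def rms(values):
    """Root-mean-square phase error, a standard wavefront-quality measure."""
    return (sum(v * v for v in values) / len(values)) ** 0.5

# Example: a flat probe wave picks up phase errors in the medium.
flat = [0.0, 0.0, 0.0]
distorted = [0.2, -0.1, 0.05]
aberration = measure_aberration(flat, distorted)
corrected = apply_correction(distorted, aberration)
print(rms(distorted), rms(corrected))  # the residual error drops to zero
```

In this idealized model the correction is perfect; in practice the measurement is noisy and the medium changes over time, which is why real systems run this loop continuously.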
Team 6 focused on finding robust markers of psychiatric diseases like autism spectrum disorder and schizophrenia. Although these diseases are usually diagnosed by their symptoms rather than a definitive test, the underlying structure and function of the brain are at the root of these conditions. Thus, imaging techniques like PET and MRI should be able to reveal the brain’s dysfunction. Unfortunately, all of these techniques can mistake healthy brains for diseased ones, so the team decided a panel of multiple markers would be needed to accurately find signs of disease. They also emphasized that, because behavior is the hallmark of these diseases, new techniques to monitor people in more natural environments, such as gaze tracking and portable electrical activity readers, could be developed to strengthen some of these biomarkers.
Team 7’s challenge was to incorporate several imaging methods to streamline disease treatment and diagnosis. The team quickly focused on cancer and imagined a future in which MRI, PET, CT, and other diagnostic imaging could be integrated into one multipurpose device to facilitate disease diagnosis and targeted treatment. One group imagined 3-D goggles that could continuously scan people’s retinas for signs of metastasis in their blood cells, instead of requiring patients to come in every few months for an invasive blood test. As one team member noted, “Who wouldn’t want to watch a 3-D movie with their family and decide if you have disease at the same time?”
Team 8 aimed to develop better architecture to store, curate, and make sense of the data deluge from imaging science. Currently, images collected in biological disciplines, including neuroscience, are stored in different formats, come from a constantly changing array of instruments, and probe different underlying physical phenomena. In addition, databases work well when you know what you are looking for, but they currently lack tools for exploring image data in a less directed manner. The team envisioned developing standards for data searches and imagined an architecture that supports image processing and operates as part of the database. The team developed a concept of exploratory tools that let people collect and analyze image data and imagined using machine learning to anticipate what someone is seeking, even when they’re not quite sure themselves.
At the close of the conference, many researchers noted how valuable it was to speak with people outside their disciplines. Although the current field of imaging science is full of many different languages, for just a few days, researchers spoke a common language. With the avalanche of imaging data expected in the coming years, an ability to tackle broader problems systematically and to find meaning in the madness will only become more important.