What are the limits of the Brain-Computer Interface (BCI) and how can we create reliable systems based on this connection?
BCI encompasses a wide range of interface and signal-processing technologies, from direct recordings from the brain to electroencephalography (EEG) and functional magnetic resonance imaging (fMRI). BCI enables a wide range of applications, including helping those with impaired physical function, such as stroke victims, control everyday objects in their environment; analyzing awake and sleep brain states to monitor alertness levels and diagnose brain disorders; and understanding market preferences.
One of the most dramatic advances in recent years is “mind reading,” which uses BCI to decode brain states to reconstruct what a subject is experiencing. There is also a growing market for computer games and devices that are controlled by brain states (http://www.bcireview.com/). Wireless technology has made it possible to record from mobile humans.
Another active area of BCI is replacement of lost sensory interfaces. Cochlear implants were developed in the 1970s and over 219,000 people worldwide have received cochlear implants. Progress has also been made on retinal and cortical implants to restore sight in blind patients. Remarkably, blind patients have reported substantial “sight” using a camera to activate an array of electrodes on the tongue, one of the most sensitive sensory surfaces of the body.
• What are the technical problems with creating long-term, stable interfaces with brains?
• Can two humans implanted with BCI communicate directly with each other? What would be the consequences?
• There are many ethical concerns, including consent, privacy, mind reading, fear of hype, personality alteration, and risks and benefits. As with other powerful technologies, BCI can be used for good and bad purposes. What impact will it have on society?
Berger TW, Glanzman DL (Eds.). Toward Replacement Parts for the Brain: Implantable Biomimetic Electronics as Neural Prostheses. Cambridge, MA: MIT Press; 2005.
Makeig S, Gramann K, Jung T-P, Sejnowski TJ, Poizner H. Linking brain, mind and behavior. Int J Psychophysiol 2009;73:95-100.
Nishimoto S, Vu AT, Naselaris T, Benjamini Y, Yu B, Gallant JL. Reconstructing visual experiences from brain activity evoked by natural movies. Curr Biol 2011;21(19):1641-6.
Because of the popularity of this topic, three groups explored this subject. Please be sure to review each write-up, which immediately follow this one.
IDR TEAM MEMBERS—GROUP A
- Paul A. Fishwick, The University of Texas at Dallas
- Joseph T. Francis, SUNY Downstate Medical Center
- Nikki Mirghafori, International Computer Science Institute at Berkeley
- Jacquelyn F. Morie, University of Southern California
- Adriane B. Randolph, Kennesaw State University
- Ravishankar Rao, IBM Research
- Aviva Hope Rutkin, Massachusetts Institute of Technology
- Paul Sajda, Columbia University
Aviva Hope Rutkin, NAKFI Science Writing Scholar, Massachusetts Institute of Technology
IDR Team 7A was asked to explore the possibilities and limitations of brain-computer interface (BCI) technology. Brain-computer interfaces are devices that permit direct communication between the brain and an external machine. Such communication could involve tracking an individual’s mental state, altering the way the brain processes information, or even affecting neurological operations in real time.
Team 7A was fascinated by the many potential products that could flood a future BCI market. They opted to act as the board of an imaginary company called Brain Buddy, Inc. (Motto: “We help you think.”) By brainstorming numerous possible devices to push into development, the team was able to explore by proxy the potential applications and pitfalls of BCI technology.
First, the team considered some larger issues for its discussion. These questions, which they divided into several broad categories, kept the members attuned to the many current constraints of BCI technology.
Our group had to consider the limits of human knowledge. There is much that we do not yet understand about the way the brain functions. We also do not understand the relationship between consciousness and subconsciousness. For many of Brain Buddy’s proposed products, the team looked for previous research or ongoing experimentation to suggest that a marketable BCI product could one day be possible.
Introducing new technology without deeply considering its impact on the world could have disastrous consequences. For example, some people might become physically or cognitively dependent on their BCI device. Though issues of social policy were a recurring theme throughout the
conversation, some of the team suggested that it is up to society to decide whether a given kind of technology is acceptable or unacceptable. As one member put it, “the street finds its own use.” All agreed on the value of the conversation.
BCI technology raises many new ethical questions that will have to be answered in our legislatures and courts. Who owns the copious data generated by, e.g., a “lifelogging” device that continuously records your biological statistics? How can we ensure the privacy of BCI devices? How do we prevent others from hijacking our minds for mischievous or even nefarious purposes? The intense public concern about President Barack Obama’s BlackBerry in 2008 pales in comparison to the long-term possibility that terrorists could hack into his brain.
The existence of many of Brain Buddy’s devices is predicated on the development of quite a few new engineering techniques, including the ability to handle sensory input and motor output. Many devices would also require fully closed-loop, bidirectional BCI.
With these issues in mind, the team explored the rich and varied world of future BCI.
The team began by focusing on a single imaginary product: a labeling cap. This cap would monitor the brain for identifying signals, such as the P300, a well-researched electric signal that indicates when attention has been piqued. This signal would trigger the cap to take a picture of your surroundings and tag it with other important data like time and GPS location. The picture would then be automatically uploaded to the cloud, where a powerful algorithm would sort the picture into a category with others.
They debated possible users of this technology. A busy person, such as an academic or an artist, might use the pictures to generate new ideas for their work. Someone with Alzheimer’s could use the cap as a kind of memory supplement. Others might use it to create a visual diary of their life.
One problem is that this device would generate a massive amount of data—far too much for any rational person to sift through on a regular
basis. The group agreed that very powerful computer algorithms would have to learn what information you want to keep or discard. This opened up the possibility for the computer to use the data to provide just-in-time feedback. Instead of manually sorting your own data, computer scientists could develop programs that deliver automatic notifications, or even provide unprompted direct brain stimulation.
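The cap’s trigger-and-tag loop described above can be sketched in a few lines of Python. This is purely illustrative: the names (`detect_p300`, `labeling_cap_step`, `TaggedSnapshot`), the fixed amplitude threshold, and the 250–500 ms window are assumptions for the sketch, not a real device API; an actual P300 detector would use trained classifiers rather than a simple mean-amplitude rule.

```python
import time
from dataclasses import dataclass

@dataclass
class TaggedSnapshot:
    # Hypothetical record type: a photo tagged with time and location.
    timestamp: float
    gps: tuple
    category: str = "uncategorized"

def detect_p300(epoch, fs=250, threshold_uv=5.0):
    """Naive P300 detector (illustrative only): mean amplitude in the
    250-500 ms post-stimulus window exceeds a fixed threshold.
    `epoch` is a list of amplitudes in microvolts sampled at `fs` Hz."""
    start, end = int(0.25 * fs), int(0.50 * fs)
    window = epoch[start:end]
    return sum(window) / len(window) > threshold_uv

def labeling_cap_step(epoch, gps, camera):
    """One pass of the cap's loop: if attention is piqued, snap and tag."""
    if detect_p300(epoch):
        return TaggedSnapshot(timestamp=time.time(), gps=gps, category=camera())
    return None

# Toy demonstration: a flat epoch vs. one with a P300-like deflection.
fs = 250
flat = [0.0] * fs
p300_like = [0.0] * fs
for i in range(int(0.25 * fs), int(0.50 * fs)):
    p300_like[i] = 8.0  # simulated positive deflection in the window

assert labeling_cap_step(flat, (32.7, -117.2), lambda: "outdoors") is None
snap = labeling_cap_step(p300_like, (32.7, -117.2), lambda: "outdoors")
assert snap is not None and snap.category == "outdoors"
```

In this sketch the `camera` callable stands in for the photo-plus-categorization step; in the team’s vision that sorting would happen in the cloud after upload.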
Team 7A also considered ways in which other people might benefit from a labeling cap. If pertinent mental, emotional, and physiological data could be outwardly displayed in, say, a projection directly above our heads, how would that impact the way in which we interact with others? Such technology could ultimately lead to the kind of mind-reading “which uses BCI to decode brain states to reconstruct what the subject is experiencing” suggested in NAKFI’s prompt for IDR Team 7. (However, some members suggested that this kind of mind meld would be too high-bandwidth for humans to handle.)
The labeling cap was a launchpad for a long list of new Brain Buddy products. These included:
Brain Buddy (standard model)
Many individuals wish that they were more productive. However, we often get bogged down in time-wasting tasks and are easily drawn away by tantalizing distractions. Now, by wearing the standard Brain Buddy, you will be notified about how best to apportion your time. The device will track which settings tend to lead to your best work and provide suggestions about how to spend your workday. It will also notify you with a gentle tone when to take a break from work, preventing you from becoming excessively stressed.
Health-A-Wear
Imagine that a veteran with post-traumatic stress disorder (PTSD) needs to check her mailbox. The neighbor’s sprinklers unexpectedly turn on as she walks down the driveway, triggering a panicked emotional response. The Health-A-Wear device would be primed to handle these issues, perhaps through noise cancellation, post-stimulus masking, or even direct-brain stimulation. It could learn which settings were more likely to cause problems
and prepare the individual accordingly. Over time, the device would slowly decrease its functioning, allowing the user to wean herself off the BCI.
Learn-A-Wear
In an online course, the professor has little to no understanding of how well the students are learning the material. However, if all of the students were wearing Learn-A-Wear devices, he would be able to monitor each of their mental states. This would allow him to calibrate his lesson to the advantage of each individual listener, as well as provide meaningful feedback on his overall performance. The students’ caps could also filter an incoming presentation for information that is most pertinent or helpful to them.
Brain Buddy, Jr.
When a baby cries, his parents must guess at what’s troubling him. Is he hungry? Does he need his diaper changed? Does he need to be burped? Now, by outfitting your baby with Brain Buddy, Jr., you can instantly check your baby’s state and understand what is on his mind. Furthermore, software upgrades would become available as your child aged. When he reaches, for example, the often problematic middle school years, you would be able to monitor his emotional state for signs of anxiety or depression.
Brain Buddy Silver
As you age, it can be difficult to keep track of upcoming health problems. Brain Buddy Silver would be calibrated to track your body for signs of possible medical issues, catching problems before they became dire. It would also monitor your mental state, suggesting, e.g., memory games to keep you alert and boost your cognitive functioning.
Team 7A agreed that they were interested in developing some of the proposed products. While fine-tuning the details of the different prototypes, they encountered several more problems that needed to be addressed by their hypothetical company. In order to be successful, Brain Buddy would have to find ways of:
• making the product ergonomic and aesthetically pleasing
• ensuring that the product met some minimum threshold of reliability, as a technological malfunction could be troublesome or even dangerous
• developing an efficient algorithm to retrieve, store, integrate, and secure neurological data
• keeping the device relatively affordable
• finding a practical way to power the device
• calibrating a single device to different brains and thinking styles
Furthermore, ethical questions continued to loom large. It was not difficult to imagine deeply troubling scenarios in which subversive groups reverse-engineered BCI technology to conduct cyber-neuronal warfare.
Team 7A was also particularly concerned with the possible adverse health effects that could result from BCI use. For example, some users might become dependent on or addicted to their BCI; one member suggested that a lawsuit would result if one person took another somewhere out of range of a wireless connection. Other users might develop “data-compulsive disorder,” becoming obsessed with the experience of lifelogging. Some could lose touch entirely with the real world.
Many of these issues came back to one large, overarching problem: we still don’t know how the brain works. We don’t know how wearing a BCI device would impact our nervous system, or whether the resulting, unpredictable plasticity changes would be positive or negative. Though the team ended its discussion feeling optimistic about Brain Buddy’s potential, they agreed that many questions must be answered to make BCI technology a reality.
IDR TEAM MEMBERS—GROUP B
- Dima Amso, Brown University
- Cynthia S. Atherton, Gordon and Betty Moore Foundation
- Jose M. Carmena, University of California, Berkeley
- John Doyle, California Institute of Technology
- Adam Gazzaley, University of California, San Francisco
- Ricardo Gil da Costa, Salk Institute
- Jay Lee, University of Cincinnati
- Chris Palmer, University of California, Santa Cruz
- Anna W. Roe, Vanderbilt University
- Aaron L. Williams, University of Virginia
Chris Palmer, NAKFI Science Writing Scholar, University of California, Santa Cruz
IDR Team 7B was asked to define the limits of Brain-Computer Interfaces (BCI) and determine the reliability of systems based on this connection.
One of the most active fields within the basic biological sciences over the past 20 years has been brain science. The variety of methods to record brain activity is rapidly growing, and the spatial and temporal resolution of brain-activity signals is improving, making it easier to tell precisely when and where the brain is activated. At the same time, advances in electronics and computing have led to the miniaturization of robust and powerful computing devices. This convergence of technological innovation is making it easier to bring computers into close contact with brains to acquire reliable brain activity signals and control brain activity in meaningful ways.
The team quickly decided it did not want to focus on the well-trod topic of technical engineering challenges related to improving BCI. Instead, the team turned its attention to how computers can work with our brains to enhance cognitive ability.
Inspired by Clifford Nass’ plenary talk, the team focused on one cognitive function that is relatively poor in humans: directing attention. As multitasking increasingly becomes a part of our everyday experience, it becomes difficult to know what one should be paying attention to from moment to moment. Humans can benefit from an ongoing stream of computer-generated cues about which features in the environment to attend to. The team also emphasized a technological approach that maintains or even enhances face-to-face social contact, again inspired by issues raised in Nass’ talk. Nass, a professor of communication at Stanford University, presented experimental results showing that heavy multitaskers were actually poor at multitasking due to a deteriorated ability to focus attention. He also emphasized the necessity of direct social interactions for developing healthy emotional responses.
The Ultimate Brain-Computer Interface for the Digital World
Our team envisioned a closed-loop BCI device that assesses a person’s environment, her life history and goals, and her current brain and body
states, and uses this information to influence her digital environment. The BCI would then provide an input to the brain’s attention centers to focus attention in the desired manner.
Input signals to the BCI
There are a variety of real-time signals about a person that can be fed into the BCI, such as electrical signals from the brain and physiological signals from the body (pulse, respiration, eye movements, facial expression, etc.). The BCI can also build an overall history of its user from weeks and months of real-time data. Medical history can also be incorporated.
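The fusion of these input signals into a running user history might look something like the sketch below. The field names (`eeg_alpha`, `pulse_bpm`, `respiration_rate`) and the simple append-and-average history scheme are invented for illustration; a real BCI would stream far richer data into a proper store.

```python
import time
from dataclasses import dataclass

@dataclass
class UserState:
    # One snapshot of brain and body signals (illustrative fields).
    timestamp: float
    eeg_alpha: float        # e.g., alpha-band power from brain recordings
    pulse_bpm: float
    respiration_rate: float

class StateHistory:
    """Accumulates real-time states so the BCI can build an overall
    picture of its user over weeks and months."""
    def __init__(self):
        self.records = []

    def log(self, **signals):
        state = UserState(timestamp=time.time(), **signals)
        self.records.append(state)
        return state

    def mean(self, field_name):
        # Long-run baseline for any tracked signal.
        values = [getattr(r, field_name) for r in self.records]
        return sum(values) / len(values)

history = StateHistory()
history.log(eeg_alpha=0.6, pulse_bpm=70.0, respiration_rate=14.0)
history.log(eeg_alpha=0.4, pulse_bpm=90.0, respiration_rate=18.0)
assert history.mean("pulse_bpm") == 80.0
```

Such long-run baselines are what would let the device distinguish a momentary anomaly from a user’s normal range.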
Types of BCI
The team did not specify what kind of BCI technology would read brain signals or stimulate attention areas. Some of the non-invasive options for reading brain signals include EEG and functional imaging. Invasive options include direct microelectrode array recordings and optical imaging.
Non-invasive options for brain stimulation include transcranial magnetic stimulation and transcranial electrical stimulation. Invasive options include optogenetics and direct microelectrode array stimulators.
How Does the Brain Focus and Maintain Attention?
Two aspects of attention
There are two primary aspects of attention that can be guided by the BCI. Selective attention indicates where, or to what, attention is directed. Sustained attention indicates a continued holding of attention at the selected location or object. The former can be manipulated to facilitate task switching and the latter can be manipulated to facilitate task-maintenance.
Brain areas involved in attention
Neurophysiological studies in monkeys show that electrical stimulation of visual cortical areas with microelectrodes is effective at directing visual attention to certain locations in the environment. Because visual
cortex contains map-like representations of the environment, it is relatively straightforward to figure out which small portion of the visual cortex to stimulate to selectively focus visual attention at a specific spatial location.
Like the maps in visual cortex, many brain areas contain spatially arranged maps of various features that have been well studied by neuroscientists. These areas can be similarly targeted with electrical stimulation to selectively focus attention on any number of specific environmental features. For example, precise stimulation of auditory cortex could bring attention to sounds of specific frequencies or tones—making it possible for the BCI to direct a person’s attention to a particular voice or a sound that conveys an imminent threat.
This type of stimulation could be very helpful in a number of clinical populations. For example: 1) Individuals with autism often have difficulty reading emotion and may benefit from stimulation to brain areas that process human faces when they engage in social interactions; 2) Victims of stroke or other brain damage may experience hemifield neglect, in which they ignore one half of their body and the external world. Directed stimulation to the sensory cortex, which contains map-like representations of the body, can alert these individuals in cases where there is risk to a neglected body part—e.g., it is about to come in contact with a hot stove or sharp object; and 3) For those with emotional disorders, limbic structures may be stimulated to modulate positive and negative reactions to specific events or stimuli.
The above are examples of how a BCI could use electrical stimulation to direct selective attention. Frontal and parietal cortical areas in the right hemisphere, as well as clusters of brain stem nuclei, have been shown to be important for sustaining attention in certain behavioral tasks. Electrical stimulation of some collection of these areas could assist people with maintaining attention on important tasks.
The algorithm used by the BCI can be tuned, or designed, to fit the needs or desires of an individual. Specialized algorithms may be made for individuals engaged in specific tasks. Other algorithms may be specialized for special needs populations.
Giving children cues about where to focus their attention (selective attention) while they are learning, as well as helping them sustain that attention, can accelerate learning.
On the flip side, perseveration is an inability to disengage attention. Children who perseverate on an object or task could benefit from a BCI that quickly recognizes the behavior, as well as the neural signals that precede it, and directs the child’s attention to a new object or task.
BCI can be used to direct attention most efficiently during multitasking sessions. In a case where an individual has a primary and a secondary task, the individual must occasionally break away from the primary task to perform the secondary task. The problem is that people are bad at knowing the best time to make these switches. Task switching during a period of sustained concentration can be counterproductive. The BCI can detect oncoming, naturally occurring dips in attention during the performance of the primary task and induce a switch to the secondary task.
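The dip-detection logic behind such a switch cue can be illustrated with a small sketch. The attention index would come from the BCI’s decoder; the rolling-window baseline, the window length, and the dip threshold here are all assumptions made for the example, not parameters of any real system.

```python
from collections import deque

class SwitchAdvisor:
    """Watches a rolling window of attention estimates and flags
    natural dips as good moments to switch to the secondary task."""
    def __init__(self, window=5, dip_ratio=0.8):
        self.history = deque(maxlen=window)  # recent attention estimates
        self.dip_ratio = dip_ratio           # "well below baseline" cutoff

    def update(self, attention_index):
        # Baseline is the mean of the recent window, if any history exists.
        baseline = (sum(self.history) / len(self.history)) if self.history else None
        self.history.append(attention_index)
        # Cue a switch only once a baseline exists and attention dips below it.
        return baseline is not None and attention_index < self.dip_ratio * baseline

advisor = SwitchAdvisor()
stream = [0.9, 0.92, 0.88, 0.91, 0.90, 0.55, 0.89]  # a dip at index 5
cues = [advisor.update(a) for a in stream]
assert cues == [False, False, False, False, False, True, False]
```

The key design point is that the cue fires on a *relative* dip against the user’s own recent baseline, so the same device adapts to individuals whose typical attention levels differ.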
There are many disorders of attention that can be addressed with the proposed BCI system, including autism, mood disorders, and brain injury.
Team 7B designed a general framework for a human-centered BCI that can be useful for education, enhancing everyday life, and providing therapeutic interventions. Though we focused on one specific cognitive function, i.e., attention, the framework can tackle additional functions such as perception, motor control, memory, and language.
Though the team did not discuss specifics about when BCI systems to control cognitive functions may be ready for testing in humans, most of the system components already exist in some form. Over the past few years, flexible electrode arrays have been implanted in human patients to detect the onset of epileptic seizures. Similar technology could be implanted within the brain areas discussed earlier to control some aspects of attention via electrical stimulation.
Brain stimulation is also possible via non-invasive technologies such as transcranial magnetic stimulation and transcranial direct current stimulation. Here, stimulation is delivered from outside the head, via a magnetic coil or scalp electrodes positioned above the target brain area. However, further advances are needed to provide stimulation to precise locations in the brain.
IDR TEAM MEMBERS—GROUP C
- Todd P. Coleman, University of California, San Diego
- Vincent DeSapio, HRL Laboratories, LLC
- Satinderpall S. Pannu, Lawrence Livermore National Laboratory
- Thomas Serre, Brown University
- Kelly Servick, University of California, Santa Cruz
- Qi Wang, Georgia Institute of Technology/Emory University
- Byron M. Yu, Carnegie Mellon University
Kelly Servick, NAKFI Science Writing Scholar, University of California, Santa Cruz
IDR Team 7C was asked to probe the limits of the Brain-Computer Interface (BCI) and to suggest how we might create reliable systems based on this connection. Because team members had a rich collective expertise in restorative systems (seeking to restore lost sensory or motor abilities in a clinical setting), early discussion focused on the state of the art and the technical obstacles to effective motor control through BCI.
To explore the outer limits of current technology for decoding information from brains, the team tried to envision scenarios of direct brain-to-brain communication. Potential benefits of such communication range from a new means of self-expression for “locked-in” patients to nonlinguistic communication among all humans. This technology might even enhance our social lives in a digital age where communication is increasingly carried out in an emotionally restrictive online environment.
However, questions of implementation quickly overshadowed the theoretical discussion. The extreme challenges of decoding neural signals into relevant information, maintaining the integrity of implanted interfaces over time, and generalizing among unique, individual brains all came to the fore.
The team struggled to address more abstract questions about information transfer without returning to the field’s current limitations.
In the end, the technical discussion laid a meaningful groundwork for developing a more creative application of BCI. The team put aside technical specificity, but retained the somewhat utilitarian spirit of BCI, to develop the forward-looking concept of “brainlogging” as a means of enhancing our shared digital experience.
Lessons from Neural Prosthetics: Two Fundamental Dichotomies
The field of BCI faces significant trade-offs between methodological alternatives. First is the question of invasive versus non-invasive technology. The team agreed that much information can be gathered from outside the brain, without the surgical implantation of electronics, but that such information can serve only very specific, limited purposes. The skull is ultimately a powerful insulator of signals, and the only way to record or stimulate precise neurons or neural populations is by opening the skull and interacting directly with brain tissue. However, such a radical procedure has limited potential for use in humans, particularly those not seeking solutions to a severe physical disability. In fact, even among amputees and paralyzed individuals, resistance to invasive BCI is common.
A second source of tension in the field concerns human physiology. There is debate about whether knowledge of the brain is a necessary component of BCI from an engineering perspective. For example, in an algorithm that transforms neural activity into the movement of a robotic arm, programmers need not understand the functional role of individual neurons or small groups of neurons. Most of the signals the neurons produce are collapsed or discarded in the process of translating brain data into meaningful information, like instructions for a robotic arm. However, the team also acknowledged that a better understanding of the neural physiology that underlies interrelated systems in the brain might lead to better computational models and more effective BCI.
Though these issues could not be resolved in the course of a conference, neither were they ignored. The question of invasiveness highlighted the need to consider the desires of the end user. The tension between brain physiology and device functionality expressed the many levels on which the brain can be explored and mined for useful information. Both questions would inform the evolving discussion.
Asking a New Question
Knowing that the limitations of current technology and computational modeling are constantly changing, the team struggled to find a question that would set these issues aside. The goal was to find an application for BCI that might have desirable effects for humans in the digital age, regardless of the technological platform on which it was implemented. Assuming that brain monitoring could gather large datasets, what information would we want? The team’s question was refined to: what information can we draw from neural activity that we’re not already measuring directly in some other way?
This line of thought led to the concept of lifelogging—the focus of IDR Team 3. Currently it is possible to record all kinds of data from digital users (GPS location, heart rate, sleep patterns, jogging speed, caloric intake, etc.). It is also possible to log our opinions through built-in features of the digital environment: the “like” button on Facebook and numerical rating systems like awarding “stars” to a Netflix movie or an Amazon product are examples. However, conveying subtle emotional reactions in the digital world can be challenging. By nature of their brevity, comment threads and status updates rarely contain rich, emotionally introspective content.
By monitoring brain activity, Internet users could record their complex mental states in real time, bypassing the task of formulating and broadcasting emotions as text.
These brain logs could be shared and compared, allowing a user to create a personalized online “signature” and match it with the signatures of other users. Two people who exhibit similar neural responses to a piece of online media might have other meaningful similarities. (Work by Hasson et al. in 2004 has already shown that viewers watching the same movie exhibit closely synchronized neural activity.) A process that the group dubbed “brainlogging” could focus on subtle similarities and differences in brain activity to connect users who have parallel emotional responses to the same online experience.
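The matching step can be sketched with a simple similarity measure. Everything here is a toy: the per-user “signature” vectors, the use of Pearson correlation, and the 0.8 threshold are invented for illustration; a real system would extract features from recorded neural activity and likely use far more sophisticated similarity models.

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length response vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def match_users(signatures, threshold=0.8):
    """Pair up users whose neural 'signatures' to the same media
    correlate strongly (above an assumed threshold)."""
    users = list(signatures)
    return [(u, v) for i, u in enumerate(users) for v in users[i + 1:]
            if pearson(signatures[u], signatures[v]) >= threshold]

signatures = {
    "alice": [1.0, 2.0, 3.0, 4.0],
    "bob":   [1.1, 2.1, 2.9, 4.2],   # similar response profile to alice
    "carol": [4.0, 1.0, 3.5, 0.5],   # dissimilar profile
}
assert match_users(signatures) == [("alice", "bob")]
```

Correlating whole response profiles, rather than single ratings, is what would let the system capture the “parallel emotional responses” the team had in mind.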
Fleshing Out the “Brainlogging” Concept
The group envisioned a system that records and stores neural activity while a person navigates the Internet and interacts with digital media. They chose not to specify what type of brain imaging technology would be used, or which neural regions would be monitored, judging these technical considerations to be outside the scope of the meeting. However, it was
assumed for purposes of discussion that scientists could already chronically record neural activity for large populations using minimally invasive, low-cost technology—a scenario they agreed was futuristic but not unrealistic. The remaining challenges in creating such a system would be:
• Analyzing huge sets of brain data to extract the interesting or relevant features
• Timestamping online media so that each neural response could be connected to the content that provoked it
• Determining which brain activity is relevant, and if necessary, separating purely sensory functions (eye movements to track video, for example), from deeper emotional responses
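The timestamping challenge in the second bullet amounts to aligning two clocks: one for the neural samples and one for the media events. A minimal sketch, assuming both streams share a single clock and an (invented) fixed response window:

```python
import bisect

def align_responses(samples, events, window_s=2.0):
    """For each media event, collect the neural samples recorded within
    `window_s` seconds after it. `samples` is a time-sorted list of
    (t, value) pairs; `events` is a list of (t, media_id) pairs."""
    times = [t for t, _ in samples]
    aligned = {}
    for t_event, media_id in events:
        # Binary-search the sample timeline for the response window.
        lo = bisect.bisect_left(times, t_event)
        hi = bisect.bisect_right(times, t_event + window_s)
        aligned[media_id] = [v for _, v in samples[lo:hi]]
    return aligned

samples = [(0.0, 0.1), (0.5, 0.2), (1.0, 0.9), (1.5, 0.8), (3.0, 0.3)]
events = [(0.4, "video_A"), (2.5, "video_B")]
out = align_responses(samples, events)
assert out == {"video_A": [0.2, 0.9, 0.8], "video_B": [0.3]}
```

The third bullet (separating sensory from emotional activity) would then operate on these aligned windows, which is a much harder, unsolved modeling problem.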
Even if innovative computer science models could address these issues, broader concerns remain. This extreme form of datalogging raises questions of privacy, particularly if such data were incorporated into social media. Companies might be motivated to use brain data to target advertising more directly, capitalizing on subconscious emotional characteristics of users to manipulate buying behavior. A user’s data might also be used as a predictive tool to draw conclusions about character flaws or even criminal tendencies. Employers might use such data to discriminate among candidates based on their neural habits. As with any personal information collected and evaluated in the context of the Internet, questions of social policy abound.
Envisioning a “Brainlogged” Future
Beyond the originally envisioned benefits of neural recording, the team identified other parts of our lives that might be revolutionized in a “brainlogged” environment.
Users might reap health benefits from having memory banks to store neural data. Medical professionals could analyze these data as another diagnostic tool or as a way of monitoring patient wellbeing. The system might be able to identify the neural precursors to certain illnesses. In particular, the team suggested that diseases like Alzheimer’s and Parkinson’s have distinct neural “warning signs” that might enable early detection. Individuals with mental illness such as depression might benefit from more thorough tracking of their emotional states.
The system also has potential as an educational tool. Online learning environments often lack sufficient feedback about student understanding,
attentiveness, and investment. This direct form of monitoring could allow educators to track the neural patterns of their students closely and adjust the learning environment to help students succeed.
Finally, an application that might resonate in an age of increasing social fragmentation is “bHarmony”—a romantic matching system based on neural similarities and shared emotional responses. (Perhaps neural logging would offer more personality insight than a series of multiple-choice questions….) The group took into account evidence presented at this year’s NAKFI conference about the possible decline in meaningful human interaction in the digital age. The members suggest that “brainlogging” technology could someday create a healthier, more emotionally enriching world around our changing brains.