
Brain Health Across the Life Span / Search Inside This Book

170 matches found for “How People Learn: Brain, Mind, Experience, and School: Expanded Edition” in Chapter 5, Measuring Brain Health

Select a page to see where your word(s) or phrase(s) are located in the OpenBook. Excerpts from the chapter provide context.


In the middle of page 65...
...5 Measuring Brain Health ...
In the middle of page 65...
... will require avoiding potholes by improving reproducibility, improving predictive modeling (particularly for the development of biomarkers), and better understanding intraindividual variability over time so it can be couched within the context of a person’s life. (Russell Poldrack) ...
At the bottom of page 65...
... sample sizes in neuroimaging studies undercut the ability to believe a significant number of the results coming out of the brain health literature; mandatory preregistration would contribute substantially to moving toward a neuroscience that is aimed at translating findings into more effective ...
At the bottom of page 65...
...Purely behavioral or purely psychological measures can achieve the same standard of quality as a biological measure (Lis Nielsen), but measures and definitions should be arrived at through empiricism and evidence rather than through opinion. (Huda Akil) ...
At the bottom of page 65...
...A person who is resilient could be defined as (1) having a variety of neurocognitive tools and networks that can be activated in the context of internal and external environmental ...
In the middle of page 66...
...and psychological challenges, and (2) being able to adaptively activate these tools and networks to optimize function in response to environmental and psychological challenges. (Luke Stoeckel) ...
In the middle of page 66...
... with more flexibility in their cognitive toolkit will look less like themselves from time point to time point. He suggested treating this variability and flexibility as a phenotype. (Russell Poldrack) ...
In the middle of page 66...
...Resilience can be defined as maintaining access to a sufficient range of cognitive tools and an adequate degree of neuroplasticity over time. The ability to “roll with the punches” and rebound from adversity, for example, partly ...
At the bottom of page 66...
...This chapter features a presentation on grand views and potholes on the road to precision neuroscience by Russell Poldrack, Albert Ray Lang Professor of Psychology at Stanford University. He discussed the ... techniques and standards for reliability that are necessary for measuring brain health and resilience; he also described how quality metrics and criteria can improve measurement of brain health and resilience in future research. ...
At the bottom of page 66...
...GRAND VIEWS AND POTHOLES ON THE ROAD TO PRECISION NEUROSCIENCE ...
At the bottom of page 66...
... will require avoiding potholes by improving reproducibility, improving predictive modeling (particularly for the development of biomarkers), and better understanding intraindividual variability over time so it can be couched within the context of a person’s life. He opened by laying out ... idea of precision medicine, defined as prevention and treatment strategies that take into account individual variability in genetics, environments, ...
In the middle of page 67...
... and lifestyles (Goossens et al., 2015). Precision medicine promises to provide targeted treatments that are more effective for everyone than one-size- ... wave is being driven in part by success stories emerging from precision cancer drugs that are markedly improving outcomes based on genetic targeting and treatment. For example, Gleevec is a precision cancer treatment developed for people with a particular genetic mutation that causes chronic myeloid ... 5-year survival to almost 90 percent 5-year survival. We are seeing this substantial improvement in outcomes in several other aspects of cancer and other diseases, as well....
In the middle of page 67...
... Poldrack reflected on an optimistic dream of how a future precision neuroscience of brain health might process information from neuroscientific and other biological measurements to target the way that individuals are treated. A person who visits a physician with some kind of complaint related to ... ” along the road to achieving the dream of precision neuroscience: (1) irreproducibility of results, (2) use of faulty predictive models, and (3) lack of understanding of intraindividual variability....
In the middle of page 68...
... being true decreasing as the number of tested relationships increases. The third factor relates to flexibility in designs, definitions, outcomes, and methods of analysis—greater flexibility decreases the likelihood that findings are true....
In the middle of page 68...
... in their response to transcranial stimulation treatment. Although a replication of the full study has not yet been attempted, another group tried and failed to replicate the particular connectivity feature shown in the Drysdale study (Dinga et al., 2019). This reflects broader concerns about ... studies: (1) a lack of understanding about what replicates and (2) the inability to replicate results....
At the bottom of page 68...
... Estimating connectivity reliably requires substantial scan time, but the brain imaging studies currently being carried out are collecting far too little information about each individual. Poldrack estimated that achieving ...
At the bottom of page 68...
... imaging literature are not sufficiently reliable, said Poldrack. A number of meta-analyses looked at around 100 neuroimaging studies of depression and reported that there are significant differences in brain activity between people diagnosed with depression and healthy individuals (Müller et al. ... replication of the previous work. Across those same 100 studies, the group found no convergent differences in brain activity between healthy and depressed individuals, which was directly at odds with the findings of the previous meta-analyses (Müller et al., 2017)....
In the middle of page 69...
... neuroimaging, said Poldrack. A large proportion of neuroscience research is badly underpowered across both human structural neuroimaging studies and animal studies, undermining the reliability of neuroscience research (Button et al., 2013). If a study has less than 10 percent power, it means that ...
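The power problem Poldrack cites can be sketched numerically. The snippet below is my own illustration, not code from the presentation; it uses a normal approximation to a two-tailed, two-sample test, and the effect sizes and group sizes are hypothetical values chosen to mirror typical neuroimaging studies:

```python
from statistics import NormalDist

def two_sample_power(effect_size, n_per_group, alpha=0.05):
    """Approximate power of a two-tailed, two-sample test
    (normal approximation; effect_size is Cohen's d)."""
    norm = NormalDist()
    z_crit = norm.inv_cdf(1 - alpha / 2)
    # Noncentrality: expected z statistic under the alternative hypothesis
    noncentrality = effect_size * (n_per_group / 2) ** 0.5
    return norm.cdf(noncentrality - z_crit)

# A medium effect (d = 0.5) with 15 subjects per group:
print(round(two_sample_power(0.5, 15), 2))   # ~0.28 -- well under 50% power
# A small effect (d = 0.2) with the same groups:
print(round(two_sample_power(0.2, 15), 2))   # ~0.08 -- below the 10% threshold
```

Under this approximation, detecting a small effect with typical fMRI group sizes falls into exactly the sub-10-percent-power regime described above.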
In the middle of page 69...
... The primary focus of research should not be the number of positive findings but how many of those positive findings are actually true. Positive predictive value is the probability that a positive result is true (Button et al., 2013). ...
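Positive predictive value has a simple closed form in the framing of Button et al. (2013). The sketch below is illustrative (the function name and the example numbers are mine, not from the workshop); it combines power, the false-positive rate, and the pre-study probability that a tested effect is real:

```python
def positive_predictive_value(power, alpha, prior):
    """Probability that a statistically significant finding is true
    (Button et al., 2013): true positives / all positives."""
    true_positives = power * prior
    false_positives = alpha * (1 - prior)
    return true_positives / (true_positives + false_positives)

# Well powered (80%), plausible hypothesis (prior 0.5):
print(round(positive_predictive_value(0.80, 0.05, 0.5), 2))  # 0.94
# Underpowered (10%), long-shot hypothesis (prior 0.1):
print(round(positive_predictive_value(0.10, 0.05, 0.1), 2))  # 0.18
```

The contrast makes the argument concrete: with low power and a speculative hypothesis, most "significant" findings are false even when the false-positive rate is nominally controlled.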
At the bottom of page 69...
... Imagine you are going to do 100 studies, but your detector is broken so you only have random noise. If you control the false-positive rate at 5 percent, then on average, five of those 100 studies will come out with significant results. ...
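The broken-detector thought experiment is easy to simulate. This is a rough stdlib-only sketch of my own (not part of the presentation); the critical value 2.02 approximates a two-tailed 5 percent t cutoff for two groups of 20:

```python
import math
import random
import statistics

def null_study(n=20):
    """One 'study' with a broken detector: two groups of pure noise,
    compared with a Welch-style t statistic."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    t = (statistics.mean(a) - statistics.mean(b)) / math.sqrt(
        statistics.variance(a) / n + statistics.variance(b) / n)
    return abs(t) > 2.02  # ~two-tailed 5% cutoff for ~38 df

random.seed(0)
significant = sum(null_study() for _ in range(100))
print(significant)  # on average, about 5 of the 100 null studies look "significant"
```

Every one of those hits is a false positive, which is why the raw count of positive findings says little on its own.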
At the bottom of page 69...
.... In general, published data with a sample size of 20 tends to reflect a more flexible sample size determination based on interim data analysis and other types of problematic analyses. A majority of studies in the meta-analysis had grossly insufficient N. In other words, researchers invited more ...
In the middle of page 70...
...This issue abounds in the brain health literature, said Poldrack. He carried out an informal search on PubMed for the terms brain health and fMRI, yielding 22 studies published in 2011 or later. Based on the published sample sizes for each group in each study, 22 percent of the groups ... the ability to believe a significant number of the results coming out of the brain health literature, he cautioned. Poldrack suggested that mandatory preregistration would contribute substantially to moving toward a neuroscience that is aimed at translating findings into more effective ...
In the middle of page 70...
... (Carp, 2012). Poldrack’s group is assessing the effects of this type of methodological pluralism in the Neuroimaging Analysis Replication and Prediction Study. They collected a dataset at Tel Aviv University on a ...
At the bottom of page 70...
...BOX 5-1 Mandatory Preregistration ...
At the bottom of page 70...
...Mandatory preregistration could help to propel neuroscience toward findings that can be translated into more effective treatments. This involves ... preregistering their study plans—including sample size, inclusion or exclusion, analysis plan, and primary outcomes—prior to carrying out the study. This does not preclude exploratory analysis of data, but it does prevent those exploratory ... from being presented as hypothesis driven (also called HARKing) and open to criticism. This practice helps to prevent unfounded claims of positive effects, as demonstrated in other areas of research. In 2000, the ... Heart, Lung, and Blood Institute instituted a requirement that clinical trials for relevant drug or dietary supplement interventions must preregister the outcomes for ... they will be looking. Prior to this policy, clinical trials largely claimed positive effects, with virtually no claims of harmful effect and relatively few claims of null effect. During those years, the flexibility built into the process allowed researchers to claim positive findings that ... in many cases. In the years since the policy was instituted, almost every study reports a null effect, with very few showing a beneficial effect and one trial even showing a harmful effect (Kaplan and Irvin, 2015). ...
At the bottom of page 70...
...SOURCE: As presented by Russell Poldrack at the workshop Brain Health Across the Life Span on September 24, 2019. ...
In the middle of page 71...
... decision-making task and distributed the datasets to 82 different research groups, 70 of which returned their decisions on a set of given hypotheses using their standard ... methods, as well as providing thresholded and unthresholded maps. Economists helped to perform prediction markets to assess the researchers’ abilities to predict outcomes. The findings of ...
In the middle of page 71...
..., almost always—overestimates the degree to which a prediction can be made in a new dataset, because the data are being reused both to fit the model and to assess how well the model fits....
At the bottom of page 71...
... An observed correlation does not equate to predictive accuracy (Copas, 1983). This issue is known as shrinkage in statistics and as overfitting in machine learning. In machine learning, out-of-sample predictive accuracy is generally quantified using cross-validation with a ... dataset. This process of cross-validation involves iteratively training a model on a subset of the data (the training data) and then testing the accuracy of the model’s predictions on the remaining data (the validation data). Poldrack’s group looked at the recent ...
In the middle of page 72...
... Small samples also inflate predictive accuracy estimates (Varoquaux, 2018). The decline of effect size over time and with respect to sample size is particularly problematic in the use of machine-learning tools (Varoquaux, 2018). In brain imaging, early studies with ... at sample sizes from publications claiming to show prediction based on fMRI, but more than half of the studies have sample sizes of less than 50 and 18 percent have sample sizes smaller than 20. “Doing machine learning with sample sizes smaller than 20 is almost guaranteed to give you ...
In the middle of page 72...
... As the field of neuroscience moves toward greater appreciation of the problematic issues related to developing predictive models and how they may be contributing to the development of invalid biomarkers, it may be instructive to look to other fields to see their requirements for ... biomarkers. Biomarkers for cancer or other diseases are generally validated with very large samples, on the order of tens of thousands. Damien Fair asked if studies with small sample sizes should be measuring cross-validation within the sample, or whether the same result ... generated even with a small sample size for training and a completely independent dataset for testing. Poldrack replied that the results will be much more variable with a small sample size, but training on ... small sample size and then testing on an independent small sample will help to modulate the variability and avoid overfitting....
At the bottom of page 72...
... Poor Understanding of Intraindividual Variability Over Time...
At the bottom of page 72...
... Poldrack turned to his third pothole on the road to precision neuroscience, which is poor understanding of intraindividual variability over time. An increasing body of knowledge is providing insight into how brain function changes over time on both ... of knowledge in the middle of the spectrum, with respect to how brain function changes across days, weeks, or months. It has become clear that understanding brain disorders requires understanding individual variability in brain function. A person with schizophrenia or bipolar disorder,...
At the bottom of page 72...
... 4 Poldrack, R. Workshop presentation—Grand Views and Potholes on the Road to Precision Neuroscience. Available at http://www.nationalacademies.org/hmd/Activities/Aging/...
In the middle of page 73...
... for example, will tend to have significant fluctuations between high and low functional levels across daily life (Bopp et al., 2010). Over the course of a few weeks, an individual can go from completely disabled to ... functional (Kupper and Hoffmann, 2000). Labeling somebody as having a particular disorder glosses over the large degree of variability from day to day and week to week in ...
In the middle of page 73...
... Nearly all of human neuroscience assumes that the functional organization of the brain is stable outside of plasticity, development, and aging. But until very recently, this assumption has not been tested empirically. Understanding the variability and dynamics of human brain function ... multiple time scales is critical for the development of precision neuroscience and neuroscientific interventions, he said. In 2013, Poldrack engaged in a study called the My Connectome project by collecting as much data about ... as possible, including imaging data from more than 100 scans (resting fMRI, task fMRI, diffusion MRI, and structural MRI), behavioral data (mood, lifestyle, and sleep), and other biological measurements (Laumann et al., 2015; Poldrack et al., 2015). They ... in Poldrack’s primary sensory motor networks across sessions. Imaging studies across 120 individuals did reveal some variability in somatomotor and visual networks, but that was dwarfed by variability in default, frontoparietal, and dorsal attention networks....
At the bottom of page 73...
... factors that may drive variance in resting-state connectivity within an individual. The My Connectome project revealed that intake of caffeine and food affects large-scale network structure, for example (Poldrack et al., 2015). Figure 5-1 shows that on days in which he fasted and did not drink ... in the morning before the scan, his somatomotor network and the secondary visual network were highly connected; on mornings in which he consumed food and caffeine before the scan, those networks were essentially disconnected. He emphasized that this is not an overall degradation in connectivity—...
At the bottom of page 73...
... dynamics within individuals. An analysis of the data across the entire My Connectome session looked for patterns of connectivity recurring over time, and found two “temporal metastates” that were present throughout (Shine et al., 2016). Both seemed to be related to his being attentive, ...
In the middle of page 74...
...FIGURE 5-1 Caffeine and food consumption affects large-scale network structure. SOURCES: As presented by Russell Poldrack at the workshop Brain Health Across the Life Span on ...
At the bottom of page 74...
... were not significantly correlated with caffeine or food intake. When he was drowsy or sluggish, there was much greater integration of the visual and somatomotor networks, while the other networks became slightly more concentrated on the days when he was fed and caffeinated. Despite the wealth of ... scanned individuals remains below roughly 20. Denser data collection from individuals will need to increase in order to better understand variability at multiple scales with sufficient power. He suggested that the field of imaging should draw from the literature on aging about ...
At the bottom of page 74...
...Damien Fair, associate professor of behavioral neuroscience, associate professor of psychiatry, and associate scientist at the Advanced Imaging Research Center at the Oregon Health & Science University, asked about ...
In the middle of page 75...
... outcomes such as cognitive ability or clinical status. Poldrack replied that his group has recently published papers, including one on the nature and quality of behavioral measures used in the domain of self-regulation and self-control. He noted that measures used in experimental psychology are not ...
At the bottom of page 75...
... particular cognitive components are not useful as measures of individual differences. These types of measures, such as the Stroop task, can be useful and have a robust effect, but the effect is not robust at the level of test reliability; therefore, they are not reliable enough to provide any validity ... an individual difference measure. Huda Akil, codirector and research professor of the Molecular and Behavioral Neuroscience Institute and Quarton Professor of Neurosciences at the University of Michigan, added that just because a measure is useful as a group measure, it is not ...
At the bottom of page 75...
... Akil remarked that the field of brain science has generated a large body of real and actionable knowledge, but it would be helpful to know where the failures lie; for example, whether they lie in the nature of the measurements, the level of ... , the use of human versus animal models, and so on. Poldrack replied that for imaging purposes, certain findings about the organization of the brain are replicable and reliable. He drew a line, ... , between group mean activation and correlation of individual differences and group differences. The real failures of reproducibility are being seen in differences between diagnostic and control groups, as well as in ...
At the bottom of page 75...
... issues are the problems of analytic variability—because different methods of analyzing data will naturally lead to different results—and of publication bias, because journals are not willing to publish null results. This makes it tempting for researchers to perform many different ...
In the middle of page 76...
... Gagan Wig, associate professor of behavioral and brain sciences at the Center for Vital Longevity at the University of Texas at Dallas, commented that identification of changes in measures of brain ... behavior, or even assessment of reliability of measures of brain and behavior, could be confounded by differences in practice effects (unanticipated learning of the testing procedures), although he noted that this ...
In the middle of page 76...
... PANEL DISCUSSION ON THE WAY FORWARD IN MEASUREMENT AND RESEARCH...
In the middle of page 76...
... individuals. Monica Rosenberg, assistant professor in the Department of Psychology at the University of Chicago, emphasized the importance of sharing and integrating data and models (including feature weights and prediction algorithms) to allow for external validation and help move the field toward the ...
In the middle of page 77...
... illness than the Diagnostic and Statistical Manual of Mental Disorders, 5th Edition (DSM-5). It focuses more on the need to study and measure dimensions such as cognitive function and affective valence, for example, instead of ...
At the bottom of page 77...
... Akil expressed concern that like the DSM, the RDoC project was also developed by committee and is not biologically based, so one orthodoxy is essentially being replaced with another. The advantage of RDoC is that it encourages people to think ... will require considering dimensionality of different phenotypes. However, the RDoC project would benefit from being more biologically informed and appropriately validated. The reliability of measures will need to be established in order for biology to become the framework to inform treatment and ...
At the bottom of page 77...
... Colleen McClung, professor of psychiatry and clinical and translational science at the University of Pittsburgh, remarked that the animal research community is struggling somewhat with the advent of RDoC. ... much effort building animal models with various characteristics and brain–body features of psychiatric diseases, researchers are now being asked to study each characteristic in isolation. RDoC has been helpful ... of concrete predictive modeling of continuous measures of behavior or symptoms—or data-driven subtyping—can capture symptom variability and complement binary or categorical classifications. For example, predicting attention-deficit hyperactivity disorder (ADHD) symptoms yields significant ...
At the bottom of page 77...
... Lis Nielsen, chief of the Individual Behavioral Processes Branch of the Division of Behavioral and Social Research at the National Institute on Aging, said that measures in psychological domains do not necessarily need to be derived from biology. ... can be based on functional or behavioral categories; mental health and subjective states, for instance, can be assessed by self-report or performance-based tasks. Purely behavioral or purely psychological measures can ... the same standard of quality as a biological measure. Akil clarified that she is calling for a system that is empirically evidence based, rather than committee ...
In the middle of page 78...
... RDoC also harks back to the issue of trait versus state, said Akil. She believes that coping and affective disorders contain nested concepts that could be disentangled biologically, genetically, and environmentally. With a broad lens, the ... such as responding to stress during adolescence—or in moment-to-moment responses. These time-nested ways of thinking, including behaviorally and biologically, could be helpful in thinking about measurement. Poldrack said that the Midnight Scan Club data have shown that in general, connectivity ...
At the bottom of page 78...
... McClung said that chronotypes are relatively stable after adolescence and before the age of 65 years or so. In fact, certain polymorphisms in circadian genes have been associated with traits of being a morning person or a ... person. Akil suggested that in the context of defining and measuring brain health, these are all examples of elements of the general framework or signature for how an individual is functioning; changes occur ...
At the bottom of page 78...
... definition of resilience in the context of brain health. A person who is resilient could be defined as (1) having a variety of neurocognitive tools and networks that can be activated in the context of internal and external environmental and psychological challenges, and (2) being able to adaptively ... these tools and networks to optimize function in response to environmental and psychological challenges. Poldrack suggested that resilience could be framed as the ability to bring a wide range of cognitive tools to bear in ... with more flexibility in their cognitive toolkit will look less like themselves from time point to time point. He suggested treating this variability and flexibility as a phenotype. Akil sketched her own definition of resilience. Neuroplasticity is a finite resource, but there may be ways to increase ... maintain reserves of neuroplasticity. Similarly, maintaining intellectual, emotional, and physical flexibility is a component of being resilient that speaks directly to having more affective or cognitive resources to draw upon....
At the bottom of page 79...
... In early life, the brain has a large degree of flexibility and many available options and tools, so the natural pruning that occurs is necessary. However, this pruning should not be excessive to the point of eliminating too many of those ... and coping tools, which would preclude the ability to respond effectively to adversity. In this context, resilience can be defined as maintaining access ... a sufficient range of cognitive tools and an adequate degree of neuroplasticity over time. The ability to “roll with the punches” and rebound from adversity, for example, partly depends on having more than one coping strategy available. However, this personalized definition does ...

The text searched in this chapter is uncorrected, machine-read text. Please note that the searchable text may be scanned, uncorrected text and should be presumed inaccurate. Page images should be used as the authoritative version.