3
Improving Training in Protocol Design, Experimental Rigor, and Quantitative Skills

Key Highlights Discussed by Individual Participants

• Steps to improve the reproducibility of experimental results can be implemented by graduate programs (additional curriculum courses on experimental rigor and design), publishers (publish both positive and negative experimental results as well as detailed experimental methods), and funding agencies (issue longer awards and support replication studies) (Landis and Mason).
• Experimental design can be enhanced by incorporating discussions of best practices into every course taken by trainees, regardless of topic, thus ensuring ongoing learning that can be applied in a variety of contexts (Chesselet).
• Trainees need to have an understanding of what outcome measures are needed to appropriately test their hypotheses when designing their experiments (Marder).
• Scientists often do not have the statistical training required to determine the most appropriate analyses to perform for their experiments (Brown).
• Statistics modules tailored to the key subdisciplines of neuroscience can augment general courses on statistics, providing a more specific set of analytical skills that might benefit trainees according to their concentration (Brown).
• In addition to enhancing training in statistics, the field of neuroscience could benefit from more collaboration with statisticians (Brown and Weber).

NOTE: The items in this list were addressed by individual participants and were identified and summarized for this report by the rapporteurs. This is not intended to reflect a consensus among workshop participants.
DEVELOPING A 21st CENTURY NEUROSCIENCE WORKFORCE

Over the course of the workshop, participants discussed several opportunities for trainers to think differently about how to train the next generation of neuroscientists, in areas such as cross-discipline collaborations, handling of large datasets, and the development of new tools and technology, as described in the previous chapter. Despite the inherent challenges, including significant changes to the culture of neuroscience, training in these areas appears to have generated an overall sense of positivity and excitement within the neuroscience community. In contrast, many workshop participants identified noticeable gaps in three areas of neuroscience training: protocol design, experimental rigor, and quantitative skills.

Many participants discussed several challenges associated with trainees not learning the fundamentals of conducting rigorous experiments, including the risk of irreproducible findings. Marder led the discussion on protocol design, emphasizing the importance of using common sense and deep intuition about data to design and execute the right experiments as the landscape of neuroscience continues to evolve. Emery Brown, professor of computational neuroscience at the Massachusetts Institute of Technology, outlined gaps in the training of quantitative skills and the need to recruit experts in statistics to the field of neuroscience.

ENHANCING EXPERIMENTAL RIGOR AND REPRODUCIBILITY

The Problem

The enterprise of science is based on making discoveries that can capture new knowledge about the world. That knowledge may be exploited to make predictions about the occurrence of natural phenomena, or it can be the inspiration to potentially manipulate the natural world. However, if discoveries are not repeatable, those predictions are insignificant and manipulations will be ineffective. Several workshop participants addressed science's irreproducibility problem, which has been well documented in the past few years, notably in an article in The Economist, as well as in several journal articles (Begley and Ellis, 2012; Chatterjee, 2007; The Economist, 2013; Perrin, 2014; Prinz et al., 2011; Scott et al., 2008; Steward et al., 2012).
The Causes

Participants discussed some causes of irreproducibility. Brown said the inability to reproduce results boils down to scientists' inability to reason under uncertainty and to understand how to analyze data. Martone also mentioned the role of data, specifically how the data are handled and tracked, as being critical for reproducing findings. Carol Mason, professor of pathology and cell biology, neuroscience, and ophthalmic science at Columbia University, informed participants about the Enhancing Reproducibility of Neuroscience Studies^1 symposium that occurred at the 2014 Society for Neuroscience meeting, during which invited speakers listed several likely causes of poor reproducibility (Landis et al., 2012; Steward and Balice-Gordon, 2014):

• Difficulty of generating cutting-edge science
• Confounding variables
• Unreliable resources (cell lines, chemicals, antibodies)
• Deficient experimental procedures
• Lack of transparency in reporting findings
• Inadequate randomization, blinding, and sample size estimation
• Publication bias

Potential Solutions

One workshop participant placed the responsibility for changing the culture around experimental rigor primarily on faculty members within graduate programs. He said that as journal reviewers and editors, grant reviewers, mentors, and hiring and promotion advisory board members, faculty are in the best position to demand change and to model it for trainees. Landis suggested the development of a curriculum for courses on experimental rigor that can be shared across universities to ensure that students and faculty alike receive the same training in best practices (see program example in Box 3-1). In addition, Mason suggested that enhanced training in ethics might help to address the increasing manipulation of data and plagiarism. She added that webinars on statistical reasoning and proper experimental design might help raise awareness of issues related to randomization, blinding, and calculating sample size.

^1 See http://www.abstractsonline.com/Plan/ViewSession.aspx?sKey=014e2bf7-f60a-41e3-aaa6-668d88a03ad9&mKey=54c85d94-6d69-4b09-afaa-502c0e680ca7 (accessed October 29, 2014).

BOX 3-1
Program Example: Harvard University Data Boot Camp

The methods used to teach the data analysis skills that Brian Litt mentioned are critical, according to Michael Springer, assistant professor of systems biology at Harvard Medical School. Along with colleague Rick Born, professor of neurobiology at Harvard Medical School, Springer runs a MATLAB-based boot camp^a in programming and quantitative skills that employs innovative training methods. Boot camp students not only learn how to use programming tools, but they also learn the best tools to use for specific problems and how to evaluate if their tools are working properly. The boot camp focuses on image development, statistics, bioinformatics, and modeling. Through lectures, long examples, and hands-on experiences, students learn how to visualize data and how to approach them from different directions. Based on responses that students give during in-lecture quizzes using an interactive tool, teaching assistants can identify who would benefit from one-on-one interactions. Springer also finds that peer-to-peer mentoring can be more effective than classroom lectures in some situations.

^a http://springerlab.org/qmbc (accessed October 28, 2014).
SOURCE: Michael Springer presentation, Harvard University, October 28, 2014.

DESIGNING EXPERIMENTS WITH COMMON SENSE AND INTUITION

The biggest challenge facing neuroscience is training students to comprehend the data they are collecting, said Marder.
As next-generation technologies proliferate and experiments become more complex and multifaceted, she opined that common sense and intuition will be increasingly critical. Without a basic understanding of what their data should look like and what they mean, scientists will be unable to determine which experimental design details matter for which problem. Part of this understanding includes knowing ahead of time the appropriate statistical tests to use for a study and what outcome measures to test. According to Marder, simple prestudy power analyses should inform how much data should be collected, rather than collecting data until a p-value reaches significance. She noted that intuition and communication are also important when working collaboratively; a statistician might not know how the data were collected or what they mean after conducting the statistical analyses.

Marder contends that it is difficult to comprehend one's data without working with raw, unprocessed data. Neuroscientists too often become separated from their raw data by models embedded in the hardware they use to collect data and in the off-the-shelf programs they use to analyze those data. Students working with functional magnetic resonance imaging (fMRI), for example, may not fully understand the algorithms that preprocess their data before they examine them. As a result, some students struggle with experimental design because they do not fully understand their data analysis tools, said Marder. She added that a lack of intuition about data also makes it challenging to troubleshoot problematic data and can lead to faulty interpretations of experimental results. Marder suggested that the development of new methodologies to help visualize large datasets and reduce their dimensionality might facilitate the comprehension of one's data.

According to several workshop participants, another challenge to designing the right experiment is knowing what tool to use to collect the appropriate data. Akil warned that "falling in love with a tool" can get in the way of asking the right questions.
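The prestudy power analysis Marder described can be made concrete with a short calculation. The sketch below uses the standard normal approximation for a two-sided, two-sample comparison of means with equal group sizes; the function name and default thresholds are illustrative choices, not from the workshop:

```python
import math
from statistics import NormalDist

def samples_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sided, two-sample
    comparison of means, via the normal approximation:
        n ~= 2 * ((z_{1-alpha/2} + z_{power}) / d)**2
    where d is the standardized effect size (Cohen's d)."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # critical value for the two-sided test
    z_power = z(power)           # quantile giving the desired power
    n = 2 * ((z_alpha + z_power) / effect_size) ** 2
    return math.ceil(n)          # round up: cannot collect partial subjects

# Deciding how much data to collect BEFORE the experiment, as Marder urges:
print(samples_per_group(0.5))   # medium effect -> 63 per group
print(samples_per_group(0.2))   # small effect -> 393 per group
```

Running the process in the other direction, collecting data until a p-value crosses 0.05, inflates false-positive rates; fixing the sample size in advance from a power target avoids that trap.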
Another participant quoted a colleague who told participants at a recent conference that "just because you have optogenetics does not mean you can turn your own brain off." It is not always the case, that participant continued, that the best way to probe the function of a circuit is through an inducible knockout. Sometimes an "old-fashioned" pharmacological agent or antidromic activation is a better way. The goal, said the participant, is to get students to really think about what the question is and to have a broad enough perspective to say what the right technique is for that question. Marder added that trainees should ask themselves: How do I design experiments to capture the data needed to inform my understanding? What are the outcome measures needed? One way to encourage thinking about the right experimental design questions was offered by Indira Raman, professor of neurobiology and physiology at Northwestern University. She described a class she teaches in which students discuss classic neuroscience papers to get a sense of the history of experimental design and how people express ideas. Marie-Francoise Chesselet suggested that rather than offering a single course on experimental design, graduate programs should incorporate discussions of best practices in design, as well as statistics and ethics, into every neuroscience course, regardless of the topic, thereby ensuring ongoing learning that can be applied in a variety of contexts.

Marder further emphasized the importance of theory in experimental design. She pointed to a directive of the BRAIN Initiative, which states that experiments of the future need to be an interaction among theory, modeling, computation, and statistics. The BRAIN 2025 report also lists the following three important outputs of using theory to support experimental design (NIH, 2014):

• Predictions: "Theoretical studies will allow experimenters to check the rigor and robustness of new conceptualizations and to identify distinctive predictions of competing ideas to help direct further experiments." (p. 90)
• Integration: "Theory and modeling should be woven into successive stages of ongoing experiments, enabling bridges to be built from single cells to connectivity, population dynamics, and behavior." (p. 7)
• Multiscale models: "New analytic and computational methods are required to understand how behavior emerges from signaling events at the molecular, cellular, and circuit levels." (p. 90)

Marder concluded by noting that no matter how well students are trained, graduate programs have difficulty today in training students to design the experiments they are going to be doing 20 or 30 years from now. The only way for scientists to stay relevant is to build on a base of common sense and intuition and continually develop new skills and knowledge throughout their careers.
DEFINING THE GAPS IN THE TRAINING OF QUANTITATIVE SKILLS

At the heart of scientists' ability to determine whether models accurately and reliably describe data and the inferences that can be made from data, Brown said, is statistical reasoning, which itself is derived from a deep understanding of probability. However, he noted that most people lack intuition about statistics and probability. Such intuition takes longer to develop than the single graduate-level statistics course that most students take. According to Brown, developing intuition about probability should begin in elementary school and develop throughout a student's education. The current Common Core State Standards used for developing U.S. math curricula^2 do not expressly address training in probability. However, Brown suggested that teachers could incorporate training in probability into existing lessons.

Because of inadequate training in statistics, said Brown, students often consider what analyses they will employ on their data only after the data have been collected. The result can be a study published with insufficient statistical power to properly test a hypothesis, unfortunately an all-too-common problem in neuroscience, especially in human fMRI studies (Button et al., 2013). Another common pitfall, occurring in a significant number of neuroscience journal articles, is failing to account for the clustering, or dependency, of data from nearby or otherwise similar neurons, an error that produces false-positive results (Aarts et al., 2014). In addition to enhancing overall training in statistics, Brown suggested that graduate departments develop unique statistics modules tailored to five or so key subdisciplines of neuroscience. For example, electrophysiologists could learn techniques for decoding spike trains, while students working with fMRI could focus on techniques for calculating spatial correlations in images. Bringing like-minded students together to focus on a specific set of analytical skills might enhance their training and sense of community, said Brown.

The NIH BRAIN Initiative Working Group, of which Brown was also a member, formalized objectives and goals focused on improving quantitative expertise at all levels: faculty, postdoctoral, and graduate student.
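As an aside, the clustering pitfall documented by Aarts et al. (2014) has a standard quantitative form: correlated observations within a cluster (e.g., neurons recorded from the same animal) carry less information than independent ones, a loss summarized by the "design effect." A minimal sketch, with the function name and example numbers chosen purely for illustration:

```python
def effective_sample_size(n_clusters, per_cluster, icc):
    """Effective number of independent observations after correcting for
    within-cluster dependency, using the design effect
        deff = 1 + (m - 1) * ICC
    where m is the cluster size and ICC is the intraclass correlation."""
    n_total = n_clusters * per_cluster
    design_effect = 1 + (per_cluster - 1) * icc
    return n_total / design_effect

# 10 neurons from each of 5 animals look like n = 50, but with a
# within-animal correlation of 0.3 they carry far less information:
print(effective_sample_size(5, 10, 0.3))   # ~13.5 effective observations
```

Treating all 50 neurons as independent uses a standard error that is too small, which is precisely how the false positives described above arise; analyses such as multilevel models account for the intraclass correlation explicitly.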
As laid out in the BRAIN 2025 report, the goals are to (1) ensure that all neuroscience postdoctoral fellows and graduate students become proficient with basic statistical reasoning and methods; (2) ensure that trainees are able to analyze data at an appropriate level of sophistication, for example, by writing code; and (3) encourage trainees to construct models as a way to generate ideas and hypotheses or to explore the logic of their thinking.

Finally, Brown said that in addition to taking steps to improve the average trainee's skills in statistical reasoning, the field of neuroscience should examine how to increase collaborations with expert statisticians.

^2 See http://www.corestandards.org/Math (accessed October 29, 2014).
Engineers, physicists, and computer scientists have been increasingly working in neuroscience laboratories, but not statisticians. Bringing this important expertise to neuroscience is critical, said Brown. A recent white paper by an American Statistical Association working group offered a handful of suggestions to encourage the successful integration of statisticians into neuroscience training programs as it relates to the BRAIN Initiative (American Statistical Association, 2014). The authors note that statistician trainees should be

• taught to design data collection and analysis strategies,
• required to "take neuroscience classes and embed themselves in neuroscience labs" (p. 6),
• held to the same writing standards as neuroscience graduate students, and
• "educated in principles of ethical and effective collaborative behavior" (p. 6).