Issues Related to Personnel Reliability
For those concerned about the security of laboratories conducting research with biological select agents and toxins (BSAT), personnel issues are among the most difficult and controversial. Many of the proposals and new policies that followed the July 2008 conclusion by the Federal Bureau of Investigation (FBI) that Bruce Ivins, a longtime employee at the U.S. Army Medical Research Institute of Infectious Diseases, was responsible for the 2001 anthrax attacks focused on how to prevent another such incident by identifying individuals who may pose a threat before they can act. For example, much of the time and attention of the Executive Order (EO) Working Group on Strengthening the Biosecurity of the United States, including its public consultations and site visits, was devoted to the challenge of how the nation could guard against such threats. In fact, several of the reports offered in response to or related to the EO process focused only on personnel issues (NSABB 2009; AAAS 2009; Leduc et al. 2009). This committee’s charge includes both personnel reliability issues and physical security. But because current practices and the prospect of additional measures related to personnel assurance have caused so much anxiety about their impact on the ability to attract and retain high-quality research and technical personnel and to conduct the best science, the committee has devoted an entire chapter to these specific issues.
The first part of this chapter discusses screening, that is, the process of identifying whether or not someone should be eligible to have access to BSAT materials. The second part of the chapter, recognizing that individuals accused or convicted in a number of major U.S. terrorism and espionage cases had already passed the screening phase, addresses how one might monitor employee
behavior and performance and manage the workplace to reduce the risk of an insider either carrying out thefts or sabotage or acting to assist others.
Personnel screening seeks to identify individuals who may pose a potential security risk as early as possible, ideally prior to hiring. Identifying security risks can be considered part of the broader challenge of hiring competent, trustworthy, and reliable employees, and most organizations have a selection procedure to identify education and training, competencies, aptitude, and experience among potential employees.1 Many private- and public-sector organizations also conduct background checks to identify (in)appropriate actions or to assess personal qualities that are considered desirable or necessary for effective job performance. As discussed in this section, screening for security risks poses special issues.
The current screening process for individuals to work in facilities conducting BSAT research is based on identifying any of a set of disqualifying behaviors/activities that would automatically and permanently deny a person access (see Chapter 2 for additional details). Most of the policy discussions about the current Security Risk Assessment (SRA) screening process focus on four issues:
the adequacy of the information used to assess individual applicants;
the necessity to make changes in the types of information collected as part of the background checks;
the need to make changes in the way the current SRA process makes decisions about granting access; and
the possibility of adding other forms of screening, in particular various types of psychological tests.
It is vital to acknowledge the formidable challenges posed by screening individuals for potential security concerns. The proportion of the population of job candidates who represent true security risks is unknown, but likely to be very small. This low base rate makes it difficult to detect true threats because “screening in populations with very low rates of the target transgressions (e.g., less than 1 in 1,000) requires diagnostics of extremely high accuracy” (NRC 2003:5), and these do not exist for the problems we are trying to address (or for many others). There is no way to escape the risk that good candidates will be screened out in order to detect a small number of people who pose genuine threats to security. This is not a new issue and, as discussed in Chapter 2, the U.S. government attempts to address this dilemma through a number of approaches aimed at assuring personnel reliability.
Efforts at screening for rare individuals or behaviors will therefore inevitably struggle with concerns about either failing to identify someone who has a disqualifying background or behavior or wrongly identifying someone as having one when she or he does not. These two concerns are inversely related: the more one tries to avoid letting a security risk slip through the screening, the more innocent individuals will “fail” the test. The 2003 National Research Council (NRC) study The Polygraph and Lie Detection illustrates the difficult trade-offs facing policymakers with the example of a polygraph screening exam with an accuracy index of 0.90 applied to a hypothetical population of 10,000 government employees that includes 10 spies:
If the test were set sensitively enough to detect about 80 percent or more of deceivers, about 1,606 employees or more would be expected [to] “fail” the test; further investigation would be needed to separate the 8 spies from the 1,598 loyal employees caught in the screen. If the test were set to reduce the numbers of false alarms (loyal employees who “fail” the test) to about 40 of 9,990, it would correctly classify over 99.5 percent of the examinees, but among the errors would be 8 of the 10 hypothetical spies, who could be expected to “pass” the test and so would be free to cause damage. (NRC 2003:6)
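The arithmetic behind this trade-off can be sketched as a simple confusion-matrix calculation. In the sketch below, the population size, base rate, and the two operating points (sensitivity and false-positive rate) are back-calculated from the figures in the quotation; the function itself is generic and is offered only as an illustration of the base-rate problem, not as part of the NRC analysis.

```python
# Illustrative confusion-matrix arithmetic for the NRC (2003) polygraph
# screening example. All specific numbers come from the quoted passage.

def screening_outcomes(population, true_threats, sensitivity, false_positive_rate):
    """Return (flagged threats, flagged innocents, missed threats, ppv)."""
    innocents = population - true_threats
    tp = round(true_threats * sensitivity)       # threats correctly flagged
    fp = round(innocents * false_positive_rate)  # innocents wrongly flagged
    missed = true_threats - tp                   # threats who "pass" the test
    ppv = tp / (tp + fp)                         # share of "failures" who are real threats
    return tp, fp, missed, ppv

# Sensitive setting: 8 of 10 spies caught, but ~1,598 loyal employees also "fail".
print(screening_outcomes(10_000, 10, 0.80, 1598 / 9990))  # (8, 1598, 2, ~0.005)

# Specific setting: only ~40 false alarms, but 8 of the 10 spies "pass".
print(screening_outcomes(10_000, 10, 0.20, 40 / 9990))    # (2, 40, 8, ~0.048)
```

Even at the sensitive setting, fewer than 1 percent of those who “fail” are actual threats, which is the low-base-rate dilemma the NRC study describes.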
In addition to the general dilemma of such trade-offs, the impact of unnecessarily excluding someone who does not pose a security risk creates a special problem for the technical and research personnel in the BSAT workforce. If there is a large pool of potentially qualified applicants, a manager could decide that she or he can “afford” to incorrectly exclude someone who is in fact qualified because there are many others from whom to choose. (Even if the employer is not affected, “failing” the test could have harmful consequences for the innocent individual involved, especially if there is a risk of any lasting career impact.) But tangible costs may be incurred when highly skilled workers are incorrectly excluded from consideration. Because there may be a relatively small number of qualified candidates, especially for senior research positions, turning away a good candidate will entail at least the costs of finding a replacement, if one even exists. Moreover, SRA screening takes place only after an individual has been selected for other reasons. Even graduate students considering work with BSAT materials have already been selected for advanced study because of other, desirable characteristics and have undergone significant periods of training.
In addition, difficulties during the screening process may create a disgruntled applicant who may remain part of a relatively small, specialized research community. Experts in personnel screening have long been concerned that a system applicants find too intrusive or unfair could leave even successful applicants feeling the selection process is unjust, creating negative attitudes that could ironically contribute to someone’s becoming disgruntled and thus susceptible to the very behavior screening is intended to prevent (Murphy 2009). Although there does not appear to be clear empirical evidence that screening systems actually affect the subsequent behavior of selected applicants (Sackett and Lievens 2008:438), the perception of the research community should be considered in designing screening procedures for those working with BSAT materials.
Finally, the Society for Industrial and Organizational Psychology (SIOP) has recognized a potential negative consequence of testing that could apply to any form of screening: it can create complacency or a false sense of security. Testing may prompt institutions to relax other procedures intended, for example, to reduce theft, because they believe the threat has been eliminated:
An organization that introduces an integrity test to screen applicants may assume that this selection procedure provides an adequate safeguard against employee theft and will discontinue use of other theft-deterrent methods (e.g., video surveillance). In such an instance, employee theft might actually increase after the integrity test is introduced and other organizational procedures are eliminated. Thus, the decisions subsequent to the introduction of the test may have had an unanticipated, negative consequence on the organization. (SIOP 2003:7)
With this brief introduction to the challenges of screening for security risks, the next two sections consider (1) the current SRA and whether changes to either the disqualifying background/activities or the operation of the process are warranted; and (2) whether other screening methods, in particular testing, would add to the confidence that one could identify problematic potential employees.
Identifying Individuals with Backgrounds or Activities That Could Pose a Risk
Issues with the Current SRA
The committee considered the appropriateness of the current criteria included in the SRA as disqualifying factors2 and whether changes should be made to the implementation of the screening process (see Chapter 2 for a description of the current SRA). The very small number of rejections and appeals reported by the Select Agent Program—192 rejections out of a total of 31,349 applications processed and 58 appeals, of which 22 resulted in the denial being overturned3—can be interpreted either as evidence that the screening is not restrictive enough, allowing potential risks to gain access to BSAT facilities, or as evidence of effective institutional pre-employment screening that weeds out those ineligible for access to BSAT materials before the SRA process. Without baseline information about the actual number of high-risk candidates, there is no empirical basis for using these rejection data to infer that the process is flawed.
Before offering its assessment of the current SRA, the committee notes the need for the Select Agent Program to clarify what constitutes some of the background/activities considered disqualifying factors. The public consultations held by both the National Science Advisory Board for Biosecurity (NSABB) and the EO Working Group revealed a substantial lack of understanding of how issues related to sexual orientation and mental health are addressed by the SRA. This confusion appears to be increasing the concern of the research community about whether the criteria are appropriate. Contrary to the expressed concern, however, individuals who are separated from the armed services as a result of the current “don’t ask, don’t tell” policy or because of personality disorders that, with proper medication and/or treatment, could permit effective
functioning in a nonmilitary setting, would not receive dishonorable discharges unless they committed offenses that resulted in conviction by a court-martial.4 The restriction on an individual who “has been adjudicated as a mental defective or has been committed to any mental institution” also raised concern in the public consultations. However, the SRA does not affect people who, for example: (1) are suffering from problems such as bipolar disorder or forms of depression; (2) are voluntarily undergoing mental health treatment; or (3) have been voluntarily hospitalized for mental health problems in the past.5 The committee believes the Select Agent Program can help reduce these concerns by providing more specific guidance about what is meant by these terms and perhaps by including clarification on the SRA form itself.6
In making its assessments, the committee considered how the SRA compares with other basic security and suitability screening carried out by the federal government. The broader context is important for understanding the committee’s conclusions and recommendation; it seems reasonable, for example, to ask how the SRA compares to a process that enables 2.4 million people to have access to various levels of classified information (GAO 2009b).7 Although the committee did not have time to conduct a thorough review of processes in all parts of the government that affect scientists, there could be important lessons or cautionary tales from the long experience in several areas, such as the nuclear weapons laboratories or the National Security Agency, which carries out research in cryptography. As described in Chapter 2, the committee did consider personnel security requirements beyond the SRA that various federal agencies have adopted for their staff and, sometimes, for their contractors and grantees.
Potential Changes to the Current SRA
Of the various changes to the current SRA discussed in the public consultations and assessments of the program to which the committee had access (e.g., NSABB 2009; DSB 2009), four particular issues garnered enough attention that the committee decided to address them.
Adding Additional Databases to the Current Screening One question that has arisen in policy discussions is whether the FBI, which carries out the background checks for the SRA, is taking advantage of all the appropriate databases to which it has access. (Chapter 2 contains a list and discussion of the databases currently being used.) Discussion among committee members, several of whom have experience with the databases used for this type of screening; consultation with a number of outside experts, including in a public session devoted to federal security and suitability screening practices during the committee’s second meeting; and the public discussion of these issues all suggested that the SRA may consult even more databases than the routine federal security clearance process does. Although there may be specialized databases held by other agencies to which the FBI would not have access, information available to the committee suggests that the databases used for the current SRA are equivalent or comparable to those used for most other federal screening processes. The committee concluded that the databases being used in the SRA are consistent with current U.S. government practices in determining the eligibility of persons to have access to classified and proprietary information and sensitive sites and are adequate for assessing whether applicants possess disqualifying background/activities.
Adding a Mandatory Drug Test The current SRA addresses past use of illegal drugs only through database checks that identify anyone convicted of a crime carrying a potential prison term greater than one year, which would include drug-related crimes. By contrast, the general application form for a federal security clearance (Form SF86) maintained by the Office of Personnel Management (OPM) asks about illegal use of any controlled substances or prescription drugs since the age of 16 or within the past seven years, whichever is shorter. A number of federal agencies—and private firms as well—have concluded that experimentation with illegal drugs is so common among U.S. young adults that the agencies do not consider that admission of past use necessarily makes someone ineligible. Acknowledgment of past illegal use of drugs is not automatically disqualifying in these cases, although any applicant who did admit to past use could expect to be questioned further. In any case, agencies would terminate an employee who continued that use once on the job.
As opposed to past and noncontinuing use, the current SRA policies with regard to current use of illegal drugs are consistent with the broader federal approach. Public Law 110–181, Section 3002, prohibits any officer or employee of a federal agency, including active-duty military and federal contract employees, from being granted, or maintaining continued eligibility for, a security clearance if they are an unlawful user of a controlled substance or an addict. No waivers are permitted, which is consistent with the SRA.8 The SRA assesses current use through a question on the application form (see Appendix D), but the issue is
whether to add a mandatory test to verify an applicant’s statement that he or she is not using illegal drugs.
In addition to potential security risks, use of illegal drugs could be regarded as a safety issue. Successfully passing a drug test could also be considered a sign of reliability or evidence of respect for the law. This type of testing is becoming more common in industry and government, but not in academia. Routine drug testing could also be part of ongoing monitoring of employees.
The committee concluded that there was insufficient information to say that routine or random drug testing would significantly reduce the risk of an insider threat. The committee noted, however, that use of illegal drugs provides insight into a person’s judgment and reliability, which are critical attributes for those with access to highly pathogenic infectious agents. If the select agent list is stratified, consideration could be given to adding a mandatory drug test for those who would have access to agents in the highest risk group.
Adding a Credit Check or Financial History An obvious omission from the current SRA is querying an applicant’s financial and credit history. At least some consideration of credit history is common in many sectors as part of pre-employment screening and standard practice in federal security clearance and suitability investigations. In most cases, however, the issue is not one of an individual’s level of debt per se, but whether spending patterns provide a means to assess judgment and reliability and possible vulnerabilities. This information would be a logical element for inclusion in ongoing assessment and monitoring of employees, which is discussed later in the chapter.
A major reason for considering addition of financial information is that greed or susceptibility to bribery has been found to be a factor, in some cases, in the decision to become an accomplice to those undertaking illegal acts. Most espionage cases toward the end of the Cold War, for example, involved spies acting out of economic rather than ideological motivation (Herbig and Wiskoff 2002). However, a 2008 analysis of data collected by the Defense Personnel Security Research Center on about 170 U.S. citizens who committed espionage between 1947 and 2007 showed a more complicated picture:
“Since 1990, money has not been the primary motivation for espionage. While getting money was the sole motive for 47 percent of the first cohort9 and 74 percent for the second cohort, since 1990 only 7 percent (which represents one individual) spied solely for the money. Money remained one of multiple motives in many recent cases as well” (Herbig 2008:ix).
Since 1990, 35 percent of the spies that were apprehended were naturalized citizens (as compared to 80 percent native-born before that time), 58 percent had “foreign attachments” (relatives or close friends overseas),
and 50 percent had foreign business or professional connections, with the result that, whereas “divided loyalties” were the sole motive for less than 20 percent prior to 1990, since 1990 that number has increased to 57 percent (Herbig 2008:vi-vii).
“Since 1990, the proportion of American spies demonstrating allegiance to a foreign country or cause more than doubled to 46 percent compared to the 21 percent in the two earlier cohorts, reinforcing the sense that globalization has had a noticeable impact and that the influence of foreign ties has become more important since 1990” (Herbig 2008:x).
A formidable barrier to adding financial history as a consideration to the current SRA, however, is that there is no clear indicator or threshold on which to base a decision about whether someone should be automatically disqualified. Any assessment would need to be appropriate for the particular segment of the applicant pool. For example, many students and those in training will have student-loan debts, some of them very heavy. Trainees are also likely to have relatively low salaries, especially relative to their educational attainment. Scientists from outside the United States may not have a credit history that would permit them to obtain credit cards and other normal measures of financial responsibility. Judgment would inevitably be required, and the current SRA process does not include that kind of discretion.
The committee concluded that the difficulties of establishing a meaningful baseline make adding credit or financial history to the current SRA screening process too challenging. In any event, signs of sudden, unexplained affluence or evidence of irresponsible financial behavior would be appropriate to consider as part of the process of monitoring employees’ behavior.
Adding an Adjudication Process The current practice of automatic and permanent denial of eligibility for anyone who reveals or is found to have any disqualifying factor has raised concern. The current SRA system has no statute of limitations on disqualification: it does not matter how long ago the offense was committed. There is also no consideration of extenuating circumstances. The only appeal is to permit correction of factual errors.
By contrast, information collected under other current federal suitability and security screening is subject to an adjudication process, whereby issues such as how long ago the offense occurred, whether recent behavior shows positive or negative trends, and mitigating circumstances are taken into account to determine whether to grant access to protected information. Guidelines for making these determinations are available and periodically reviewed and updated (White House 2005). The appeal process also can take these factors into account in assessing whether a decision to deny access was justified.
The committee considered whether the SRA process should more closely mirror the security screening process by introducing adjudication to provide
an opportunity for considering the circumstances of a disqualifying offense. Although the reform measures undertaken in response to EO 13467 are reducing the processing time for security and suitability investigations, introducing judgment into the current process would almost certainly make the screening longer and more expensive (see Chapter 2 for discussion about security clearance investigations). The research community has already expressed concern about the length of time needed to clear personnel for access, so adding to the time would likely be perceived as a further inconvenience. Moreover, the small number of exclusions suggests that adjudication need not be incorporated in all cases, but only as part of the appeal process.
The committee concluded that the questions raised about the current automatic and permanent disqualifications were sufficiently serious that it would be worthwhile to change the system to incorporate a broader appeal process more aligned with personnel security practices already in place across the government.
The committee’s conclusions with regard to potential changes to the SRA are conditional because we believe the appropriateness of additional measures, in some cases, depends on whether or not the recommendation in Chapter 5 to stratify the list of select agents and toxins by risk groups is adopted. A stratified list, which presumably would restrict the highest level of security measures to a smaller set of agents and toxins, would also dictate a stratified SRA that could add additional requirements only to those who would work with those agents and toxins in the most stringent risk group.
RECOMMENDATION 5: The current Security Risk Assessment screening process should be maintained, but the appeal process should be expanded beyond the simple check for factual errors to include an opportunity to consider the circumstances surrounding otherwise disqualifying factors.
Identifying Potential Insider Threats through Testing
Policy discussions have included the issue of whether to require more extensive testing and evaluation of applicants to work with BSAT materials, perhaps as part of a formal Personnel Reliability Program. Some government agencies and private entities, including academic institutions, have considered undertaking additional screening using psychological or psychophysiological
tests. This section discusses the types of testing available and what is known about their appropriateness and effectiveness for these purposes.
Given the various definitions of what constitutes an insider threat (see Chapter 1), at least two different types of problems need to be addressed when individuals are screened to identify those potentially posing a threat. One set of problems arises in determining the normal range of adult personality; persons outside this range might either attempt deliberate deception or be susceptible to corruption or recruitment to aid in the theft of materials or acts of sabotage. Another set of problems involves identifying individuals suffering from a range of serious personality disorders that might lead them to use BSAT materials to deliberately cause harm or to assist others in doing so. In making these broad and admittedly inexact distinctions, we are not addressing individuals who might provide unwitting aid through a lack of awareness or those who might be subject to coercion, which background checks can help identify.
There is an extensive literature on approaches to identifying insider threats, including from terrorists (see discussion below). There is also extensive experience from government and the private sector using various types of tests for screening purposes. Currently available tests fall into two broad categories. Polygraph exams are the best known and most commonly used example of psychophysiological tests, which rely on assessing the body’s physiological responses. Psychological tests include both “normal range” testing and tests that measure possible aberrant or psychopathological traits.
Polygraph testing is described here because it is used by some government agencies for national security screening—including some who may conduct BSAT research. A polygraph is an instrument that measures and records several physiological responses such as blood pressure, pulse, respiration, breathing rhythms, body temperature, and skin conductivity while the subject is asked and answers a series of questions; it is based on the theory that false answers will produce distinctive measurements that a skilled examiner will be able to recognize and interpret. The polygraph is used in a variety of settings for (1) investigation of specific incidents—such as in law enforcement situations, (2) evaluation of current employees, and (3) assessment of prospective employees.
The NRC produced a report on The Polygraph and Lie Detection in 2003 at the request of the Department of Energy, which had begun using polygraph testing for some personnel at nuclear weapons laboratories in response to the alleged spy activities of Wen Ho Lee at Los Alamos National Laboratory. In addition to its extensive review of polygraph testing, the study also examined
alternatives to the polygraph that might provide other means to detect deception in job applicants, current employees, or in investigations (NRC 2003). This committee found the 2003 report useful for its work.
The study noted an important distinction between the use of polygraph testing in the context of a specific investigation (e.g., whether a person was or was not involved in a particular incident of wrongdoing) and its broad use to assess the risk of future involvement in wrongdoing. The study found that polygraph testing in such specific investigations could produce accurate results at “rates well above chance, though well below perfection” (NRC 2003:4), for those not trained in deceptive tactics. Polygraphs used for investigations of a particular occurrence are quite focused, concentrating on one event, and retrospective, so that precise true/false questions of fact are the focus of the exam. The study found that polygraphs were far less reliable for other purposes; in national security screening, for example, the exam covers a range of past behaviors, which might include ambiguous or speculative situations where the examiner and the subject do not have the same picture of a situation, even when asking true/false questions. The subject’s responses are then the basis for making inferences about his or her future behavior. The polygraph study committee concluded that:
Available evidence indicates that polygraph testing as currently used has extremely serious limitations in such screening applications, if the intent is both to identify security risks and protect valued employees. Given its level of accuracy, achieving a high probability of identifying individuals who pose major security risks in a population with a very low proportion of such individuals would require setting the test to be so sensitive that hundreds, or even thousands, of innocent individuals would be implicated for every major security violator correctly identified. The only way to be certain to limit the frequency of “false positives”10 is to administer the test in a manner that would almost certainly severely limit the proportion of serious transgressors identified. (NRC 2003:6)
A more recent NRC study on the use of newer technologies to detect deliberate falsehoods found that, “to date, insufficient, high-quality research has been conducted to provide empirical support for the use of any single neurophysiological technology, including functional neuroimaging, to detect deception” (NRC 2008b:4).
The 2003 polygraph study committee recognized that polygraphs might have other uses, even if they are not accurate—such as deterring poor security risks from applying in the first place or making employees more likely to confess violations that they believed would be detected by polygraphs. These effects
could be obtained whether or not the polygraph was accurate in detecting a falsehood and might, in fact, account for why some federal agencies continue to use polygraphs.
Normal Range Testing: Integrity Tests
Normal range psychological testing covers a wide variety of assessment strategies. “Integrity tests” include a variety of instruments used to assess attitudes and experiences related to an individual’s honesty, dependability, trustworthiness, reliability, and pro-social behavior. These are the tests most commonly used to identify potentially counterproductive workplace behavior. According to SIOP, integrity tests “typically ask direct questions about previous experiences related to ethics and integrity OR ask questions about preferences and interests from which inferences are drawn about future behavior in these areas. Integrity tests are used to identify individuals who are likely to engage in inappropriate, dishonest, and antisocial behavior at work” (SIOP 2009a). A survey conducted in 2001 by the American Management Association (AMA), which reflects practices at the large organizations that are AMA members rather than the population of all U.S. employers, found that 29 percent of employers surveyed use one or more forms of psychological measurement or assessment, which would also include personality tests (SIOP 2009b).
Although integrity testing was originally developed to detect dishonesty without having to make use of polygraph tests—with a particular focus on reducing theft—its applications have expanded over the years to cover broader concepts of theft (e.g., “time theft” through absenteeism, low productivity) and other types of counterproductive workplace behavior (Berry et al. 2007).11 Reviews of published research concerning integrity testing suggest such testing can produce valid predictions of potential counterproductive behavior (Sackett and Harris 1984; Sackett et al. 1989; Sackett and Wanek 1996; NRC 2003; Berry et al. 2007). Integrity tests have also been shown to predict job performance, which is not surprising: “employees who engage in a wide variety of counterproductive behaviors are unlikely to be good performers” (NRC 2003:173).
Most relevant to this report is how useful integrity testing is for detecting potential insider threats. As the NRC study of polygraphs concluded, “There is no literature correlating the results of these tests with indicators of the more specific kinds of counterproductive behavior of interest in national security settings” (NRC 2003:173). Because the counterproductive behaviors that have been studied are often correlated (i.e., a person willing to engage in one is more likely to also engage in another), one might posit a relationship to other specific counterproductive behaviors that have not yet been studied. It is not clear, however, whether this holds in the context of bioterrorism: how likely is it that someone who could be recruited to steal equipment or other materials from a lab as an accomplice in a “normal” theft would also steal BSAT materials, when it would presumably be apparent that this was being done for purposes of terrorism or sabotage?
Personality Assessment Tools
Concerns about insider threats also include those who are suffering from mental disorders severe enough to potentially cause them to commit illegal acts. In this section, we address the issue of whether such problems could be identified at the point of hiring; the challenge of identifying and responding to such problems once someone is already working in a facility is addressed later in the chapter.
A number of standardized tests have been developed to aid in the effort to identify employees who suffer from psychopathology or personality disorders. The original personality assessment tests were developed during World War I by the U.S. military for screening draftees, and such standardized tests are commonly used in a number of government and private settings (Butcher et al. 2006). A number of high-risk or sensitive occupations, such as the military and law enforcement, make use of such tests; for example, half of the states require by law the use of a clinical test instrument for candidates for jobs as law enforcement officers (Cullen et al. 2003).
One of the most widely used clinical personality assessments is the Minnesota Multiphasic Personality Inventory (MMPI). It is used in nonclinical settings to identify a range of psychopathologies and to assess persons who are candidates for high-risk public safety positions, such as nuclear power plant personnel, police officers, firefighters, pilots, and air-traffic controllers. Originally developed in the 1930s, it has been refined over the years and has been the subject of extensive research.12 Results are interpreted by examining the relative
elevation of factors compared to the various reference groups studied.13 Other frequently used assessment tools are the Millon Clinical Multiaxial Inventory-III (MCMI-III) and the Personality Assessment Inventory (PAI). It is considered good clinical practice not to rely on one test exclusively, and judgments about any individual are more reliable when tests are used in combination and test results are supported by other methods of assessment.
A key question is how well the many standardized tests developed to assess personality are able to identify potential problem employees. And even if the tests are effective for this purpose, one then needs to ask whether the traits they identify are related to the specific problem one is trying to solve: excluding potential insider threats and terrorists from the laboratory. Unfortunately, whatever clinical diagnostic instrument one might choose to screen for potential insiders or terrorists, the test will be vulnerable to the same difficulties that beset polygraphs and integrity testing when trying to identify rare behaviors.
There is little evidence that potential bioterrorists are more likely to come from among the ranks of those with a given specific psychopathology than those motivated by some other reason, such as commitment to a cause that uses terrorism or those who would undertake terror for financial gain. In fact, research suggests that, however abhorrent their actions may be to most people, “the outstanding common characteristic of terrorists is their normality” (Crenshaw 1981:390). An extensive recent review of the research on the “psychology of terrorism” for one of the U.S. intelligence agencies concludes that:
Research on the psychology of terrorism has been nearly unanimous in its conclusion that mental illness and abnormality are typically not critical factors in terrorist behavior. Studies have found that the prevalence of mental illness among samples of incarcerated terrorists is as low or lower than in the general population. Moreover, although terrorists often commit heinous acts, they would rarely be considered classic “psychopaths.” Terrorists typically have some connection to principles or ideology as well as to other people (including other terrorists) who share them. Psychopaths, however, do not form such connections, nor would they be likely to sacrifice themselves (including dying) for a cause. (Borum 2004:34-35)
This brief review demonstrates the variety of tests that might be considered as part of a screening program to identify individuals who pose a potential insider threat before they enter the laboratory. The committee concluded that there is no “silver bullet,” that is, no single assessment tool that can effectively screen out every potential terrorist. Although it can be appropriate for organizations to employ integrity testing and clinical personality assessments in screening for other purposes, with respect to polygraph testing the committee reached the same conclusion as an earlier NRC committee, a conclusion that applies even more broadly to its use in security screening: “Polygraph testing yields an unacceptable choice for…employee security screening between too many loyal employees falsely judged deceptive and too many major security threats left undetected. Its accuracy in distinguishing actual or potential security violators from innocent test takers is insufficient to justify reliance on its use in employee security screening in federal agencies” (NRC 2003:6).
MONITORING AND MANAGEMENT TO ACHIEVE A SAFE AND SECURE RESEARCH ENVIRONMENT
The current SRA process is built upon screening an array of databases for certain disqualifying behavior/background factors. Once an individual is cleared, certification is in effect for five years. However, the FBI continues to monitor cleared individuals using selected databases; the FBI also receives automatic notices in some instances, for example, when an individual is arrested and fingerprinted (NSABB 2009:3).
Sustained database monitoring can help identify that a cleared individual has incurred at least some of the disqualifying factors that would make him or her ineligible to work with BSAT materials. But the process cannot be expected to address all disqualifying factors or, perhaps more importantly, all significant issues and personal changes that could occur in an individual’s life during the five-year period of certification, including those that could potentially result in his or her becoming a security risk. This recognition is both important and troubling, given the earlier conclusion that one should not rely exclusively on screening to identify potential insider threats before hiring. It implies that policymakers will not have easy or easily measurable remedies for the concerns about personnel reliability. More importantly—and positively—it suggests that efforts to ensure personnel reliability will have to come from the laboratories where BSAT research is being conducted, in the form of increased engagement by managers and staff. To appreciate the potential of such engagement, it is necessary to
address a persistent belief that affects how the impact of monitoring laboratory personnel is commonly viewed.
Dispelling a Myth about Spontaneous Action
Over the years, an extensive literature has accumulated on preventing insider threats, covering a wide range of types, from espionage, fraud, corruption, and misuse of information technology or other systems containing secure or proprietary information, to threats and acts of violence that include the workplace and schools (Turner and Gelles 2003; Herbig 2008; Fein and Vossekuil 2009; Brant and Gelles 2009). The research includes many case studies of terrorism and some bioterrorism incidents in particular.14
One important lesson from this research is that, even in circumstances where one might assume an individual would attempt to conceal his or her malevolent intent in order to escape detection, in many cases there will be signs or signals that something is wrong prior to an event. Cases in which an individual’s action is genuinely spontaneous are rare. Most people follow a psychological path from idea to action and give signals along the way (Fein et al. 1995; Fein and Vossekuil 2009; see Borum 2004 for a discussion focused on terrorists). The warning signs occur often enough that it is reasonable to believe that active, sustained monitoring and management could detect many of them and provide the basis for prevention (Turner and Gelles 2003; Fein and Vossekuil 2009). No system can guarantee success in preventing an illegal act, but the research on insider threats just discussed is encouraging. The research also suggests that training people to watch for and recognize the warning signs is essential and that, in the absence of such training, these signs are likely to be missed (Cascio 2009). This leads directly to one of the committee’s most important recommendations.
RECOMMENDATION 1: Laboratory leadership and the Select Agent Program should encourage and support the implementation of programs and practices aimed at fostering a culture of trust and responsibility within BSAT entities. These programs and practices should be designed to minimize potential security and safety risks by identifying and responding to potential personnel issues. These programs should have a number of common elements, tailored to reflect the diversity of facilities conducting BSAT research:
• Consideration should be given to including discussion of personnel monitoring during (1) the initial training required for all personnel prior to gaining access to BSAT materials and annual refresher updates and (2) safety inspections to obtain a more complete assessment of the laboratory’s ability to provide a safe and secure research environment.
• More broadly, personnel with access to select agents and toxins should receive training in scientific ethics and dual-use research. Training should be designed to foster community responsibility and raise awareness among all personnel of available institutional support and medical resources.
• Federal agencies overseeing and sponsoring BSAT research and professional societies should provide educational and training resources to accomplish these goals.
The remainder of the chapter describes how the most important parts of the recommendations can be implemented, including:
• The types of education and training needed to foster a culture of responsibility and support effective monitoring;
• Examples of systems for peer and self-reporting; and
• Other resources, such as occupational health and employee assistance programs, that can assist monitoring efforts.
The Importance of a Process or System
The recommendation above is supported by research from a variety of situations and settings about the general importance of having systems or processes in place to support positive action, including monitoring of potential problems among employees (Turner and Gelles 2003). Studies of organizations, such as those focused on fostering a productive organizational culture or on understanding the dynamics of “high reliability” organizations where the costs of failure would be extremely high (e.g., air traffic control systems, nuclear power plants, the airline industry), also identify the importance of processes (Schulman et al. 2004; Weick and Sutcliffe 2001). Not all processes are equal, of course, and there may be significant challenges to creating processes that are both trusted by those involved to protect individuals and accepted by management as not posing a threat to its responsibilities and authority. The committee heard presentations about some types of processes in other sectors; Box 4-1 offers examples from the airline industry.
The literature on insider threats further argues that an organization should have the necessary processes in place before the problem occurs (Turner and Gelles 2003). An already existing process is much more likely to be effective
BOX 4-1 Examples of Screening and Peer Reporting Systems from the Airline Industry
Screening: After a conditional offer of employment, a number of checks are conducted: FBI fingerprint check/criminal history; National Driver Registry; previous employer; and Department of Transportation Drug and Alcohol Testing. Educational credentials and references may or may not be checked.
Reporting: The Airline Pilots Association, the pilots union, operates a two-tier reporting system.
Reporting: The flight attendants union also has a two-tier system.
SOURCE: Damos 2009.
than an ad hoc response, both for prevention and for responding when warning signs of imminent trouble appear. Identifying early warning signs will not necessarily reveal an insider before an incident occurs, but it can help identify individuals who might require assistance from trained professionals. Without such intervention, a particular individual may or may not resolve the situation on his or her own, but having measures in place to assist—rather than automatically exclude—individuals showing signs of trouble benefits everyone. Those considering the creation of monitoring processes should note that parts of such a system may already exist within many organizations, serving other purposes; these systems can therefore be supplemented or adapted.
Finally, a common message in National Academies reports on topics as disparate as medical error and assessment of U.S. democracy assistance programs (IOM 2000; NRC 2008c) is how important it is for an organization to be able to learn from mistakes and less successful endeavors as well as from triumphs. This is a broader organizational challenge than creating mechanisms to prevent insider threats, but the literature on “learning organizations” offers a range of models and lessons that provide some useful context for the specific problems addressed in this report (Schön 1973; Senge 1990).
Given how rare instances of attempted bioterrorism have been, the committee believes it would be helpful to develop case studies to explore examples of potentially relevant behavior (e.g., complacency, exploitation, theft of materials, scientific fraud) that have occurred specifically in the biosciences. Case studies already exist on some of these issues (e.g., Macrina 2005; NRC 2009c), so an important task is to identify and supplement relevant existing case studies, commission new ones, and examine them all comparatively to highlight lessons relating to personnel reliability and security.15 As in evaluation efforts described in Chapter 5, these case studies will help move the policy discussions toward a better understanding of how to address the risk of the insider threat.
The Challenge to Management
As with screening, reducing the risk of an insider threat can be viewed as part of the larger set of challenges facing any manager. Successful programs to monitor and manage problems in the workplace involve hard work and diligence. But there are reasons beyond security to improve the quality of management in the workplace. In the case of BSAT, safety is clearly a primary reason, because measures that improve safety generally also enhance security. The changing environment in many laboratories, with greater emphasis on teamwork and larger groups of researchers, also makes management and
mentoring important components of the job for any supervisor. Implementing changes to improve managing for security belongs within such existing systems. It is important to remember, however, that practices that serve safety may not transfer directly to security. For example, will the “culture of trust” discussed below, which accepts peer reporting of potentially disqualifying behavior to ensure everyone’s safety in the laboratory, necessarily extend to security, which introduces issues of criminal behavior and national security?
With this brief introduction, we now consider research, evidence, and experience that can inform the development of systems to improve personnel reliability at institutions working with BSAT materials by fostering active monitoring and management.
Fostering a Culture of Trust and Responsibility
A goal in any organization where safety is a central challenge should be to foster a culture where individuals watch out for each other and take responsibility for both their own performance and that of others. When this works well, the environment and culture reinforce a positive and inclusive ethic that promotes excellent performance. In turn, a balance of formal and informal processes will help to maintain the culture. Many of the components of a safety-oriented culture will serve security goals as well.
A successful culture of trust and responsibility relevant to personnel reliability requires the engagement of everyone in the laboratory. A key component is a climate that encourages self- and peer-reporting and provides mechanisms for such reporting. On a cautionary note, it is essential to understand the culture within a particular workplace or organization before trying to use it to foster new practices. Not taking culture into account can doom the effort to failure or inadequacy (Morgan 1997; Schein 2001).
As discussed in Chapter 1 and below, the culture of science already contains many of the elements conducive to fostering trust and responsibility. In addition, education and training provided to life scientists at different stages of their careers provide venues for the information the committee recommends. Fortunately, there is already movement in this direction for other reasons, upon which the Select Agent Program can build.
Essential Role of Education and Training
Good mentoring and training are important ways to develop a culture of responsibility, providing the essential foundation on which other elements of effective monitoring can be built. They should be viewed as necessary but not sufficient conditions, with continuing efforts by laboratory managers, researchers, and staff needed to sustain the culture and reinforce expectations
of appropriate behavior. Training and educational experiences will have to be multifaceted to address the many interrelated issues in promoting the culture of trust and responsibility that Recommendation 1 seeks to instill. Although some training will need to be tailored to particular segments of the BSAT community, at least some discussions need to include everyone. It seems particularly important, for example, to foster discussions among and between the scientific and technical staff and those with responsibility for security so that common understanding can be built.
Incorporating engagement as a critical factor in managing a safe and secure workforce should be part of leadership development, but the requirement for engaged management may not come naturally to laboratory managers and officials. Most scientific laboratory managers attain their position by intellectual achievement. The qualities that lead to success as an outstanding researcher do not necessarily relate to management skills. Moreover, many who are promoted to supervisory positions in laboratories are not provided opportunities for training in management. Nevertheless, there are many good laboratory managers who do provide engagement and oversight.
Since the principal investigator is the most likely individual to interact regularly with a broad cross-section of research and technical staff, he or she will need particular support in the form of resources to acquire the skills needed for effective engagement and monitoring. The diversity of facilities carrying out BSAT research makes it difficult to offer generalizations about approaches to this kind of leadership development that would apply nationally. Such resources may be more readily available in federal laboratories or private and commercial entities, where management training is more often provided and encouraged, than in academic environments, where managers maintain a greater degree of independence. Moreover, the opportunity to develop and/or sharpen management skills seems less likely to be seen as important for an academic scientific career than for a career in private or government environments.
Education to Raise Awareness and Foster Responsibility
Building a culture of trust and responsibility to reduce the risk that BSAT materials might be stolen for use by terrorists or used in acts of sabotage in the laboratory can draw upon longstanding traditions in the life sciences as well as more recent efforts focused on security risk. The iconic example of the life sciences’ exercising responsibility is its response in the early 1970s to concerns about potential safety risks arising in the then newly developing field of recombinant DNA research. The 1975 Asilomar Conference on Recombinant DNA brought scientists together to discuss risks of manipulating DNA from
different species. The results of the meeting led to the National Institutes of Health’s (NIH) issuing its Guidelines for Research Involving Recombinant DNA Molecules and creation of a process for reviewing proposed experiments that continues today.16 The Human Genome Project created an ethical, legal, and social implications program to explore how advances in genetics intended to improve human health could proceed without undermining other dimensions of human well-being.17
More recently, concerns have been raised about the so-called “dual use dilemma” of the life sciences, in which results of research intended for beneficial purposes, such as therapies against infectious diseases, might be misused for biological weapons or bioterrorism. This has led to calls for educating the life sciences community about its responsibility to reduce such risks. Dual use research is a broader concept than BSAT, but it is reasonable to assume that much of the research conducted under the Select Agent Program could potentially be considered dual use. A series of NRC reports has endorsed education on dual use issues (NRC 2004ab, 2006, 2007b, 2009ab). The NSABB has proposed that all federally funded researchers in the life sciences receive training about dual use issues (NSABB 2007) and, at the time of this report, the proposal is under review by an interagency working group. The American Association for the Advancement of Science (AAAS) and the Federation of American Societies for Experimental Biology (FASEB) have also recommended training programs, although both stop short of recommending that such programs be mandatory (AAAS 2008; FASEB 2009). This suggests that education for BSAT researchers might be able to draw on and fit within at least some of these initiatives, especially if the NSABB’s recommendation of mandatory training is adopted.
In most cases, recommended training on dual use issues is viewed as becoming part of other, broader training for life scientists on responsible conduct, rather than as standalone activities. In the United States, there are three types of existing education to which the kind of training envisioned by the committee naturally might be added.
• Biosafety training has not traditionally included security issues, but there is evidence that some training programs have added discussions and modules (AAAS 2009; the appendix includes a list of training programs). This might be the venue best able to reach the full range of laboratory technical and research staff, as well as those outside academia.
• NIH mandates training in the responsible conduct of research (RCR) for those who are supported by its training grants (NRC 2009c). RCR training is frequently cited as the most promising U.S. venue for dual use education; although its scope would have to be expanded beyond the current focus on research integrity and beyond those supported by its training grants to reach a much broader segment of life scientists, there are signs the RCR community is interested in the opportunities dual use education offers (AAAS 2008).
• Bioethics training, which largely reaches those in biomedical research, including BSAT researchers, offers another potential venue, and, again, there are signs of interest from some in that community in taking on the additional issues (AAAS 2008).
16. The current version of the Guidelines is available at <http://oba.od.nih.gov/rdna/nih_guidelines_oba.html>. The first revisions to the scope of the Guidelines are currently being made to reflect the implications of the new field of synthetic genomics.
17. See <http://www.ornl.gov/sci/techresources/Human_Genome/research/elsi.shtml> for more information.
It was not within the committee’s charge to offer highly specific recommendations on how best to undertake the education and training needed to foster the culture of trust and responsibility that is recommended. The committee believes, however, that whatever venue is chosen—and all of them might be appropriate for particular contexts in order to reach the range of BSAT research entities—educational materials will need to be developed and resources provided to support and sustain implementation. Box 4-2 offers two
BOX 4-2 Sample Educational Materials for Considering Dual Use Research Issues
The Federation of American Scientists’ Case Studies in Dual-use Biological Research illustrate the “dual use” potential of actual life science research. The case studies provide a historical background on bioterrorism and bioweapons and the current laws, regulations, and treaties that apply to biodefense research. They include interviews with researchers as well as the primary scientific research papers and discussion questions meant to raise awareness about the importance of responsible biological research. The case studies are available at <http://www.fas.org/programs/ssp/bio/educationportal.html>.
The Policy, Ethics and Law Core of the NIH-funded Southeast Regional Center of Excellence in Biodefense has developed an online module to assist those involved with the biological sciences to better understand the “dual use” dilemma of some life science research. This module is intended for graduate students and postdoctoral scholars, faculty members, and laboratory technicians involved in biological research in microbiology, molecular genetics, immunology, pathology, and other fields related to emerging infectious disease and biodefense. The module consists of an approximately 20-minute online presentation followed by a brief assessment and has been used by more than 600 people. The module is available at <http://sercebtraining.duhs.duke.edu/>.
examples of current online resources to illustrate some of the types of materials that would be needed. The report of a workshop on ethics education held in 2008 by the National Academy of Engineering’s (NAE’s) Center for Engineering, Ethics and Society (NAE 2009) offers an introduction to some of the current thinking about the components of effective ethics education in science and engineering. The NSABB’s strategic plan for outreach and education on dual use issues (NSABB 2008) and the AAAS workshop on dual use education (AAAS 2008) offer ideas focused more directly on BSAT-related issues. Here, the agencies overseeing and supporting BSAT research and the professional societies, separately and in collaboration, can play a major role in supporting and disseminating materials and sharing successful practices.
Current Training by Registered BSAT Entities
The Select Agent Program requires all registered entities to provide training in biosafety and security before individuals can enter areas where select agents and toxins are handled or stored (7 CFR 331.15(a) and 9 CFR 121.15(a)). The training “must address the particular needs of the individual, the work they will do, and the risks posed by the select agents or toxins.” Annual refresher training is required, and an entity’s training program is included in inspections conducted by the Centers for Disease Control and Prevention (CDC) and the Animal and Plant Health Inspection Service (APHIS). Current training programs are primarily technical, focusing on biocontainment and biosafety practices and the details of a facility’s security plan.
Required training offers an opportunity to reach all participants in BSAT research with at least some essential messages that will promote personnel reliability. This type of training was strongly endorsed during one of the committee’s site visits. The committee believes that a module focused on the risk of an insider threat could be added without substantially increasing the time entailed in security training. At a minimum, such a module could cover the likelihood of warning signs and examples of what they might be, the expectation of peer and self-reporting, and the resources available for making a report. CDC and APHIS could work with federal security agencies or with outside experts to develop relevant materials for use by entities, or provide resources that entities could use to develop their own. Discussions about individual responsibility and updates on available resources could be part of the required refresher training.
Systems for Peer and Self-Reporting
Specific examples of programs already exist in many laboratory settings to assist with some of the aspects of monitoring behavior as part of safety that can support monitoring for security as well. When warning signs appear, peers and
colleagues are most likely to be in a position to notice them. Part of the culture of trust and responsibility includes individuals’ feeling encouraged to report on themselves and others if they find signs of trouble or feel that an individual poses a safety or security risk. To enable coming forward, it is important to provide reporting mechanisms that individuals trust. Management plays an essential role and has important responsibilities. It is management’s responsibility, for example, to provide or permit mechanisms for people to self-report problems and relay concerns about others via a safe mechanism (e.g., ombuds offices, hotlines, and/or confidential reporting systems). Management may also provide mechanisms for individuals to obtain help in dealing with concerns proactively via employee assistance programs (EAPs). Although often focused on safety concerns, these processes can serve security as well.
In creating “safe” reporting mechanisms, it is important to be sensitive to management’s need for information and its ultimate responsibility for whatever happens. In some cases, including the BSAT program, there may be legal requirements, including potential civil or criminal penalties for noncompliance, as part of managers’ responsibilities to assure security. Encouraging reporting can be difficult even if most of those working in a facility believe that they can trust their managers. Where managers are considered “part of the problem,” the difficulties of creating effective reporting mechanisms multiply. Simply requiring such reporting is not an answer if the basic culture of trust is absent. In fact, imposing reporting requirements in the wake of an incident may have negative consequences, unless those affected believe it is part of a positive change and is not punitive or palliative.
Reporting Systems: The Ombuds Many organizations that made presentations during the public consultations for the NSABB report and the EO Working Group described reporting systems for identifying problems. In addition, the committee heard a presentation from two experienced ombuds at its first meeting, who reported their research on why people do or do not report “inappropriate” behavior:
Most people consciously or intuitively consider the context when they perceive behavior that they think is wrong. They may consider the rules—and also the actual norms—of their organization, about acting on the spot or “coming forward.” They may review their own and their colleagues’ perceptions of the local supervisor. They may, consciously or intuitively, evaluate their complaint system and its options, in terms of safety, accessibility and credibility. Recent events may also affect peoples’ actions.
Personal factors include how people understand the issues at hand, their personal preferences, gender and cultural traditions, and their perceived power or lack of power. People also may behave differently depending on their role in the situation—as an injured party, a perpetrator, supervisor, senior officer, peer or “bystander.” (Rowe et al. 2009:10)
Although not directed at the problem of preventing an insider threat of bioterrorism, these findings are informative for thinking about the design of systems for reporting and self-reporting. The research cited above concluded that: "There is no single policy that will make an organization seem trustworthy and no single procedure or practice that will guarantee that people will overcome all the barriers to coming forward. A well-publicized commitment to fairness and to procedural justice may be a good beginning" (Rowe et al. 2009:24; italics in original). An extensive review of the literature on reporting systems identified five "core" characteristics of the most effective systems, which are summarized in Box 4-3.
Occupational Health Programs

A monitoring program intended to identify problems before they occur may take advantage of programs already in existence for other purposes. In situations where the health and safety of workers is a
BOX 4-3 Core Characteristics of Effective Reporting Systems
Elegance—simple to understand, applicable to a broad range of issues, and based on an effective diagnostic framework. Those who manage the system should be able to respond definitively to the issues raised.
Accessibility—easy to use, with information about how to report or file a complaint widely advertised and readily comprehensible.
Correctness—(1) relevant input about the problem can be reported, (2) the organization can investigate and call for more information if needed, (3) a system exists for classifying and coding information in order to determine the nature of the problem, (4) employees can appeal lower-level decisions, and (5) both procedures and outcomes make good sense to most employees.
Responsiveness—at the most basic level, responsive systems let individuals know that their input has been received. Responsive systems provide timely responses, are backed by management commitment, are designed to fit an organization’s culture, provide tangible results, involve participants in the decision-making process, and give those who manage the system sufficient clout to ensure that it works effectively.
Nonpunitiveness—essential if employees are to trust the system. Individuals must be able to present problems, identify concerns, and challenge the organization in such a way that they are not punished for providing this input, even if the issues raised are sensitive and highly politicized. If the input concerns wrongdoing or malfeasance, the individual’s identity must be protected so that direct or indirect retribution cannot occur. Employees as well as managers must be protected.
SOURCE: Sheppard et al. 1992.
major and continuing concern, institutions may have ongoing relationships with occupational health physicians.18 The current edition of Biosafety in Microbiological and Biomedical Laboratories, which is directly relevant to BSAT research, discusses the need for an occupational health program, although it does not specifically mention resources to address mental or emotional health (CDC/NIH 2007). Several representatives of BSAT research facilities who spoke at the public consultations for the NSABB and EO Working Group described these types of arrangements. In some cases, physicians may be responsible for periodically reviewing and certifying the continued fitness of workers, including their mental health (perhaps in consultation with mental health specialists as needed). In others, occupational health professionals are on call to provide assistance to management or employees. When these arrangements work well, they provide assurance that those working in BSAT laboratories are physically and mentally fit to carry out their research.
Employee Assistance Programs

In addition to, or in some settings instead of, occupational health programs, employee assistance programs may play a role. EAPs are benefit programs offered by employers, usually at no cost to employees and as an adjunct to health insurance plans. Very generally, EAPs are intended to help employees address personal problems that might negatively affect their performance at work (e.g., substance abuse, major life events, financial or legal issues, family relations, or workplace relations). Some degree of assessment, short-term counseling, and referral services are typical components of an EAP. Many employers contract with an outside firm to provide EAP services, since the range and variety of issues that may arise are likely to be beyond the expertise of a normal human resources office. Most EAPs also offer toll-free numbers providing round-the-clock access, which is likewise beyond the capacity of most institutions. In addition to providing confidential resources that employees may seek on their own, employers may in some cases refer employees for performance-related issues. EAPs have the advantage of relieving managers of the expectation that they will be able to diagnose specific problems; instead, the manager's role is to identify declining work performance and then refer the employee to the EAP.
Summing Up

Having occupational health specialists available and active in monitoring laboratory personnel could provide genuine assistance in monitoring for insider threats, at least for the type of behavior that is most likely to
18. This section does not address legal requirements that might be imposed by the Occupational Safety and Health Administration (OSHA). Note that the Select Agent Program includes OSHA regulations and guidance among the suggested standards for meeting the biosafety requirements of the program (see <http://www.osha.gov/pls/oshaweb/owadisp.show_document?p_id=3359&p_table=OSHACT>).
be detected. An EAP could support such efforts, and the fact that these types of programs already exist for other purposes offers a significant cost advantage. Someone suffering from personal stress that might lead him or her to undertake, or be an accomplice to, terrorism might seek help and in doing so provide an alert.19 If employees believe that seeking help, which might include taking themselves out of the laboratory for a period of time, will not have a significant impact on their careers, then the existence of such programs could have substantial benefit in avoiding larger problems. It is important to note that standard practice in industry is to return people to their safety-sensitive jobs after treatment. This is consistent with the Americans with Disabilities Act and also assures affected employees that seeking help through an ombuds office, occupational health program, or EAP does not have "career-ending" consequences.
Implementation

Because the committee did not conduct a detailed review and assessment of current and potential programs for monitoring, this report can only raise a number of issues that need to be considered in the development and implementation of such programs. The committee notes that these programs could contribute substantially to a safe and secure research environment, although we emphasize that no system can provide complete insurance against the risk of an insider threat.
One of the most difficult issues involved in creating a monitoring program is ensuring that peer or self-reporting can be done in a manner that is not damaging or "career-ending." This applies across the board for all kinds of behavior but becomes increasingly difficult as the behavior in question moves toward conduct that is potentially negligent or even criminal. In such instances, reporting in a system built for safety rather than security becomes problematic. What will be the consequences for a person who reports on him- or herself? Is it more important to learn about the behavior, correct any damage, and perhaps find ways to avoid similar behavior in the future, or to be sure that there are consequences for inappropriate actions? When are "zero tolerance" policies productive by establishing clear rules, and when are they counterproductive by making people feel they dare not report even unintentional lapses? What happens to a person who "blows the whistle" on a colleague? What disincentives, such as fear of being sued, might keep managers from acting on warning signs? Such inaction, in addition to posing a security risk, could undermine the integrity of the reporting system.
Moving beyond the particular facility or organization, what is the role of those charged with regulating BSAT entities? How do hotlines or other
reporting mechanisms maintained by regulatory agencies fit into the picture? What is an appropriate mix, if any, between “compliance assistance,” which might include permitting entities to report violations provided culpability is acknowledged or a plan is in place to prevent future transgressions, and “enforcement”?
One message from many of the presentations and the committee’s own discussions was the importance of keeping oversight for reporting systems at the local level to the extent possible. Some of this may reflect the natural reaction of the regulated or potentially regulated to yet another requirement from higher authorities. But there was also a strong sense that many if not most of the problems identified by a reporting system would be most readily and effectively dealt with at the local level. And there is already considerable information flowing upward: the current incident and theft and loss reporting requirements, for example, already provide CDC and APHIS with information about operations in BSAT facilities.
For a number of reasons, including increasing the chances of an appropriate and effective response in the event of an incident, it may also be important to use opportunities provided by internal institutional reporting systems to establish constructive relationships with local law enforcement and FBI officials. Anecdotal evidence suggests that building these relationships is challenging, yet the effort involved may be worth it if it contributes to overcoming some of the concerns and “culture clash” found in a 2008 survey by the Federation of American Scientists and the FBI on how scientists view law enforcement (Hafer et al. 2008).20
One of the most important recommendations in this report is to foster a culture of trust and responsibility in the laboratory and to undertake education and training to support it. The BSAT research community—and the life sciences community more broadly—has a responsibility to help ensure that the knowledge, tools, and techniques developed for beneficial purposes are not misused. Given that no personnel screening process can be
expected to predict the behavior of employees in all contexts at all times, active management, monitoring, and support for those working in BSAT laboratories are key components of a comprehensive approach that builds trust and ultimately leads to safer and more secure BSAT research. Such programs are already in place in some of the laboratories carrying out BSAT research, but to be fully effective this type of program needs to be transformed into standard practice throughout the Select Agent Program.