Improving Safety Culture in Public Transportation (2015)

Suggested Citation: "Appendix A - Literature Review." National Academies of Sciences, Engineering, and Medicine. 2015. Improving Safety Culture in Public Transportation. Washington, DC: The National Academies Press. doi: 10.17226/22217.
APPENDIX A

Literature Review

Introduction

Little has been written about the role of safety culture in public transportation. The research team was therefore limited to the literature on the theory of safety culture and its application to aviation, nuclear power operations, natural resource extraction, and related fields. In deciding which material to include in our review, the research team used its experience in improving safety culture to assess the applicability of prior research to public transportation, the degree to which the material has stood the test of time or holds promise for the future, the rigor with which the material was produced, and the extent to which the conclusions reached appear to be reasonably supported.

The first step was to examine the theoretical foundations of safety culture. Then the team:

• Addressed the challenges of defining safety culture, one of which is to distinguish it from safety climate,
• Examined various competing theories and models,
• Detailed the components of safety culture included in these theories, both individually and combined into sets that vary significantly in which components different theories and models include or exclude, and
• Discussed the various methods of assessing the state of safety culture in a given organization.

Theoretical Foundation

Background

Early accident investigations and discussions of safety science focused on technical failures and human error. There were some exceptions: a few studies focused on organizational and social factors. For example, Turner (1978) used case studies to produce a theory of socio-technical accidents. However, most of the literature revolved around hardware or human failure.

In searching for a theoretical foundation, the research team discovered two separate research streams that turned out to provide almost all of the theoretical foundation for the research. These are the fields of safety climate research and safety culture research.

Origins of Safety Climate Research

The concept of organizational climate is grounded in psychological research. It is a line of study that goes back to Lewin et al. (1939), who examined social relations and interactions in boys' groups. The next significant step was a work by Argyris, Personality and Organization (1957). Argyris's contention was that employees were infantilized by industry practices and reacted by behaving as children, as management expected them to do. Shortly thereafter, McGregor (1960) developed his Theory X and Theory Y, a construct that posits that managerial behavior has a direct bearing on employee behavior. Likert (1961) introduced four systems by which organizations might function, ranging from completely autocratic to completely participative. In a later book, Likert (1967) called these System 1 (exploitative autocratic), System 2 (benevolent authoritative), System 3 (consultative), and System 4 (participative). Argyris, McGregor, and Likert each focused on how people were treated by organizations and how they responded as a means of understanding organizational effectiveness.

Katz and Kahn published The Social Psychology of Organizations in 1966. It looked at a wide array of factors that determined behavior, emphasizing "the total social situation encountered by employees rather than a more focused leadership perspective" (Schneider et al., 2010). Schein's Organizational Psychology (1965) summarized most of the conceptual work that had been accomplished up to that point. The essence of this work was its analysis of the human issues surrounding organizational effectiveness. Schein studied perception, motivation, and attitudes toward work, but "the focus was on the design of organizations that were effective through collective human attitudes and action and not on individual employees as the unit of theory or analysis" (Schneider et al., 2010).

For many years, however, research bogged down over whether climate could be adequately represented by the aggregate responses of individual employees. The impasse was mitigated when James and Jones (1974) coined the term "psychological climate"; it referred to studies in which the individual, rather than the organization, was examined: "the unit of data collection as well as the unit of analysis was the individual" (Schneider et al., 2010). This gave rise to the study of organizational climate. As Kuenzi and Schminke (2009) note, three times as many articles on organizational climate were published between 2000 and 2008 as in the 1990s.

Safety climate research effectively began when Zohar (1980) took the organizational/social factors derived from the theory of organizational climate and devised a safety climate questionnaire to examine how these factors were perceived by the workforce. When collecting safety data from various Israeli manufacturing organizations, he found that scores developed from safety climate data significantly correlated with company accident rates and ratings by safety inspectors: higher safety climate scores were associated with lower company accident rates and higher ratings by safety inspectors. Additional safety climate studies involving a formal quantitative approach followed in different industries and cultural contexts. These studies generally support a relationship between safety climate scores and safety performance.

Origins of Safety Culture Research

A series of serious accidents—Three Mile Island (1979), Bhopal (1984), Chernobyl (1986), Zeebrugge Ferry (1987), King's Cross Underground (1988), Clapham Junction (1989), and Piper Alpha (1990)—highlighted the significant role played by organizational and social factors (Zhang et al., 2002). INSAG first introduced the term "safety culture" in the aftermath of the nuclear disaster at Chernobyl. It was used in a number of subsequent accident inquiries as an umbrella term for a combination of managerial, organizational, and social factors that were seen as causally contributing to the accident. In this way, the concept of safety culture—unlike that of safety climate—initially sprang into existence without benefit of being theoretically derived. Instead it was practically derived from a series of detailed accident analyses.

Clarke (2000) noted that some academics had attached the concept to the existing literature on safety climate. She called safety climate theory the "adoptive" parent of safety culture. Organizational culture is the "natural" parent, but she asserted that the necessary theoretical framework had never been established. Clarke noted further that safety culture—while it was not derived from organizational culture—does share many of its features. For instance, it is of a social nature and is expressed in behavior.

Organizational culture's roots are found in anthropology and sociology. Pettigrew (1979) originally introduced the construct of culture to the study of organizational behavior so that organizational researchers would become familiar with the language and concepts of social anthropologists. By 1990, Pettigrew's focus had become the study of processes of leadership, commitment building, and change and the nexus of culture, strategy, and change. "Practitioners and management consultants loved the concept of organizational culture, and it caught on quickly as a key variable in trying to distinguish more effective from less effective organizations" (Schneider et al., 2010). Several popular management trade books, among them In Search of Excellence, by Peters and Waterman (1982), used the study of culture, and concepts such as myth and taboo, to examine organizations.

A significant problem in the study of organizational culture was that researchers were unable to establish a relationship between their qualitative case study results and organizational effectiveness. And, just as climate researchers bogged down in the morass of statistical levels of analysis, culture researchers became obsessed with the variety of ways in which culture might be conceptualized instead of studying how it related to organizational effectiveness (Smircich, 1983). It was not until culture researchers began to switch to quantitative methods (for example, surveys) that relationships between culture and organizational effectiveness were demonstrated (Kotter and Heskett, 1992; Sorenson, 2002).

Researchers are divided over how difficult it is to transform a safety culture. The interpretive view is that culture cannot easily be altered because it is not a "simple thing that can be bolted onto an organization" (Turner et al., 1989). The functionalist view is that safety culture in fact can be "socially engineered" by "identifying and fabricating its essential components and then assembling them into a working whole" (Reason, 1997) and that it is a critical variable that can be manipulated so as to influence safety and reliability (Frost et al., 1991). In short, functionalist theory says that companies can change their existing safety culture to one that will result in improved safety performance, primarily by changing safety practices, while interpretive theory says that such changes are difficult to achieve and cannot simply be imposed by fiat. It is therefore the functionalist perspective that provides a conceptual bridge between organizational behavior and strategic management interests (Wiegmann et al., 2004). In other words, functionalists believe that organizational behavior can be manipulated in the interests of achieving strategic business objectives.

Unfortunately, a theoretical framework for safety culture, which is based on organizational culture, remains immature in comparison with that for safety climate, and progress toward operationalizing safety culture has also been slow. There still is no convergence toward a universal definition of safety culture or even agreement as to what major components are necessary to produce a positive safety culture.

Safety Culture Versus Safety Climate

Is there really a difference between safety culture and safety climate? There are two diametrically opposed views. Schein (1985) defined organizational culture as "a pattern of basic assumptions—invented, discovered, or developed by a group as it learns to cope with its problems of external adaptation and internal integration—that has worked well enough to be considered valid and, therefore, to be taught to new members as the correct way to perceive, think, and feel in relation to those problems." He said that climate is reflective of organizational culture but that the term "culture" has a deeper meaning that implies basic assumptions and beliefs that are shared by members of the organization. Ekvall (1983) described culture as beliefs and values about people, work, the organization, and the community that are shared by most members within the organization; organizational climate, he said, stems from common characteristics of behavior and expression of feelings by organizational members. Table A-1 presents the differences between culture and climate in organizations as defined by Krause (2005).

Table A-1. Comparison of culture and climate concepts (Krause, 2005).

Culture | Climate
Common values that drive organizational performance | Perceptions of what is expected, rewarded, and supported
Applies to many areas of functioning | Applies to specific areas of functioning
"How we do things" | "What we pay attention to"
Unstated | Stated
Background | Foreground
Changes more slowly | Changes more rapidly

Krause (2005) also defined safety climate as "the prevailing influences on a particular area of functioning (safety in our case) at a particular time." Safety climate, according to Krause, differs from safety culture in that it can be described as a snapshot of perceptions of culture. Climate lacks permanence and often is regarded as being more superficial than culture. According to Glendon and Stanton (1998), climate involves the current position of the company and is seen as an indicator of the organization's safety culture as perceived by employees at a certain point in time.

Glick (1985) distinguished culture and climate by research discipline: climate evolving from a social psychological framework and culture being rooted in anthropology. Climate, according to Glick, has traditionally been assessed differently than culture in that climate uses a more formal quantitative approach, while culture uses mainly qualitative techniques. De Cock et al. (as translated from "Organisatieklimaat: En opdracht voor het personeelsbeleid?" and cited in Guldenmund, 2000) asserted that organizational climate refers to the overall perception of a number of organizational processes, and that culture is the underlying meaning of those processes, which forms a pattern of significance and value. Schein (1992) maintained that climate preceded culture, and that climate is culture in the making; climate is a reflection and manifestation of cultural assumptions. Ultimately, Schein believed that climate is replaced by culture, as culture conveys a broader, more profound, and more comprehensive meaning.

Others, however, have argued emphatically that there is no real difference between safety culture and safety climate. Examples are:

• Kennedy and Kirwan (1995), who stated that the "real difficulty lies in the atheoretical roots of safety culture," and who believed that it is only a matter of convenience that researchers "have conveniently attached the concept to an existing literature on safety climate," and
• Clarke (2000), who observed that there is "no universal agreement on the definition of safety culture" but rather "an ongoing academic debate about the difference between safety climate and safety culture and little theoretical underpinning for much of the empirical work in this area."

For purposes of this project, the research team treats safety climate as a snapshot in time of the organization's safety culture (Krause, 2005). This view is consistent with that of Wiegmann et al. (2002), who concluded that safety climate is "a temporal indicator of a more enduring safety culture."

In terms of possible future reconciliation or melding of the concepts, Schneider et al. (2010) stated: "In the 1978 edition of their book, Katz and Kahn used each of the following terms to capture the essence of the organization as a social psychological enterprise: norms, values, roles, climate, culture, subculture, collective feelings and beliefs, atmosphere, taboos, folkways, and mores. To our mind, this is a useful listing because it points to ways in which terminology from both climate research and culture research literatures might be simultaneously used to capture a broad range of related phenomena."

Theoretical Foundation Findings

Theoretical foundation findings from the literature review are as follows:

• There is a distinct and traceable theoretical foundation for safety climate; the theoretical foundation for safety culture, however, to date remains immature.
• Safety climate and safety culture are two closely associated but distinct concepts.
• Safety climate studies generally use formal quantitative methods, while safety culture studies historically have used mainly qualitative case study techniques. However, the number of quantitative safety culture studies has been increasing.
• Safety climate studies generally support a relationship between safety climate scores and safety performance; recent quantitative safety culture studies have demonstrated a similar relationship between safety culture scores and organizational effectiveness.

Defining Safety Culture

INSAG defined safety culture as "that assembly of characteristics and attitudes in organizations and individuals which establishes that, as an overriding priority, nuclear plant safety issues receive the attention warranted by their significance" (International Nuclear Safety Advisory Group, 2002). This INSAG definition, however, is just one of many, and there is little evidence of any momentum toward a universally accepted definition. In "Safety Culture: A Concept in Chaos?" Zhang et al. (2002) reviewed a number of studies conducted in high-risk industries and concluded that there is "considerable disagreement among researchers as to how to define safety culture."

The literature contains a multitude of definitions. Guldenmund (2000) cited 16 that appeared from 1980 through 1997 alone. The research team, based on its experience in public transportation, found the following to be the most compelling and relevant to public transportation:

• Uttal (1983) defined safety culture as "shared values ('what is important') and beliefs ('how things work') that interact with a company's people, organizational structures, and control systems to produce behavioral norms ('the way we do things around here')."
• Cooper (2000) called safety culture a subset of organizational culture because individual attitudes and behaviors are reflective of the organization's ongoing health and safety performance.
• Eiff (1999) described safety culture as "shared values, norms, behaviors about minimizing risk, respect toward safety, and technical competence shared by individuals and groups of individuals who place a high premium on safety as an organizational priority. Safety culture exists in an organization in which individual employees, regardless of their position, assume an active role in error prevention—and that role is supported by the organization."
• The UK's Health and Safety Commission called safety culture "the product of individual and group values, attitudes, competencies, and patterns of behavior that determine the commitment to, and the style and efficiency of, an organization's Health and Safety programs. Organizations with a positive safety culture are characterized by communications founded on mutual trust, by shared perceptions of the importance of safety, and by confidence in the efficacy of preventive measures" (Health and Safety Commission, 1993).
• EFCOG/DOE said a safety culture is "an organization's values and behaviors, modeled by its leaders, and internalized by its members, which serve to make safe performance of work the overriding priority to protect the public, workers, and the environment" (EFCOG/DOE, 2009).
• TRACS defined safety culture as "the product of individual and group values, attitudes, perceptions, competencies, and patterns of behavior that can determine the commitment to and the style and proficiency of an organization's safety management system" (Transit Rail Advisory Committee for Safety, 2011).
• The FRA defined organizational culture as "shared values, norms, and perceptions that are expressed as common expectations, assumptions, and views of rationality within an organization and play a critical role in safety." It notes that organizations with a positive safety culture are characterized by "communications founded on mutual trust, by shared perceptions of the importance of safety, and by confidence in the efficacy of preventive measures" (U.S. Federal Register, 2012).
• The Volpe Center has done a great deal of work for the U.S. DOT and the FRA. A white paper by Joyce Ranney (2011) defined safety culture (short version), relying on Cooper, as "shared values, actions and norms that demonstrate a commitment to safety over competing goals and demands," with the 10 "most critical elements" [relying on Reason's "Managing the Risks of Organizational Accidents" (1997) and the Health and Safety Commission (1993)] being: leadership commitment, open communication, shared responsibility, continuous learning, safety-conscious work environment, non-punitive reporting, safety as a priority in decision making, mutual trust, fair and consistent responses, and training and resources. Ranney noted that "the extent to which attitudes, behaviors and policies align to prioritize safety indicates the strength of an organization's safety culture." Ranney (2003) cited the following safety culture definitions:
  – The safety culture of an organization is the product of individual and group values, attitudes, competencies, and patterns of behavior that determine the commitment to, and the style and proficiency of, an organization's health and safety programs (Health and Safety Commission, 1993).
  – "Shared values ('what is important') and beliefs ('how things work') that interact with an organization's structures and control systems to produce behavioral norms ('the way we do things around here')" (Reason, 1997).

These definitions, however, operate at different levels of abstraction and emphasize different aspects of safety culture. There is no convergence toward a universal definition.

Safety Culture Theories and Models

The following safety culture theories and models are presented roughly in the order in which they appeared in the literature. Also presented are significant contributions by several researchers that do not rise to the level of an independent theory or complete model.

Westrum Model

Westrum (1993) created one of the first models. In his model, there are three stages of safety culture—pathological, bureaucratic, and generative—which display the characteristics shown in Table A-2.

Table A-2. Westrum's original safety culture model (Hudson, 1999).

 | Pathological | Bureaucratic | Generative
Information | Hidden | Ignored | Sought
Messengers | Shouted | Tolerated | Trained
Responsibilities | Shirked | Boxed | Shared
Reports | Discouraged | Allowed | Rewarded
Failures | Covered up | Merciful | Scrutinized
New ideas | Crushed | Problematic | Welcomed
Resulting organizations | Conflicted organization | Red-tape organization | Reliable organization

Pathological safety cultures limit information between the lines of the organization, preventing valuable information from flowing that may benefit the safety culture; learning from mistakes is unlikely, new ideas are rejected in an environment in which accountability is low, and documentation is discouraged. Bureaucratic safety cultures address issues in terms of regulations, laws, and rules. The environment is reactive rather than proactive and is centered on governance. While new ideas are entertained, they are rarely implemented because strict adherence to rules and regulations discourages innovation and creativity. Generative safety cultures are based on a proactive model of problem solving. Information is actively sought and collected when an informed culture of safety exists (Reason, 1997). Messengers are trained to be effective, failures and near misses are scrutinized, and open reporting of safety concerns is welcomed. As Reason has noted, the Westrum model is therefore concerned primarily with how organizations process and share information: Westrum "has distinguished organizational cultures according to the way that they deal with safety-related information" (Reason, 1997).

Reason Model (1997)

Reason asserted that a safety culture can be engineered. Figure A-1 provides a schematic of his model.

Figure A-1. Reason's safety culture model (research team modified version).

The various elements of Reason's model are driven by underlying perceptions, attitudes, and behaviors. According to Reason, four of the elements (learning, reporting, flexible, and just) feed into and support the fifth (informed). As Reason said, "The preceding . . . have identified four critical subcomponents of a safety culture: a reporting culture, a just culture, a flexible culture, and a learning culture. Together they interact to create an informed culture which, for our purposes, equates with the term safety culture as it applies to the limitation of organizational accidents" (Reason, 1997). Note that many depictions of the Reason model incorrectly portray informed culture as being separate and distinct from learning, reporting, flexible, and just cultures. Reason said clearly that both the Westrum and Reason models have the processing of information as their primary focus.

In an informed culture, the organization collects and analyzes relevant data and actively disseminates safety information. Individuals who manage and operate the organization's safety system know the human, technical, organizational, and environmental factors that determine the safety of the system. All members of the organization understand and respect the hazards of operations and are alert to the system's potential vulnerabilities. In a reporting culture, an environment is cultivated that encourages employees to report safety issues without fear of punishment. Employees know that confidentiality will be maintained and that, when they disclose safety information, management will act to improve the situation. Reason's model particularly communicates the importance of maintaining a reporting culture within an organization. This reporting culture, which must be initiated and supported wholeheartedly by management, is necessary in order for management to get an accurate picture of the status of its organization's safety culture. For example, Wiegmann et al. (2004) supported a claim by Eiff (1999) that "one of the foundations of a true safety culture is that it is a reporting culture" by identifying an effective and systematic reporting system as the keystone to identifying breaches before accidents happen. In a just culture, unintentional errors or unsafe acts are not punished. Deliberate, reckless, and indefensible acts that are considered unjustifiable and that place the organization and individuals at risk are subject to disciplinary action. A just culture in turn promotes mutual trust. In a flexible culture, the organization and employees are able to adapt effectively to changing needs and demands. For example, the organization may shift from a hierarchical structure to a flatter, or more horizontal than vertical, structure for more decentralized problem-solving capability. A learning culture encourages use of safety information to draw conclusions about necessary changes and to incorporate a willingness to implement major reform when change is required (Civil Air Navigation Services Organisation, 2008).

Management is able to take direct action in the areas pertaining to each subculture to move the organization from its present practices toward the ideal and thereby engineer a positive safety culture. The success of the new practices affects underlying employee perceptions, attitudes, and behaviors. For example, the changing of practices having to do with reporting and just treatment of employees can create a state of mutual trust in an organization, which in turn results in a much greater flow of useful information.

It is important to note that Reason's primary focus was on what he termed "organizational accidents" as opposed to "individual accidents." He defined "organizational accidents" as the "comparatively rare, but often catastrophic, events that occur within complex modern technologies such as [those in] nuclear power plants, commercial aviation, the petrochemical industry, chemical process plants, marine and rail transport." Individual accidents, on the other hand, are "ones in which a specific person or group is often both the agent and the victim" (Reason, 1997).

Hudson Model

Hudson proposed a safety culture model that is a refinement of Westrum. It portrays the evolution of safety culture from the pathological to the generative stage while expanding the model to include five stages, replacing "bureaucratic" with "calculative" and adding "reactive" and "proactive" phases (Hudson, 1999) (see Figure A-2).

Figure A-2. Hudson's 1999 model (Hudson, 1999).

The primary drivers on the path from "pathological" to "generative" are increased trust and, as with Westrum and Reason, increased dissemination and sharing of information. In "A Framework for Understanding the Development of Organizational Safety Culture," Parker et al. (2006) demonstrated a useful application of the Hudson model. They interviewed 26 oil and gas executives, creating a matrix that showed how each organization handles incident/accident reporting, causes of accidents, purpose of procedures, and so forth, locating it on the scale between pathological and generative in each category. The average of the results shows where on the scale, from pathological to generative, the organization as a whole rests. Hudson, like Westrum and Reason, said that improvement in information flow leads to improved safety culture. Reason and Hudson also saw increased levels of trust as a primary driver of that improved flow of information.
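The Parker et al. scoring approach lends itself to a simple numeric sketch. The example below is illustrative only (the report contains no code; Python is assumed, and the category ratings are invented rather than taken from the study): each assessed category is rated against the five Hudson stages, the ratings are converted to numbers, and the average places the organization on the pathological-to-generative scale.

```python
# Minimal sketch of a Parker et al. (2006)-style scoring exercise, as described above.
# The five Hudson stages are ordered from least to most mature.
HUDSON_STAGES = ["pathological", "reactive", "calculative", "proactive", "generative"]
STAGE_SCORE = {stage: i + 1 for i, stage in enumerate(HUDSON_STAGES)}  # 1..5

# Illustrative ratings for one organization; category names follow the text,
# the assigned stages are made up for the example.
ratings = {
    "incident/accident reporting": "calculative",
    "causes of accidents": "reactive",
    "purpose of procedures": "proactive",
}

def overall_stage(ratings):
    """Average the per-category scores and map back to the nearest stage."""
    mean = sum(STAGE_SCORE[s] for s in ratings.values()) / len(ratings)
    return mean, HUDSON_STAGES[round(mean) - 1]

print(overall_stage(ratings))  # (3.0, 'calculative')
```

The point of the sketch is only that the per-category judgments, not a single global impression, determine where the organization as a whole is placed on the maturity scale.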
Guldenmund Model

Guldenmund (2000) likened safety culture to an onion. The core contains basic assumptions about safety culture that are implicit, taken for granted, subconscious, and shared by the entire organization. The next layer, espoused values, refers to the attitudes of members of the organization. Four broad groups of espoused values represent attitudes about hardware (facilities/plant design), management systems (safety and other), people (from senior management to lower-level employees), and behavior (risk taking and so on). The outer layer consists of artifacts or the outward expression of safety culture. Examples include behaviors, safety performance, and physical signs of safety awareness (see Figure A-3).

Figure A-3. Guldenmund's model (Fleming, 2000).

As Guldenmund noted, this model incorporates the concepts of both safety climate and safety culture and "also does justice to the integrative, holistic concept of culture as advocated by . . . cultural anthropologists." He went on to say that change within an organization should only be undertaken with detailed knowledge of a company's basic assumptions. He offered two alternatives for initiating the desired change—(1) change the organization's basic assumptions and (2) change the organization's safety attitudes—and considered the second to be more likely to work.

Fleming/Keil Centre Safety Culture Maturity Model

Mark Fleming of the Keil Centre in Edinburgh, Scotland, developed a safety culture maturity model that is used in the aviation, rail, petrochemical, offshore oil and gas, health, steelmaking, and manufacturing industries. The model is intended to help organizations identify and establish actions that will improve safety culture. It has five levels and includes 10 distinct elements of safety culture maturity (see Figure A-4). Organizations move from the first to the fifth level by developing and advancing the degree of maturity of the 10 elements.

Figure A-4. Fleming Safety Culture Maturity model (Fleming, 2000).

The five levels are:

1. Emerging: "Safety is defined in terms of technical and procedural solutions and compliance with regulations. Safety is not seen as a key business risk, and the safety department is perceived to have primary responsibility for safety. Many accidents are seen as unavoidable and as part of the job. Most frontline staff are uninterested in safety and may only use safety as the basis for other arguments, such as changes in shift systems."

2. Managing: "The organization's accident rate is average for its industrial sector but they tend to have more serious accidents than average. Safety is seen as a business risk and management time and effort is put into accident prevention. Safety is solely defined in terms of adherence to rules and procedures and engineering controls. Accidents are seen as preventable. Managers perceive that the majority of accidents are solely caused by the unsafe behavior of frontline staff. Safety performance is measured in terms of lagging indicators such as lost time injuries (LTIs), and safety incentives are based on reduced LTI rates. Senior managers are reactive in their involvement in health and safety (i.e., they use punishment when accident rates increase)."

3. Involving: "Accident rates are relatively low, but they have reached a plateau. The organization is convinced that the involvement of the frontline employee in health and safety is critical if future improvements are going to be achieved. Managers recognize that a wide range of factors cause accidents and the root causes often originate from management decisions. A significant proportion of frontline employees are willing to work with management to improve health and safety. The majority of staff accept personal responsibility for their own health and safety. Safety performance is actively monitored and the data is used effectively."

4. Cooperating: "The majority of staff in the organization are convinced that health and safety is important from both a moral and economic point of view. Managers and frontline staff recognize that a wide range of factors cause accidents and the root causes are likely to come back to management decisions. Frontline staff accept personal responsibility for their own and others' health and safety. The importance of all employees feeling valued and treated fairly is recognized. The organization puts significant effort into proactive measures to prevent accidents. Safety performance is actively monitored using all data available. Non-work accidents are also monitored and a healthy lifestyle is promoted."

5. Continually improving: "The prevention of all injuries or harm to employees (both at work and at home) is a core company value. The organization has had a sustained period (years) without a recordable accident or high potential incident, but there is no feeling of complacency. They live with the paranoia that their next accident is just around the corner. The organization uses a range of indicators to monitor performance but it is not performance-driven, as it has confidence in its safety processes. The organization is constantly striving to be better and find better ways of improving hazard control mechanisms. All employees share the belief that health and safety is a critical aspect of their job and accept that the prevention of non-work injuries is important. The company invests considerable effort in promoting health and safety at home" (Fleming, 2000).

The 10 elements of the Safety Culture Maturity model are:

1. Management commitment and visibility,
2. Productivity versus safety,
3. Learning organization,
4. Safety resources,
5. Shared perceptions,
6. Communication,
7. Participation,
8. Trust,
9. Industrial relations and job satisfaction, and
10. Training.

DuPont Bradley Curve Model (1999)

The DuPont Bradley curve model places companies and organizations in the following four categories:

1. Reactive: These companies handle safety issues by natural instinct, focusing on compliance instead of a solid safety culture. Responsibility is delegated to the safety manager, and there is generally a lack of management involvement in safety issues.
2. Dependent: While there is some management commitment, supervisors are generally responsible for safety control, emphasis, and goals. Attention to safety is made a condition of employment but with an emphasis on fear, discipline, rules, and procedures. Such companies do value their people and will provide safety training.
3. Independent: These companies stress personal knowledge of safety issues and methods as well as commitment and standards. Safety management is internalized and stresses personal value and care of the individual. These companies engage in active safety practices and habits and recognize individual safety achievements.
4. Interdependent: These companies actively help others conform to safety initiatives—they become others' keepers, in a sense. They contribute to a safety network and have a strong sense of organizational pride in their safety endeavors.

In the DuPont Bradley curve model, the three elements of safety management are: (1) leadership, (2) structure, and (3) processes and actions.

DuPont has administered its safety perception survey since 1999 and has a database available for benchmarking. The database contains more than 632,000 responses from 96 industries, 41 countries, and over 3,383 locations. It is used to rate companies on the basis of their relative culture strength (RCS). These ratings are "weak" (RCS less than 40), "average" (40–60), "good" (60–80), and "world-class" (greater than 80). RCS is then plotted on the x-axis of the Bradley curve against each company's 3-year average OSHA total recordable injury rate (TRIR) on the y-axis. The results are as follows: 19 organizations with a weak RCS had a mean TRIR of 4.6, 57 companies with an average RCS had a mean TRIR of 2.7, 164 companies with a good RCS had a mean TRIR of 1.1, and 106 companies with a world-class RCS had a mean TRIR of 0.61. This comparison shows a strong correlation between relative culture strength and safety performance (see Figure A-5). No proof of causality is offered (Hewitt, 2011).

Figure A-5. DuPont Bradley curve (Hewitt, 2011).

While DuPont's behavior-based safety work in the public transportation industry has some detractors (Lessin, 2000), the DuPont Bradley curve model has no obvious weaknesses or internal contradictions, is based in part on credible empirical data, and demonstrates a relationship between safety culture and safety performance.
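The reported relationship between relative culture strength and injury rate can be made concrete with a small sketch. The following is illustrative only (Python assumed; only the four published group counts and mean TRIRs come from the text above, and the helper names are invented): it checks that mean TRIR falls as RCS rises and computes a company-weighted overall mean TRIR.

```python
# Hedged illustration of the RCS-versus-TRIR comparison reported by Hewitt (2011).
from dataclasses import dataclass

@dataclass
class RCSBand:
    label: str          # qualitative rating of relative culture strength (RCS)
    companies: int      # number of organizations in the band
    mean_trir: float    # mean 3-year OSHA total recordable injury rate

BANDS = [
    RCSBand("weak (RCS < 40)", 19, 4.6),
    RCSBand("average (RCS 40-60)", 57, 2.7),
    RCSBand("good (RCS 60-80)", 164, 1.1),
    RCSBand("world-class (RCS > 80)", 106, 0.61),
]

def trir_declines_with_rcs(bands):
    """True if mean TRIR strictly decreases as culture strength increases."""
    rates = [b.mean_trir for b in bands]
    return all(a > b for a, b in zip(rates, rates[1:]))

def weighted_mean_trir(bands):
    """Company-weighted mean TRIR across all bands."""
    total = sum(b.companies for b in bands)
    return sum(b.companies * b.mean_trir for b in bands) / total

print(trir_declines_with_rcs(BANDS))        # True: higher RCS, lower injury rate
print(round(weighted_mean_trir(BANDS), 2))  # about 1.41
```

As the text notes, this is a correlation across grouped means; nothing in the arithmetic establishes causality.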

Cooper Model

Cooper (2000) initially developed a model based on three interrelated aspects of safety culture: psychological, behavioral, and situational (see Table A-3). The psychological aspect shows "how people feel," the behavioral aspect is indicative of "what people do," and the situational aspect is "what the organization has."

Table A-3. Cooper's safety culture model (Civil Air Navigation Services Organisation, 2008).

Psychological Aspects ("How People Feel") | Behavioral Aspects ("What People Do") | Situational Aspects ("What the Organization Has")
Can be described as the safety climate of the organization, which is concerned with individual and group values, attitudes, and perceptions about safety. | Safety-related actions and behaviors; management commitment to safety. | Policies, procedures, regulation, organizational structures, and management systems.

The psychological aspects in Cooper's model refer to individual feelings surrounding safety culture and safety management systems. These include safety climate, defined as a snapshot of the values, attitudes, and perceptions of individuals and groups. Therefore, this particular safety culture model subsumes safety climate as one of its three aspects. The behavioral aspects reflect those safety-related actions demonstrated by individuals when performing work. Situational aspects reflect the structure of management systems within an organization and the interaction of the different hierarchical levels in terms of accountability for safety culture. When safety becomes a priority for management, resources are allocated to improve safety culture.

Cooper said that safety culture can be evaluated by using safety climate surveys to measure psychological aspects, operational audits to measure behavioral aspects, and safety management system audits to measure situational aspects. The model also predicts that an intervention directed at improving any one of the three components will exert a reciprocal effect on the other two.
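One way to picture Cooper's three-pronged evaluation is as three instrument scores considered side by side. The sketch below is a hypothetical illustration (Python assumed; the 0-100 scores and the idea of normalizing the three instruments to a common scale are invented for the example, not part of Cooper's model): it records one score per aspect and flags the weakest aspect as a natural starting point for an intervention, which the model expects to feed back into the other two aspects.

```python
# Illustrative only: one normalized score (0-100) per Cooper aspect,
# each coming from the measurement route named in the text above.
assessment = {
    "psychological (safety climate survey)": 72,
    "behavioral (operational audit)": 58,
    "situational (safety management system audit)": 81,
}

# The lowest-scoring aspect is the natural first target for intervention.
weakest = min(assessment, key=assessment.get)
print(weakest, assessment[weakest])  # behavioral (operational audit) 58
```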
Cooper’s safety culture model (Civil Air Navigation Services Organisation, 2008). Psychological Aspects Behavioral Aspects Situational Aspects “How People Feel” “What People Do” “What the Organization Has” Can be described as the safety climate of the organization, which is concerned with individual and group values, attitudes, and perceptions about safety. Safety-related actions and behaviors, management commitment to safety. Policies, procedures, regulation, organizational structures, and management systems.

High-Reliability Organization Model

The research team believes that, given the potentially catastrophic consequences of an unanticipated event and the subsequent loss of critical transportation functions, larger transit authorities might consider adoption of the HRO model described here. Two subway trains operating under CBTC (communications-based train control) at rush hour in the tunnels of New York carry up to 5,000 passengers. A head-on collision due to a CBTC failure and a subsequent fire at rush hour would lead to total casualties that exceed those resulting from most aviation crashes, offshore platform accidents, and other high-profile accidents and incidents. A sequence of events of this kind could cripple all transportation within New York City for days, if not weeks. This model is described in great detail because the research team believes that its adoption by large heavy-rail properties would be a prudent step. As these operations become more complex and technology-dependent, the need to adhere to such a model increases sharply.

An HRO is generally defined as an organization that repeatedly accomplishes its mission while avoiding catastrophic events, despite significant hazards, dynamic tasks, time constraints, and complex technologies (Hartley, 2010). B&W Pantex has published several books on HRO implementation in the nuclear weapons industry, including Hartley et al., High Reliability Operations: A Practical Guide to Avoid the System Accident, and Hartley et al., Causal Factors Analysis: An Approach for Organizational Learning.

HROs are generally regarded as ranking high in the safety hierarchy. The Columbia Accident Investigation Board was critical of NASA's safety culture and, as a result, adopted the high-reliability organization as a standard. Its conclusion was that, had HRO principles been followed, the Columbia would not have disintegrated (Boin and Schulman, 2008).

The term "high-reliability organization" was popularized through research conducted by a group of UC-Berkeley scholars who noticed that, while much had been written about organizations experiencing disasters, little attention had been devoted to organizations that operated in a technologically complex environment—one in which the potential for mishap is great but whose records indicated that the organizations had avoided catastrophe. These researchers focused on the U.S. air traffic control system, the Diablo Canyon Power Plant, and U.S. Navy nuclear aircraft operations. The first emerging definition came from Roberts (1990): "Within the set of hazardous organizations there is a subset which has enjoyed a record of high safety over long periods of time. One can identify this subset by answering the question, 'How many times could this organization have failed, resulting in catastrophic consequences, yet did not?' If the answer is on the order of tens of thousands of times, the organization is highly reliable." Ongoing criticism of this definition (Marais et al., 2004) was that such an organization could undergo a major accident every day and still qualify as highly reliable.

Indeed, Marais et al. (2004) say that, by this criterion, "it is difficult to think of any low reliability organizations." HRO researchers, led by La Porte (1996), refined the definition by saying that high-reliability organizations are hazardous systems that produce "nearly accident-free performance" or function in a "nearly error-free fashion." High-reliability organizational theory surfaced from field studies conducted by researchers investigating low accident and human error rates in three high-risk organizations (Weick et al., 1999). HRO theory assumes that an organization's people, processes, and technology, when properly structured, can handle complex and hazardous activity (Singer et al., 2003). In contrast to conventional accident theory, which posits that accidents are impossible to prevent in highly complex organizations, the theory of high reliability is based on a belief that organizational design and good management practices can in fact prevent accidents and control error rates. High-reliability organizations generally use the following techniques (Marais et al., 2004):

1. Redundancy,
2. Simulation,
3. Strict organizational structure,
4. Decentralized decision making,
5. Learning from mistakes,
6. Mindfulness,
7. Training, and
8. Use of highly skilled individuals.

Perrow (1999) believed that a tightly coupled complex organization's cascading effects can quickly spiral out of control before operators are able to understand the situation and perform corrective action. Perrow's beliefs fed the argument that, in complex, interactive, tightly coupled organizations, the need for further complexity will become more pressing, thereby increasing the likelihood of more accidents. In order to create safer systems, organizations must reduce interactive complexity and decouple systems because functionality and efficiency can be achieved through systems that are more simply designed.

Turner (1978), however, had noted that simplification was dangerous because it could limit the precautions people would take and the number of undesirable results they could envision. Naevestad (2009) discussed Turner's findings in terms of the meanings that members of an organization might attribute to warnings and signals of danger in ill-structured situations. Weick et al. (1999) argued that HRO researchers understand culture in organizations. Drawing on the work of Turner (1978), they believed, further, that simplification, rigid safety views, and a limited opinion of the role of safety induce ignorance of hazards and signals of danger.

HROs create processes and systems that reduce the possibility of unexpected events, allowing for containment and speedy recovery once such an event occurs. In the HRO infrastructure, small failures are tracked meticulously. Personnel are engaged in the process of collective problem solving through inquiry, which allows HROs to maintain a high level of proficiency in identifying gaps in system continuity and understanding warnings of potential catastrophes. Operations personnel are trained to react to even weak signals and to address the cause of failure prior to a series of events that can lead to disaster. The interactions of HRO processes are illustrated in Figure A-8.

Figure A-8. A mindful infrastructure for high reliability (Muhren et al., 2008).

According to Weick et al. (1999), the following five characteristics create an HRO mind-set and guide HRO behaviors and operational thinking:

• Preoccupation with failure: HROs are focused on indicators that may predict possible catastrophic consequences. Near misses are viewed as opportunities to improve systems by analyzing strengths and identifying weaknesses, as well as allocating necessary resources to address and correct issues. In HROs, near misses are used to illustrate to employees the weaknesses within systems. This constant vigilance results in a broader understanding of processes.
• Reluctance to simplify: HROs acknowledge the complexity of the work environment and are apprehensive about accepting simple solutions. Individuals are encouraged to analyze all of the potential root causes of a problem and to draw on the diverse experiences of staff to refine processes, systems, and decision making. While HROs work to simplify processes, they do not apply simplistic solutions to complex problems that arise as a result of the nature of their work.

• Sensitivity to operations: HROs are flexible. Implicit is an acknowledgment that circumstances change. The element of constant change in complex systems requires HROs to identify anomalies and recognize problems quickly. This process is referred to as "maintaining situational awareness."
• Commitment to resilience: Containing errors and creating new methods to prevent future errors are top priorities. The organization assumes that, in spite of training, information sharing, and numerous built-in safeguards, the system may fail. To this extent, teams in high-reliability organizations prepare to respond to system failure through consistent operational procedure drills.
• Deference to expertise: HROs maintain a culture of respect for knowledge and experience. Team members look to the individual who is most knowledgeable about an issue regardless of tenure/seniority or position within the organization. Hierarchy is deemphasized in favor of an open transfer of knowledge and an atmosphere that encourages information sharing, which can prevent problems.

Rochlin (as cited in Roberts, 1993) states that what separates HROs most distinctly is their adherents' ability to run nearly error-free complex operations while incurring few, if any, accidents and avoiding major catastrophes. The capacity to sustain good performance is enhanced by a preoccupation with avoiding major system setbacks and an ability to question and analyze processes, defer to knowledge and expertise, exchange information constantly, and create environments for decentralized decision making. High-reliability organizations are defined as organizations that have not just avoided failure through good fortune or the vagaries of probability but have actively managed to control and reduce the risks of technical operations whose inherent hazards make them prone to catastrophic failure. Roberts (1993) identified the basic elements of such organizations:

• People within an organization must be helpful to and supportive of one another;
• People must trust one another;
• People must have friendly, open relationships emphasizing credibility and attentiveness; and
• Work environments must be resilient, must emphasize creativity and goal achievement, and must provide strong feelings of credibility and personal trust.

HROs share a number of traits that sustain their safety cultures. In designing safety culture around the following fundamental principles, high-reliability organizations strive to mitigate risk and to reduce or eliminate factors that lead to high-risk events.

• Goal prioritization and consensus exist in HROs because leaders gain support from employees by prioritizing performance and safety as organizational goals (La Porte and Consolini, 1991). Making safety the number-one priority is verbalized but also demonstrated in decision-making processes and resource allocation.
• Simultaneously decentralized and centralized operations represent an organizational principle in HROs. Trained field-level staff respond to a specific crisis, while the primary chain of command maintains control of centralized operations. One example is the operation of aircraft carriers in the U.S. Navy—the carrier is subject to the navy's chain of command, yet the lowest-level seaman can abort landings and address safety concerns as necessary (La Porte and Consolini, 1991).
• Extensive use of redundancy is defined as the ability to provide for secondary-unit execution of a task if the primary unit falters or fails (La Porte, 1996). According to Roberts (1990), this includes technical redundancy (backup computers are used) and personnel redundancy (functions are duplicated, and more than one person is assigned to perform a given check).
• Organizational learning takes on particular importance in high-reliability organizations as a result of the impracticality of trial-and-error learning (Weick, 1987). Role playing, simulated experience, storytelling, and other creative forms of sharing information are often used. Resources are appropriated to encourage development of technical skills and competencies. While accidents are certainly used as learning experiences in high-reliability organizations, trial-and-error learning is not viewed as an effective way to reduce risk when the accident could have catastrophic consequences.

HROs have demonstrated the ability to adapt, change, and be flexible in complex environments, and Weick et al. (1999) see this as a primary reason for moving high-reliability practice more into the mainstream. HROs focus on failure through adaptive learning and reliable performance. Hannan and Freeman (1984) thought that organizational reliability could be achieved through the development of highly standardized routines. HRO theory deviates from that definition. Inherent in the concept of high reliability is variation in learning and questioning and exploring, as well as using collective evaluation tactics to increase cognition and sustain understanding of probable issues. The best HROs are not sitting and waiting for an error to occur before responding. Instead they are preparing for extraordinary events through an expansion of knowledge and the use of technology (Weick et al., 1999).

Adoption of the HRO model, however, is no guarantee of catastrophe avoidance. As the Fukushima nuclear accident reminded the world, organization culture and system complexity can combine to produce disaster if assumptions are not constantly questioned and complex systems hazards not fully anticipated (Pidgeon, 2012).

at-risk behaviors.” Geller believes that people-based leadership means more than just holding individuals accountable; it requires employees to inspire others to be accountable for injury prevention and to care actively for the safety and health of coworkers. Behavior-based safety, another of Geller’s precepts, has come under attack from labor unions. The unions argue that this concept can be perverted to maintain that it is not hazards on the job that cause injuries and illnesses but the behavior of those exposed to the hazards (victims). They see this concept as a useful tool for a management that is intent on shifting blame and focus from employers (and hazardous conditions) to workers (and unsafe acts). They are particularly critical of the DuPont STOP program, which they maintain posits that 96% of all accidents are caused by unsafe acts and focuses on worker behavior, discipline, and safety incentive programs (Lessin, 2000).
Models Compared and Contrasted
Of the fully developed safety culture theories or models that have been presented, four (Westrum, Hudson, Fleming, and the DuPont Bradley curve) might be called dynamic models in that they define three, four, or five progressive levels of safety culture maturity in an organization. The dynamic models are therefore much more definite with respect to where an organization is on the negative-to-positive safety culture scale. Conversely, a shortcoming of dynamic models is that many organizations’ characteristics fall into more than one model level, which makes the specification of different levels seem somewhat artificial and arbitrary.
• Westrum: This model progresses from the “pathological” through the “bureaucratic” to the “generative” stages. In the pathological stage, reports are discouraged, failures covered up, and new ideas crushed. In the generative stage, reports are rewarded, failures scrutinized, and new ideas welcomed. The primary driving force is how the organization processes and shares information.
• Hudson: This model, which is a refinement of the Westrum model, also progresses from the pathological to the generative stage, but in five steps instead of three: pathological, reactive, calculative, proactive, and generative. The primary goals in Hudson on the path from pathological to generative are that the organization become increasingly better informed and experience higher levels of trust.
• Fleming: This model progresses in five levels, from emerging through managing, involving, and cooperating to continuous improvement. At the emerging level, safety is perceived in terms of technical and procedural solutions, compliance with regulations is the primary driver, safety is not seen as a key business risk, and the safety department is perceived to have primary responsibility for safety.
Other Significant Contributions
Dan Petersen and E. Scott Geller have also made contributions to the safety culture literature. While neither has produced a fully developed theory or model, their work has gone a long way toward stimulating useful discussion of safety culture concepts.
Petersen
Petersen (1996) has argued that “safety is just another management function and should be managed in the same way.” He believes that five widely held safety beliefs are wrong:
1.
Accidents are caused by unsafe acts and conditions. Petersen says that accidents are in fact caused by “a com- bination of a management system and a culture or envi- ronment that leads to human error.” 2. There are certain essential elements to a safety program. Petersen says that this is not true in all cases, and that instead it is environment and culture that control and determine which elements work and which do not. 3. Accident statistics tell us something. He says that, for most organizations, particularly smaller ones, recordable injury rates “have no statistical validity and very little meaning whatsoever.” They neither diagnose problems nor direct organizations in the direction of improvement. 4. Audits predict results in safety. He says that there is little correlation between audit reports and injury records in large companies because audits are generally as much about paperwork and regulatory compliance as they are about the effectiveness of a safety program. 5. Regulatory compliance ensures safety results. He argues that being in compliance with OSHA and having a safe work- place are totally different things; to support his hypothesis, he cites injury statistics since the institution of OSHA. Geller Geller (2008) sees people-based safety as an extension and evolution of behavior-based safety, of which he is a leading proponent. He uses the acronym ACTS (act, coach, think, see) to describe the process. “Specifically in a total safety culture, people act to protect themselves and others from unintentional injury; coach themselves and others to iden- tify barriers to safe acts and provide constructive behavior- based feedback; think in ways that activate and support safe behavior; and focus and scan strategically to see hazards and

a menu of tremendous detail and complexity in its learning, reporting, flexible, and just subcultures driven by underlying perceptions, attitudes, and behaviors. The genius of the model lies in Reason’s recognition that the essence of all that detail and complexity is how effectively an organization develops, disseminates, and uses safety information. In Reason’s words, “an informed culture is a safety culture.” A transit agency can draft an action plan based on Reason that, if properly executed, will surely move it toward a more positive safety culture.
The Guldenmund model incorporates concepts from both safety climate and safety culture. It focuses to a greater extent than the other models on the assumptions and values that underlie and drive the artifacts or visible signs of the state of safety culture in an organization. Its major contribution is its detailed exploration of the influence of assumptions and values on safety culture.
The Cooper 1999 model’s focus on how psychological, behavioral, and situational aspects interact to produce a safety culture in an organization is a major contribution. Its prediction that an intervention directed at improving any one of the three components will exert a reciprocal effect on the other two is unique, as is its insistence that safety culture can only be evaluated by using safety climate surveys to measure psychological aspects, operational audits to measure behavioral aspects, and safety management system audits to measure situational aspects.
The Cooper 2002 business process model is a reformulation of Cooper 1999 that makes it clear that how a company manages the safety inputs initially defined in Cooper 1999 determines the extent to which its employees commit themselves to safety.
The major contribution of the systems view model, of course, is to point out that significant influences are exerted by entities external to the affected organization.
Finally, there is the HRO model. It is clearly the most advanced and probably the most effective safety culture model. It also has had more practical application than most—if not all—of the others presented. It is the established universal model for industries such as nuclear power, commercial aviation, and offshore energy extraction. It is demanding, difficult, and comparatively expensive to implement. Transit agencies that exclusively operate buses or light rail may well find that the marginal cost of the HRO model outweighs the marginal benefit. However, as noted before, transit agencies operating heavy rail might consider its adoption.
Sets of Components of Safety Culture
Given the elusiveness of a universal definition of safety culture, it is instructive to look at what uniformity exists in the literature with respect to the sets of components that combine to produce safety culture.
Accidents are frequently seen as unavoidable and just part of the job. At the continuous improvement level, the prevention of injuries is a core company value; there is no feeling of complacency, despite the many years that have gone by without a serious accident; employees are alert to the fact that an accident could conceivably be just around the corner; the organization is constantly striving to be better and to find ways of improving hazard control mechanisms; and all employees share the belief that health and safety are critical aspects of their jobs.
• DuPont Bradley curve: This model has four levels: reactive, dependent, independent, and interdependent.
At the reactive level, companies handle safety issues by natural instinct, focus on compliance, delegate responsibility for all safety matters to the safety manager, and exhibit a general lack of manage- ment involvement in safety issues. At the inter dependent level, companies actively help employees conform to safety initiatives, contribute to a safety network, and have a strong sense of organizational pride in their safety endeavors. Two of the dynamic models (Westrum and Hudson) start at a much lower point on the negative-to-positive safety cul- ture scale than the Fleming and DuPont models. Westrum and Hudson are therefore more useful for organizations that are at or near the bottom of the scale. In fact, such organiza- tions might have a hard time finding a level in the Fleming and DuPont models that seems familiar. The Fleming model, however, provides more practical detail at each level because it consists of five steps spread over a shorter scale and therefore has somewhat greater utility in the aviation, rail, petrochemical, offshore oil and gas, health, steelmaking, and manufacturing industries—for which it was in fact developed. The DuPont model has the advantage over the other models presented in its firm empirical grounding. DuPont has admin- istered a safety perception survey since 1999, creating a database containing more than 632,000 responses from 96 industries and 41 countries, which rates companies on the basis of their relative cultural strength. The RCS for a company is plotted on the x-axis against the company’s 3-year average OSHA total recordable injury rate on the y-axis. The result is the DuPont Bradley curve, which establishes a strong correlation between higher cultural strength scores and lower injury rates. Six of the models (Reason, Guldenmund, Cooper 1999, Cooper 2002, systems view, and HRO) might be called static: levels of safety culture maturity are not delineated but are implicit in the degree to which different components are developed at any given point in time. Of these six, the Reason model is perhaps the most highly developed in terms of its theoretical depth and grounding in practical realities. If other than the largest rail transit agencies could look at and learn from only one safety culture model, Reason is clearly the first choice. The Reason model offers

• Industrial relations and job satisfaction, and
• Training.
The Idaho National Engineering and Environmental Laboratory (2001) noted eight core components of total safety culture:
• Management commitment to safety;
• Job satisfaction;
• Training, equipment, and physical environment;
• Organizational commitment;
• Worker involvement;
• Coworker support;
• Performance management; and
• Personal accountability.
As noted previously, the literature provides numerous definitions of safety culture, with no consensus on a single one. There is a corresponding lack of agreement on components. Numerous sets of components have also been proposed, ranging from as few as two components to as many as 19 (Flin et al., 2000). The Aviation Research Lab (Wiegmann et al., 2002), for example, identified the following as primary indicators that influence how safety culture is prioritized in the organization: (1) organizational commitment, and (2) leadership and management commitment. In general, the literature cited emphasizes organizational and leadership commitment, management involvement, management communications, employee engagement and rewards, learning, flexibility, justice, and reporting systems as key components.
In her comprehensive analysis “Safety Culture: Underspecified and Overrated?” Clarke (2000) noted: “A major theme in empirical studies has been defining the dimensions or components of safety climate/safety culture. . . . an overview of 16 empirical studies . . . involved development of the architecture of safety attitudes. There is much variation in the number of dimensions; these vary from global measures of safety climate to 16 distinct components. The content of the dimensions also varies considerably between studies. However, from the studies . . . five dominant themes seem to emerge: work task/work environment, personal involvement and responsibility, management attitudes, safety management system, and management actions.”
Williamson et al. (1997) noted that different approaches to determining the components of safety climate are partially responsible for the differences in components found in empirical studies. They identified two differing approaches: first, asking workers for their perceptions of actual workplace characteristics (Zohar, 1980), and second, asking more general questions about safety (Cox and Cox, 1991). Additionally, many studies construct their measurement tools solely by selecting items from previous questionnaires, although some studies demonstrate a systematic approach to item generation (Cox and Cox, 1991; Donald and Canter, 1994).
(Note that the terms attributes, elements, dimensions, and indicators are synonymous with the word “components” in the literature.) What the literature provides is a multitude of different sets of components. Examples include those in the following.
Zohar (1980) said that the dimensions that make up safety climate are:
• Strong management commitment to safety,
• Emphasis on safety training,
• The existence of open communication links and frequent contacts between workers and management,
• A general environment control and good housekeeping,
• A stable workforce and older workers, and
• Distinctive ways of promoting safety.
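Clarke’s point above about the variation in safety climate dimensions reflects a common survey practice: Likert-scale items are grouped into dimensions and averaged into dimension scores. The sketch below is purely illustrative; the dimension names, item assignments, and responses are hypothetical rather than drawn from any instrument cited here.

from statistics import mean

# Hypothetical mapping of survey items (1-5 Likert responses) to dimensions;
# the dimension names echo themes from the literature, but the items and
# data are invented for illustration only.
dimension_items = {
    "management commitment": ["q1", "q4", "q9"],
    "work environment": ["q2", "q7"],
    "personal involvement": ["q3", "q5", "q8"],
}

responses = [  # one dict per respondent
    {"q1": 4, "q2": 3, "q3": 5, "q4": 4, "q5": 4, "q7": 2, "q8": 5, "q9": 3},
    {"q1": 2, "q2": 4, "q3": 4, "q4": 3, "q5": 3, "q7": 3, "q8": 4, "q9": 2},
]

def dimension_scores(respondent: dict) -> dict:
    """Average the items belonging to each dimension for one respondent."""
    return {dim: mean(respondent[item] for item in items)
            for dim, items in dimension_items.items()}

# Organization-level profile: mean dimension score across respondents.
profile = {dim: mean(dimension_scores(r)[dim] for r in responses)
           for dim in dimension_items}
print(profile)

Comparing such dimension profiles for management and frontline respondents is one simple way to surface the kind of perception gaps discussed later in this appendix.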
The International Civil Aviation Organization (2005) noted that a good safety culture has the following attributes: • Senior management placing a strong emphasis on safety, • Staff having an understanding of hazards within the workplace, • Senior management’s willingness to accept criticism and an openness to opposing views, • Senior management’s fostering a climate that encourages feedback, • Emphasis on the importance of communicating relevant safety information, • The promotion of realistic and workable safety rules, and • Ensuring that staff are well educated and trained so that they understand the consequences of unsafe acts. Hudson (2001) suggested using the Reason (1997) dimen- sions of: • An informed culture, • A reporting culture, • A flexible culture, • A learning culture, and • A just culture. Fleming (2000) noted 10 elements of a safety culture matu- rity model: • Management commitment and visibility, • Communication, • Productivity versus safety, • Learning organization, • Safety resources, • Participation, • Shared perceptions about safety, • Trust,

lic recognition and monetary incentives. Research has shown that incentive programs that foster competition (employees competing to win or gain a safety incentive) may lead to a failure to report actual safety issues, leaving an organization vulnerable to catastrophe. When employees are competing for recognition, the possibility of not reporting injuries and other events increases. In a memorandum dated March 12, 2012, entitled “Employer Safety Incentive and Disincentive Policies and Practices,” OSHA noted that Section 11(c) of the Occupational Safety and Health Act prohibits an employer from discriminating against an employee because the employee reports an injury or illness (29 CFR 1904.36). Further, it noted that employers who establish programs that intentionally or unintentionally provide employees an incentive to not report injuries are probably in violation of Section 11(c) if the incentive involved is of sufficient magnitude that failure to receive it “might have dissuaded reasonable workers from” reporting injuries (U.S. Department of Labor, Occupational Safety and Health Administration, 2012).
Geller advised against the public recognition/safety contest approach and encouraged private personal recognition. His emphasis was on delivery of the message to the employee in a sincere, simple, timely manner that supports desirable behavior (Geller, 2008). According to Joshua Williams, “Effective safety leaders provide high-quality recognition to work groups as well as individuals. This involves sincere, personal praise with prosocial behaviors, as well as nonthreatening corrective feedback when job behaviors are less than ideal” (Williams, 2002). Eiff (1999) claimed that a fair evaluation and reward system is essential to promoting safety in the workplace and discouraging unsafe conduct. Wiegmann and von Thaden (2007) claimed that an organization’s safety culture depends on the extent to which management rewards employees for reinforcing safety at work (monetarily or through rewards such as plaques or public recognition) and discourages unsafe behavior. Consistency is important: “an organization’s safety culture is signified, not only by the existence of such reward systems, but also by the extent to which the reward systems are formally documented, consistently applied, and thoroughly explained and understood by all of its employees” (Wiegmann et al., 2004).
Williams (2002) suggested that leaders should consider some guidelines when rewarding employees:
• Safety rewards should focus on proactive, process-oriented behaviors and activities instead of outcome numbers (e.g., OSHA accidents as recorded).
• Rewards should be symbolic of safety achievement. Safety shirts, plaques, and certifications may hold more meaning for safety than financial incentives. Employees should help select the rewards. (Rewards like these, including award dinners and lunches, might be more appropriately called “reinforcers” than “rewards.”)
Individual Components of Safety Culture
As previously noted, there are many individual components of safety culture included in the sets described so far. The sets differ in terms of which components are included and which are excluded. What follows are brief descriptions of the most common of the components cited.
Accountability and Reward Systems The Oregon Chapter of OSHA defined accountability as it relates to safety: performance is measured against standards and evaluated, and there are natural and/or system conse- quences when standards are not met. Further, the process is outlined within an accountability system including establish- ing formal standards, providing adequate resources, evaluating employee performance, applying effective consequences, and evaluating the accountability system (Occupational Safety and Health Administration, Oregon Chapter, 2005). Accountability and blame are two separate concepts. Paul (1997) made a useful distinction between the two. He said that accountability refers to assigning responsibility for tasks in advance and requires clear communication to discuss com- mon difficulties. Inherent in making individuals accountable is recognition of the fact that everyone makes mistakes and that mistakes are opportunities for learning and growing. He said that blame is the process of shaming others and searching for something wrong in them. While the presence of a blame culture has many negative effects on learning and employee motivation, Whittingham (2004) acknowledged that there are some cases in which an individual making an error deserves to experience repercussions. Blame should be assigned when it is deserved—for instance, when there is evidence of gross negligence, misconduct, or deliberate rule violation. Sidney Dekker encourages organizations to clearly define who is responsible for drawing the line between appropriate and inappropriate consequences, and recommends impartial third-party reporting, so that employees will not feel appre- hensive about filing reports. Research has shown that a just culture is defined not by the absence of blame but by the pro- cesses in place to ensure its appropriateness (Dekker, 2007). Beyond strict accountability, the pros and cons of reward systems are extensively debated in the literature. E. Scott Geller believes that it is best to direct rewards to employees whose intention or demonstrated behavior is leading toward change (Geller, 2008). While reward systems are one way of recogniz- ing performance, Williams (2002) questions the value of pub-

employees themselves” (American Public Transportation Association, 1998).
While management is typically charged with taking the lead in initiating improvements to safety programs within the organization, Ludwig et al. stated that “employees must fully trust that they will have management support for decisions made in the interest of safety, while also recognizing that intentional breaches of safety will not be tolerated. The result is a non-punitive environment that encourages the identification, reporting, and correction of safety issues” (Ludwig et al., 2007). When employees trust that safety is the top organizational priority and are able to report safety concerns and successes based on that priority and without fear of retaliation or harassment, the safety culture of the organization improves.
“Employees (e.g., [truck] drivers) must be, over the long term, part of an organization—both developing and learning its culture. Likewise, an organization must have a culture in place to teach new members its norms, attitudes, values, and beliefs. If this culture-building process is not in place due to labor instability, then a driver may hold only the industrial subculture of the driving profession as he moves from carrier to carrier, which will undermine the safety culture of those carriers that are the driver’s past, present, and future employers” (Short et al., 2007).
Expectations
One way that organizations can clearly communicate expectations to employees is through an organizational or safety mission statement. In its “Best Practices Guide to Developing Your Safety Policy Mission Statement,” the Maine Municipal Association (2005) stated, “one of the key elements that many employers fail to include in their workplace safety program, when it is first being developed, is a safety policy mission statement. This critical document should set the tone for the whole safety program. It lets all employees know that management has set the safety and health of that organization’s workers as one of its top priorities.” One of the crucial steps in developing this statement is to include clear expectations of employees and managers to ensure that they “know what specific performance is expected of them.”
Between 2003 and 2006, Georgia Ports Authority cut incident rates in half and increased productivity by more than 300,000 person-hours as a result of implementing a safety culture initiative (Bloess, 2007). A vital part of this initiative was clearly communicating management expectations to all employees. A safety policy statement was issued, and employees were engaged in the following programs:
1. Audits, inspections, and investigations;
2. Job safety procedures;
3. Job safety analysis;
• Financial incentives may create a sense of entitlement among employees, making the incentives difficult to eliminate.
Williams also pointed out that providing incentives based on injury data may lead to underreporting and suggested that it is “best to reward positive safety performance, rather than reporting negative information” (Williams, 2002).
Development of Safety Information and Communications
Reason encouraged working toward creating an informed culture, one in which managers and employees are aware of the status of safety initiatives.
In “most important respects,” he wrote, “an informed culture is a safety culture.” He advised that “in the absence of frequent bad events, the best way to induce and then sustain a state of intelligent and respectful wariness is to gather the right kinds of data. This means cre- ating a safety information system that collects, analyzes, and disseminates information from incidents and near misses, as well as from regular proactive checks on the system’s vital signs” (Reason, 1998). Management may communicate with employees through a variety of media and use delivery mechanisms such as e-mail and intranet communication. Others might use large employee gatherings or meetings to deliver important messages about safety. Regardless of scale or media, it is essential for manage- ment to communicate with employees to maintain a positive safety culture (Short et al., 2007). “Effective communication also involves active listening, where leaders genuinely empathize with employee concerns” (Williams, 2002). Employee Involvement All parties—management and employees—must par- ticipate in the creation of a positive safety culture. “The degree to which the safety culture is positive or negative will depend entirely upon the collective amount of energy vis- ibly expended in the pursuit of excellence by organizational members” (Cooper, 2002). In the Manual for the Develop- ment of Bus Transit System Safety Program Plans, APTA stated, “The most valuable resource any transit system has is its employee workforce” (American Public Transportation Association, 1998). As the most valuable resource in transit systems, employees must be encouraged to voice their opin- ions and concerns and contribute to the creation of a positive safety culture. Furthermore, “it is essential from an employee consideration perspective and from a good management per- spective to ensure as much as possible the safety of employees. An Employee Safety Program must be designed to have the best possible input from all necessary units, including the

and in the behaviors of its members, it is more likely that a safety mind-set will be established and safe work practices will be followed” (Marais et al., 2004). According to Gill and Shergill (2004), “this commitment must be demonstrated not only through written and verbal communication from management to employees, but also by management’s actions. One of the most important things management can do to promote positive safety culture is to reinforce an ‘informed culture’ by encouraging reporting on safety and ensuring that reported information is used to improve safety rather than to punish employees—fostering a ‘just culture’” (Gill and Shergill, 2004).
If management commitment plays an important role in determining organizational and safety culture, effective internal oversight plays a key role in maintaining it. Citing Schein, Clarke noted that “the way in which senior managers instruct, reward, allocate their attention and behave under pressure” is a key determinant in organizational culture formation. As management observes employees and their attitudes toward safety in the workplace, it is imperative that positive conduct is rewarded and negative conduct is dealt with in order to maintain the integrity of the system (Clarke, 1999).
The Ladbroke Grove Rail Inquiry formulated recommendations to ensure safety’s position at the top of the list of an organization’s priorities. Four factors were listed as having a positive effect on workers’ perception that safety was important:
1. Valuing subordinates,
2. Visiting worksites frequently,
3. Workers’ participation in decision making, and
4. Effective communication (Health and Safety Executive, 2001).
It was suggested that managers spend time touring frontline locations informally, as such visits were seen as more meaningful than formal inspections. The Ladbroke Grove Rail Inquiry recommended that at least one hour a week be scheduled into the diaries of senior executives for these walkabouts, while middle-ranking managers should schedule one hour per day and first-line managers 30% of their time. Other recommendations included prominently placing safety information in workplace communication materials and developing an effective communications plan that includes staff at all levels of the organization (Health and Safety Executive, 2001).
Wiegmann et al. (2004) defined “management involvement” as “the extent to which both upper- and middle-level managers get personally involved in critical safety activities within the organization.” Managers should attend and contribute to safety seminars and training, demonstrate active oversight of critical operations, and be aware of the risks involved in everyday operations. Further, they should understand the chain of communication not only among fellow management but “up and down the organizational hierarchy.”
4. Safety training;
5. Employee safety orientation;
6. Safety communications; and
7. Safety recognition.
Georgia Ports Authority began with the overarching policy conclusion that it is important to have a safety strategy that becomes a natural way of conducting business. “World-class organizations do have a compelling safety vision that is documented, known by all, displayed, and cascades into personnel action” (Taylor, 2010).
Flexibility The International Atomic Energy Agency (2002) recog- nized that the design of internal processes “must remain flexible to allow the organization to adapt to a changing environment.” It stipulated that it is essential to maintain “open and frank dialogue” with regulatory bodies, “especially when the dialogue concerns safety objectives.” Such dialogue is “vital to enhancing safety culture” (International Atomic Energy Agency, 2002). Management Commitment and Oversight Safety must be identified by top management as a core value and a top priority. Management must clearly communi- cate expectations to employees, demonstrate a commitment to safety in their own roles, and clearly define safety as a pri- ority for all departments within the organization. “An organi- zation’s commitment to safety is . . . ultimately reflected by the efforts put forth to ensure that every aspect of its operations, such as equipment, procedures, selection, training, and work schedules, [is] routinely evaluated and, if necessary, modified to improve safety” (Wiegmann et al., 2002). Safety culture “flows from top to bottom, with senior management being essential to an organization’s safety culture, and official poli- cies and objectives regarding safety being a critical indicator of an organization’s safety culture” (Short et al., 2007). In many cases, management is cited as having the great- est influence over an organization’s safety culture. Though experts agree that all levels of an organization must partici- pate in creating a cohesive and positive safety culture, such participation in most cases begins with leadership. “Experts in the field of organizational change affirm that no substantive transformation will take place within an organization without the skill, visible commitment, and guiding example of leader- ship” (Marais et al., 2004). In setting the tone, management must also provide a walking, talking example of the culture for which they want their organization to strive. “If there is visible commitment to safety within the organization that is evident in the actions of its leaders, in the work environment,

the confluence of factors creating error-prone situations can continuously reconfigure itself” (Meacham, 1983).
Joseph Carroll (1998) defined organizational learning as taking place “through activities performed by individuals, groups, and organizations as they gather and digest information, imagine and plan new actions, and implement change.” Levitt and March (1988) presented the concept of organizational memory in their work on organizational learning. They explained that repetition and documentation of learning processes maintain consistency “despite the turnover of personnel and the passage of time. Rules, procedures, technologies, beliefs, and cultures are conserved through systems of socialization and control. Such organizational instruments not only record history but shape its future path, and the details of that path depend significantly on the processes by which the memory is maintained and consulted” (Levitt and March, 1988).
Chris Argyris and Donald Schön have also made significant contributions to organizational learning with their work on theories of action and single-loop and double-loop learning. With respect to theories of action, they made a distinction between “theories-in-use,” which are those implicit in what we actually do, and “espoused theories,” which are those on which we call to describe our actions to others (Argyris and Schön, 1974). With respect to single-loop and double-loop learning, Argyris and Schön posited that learning involves the detection and correction of error. When there is an error, most people will initially look to fix the problem within the same “governing variables”—norms, policies, and objectives. This is single-loop learning. An example often used is a thermostat that reads the actual temperature, compares it to the desired temperature, and turns the furnace on or off accordingly. Double-loop learning involves seeking solutions by questioning the original governing variables (Argyris and Schön, 1978). Argyris focused on how organizations can increase their capacity for double-loop learning, which he argued is necessary if practitioners and organizations are to make informed decisions in rapidly changing and often uncertain contexts (Argyris, 1990). Argyris and Schön created and manipulated two models that describe features of theories-in-use that either inhibit or enhance double-loop learning (Argyris and Schön, 1996).
Argyris (1976) provided two examples of double-loop learning: the first is that of a teacher who believes that she has a class of “stupid” students and who will communicate expectations such that the children behave stupidly. She confirms her theory by asking them questions and eliciting stupid answers or puts them in situations in which they behave stupidly. The theory-in-use is self-fulfilling. Similarly, a manager who believes that his subordinates are passive and dependent and require authoritarian guidance rewards dependent and submissive behavior. He tests his theory by posing challenges for employees and eliciting outcomes that exhibit the employees’ dependency. In order to break this congruency, the teacher or manager would
Organizational Commitment
Commitment is established when an organization’s board and senior management prioritize safety in decision making and ensure the allocation of adequate resources to safety.
Adequate Resources
The Oregon OSHA safety accountability process addresses the issue of resources.
Oregon OSHA’s belief is that it is imperative for management to support safe working condi- tions and positive attitudes toward safety culture by allocating time and money to “tools, equipment, machinery, materials, personal protective equipment, chemicals, workstations, air quality, noise, lighting, and other environmental conditions” (Occupational Safety and Health Administration, Oregon Chapter, 2005). In addition to physical resources, manage- ment also needs to provide psychosocial resources to foster a supportive environment in which employees can gain the knowledge and skills they need to contribute in a meaningful way to a positive safety culture. This component is sometimes subsumed under organizational commitment. Organizational Learning “One of the greatest challenges in changing a culture is to develop a learning organization that will be able to make its own continual diagnosis, and self-manage whatever transfor- mations are needed as the environment changes” (International Atomic Energy Agency, 2002). Peter Senge gained widespread popularity with The Fifth Discipline: The Art and Practice of the Learning Organization (1990). According to Senge, learning organizations are those in which “people continually expand their capacity to create the results they truly desire, where new and expansive patterns of thinking are nurtured, where col- lective aspiration is set free, and where people are continually learning to see the whole together.” He said that only those organizations able to adapt quickly and effectively will be able to excel in their field or market. Two conditions are essential: (1) the ability to design the organization to match the intended or desired outcomes, and (2) the ability to recognize when the initial direction of the organization will not lead to the desired outcome and adjust accordingly (Senge, 1990). Meacham (1983) noted, “organizations with a greater capacity for learning are those that maintain an open mind and a sense of curiosity, accepting that there is always some- thing to learn because of the uncertainties, complexities, and fluidity of their environment. These organizations are neither overly confident nor overly cautious in their pursuit of knowl- edge, since the former implies they have learned all there is to learn and the latter does not lend itself to innovation. Flexible thinking is important in understanding error causation, since

Reporting System (Reporting and Visible Action Taken on Reports)
As Reason (1997) said:
On the face of it, persuading people to file critical incident and near-miss reports is not an easy task, particularly when it may entail divulging their own errors. Human reactions to making mistakes take various forms, but frank confession does not usually come high on the list. Even when such personal issues do not arise, potential informants cannot always see the value in making reports, especially if they are skeptical about the likelihood of management acting on the information. Is it worth the extra work when no good is likely to come of it? Moreover, even when people are persuaded that writing a sufficiently detailed account is justified and that some action will be taken, there remains the overriding problem of trust. Will I get my colleagues into trouble? Will I get into trouble?
Of particular importance is having near misses or close calls formally reported to the organization. A near miss or close call is an incident that could have caused the organization to suffer serious injuries or fatalities but by chance did not. Such an incident may reveal a vulnerability that has not been adequately addressed. It may be considered a free pass to prevent a future catastrophic event. Near misses and close calls, however, are frequently not reported in organizations without a strong reporting culture.
need to engage in open-loop learning in which they deliberately disconfirm their theory-in-use of stupid students or passive and dependent subordinates. Instead they might change their theory-in-use expectations to intelligent students and active, independent, and self-starting subordinates and observe the results. This would be double-loop learning. A more practical example is the kind of divergent thinking and action that led scenario-planning teams at Royal Dutch Shell to anticipate both the demise of the Soviet Union and the resulting fall of oil prices during the mid-1980s well before the rest of the world could even imagine them. Shell saved huge amounts of money by waiting until the price drop occurred before going forward with its North Sea oil field acquisitions, reducing the capital required to develop a large North Sea oil field and staying competitive when oil prices fell. The Shell planners happened upon this strategy by asking questions such as, “What would have to be true for the Soviet Union to begin increasingly to sell its oil in Europe?” One answer was that such an event could occur if a political unknown named Mikhail Gorbachev became premier. Shell managers had noticed the rise of Gorbachev and had begun to see possibilities further down the road. This enabled them to solve the problem of how to make extracting expensive oil from the North Sea good business (Dooley, 1999).
In the work of Aase and Nybø (2002), two perspectives on organizational learning were presented: (1) the model-based perspective, and (2) the human inquiry perspective. Table A-4 illustrates characteristics of model-based and human inquiry perspectives on organizational learning.
Table A-4. Perspectives on organizational learning for high reliability (Aase and Nybø, 2002).
MODEL-BASED PERSPECTIVE | HUMAN INQUIRY PERSPECTIVE
Focus on information processing and dissemination | Focus on participation and collaboration
Syntactic information | Semantic information
“Simple” information | “Sticky” information
Lean information | Rich information
Explicit knowledge | Tacit knowledge
Closeness (individually based) | Socialization (sharing of tacit knowledge)
Internalization (tacit remains tacit) | Externalization (from tacit to explicit)
FORMAL MEANS | INFORMAL MEANS
Focus on codified knowledge | Focus on knowledge in practice
Procedure and requirement handbooks | Informal contacts/personal networks
Knowledge/experience databases | Personnel rotation
Written experience reports | Seminars/courses/meetings/forums
Formalized networks | Professional networks
Systematic experience-collecting efforts | Dialogue-based case studies
Job descriptions | Training programs
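To make the single-loop/double-loop distinction concrete, the sketch below extends the thermostat analogy used above: a single-loop step only corrects deviations from a fixed setpoint, while a double-loop step first asks whether the setpoint itself (the governing variable) should change. This is an illustrative Python sketch with invented names, not a model taken from Argyris and Schön.

class Thermostat:
    """Illustrative thermostat used as an analogy for organizational learning."""

    def __init__(self, setpoint: float):
        self.setpoint = setpoint          # the "governing variable"
        self.furnace_on = False

    def single_loop_step(self, actual_temp: float) -> None:
        # Single-loop learning: correct the error within the existing
        # governing variable (the setpoint is never questioned).
        self.furnace_on = actual_temp < self.setpoint

    def double_loop_step(self, actual_temp: float, occupants_comfortable: bool) -> None:
        # Double-loop learning: before correcting the error, ask whether
        # the governing variable itself (the setpoint) is still right.
        if not occupants_comfortable:
            self.setpoint += 1.0 if actual_temp >= self.setpoint else -1.0
        self.single_loop_step(actual_temp)


t = Thermostat(setpoint=20.0)
t.single_loop_step(actual_temp=18.5)                        # acts within the existing norm
t.double_loop_step(actual_temp=21.0, occupants_comfortable=False)  # questions and revises the norm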

stated that, from 1995 to 1997, the lost workday case rate went from 18.8 to 5.17, total recordable accidents went from 6 to 4.3, and the lost workday injury rate went from 1 to 0.68.
Another example of successful union–management collaboration leading to significant safety progress was at Alliance Energy (AE) in the Midwest. Management from AE and leadership from the International Brotherhood of Electrical Workers locals collaborated to create a campaign to work toward 100% fall protection. Union and management leadership involved employees from the start, creating teams to evaluate safety harnesses and straps. Once the equipment was selected, and after researching rollout strategies from other, similar companies, teams recruited volunteer employees who would communicate with and train workers in the use of the new equipment. A safety official at AE said, “the success of this program has a direct relationship to the company and union’s commitment to employee safety” (Severson, 2011).
Union and Management Leadership Changes
A difficult test in maintaining an organization’s safety culture is leadership transition. If the culture is strong and deeply ingrained, deterioration is less likely. Although extensive research on this topic as it relates to transit does not exist, there are examples in non-peer industries that demonstrate successful transitions with retention of established core values and culture.
Companies such as Southwest Airlines demonstrate that when a culture is ingrained in the people and processes of the organization, changes in leadership can occur with minimal long-term effects on that culture. Southwest Airlines incorporates “relational coordination” in the systems approach, which enables employees to more effectively coordinate their work with one another and encourages shared goals, shared knowledge, and mutual respect. Because the culture is accepted and institutionalized by employees, new leaders in management and in the union have embraced it. As a result, the organization continues to benefit, being the only airline in the industry to achieve profits for 37 consecutive years and being named one of the best places in the country to work year after year (Gittell, 2005). Organizations like Southwest, however, have to guard against complacency, as Southwest’s recent fuselage rupture problems have illustrated. (Southwest Airlines was fined for continuing to fly dozens of Boeing 737s that had not been inspected for fuselage cracks; FAA, 2008).
At Hamilton Standard, the management team changed following a corporate merger (to Hamilton Sundstrand). Throughout that transition, safety teams maintained the integrity of the system, which produced positive results for the organization. For years after the leadership change, the company continued to improve its safety performance and maintain adopted safety practices (Culture Change Consultants, 2006).
Safety Policies, Procedures, and Rules
Safety policies, procedures, and rules must be practical, realistic, and appropriate to the environment in which they are applied. They must reside in the minds of a transit agency’s employees instead of just in books sitting on shelves. There should be no bureaucratic or unnecessary rules (International Civil Aviation Organization, 2005).
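The reporting-system discussion above emphasizes that near misses should be captured in enough detail to be analyzed and acted on. The following sketch shows one minimal way such a record could be structured; the field names and example values are assumptions for illustration, not taken from any cited reporting system.

from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class NearMissReport:
    """Illustrative near-miss/close-call record; all fields are hypothetical."""
    reported_on: date
    location: str
    description: str
    contributing_factors: List[str] = field(default_factory=list)
    potential_severity: str = "unknown"   # e.g., "minor", "serious", "fatal"
    corrective_action: str = ""           # visible action closes the feedback loop
    anonymous: bool = True                # protects trust in the reporting culture

report = NearMissReport(
    reported_on=date(2014, 6, 3),
    location="Bus yard, lane 4",
    description="Operator stepped back from a reversing bus; spotter not in place.",
    contributing_factors=["no spotter assigned", "poor sight lines"],
    potential_severity="serious",
)
report.corrective_action = "Spotter assignment added to yard movement procedure."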
Training Training may be seen as the means by which the sum total of what an organization has learned from the date it was founded to the present day may be conveyed to succeeding generations of employees. Too often, however, training requirements are not fully understood. Also, it is not uncommon to have train- ing budgets cut when financial pressures are severe simply because the results of such cuts will not become evident until a few years after the cuts are made. In many transit agencies, on-the-job training is the primary method of training, with little thought or discipline injected into the process. There are no job analyses defining required competency levels and the necessary path to them. Adequate training is essential to safety and safety culture in an organization (International Civil Avi- ation Organization, 2005). Trust The Institute of Nuclear Power Operations recognizes that in order to maintain a positive safety culture, a “high level of trust is established in the organization, fostered, in part, through timely and accurate communication. There is a free flow of information in which issues are raised and addressed. Employees are informed of steps taken in response to their concerns” (McConnell, 2010). Union–Management Relations In organizations in which there is union representation, the union’s involvement in the safety processes—both initially and continually—is “an absolute critical success factor” (Galloway, 2010). One example of a union/management success story is Hamilton Standard’s aerospace manufacturing division’s joint union/management task force, created in 1995 to achieve ambi- tious safety goals. The task force conducted a series of joint meetings and perception surveys and undertook to build trust between the two parties. As the initiative proceeded, the organi- zation was divided into safety teams that became the heart and soul of the safety process. Culture Change Consultants (2006)

Safety audits are a form of direct observation and can provide the basis for improving safety performance. Blair and O’Toole (2010) noted that several large organizations “report anecdotally that . . . audit results correlate strongly with reductions in injury rates.” They recommended Manuele’s risk score formula as a suitable tool to estimate risk levels and establish measurement priorities. The three-dimensional matrix assesses risk on the basis of probability, frequency of exposure, and severity of accidents or incidents. “Measuring safety performance is about developing the safety management systems and the related safety culture” (Blair and O’Toole, 2010). Petersen’s caveat (that there is little correlation between audit reports and injury records in large companies because audits are generally as much about paperwork and regulatory compliance as they are about the effectiveness of a safety program) applies (Petersen, 1996).
Surveys
There are numerous benefits to safety surveys; Blair and O’Toole (2010) stated that “surveys provide a snapshot of an organization’s culture and can be a useful tool in developing measures to drive culture.” They argued that well-designed surveys provide benefits to an organization. They are:
• Practical. They address the primary safety issues. Even if the issue is one of perception, perceptions are real to those who hold them and must be addressed.
• Predictive. They fulfill the definition of what a leading indicator is supposed to do.
• Prescriptive. The results generally indicate clearly what needs to be addressed.
• Proactive. They are preferable to accident investigation, which is a reactive measure (Blair and Spurlock as cited in Blair and O’Toole, 2010).
Safety culture assessments are considered tools to detect management blind spots in safety culture. Research has shown that views of management and frontline staff vary. The differences can be instructive. Questionnaires can be designed to explore a specific dimension of safety culture. Other advantages of safety culture surveys are their ability to reach large numbers of employees at relatively low cost, the retention of anonymity by responders, the identification of problems and issues, and the ability to track progress over time using successive surveys.
Interviews and Focus Groups
Interviews also can play a significant role in the assessment of safety culture. They can be used to develop information directly on the state of safety culture in an organization.
In order to sustain a positive safety culture, a company needs to focus collectively on vision, policy, and individual and organizational roles. These roles are essential to the systems being executed and implemented. “A strong feature of positive safety culture over time lies in the integration of safety culture into the business. This promotes the independence of culture from individuals or personality. Culture is then supported by system activities owned and shared by all employees and develops into something larger than the sum of the individual culture” (Taylor, 2010). It is this process that Taylor believes sustains safety culture regardless of personnel or structural changes at any level within the organization.
Component Confusion
Components are also referred to as attributes, dimensions, elements, and indicators.
Moreover, considerable overlap exists—for example, organizational commitment is some- times understood to be made up of management commit- ment, company policies and procedures, and the provision of adequate resources. In other contexts, management com- mitment and organizational commitment are considered to be separate and equal components. Union–management rela- tions and employee involvement also have obvious overlap, as do the concepts of recognition and reward. Recognition and reward are categorized variously as organizational learn- ing and accountability. With reference to disciplinary systems, accountability overlaps with just culture. Training is some- times considered to be part of organizational learning and sometimes stands on its own (Clarke, 2000). Assessing Safety Culture Numerous methods are available for assessing an organi- zation’s safety culture. The most common are direct obser- vation or audits, surveys, interviews and focus groups, and performance indicator tracking. Direct Observation and Audits Direct observations of workplace behavior may provide objective information regarding the effectiveness of train- ing, management, accountability, and behavior expectations. Direct observation of employees at work can also provide valuable information on involvement, attitude, and willing- ness to confront perceived unsafe behavior. However, obser- vations cannot be quantified and used for statistical purposes, and there is always the risk of overgeneralization from too few observations. (EFCOG/DOE, 2009). Conducting sufficient observations to produce an accurate assessment of the state of safety culture will be time-consuming and expensive.
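Audit findings of the kind just described are often prioritized with a risk score such as the probability, exposure, and severity matrix that Blair and O’Toole attribute to Manuele. The sketch below assumes a simple multiplicative combination and invented ordinal scales; it is illustrative only and does not reproduce Manuele’s published values.

# Illustrative three-factor risk scoring: probability x frequency of exposure x severity.
# The rating scales and the multiplicative combination are assumptions for this sketch.
PROBABILITY = {"rare": 1, "possible": 3, "likely": 6, "almost certain": 10}
EXPOSURE = {"annual": 1, "monthly": 3, "weekly": 6, "daily": 10}
SEVERITY = {"first aid": 1, "lost time": 5, "disabling": 15, "fatality": 40}

def risk_score(probability: str, exposure: str, severity: str) -> int:
    """Combine the three ordinal ratings into a single priority score."""
    return PROBABILITY[probability] * EXPOSURE[exposure] * SEVERITY[severity]

# Rank hypothetical audit findings so the highest-risk items are addressed first.
findings = [
    ("unguarded pit edge in maintenance bay", "possible", "daily", "disabling"),
    ("cluttered stairwell in admin building", "likely", "weekly", "lost time"),
]
for name, p, e, s in sorted(findings, key=lambda f: -risk_score(*f[1:])):
    print(f"{risk_score(p, e, s):4d}  {name}")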

our safety results as we work toward our goal of zero injuries and safety incidents. One of our most important measurements is tracking off-the-job injuries, which helps determine how well we are building a robust safety culture that is 24/7, not just on the job. We believe the true challenge is to go beyond the standard regulatory requirements and track the leading indicators that determine the ultimate success of our safety program” (Froetscher, 2011).
There are a number of accepted means of measuring and assessing progress in safety management systems, both qualitative and quantitative. Many sources cite employee surveys and questionnaires and face-to-face interviews as ways to capture information. Wiegmann et al. (2004) suggested that combining qualitative and quantitative methods will yield a comprehensive understanding of safety culture, but they went on to say that “quantitative approaches, especially surveys of individuals’ responses, are often more practical in terms of time and cost effectiveness.”
While surveys and interviews are widely used, specific metrics are being developed in some industries to measure safety in a more quantitative way. In the aviation industry, for example, the Volpe Center is working with the FAA to create a runway incursion severity calculator that will categorize the outcome severity of runway incursions (Volpe Center Highlights, 2009). In the chemical industry, the Center for Chemical Process Safety recommends that “all companies and trade associations collect and report the three lagging metrics: Process Safety Incidents Count, Process Safety Incident Rate, and Process Safety Severity Rate” (Center for Chemical Process Safety, 2011).
“While many safety executives understand trailing measures, such as trend analysis, control charts and evaluating the effectiveness of safety initiatives, these measures oftentimes do not provide feedback for continuous safety process improvement, nor do they contribute to the development of safety culture. Positive safety culture remains unaffected when the above measures are the primary focus for metrics in an organization” (Blair and O’Toole, 2010). The practice of developing leading measures and concurrent measures using qualitative metrics for system and employee behaviors was noted by Toellner (2001), who studied the oil industry. Five specific measures were scored for quality and quantity: safety meetings, housekeeping, barricade performance, job safety analysis, and safety walks. Employee engagement is key to any safety management process. Blair and O’Toole provided an example of a large brewery where employees use individual score-carding activities such as:
• Observation cards,
• Job safety analysis (training and auditing),
• Safety meetings and safety audits,
• Maintenance walkthroughs, and
• Pre-shift stretching.
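As a simple illustration of pairing the lagging and leading measures discussed above, the sketch below computes the standard OSHA total recordable incident rate (recordable cases × 200,000 hours / hours worked) alongside a hypothetical leading indicator, the completion rate of scheduled safety walks. The figures and function names are invented for the example.

def total_recordable_incident_rate(recordable_cases: int, hours_worked: float) -> float:
    """Lagging indicator: OSHA TRIR, normalized to 100 full-time workers (200,000 hours)."""
    return recordable_cases * 200_000 / hours_worked

def safety_walk_completion(completed: int, scheduled: int) -> float:
    """Leading indicator (illustrative): share of scheduled safety walks actually performed."""
    return completed / scheduled

# Hypothetical quarterly figures for a small transit operating division.
trir = total_recordable_incident_rate(recordable_cases=4, hours_worked=260_000)
walks = safety_walk_completion(completed=22, scheduled=26)

print(f"TRIR (lagging): {trir:.2f}")                     # 3.08
print(f"Safety walks completed (leading): {walks:.0%}")  # 85%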
It is generally prohibitive to gather a large sample; as with direct observation, interviews grow to be time- consuming and expensive (EFCOG/DOE, 2009). Focus groups are more efficient but less flexible than indi- vidual interviews. The efficiency derives from the fact that one interviewer can elicit the views of multiple employees in a single session. Flexibility is somewhat reduced because gen- erally the interviewer uses a set of prepared questions to pro- vide basic organization and direction. A significant downside to focus groups is that, without a skilled facilitator, a minority of participants can dominate a discussion and provide input that might differ significantly from the results obtained from individual interviews with all members of the group (Cox and Cheyne, 2000). Key Performance Indicators While management practices can promote positive safety practices, safety indicators can also help leaders determine other organizational goals and objectives. For example, the “General Manager of the Bahrain National Gas Co. uses safety performance indicators to develop corporate objectives, ensur- ing financial resources and manpower are available to meet or exceed safety standards” (Froetscher, 2011). Many aspects of safety culture are not visible, so assessment is not a simple task (Ahmed, 2011). Metrics must be direc- tional, hold individuals accountable, relate to injury reduction, and be highly motivational (Blair and O’Toole, 2010). The Blair and O’Toole research shows that lagging indicators alone do not address or contribute to improvements in safety culture. (Lagging indicators are measures of past performance; lead- ing indicators indicate future performance.) Metrics used to assess safety and safety culture should include a combination of leading and lagging measures; lagging or trailing measures alone are not effective indicators. As previously noted, Blair and O’Toole (2010) maintain that “leading indicators serve as a catalyst for change, meaningful metrics are motivational for both employees and management, and leading indicators ulti- mately drive safety performance” (Blair and Spurlock, 2008). In an interview with Safety + Health, Harold Yoh III, listed among the magazine’s “2011 CEOs who get it,” said that his company, which does engineering, construction, and main- tenance of nuclear plants, “religiously measures and reports
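To make the distinction between lagging and leading measures concrete, the short Python sketch below illustrates the arithmetic involved. It is not drawn from this report or the cited sources; the function names, the safety-walk metric, and the example figures are assumptions chosen for illustration. The lagging rate follows the common OSHA-style normalization of incidents per 200,000 employee-hours (roughly 100 full-time workers for one year).

# Illustrative sketch only; not taken from TCRP Report 174 or the cited sources.
# It shows the arithmetic behind one common lagging metric (an OSHA-style
# recordable rate normalized to 200,000 hours) and one hypothetical leading
# metric (completion of planned safety walks).

def recordable_incident_rate(recordable_cases: int, hours_worked: float) -> float:
    """Lagging metric: incidents per 200,000 employee-hours (OSHA-style normalization)."""
    return recordable_cases * 200_000 / hours_worked

def safety_walk_completion(walks_completed: int, walks_planned: int) -> float:
    """Hypothetical leading metric: share of planned safety walks actually performed."""
    return walks_completed / walks_planned

if __name__ == "__main__":
    # Example figures are invented for illustration.
    print(round(recordable_incident_rate(recordable_cases=6, hours_worked=1_200_000), 2))  # 1.0
    print(round(safety_walk_completion(walks_completed=45, walks_planned=50), 2))          # 0.9

As the passage above notes, a rate of this kind only looks backward; tracking something like the completion share of planned safety walks is one simple way an agency might add a forward-looking measure alongside it.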

Safety culture assessment is a critical component of safety culture improvement. Measures should be well thought out and relate to industry standards. Blair and O’Toole (2010) offer six critical and effective guidelines for implementing safety measures:

1. Customize measures specifically for individual sites,
2. Use risk assessment to prioritize safety measures by severity,
3. Simplify by limiting the total number of safety measures used at any time,
4. Engage employees meaningfully in the development of safety measures and related safety goals,
5. Use a thoughtfully chosen mix of performance and outcome measures, and
6. Design measures to specifically influence the safety culture.

Major Conclusions

The review of the relevant safety culture literature led to the following conclusions:

Definition

The literature contains scores of different definitions of safety culture. Of those cited in this review, Reason endorsed two in lieu of formulating a definition of his own. The first is the Uttal definition: “safety culture is shared values (what is important) and beliefs (how things work) that interact with an organization’s people, structures, and control systems to produce behavioral norms (the way we do things around here).” The second is the UK Health and Safety Commission definition, which says safety culture is “the product of individual and group values, attitudes, competencies, and patterns of behaviour that determine the commitment to, and the style and efficiency of, an organization’s health and safety programs. Organizations with a positive safety culture are characterized by communications founded on mutual trust, by shared perceptions of the importance of safety, and by confidence in the efficacy of preventive measures.”

The Uttal definition is echoed in current federal government definitions.

• The Department of Energy says a safety culture is an “organization’s values and behaviors, modeled by its leaders, and internalized by its members, that serve to make safe performance of work the overriding priority to protect the public, workers, and the environment” (EFCOG/DOE, 2009).
• TRACS defines safety culture as “the product of individual and group values, attitudes, perceptions, competencies, and patterns of behavior that can determine the commitment to and the style and proficiency of an organization’s safety management system” (Transit Rail Advisory Committee for Safety, 2011).
• The FRA defines organizational culture as “shared values, norms, and perceptions that are expressed as common expectations, assumptions, and views of rationality within an organization and play a critical role in safety.” It notes that organizations with a positive safety culture are “characterized by communications founded on mutual trust, by shared perceptions of the importance of safety, and by confidence in the efficacy of preventive measures” (U.S. Federal Register, 2012).

Theories and Models

Safety culture is complex and multidimensional. Although there are numerous theoretical models of safety culture in the literature, no consensus has been reached on the quality or effectiveness of any one model. Of the models discussed, the most elaborate and sophisticated is the Reason model, which benefits from Reason’s practical experience. In terms of guidance and explanatory power, this model is probably the best fit for bus and light-rail transit agencies.

The DuPont model is also impressive because of the extensive amount of data that was employed to verify the inverse relationship between the strength of safety culture and the OSHA recordable injury rate. It is of interest to note that the Reason and DuPont models differ in that Reason is primarily concerned with frequently catastrophic “organizational accidents,” whereas the DuPont model is directed at “individual accidents” as reflected by the OSHA recordable injury rate. The research team has reconciled this seeming difference by concluding that most of the measures employed to protect against organizational accidents would also contribute to the reduction of an excessive number of individual accidents.

In terms of the use of a model to obtain a “quick-and-dirty” evaluation of the state of safety culture in an organization, the Parker matrix, which is based on the original Hudson model, is an interesting, if not exacting, approach and probably has value for preliminary assessments.

The HRO model places a special premium on positive safety culture and possesses special attributes that help identify potentially dangerous safety behaviors. HROs are recognized as having extraordinary technical competence, flexible decision-making processes, sustained high technical performance systems, and processes that reward the discovery and open reporting of errors or potential errors. These organizations value safety equally with production demands and maintain an organizational commitment to sustaining institutional culture. They place a substantial value on organizational learning, expertise, and the promotion of a questioning environment in which the revelation of potential safety issues can be recognized and appreciated. HROs tend to be preoccupied with failure and share a collective mindfulness that leads to learning from mistakes and the continual analysis of information gained from near misses and other leading indicators that have proven to be predictive of potential safety issues. They believe that complacency leads to vulnerability and puts the organization at risk.

Given that the potential for disaster, the potential loss of a critical societal function, and the extent of reliance on advanced heavy-rail technology at the largest U.S. transit agencies are similar to circumstances found at HROs, adoption of the HRO model by the large transit agencies that operate heavy rail might be considered. Several presenters suggested this idea at the NTSB February 25, 2010, hearing on the WMATA 2009 accident (Hartley, 2010; Roberts, 2010).

Safety Culture Versus Safety Climate

For purposes of this project, the research team has treated safety climate as a snapshot in time of an organization’s safety culture (Krause, 2005). This view is consistent with that of Wiegmann et al. (2002), who conclude that safety climate is “a temporal indicator of a more enduring safety culture.”

Sets of Components of Safety Culture

As is the case with definitions of safety culture and safety climate, there is no convergence in the literature on a single set of components of safety culture. The number of components in a set and the identity of those components vary significantly from one example to another. Previous attempts to establish a universal set (e.g., Clarke, 2000) have not been successful. Based on the research, the most common threads are:

• Maintaining safety as a core value;
• Requiring strong leadership and management commitment;
• Enforcing high performance standards;
• Providing adequate resources for safety;
• Empowering individuals at each organizational level to be responsible for safety;
• Involving unions continuously in the safety process (where employees are unionized);
• Emphasizing learning, education, and training;
• Ensuring open, honest, and effective communication within the organization and encouraging a questioning environment;
• Maintaining an effective reporting system, with visible action taken on issues reported, and ensuring timely responses to safety concerns and safety issues;
• Using leading and lagging safety indicators to gauge the effectiveness of safety programs on employee behavior;
• Demonstrating leadership behaviors that encourage mutual trust between management and employees;
• Monitoring performance continuously; and
• Treating employees fairly.

The lack of a common set of components could be interpreted to indicate that (a) safety culture is a multifaceted phenomenon consisting of scores of contributing components, (b) the prominence of any given component in a specific safety culture is dictated by the dominant circumstances of the environment in which that culture exists, and (c) the safety culture phenomenon accordingly presents many different faces, thereby making promulgation of a universal definition and description difficult.

Assessment

There are many ways to assess the state of safety culture in an organization. Direct observation over a long period of time by a team of individuals who are safety culture experts is certainly an excellent method. However, this approach is time-consuming and expensive. The experts, for example, have to remain on-site long enough for their presence on the property to be taken for granted and for behavior to revert to the norms that prevailed when agency personnel were unobserved. Also, unless performed by the same group of experts at successive properties, direct observation does not lend itself to accurate agency comparisons. The standard safety audit usually does not last long enough to produce the equivalent of unobserved behavior. Some mix of interviews, focus groups, and surveys is likely to be more economical in terms of time and expense. The least expensive but also least effective method would be the use of leading performance indicators.
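As a rough illustration of how survey responses are commonly rolled up for the kind of mixed assessment described above, the sketch below averages Likert-scale items into dimension scores and an overall climate score. It is illustrative only; the dimension names, items, and the 1-to-5 scale are assumptions rather than an instrument drawn from the literature reviewed.

# Illustrative sketch only; dimensions, items, and the 1-5 scale are assumed,
# not taken from any instrument cited in this review.
from statistics import mean

# Each dimension maps to the Likert responses (1 = strongly disagree,
# 5 = strongly agree) given by one respondent to the items in that dimension.
responses = {
    "management commitment": [4, 5, 4],
    "reporting and feedback": [3, 4, 4],
    "training and resources": [5, 4, 4],
}

dimension_scores = {dim: mean(items) for dim, items in responses.items()}
overall_score = mean(dimension_scores.values())

for dim, score in dimension_scores.items():
    print(f"{dim}: {score:.2f}")
print(f"overall safety climate score: {overall_score:.2f}")

In practice, scores of this kind are typically averaged across many respondents and tracked over time, consistent with the view of safety climate as a snapshot of the underlying culture.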
