Technology and Tools in the Diagnostic Process
A wide variety of technologies and tools are involved in the diagnostic process (see Figure 5-1), but the primary focus of the chapter is on health information technology (health IT) tools. Health IT covers a broad range of technologies used in health care, including electronic health records (EHRs), clinical decision support, patient engagement tools, computerized provider order entry, laboratory and medical imaging information systems, health information exchanges, and medical devices. Health IT plays key roles in various aspects of the diagnostic process: capturing information about a patient that informs the diagnostic process, including the clinical history and interview, physical exam, and diagnostic testing results; shaping a clinician’s workflow and decision making in the diagnostic process; and facilitating information exchange.
The committee concluded that health IT has the potential to impact the diagnostic process in both positive and negative ways. When health IT tools support diagnostic team members and tasks in the diagnostic process and reflect human-centered design principles, health IT has the potential to improve diagnosis and reduce diagnostic errors. Despite this potential, however, there have been few demonstrations that health IT actually improves diagnosis in clinical practice (El-Kareh et al., 2013). Indeed, many experts are concerned that current health IT tools are not effectively facilitating the diagnostic process and may be contributing to diagnostic errors (Basch, 2014; Berenson et al., 2011; El-Kareh et al., 2013; Kuhn et al., 2015; Ober, 2015; ONC, 2014b; Verghese, 2008). This chapter discusses the design of health IT for the diagnostic process, the interoperability of patient health information, patient safety issues related to the use of health IT, and the potential for health IT to aid in the measurement of diagnostic errors. The committee makes one recommendation aimed at ensuring that health IT tools and technologies facilitate timely and accurate diagnoses. In addition, this chapter briefly reviews the use of mobile health (mHealth) and telemedicine in the diagnostic process. Other technologies, such as diagnostic testing, are discussed in Chapter 2.
FIGURE 5-1 Technologies and tools are an important element of the work system in which the diagnostic process occurs.
This content builds on earlier Institute of Medicine (IOM) work, including the report Health IT and Patient Safety: Building Safer Systems for Better Care (IOM, 2012a). That report emphasized that health IT functions within the context of a larger sociotechnical system involving the technology itself, the people who work within the system, the workflow (or the actions and procedures clinicians are expected to perform as they deliver care), the organization using the technology, and the external environment. Box 5-1 includes the recommendations from the 2012 report; this chapter’s text references these recommendations where relevant.
DESIGN OF HEALTH IT FOR THE DIAGNOSTIC PROCESS
The design of health IT has the potential to support the diagnostic process. In particular, by supporting the individuals involved in the diagnostic process and the tasks they perform, health IT may improve diagnostic performance and reduce the potential for diagnostic errors. The increasing complexity of health care has required health care professionals to know and apply vast amounts of information, and these demands are outstripping human cognitive capacity and contributing to challenges in diagnosis (see Chapter 2). El-Kareh et al. (2013, p. ii40) asserted that “[u]naided clinicians often make diagnostic errors” because they are “[v]ulnerable to fallible human memory, variable disease presentation, clinical disease processes plagued by communication lapses, and a series of well-documented ‘heuristics,’ biases and disease-specific pitfalls.” It is widely recognized that health IT has the potential to help health care professionals address or mitigate these human limitations.
Although health IT interventions are not appropriate for every quality-of-care challenge, there are opportunities to improve diagnosis through appropriate use of health IT. For instance, a well-designed health IT system can facilitate timely access to information; communication among health care professionals, patients, and their families; clinical reasoning and decision making; and feedback and follow-up in the diagnostic process (El-Kareh et al., 2013; Schiff and Bates, 2010). Table 5-1 describes a number of opportunities to reduce diagnostic errors through the use of health IT. The range of these suggestions is broad; some are pragmatic opportunities for intervention and others are more visionary, given the limitations of today’s health IT tools.
A number of researchers have identified patient safety risks that may result from poorly designed health IT tools (Harrington et al., 2011; IOM, 2012a; Meeks et al., 2014; Sittig and Singh, 2012; Walker et al., 2008). In recognition of these risks, the 2012 IOM report described the key attributes of safe health IT, including (IOM, 2012a, p. 78):
- Easy retrieval of accurate, timely, and reliable native and imported data;
- A system the user wants to interact with;
- Simple and intuitive data displays;
- Easy navigation;
- Evidence at the point of care to aid decision making;
- Enhancement of workflow, automation of mundane tasks, and streamlining of work, never increasing physical or cognitive workload;
Recommendations from Health IT and Patient Safety: Building Safer Systems for Better Care
Recommendation 1: The Secretary of Health and Human Services (HHS) should publish an action and surveillance plan within 12 months that includes a schedule for working with the private sector to assess the impact of health IT [health information technology] on patient safety and minimize the risks of its implementation and use. The plan should specify:
- The Agency for Healthcare Research and Quality (AHRQ) and the National Library of Medicine (NLM) should expand their funding of research, training, and education of safe practices as appropriate, including measures specifically related to the design, implementation, usability, and safe use of health IT by all users, including patients.
- The Office of the National Coordinator (ONC) for Health Information Technology should expand its funding of processes that promote safety that should be followed in the development of health IT products, including standardized testing procedures to be used by manufacturers and health care organizations to assess the safety of health IT products.
- The ONC and AHRQ should work with health IT vendors and health care organizations to promote post-deployment safety testing of EHRs [electronic health records] for high-prevalence, high-impact EHR-related patient safety risks.
- Health care accrediting organizations should adopt criteria relating to EHR safety.
- AHRQ should fund the development of new methods for measuring the impact of health IT on safety using data from EHRs.
Recommendation 2: The Secretary of HHS should ensure insofar as possible that health IT vendors support the free exchange of information about health IT experiences and issues and not prohibit sharing of such information, including details (e.g., screenshots) relating to patient safety.
Recommendation 3: The ONC should work with the private and public sectors to make comparative user experiences across vendors publicly available.
Recommendation 4: The Secretary of HHS should fund a new Health IT Safety Council to evaluate criteria for assessing and monitoring the safe use of health IT and the use of health IT to enhance safety. This council should operate within an existing voluntary consensus standards organization.
Recommendation 5: All health IT vendors should be required to publicly register and list their products with the ONC, initially beginning with EHRs certified for the meaningful use program.
Recommendation 6: The Secretary of HHS should specify the quality and risk management process requirements that health IT vendors must adopt, with a particular focus on human factors, safety culture, and usability.
Recommendation 7: The Secretary of HHS should establish a mechanism for both vendors and users to report health IT–related deaths, serious injuries, or unsafe conditions.
- Reporting of health IT–related adverse events should be mandatory for vendors.
- Reporting of health IT–related adverse events by users should be voluntary, confidential, and nonpunitive.
- Efforts to encourage reporting should be developed, such as removing the perceptual, cultural, contractual, legal, and logistical barriers to reporting.
Recommendation 8: The Secretary of HHS should recommend that Congress establish an independent federal entity for investigating patient safety deaths, serious injuries, or potentially unsafe conditions associated with health IT. This entity should also monitor and analyze data and publicly report results of these activities.
Recommendation 9a: The Secretary of HHS should monitor and publicly report on the progress of health IT safety annually beginning in 2012. If progress toward safety and reliability is not sufficient as determined by the Secretary, the Secretary should direct the Food and Drug Administration (FDA) to exercise all available authorities to regulate EHRs, health information exchanges, and personal health records.
Recommendation 9b: The Secretary should immediately direct FDA to begin developing the necessary framework for regulation. Such a framework should be in place if and when the Secretary decides the state of health IT safety requires FDA regulation as stipulated in Recommendation 9a above.
Recommendation 10: HHS, in collaboration with other research groups, should support cross-disciplinary research toward the use of health IT as part of a learning health care system. Products of this research should be used to inform the design, testing, and use of health IT. Specific areas of research include
- User-centered design and human factors applied to health IT,
- Safe implementation and use of health IT by all users,
- Sociotechnical systems associated with health IT, and
- Impact of policy decisions on health IT use in clinical practice.
SOURCE: IOM, 2012a.
TABLE 5-1 Opportunities to Reduce Diagnostic Error Through Electronic Clinical Documentation
| Role for Electronic Documentation | Goals and Features of Redesigned Systems |
| --- | --- |
| Providing access to information | Ensure ease, speed, and selectivity of information searches; aid cognition through aggregation, trending, contextual relevance, and minimizing of superfluous data. |
| Recording and sharing assessments | Provide a space for recording thoughtful, succinct assessments, differential diagnoses, contingencies, and unanswered questions; facilitate sharing and review of assessments by both patient and other clinicians. |
| Maintaining dynamic patient history | Carry forward information for recall, avoiding repetitive patient querying and recording while minimizing copying and pasting. |
| Maintaining problem lists | Ensure that problem lists are integrated into workflow to allow for continuous updating. |
| Tracking medications | Record medications that the patient is actually taking, patient responses to medications, and adverse effects in order to avert misdiagnoses and ensure timely recognition of medication problems. |
| Tracking tests | Integrate management of diagnostic test results into note workflow to facilitate review, assessment, and responsive action as well as documentation of these steps. |
| Ensuring coordination and continuity | Aggregate and integrate data from all care episodes and fragmented encounters to permit thoughtful synthesis. |
| Enabling follow-up | Facilitate patient education about potential red-flag symptoms; track follow-up. |
| Providing feedback | Automatically provide feedback to clinicians upstream, facilitating learning from outcomes of diagnostic decisions. |
| Providing prompts | Provide checklists to minimize reliance on memory and directed questioning to aid in diagnostic thoroughness and problem solving. |
| Providing placeholder for resumption of work | Delineate clearly in the record where clinician should resume work after interruption, preventing lapses in data collection and thought process. |
| Calculating Bayesian probabilities | Embed calculator into notes to reduce errors and minimize biases in subjective estimation of diagnostic probabilities. |
| Providing access to information sources | Provide instant access to knowledge resources through context-specific “infobuttons” triggered by keywords in notes that link user to relevant textbooks and guidelines. |
| Offering second opinion or consultation | Integrate immediate online or telephone access to consultants to answer questions related to referral triage, testing strategies, or definitive diagnostic assessments. |
| Increasing efficiency | More thoughtful design, workflow integration, and distribution of documentation burden could speed up charting, freeing time for communication and cognition. |
SOURCE: Reprinted with permission from G. Schiff and D. W. Bates, Can electronic clinical documentation help prevent diagnostic errors? New England Journal of Medicine 362(12):1066–1069, 2010. Copyright 2010 Massachusetts Medical Society.
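One opportunity listed in Table 5-1, embedding a Bayesian probability calculator into clinical notes, can be made concrete with a short sketch. The function below uses the standard odds form of Bayes' theorem (posttest odds = pretest odds × likelihood ratio); the example numbers are invented for illustration, not drawn from clinical data:

```python
def posttest_probability(pretest_prob: float, likelihood_ratio: float) -> float:
    """Convert a pretest probability of disease into a posttest probability.

    Uses the odds form of Bayes' theorem:
        posttest odds = pretest odds * likelihood ratio
    """
    pretest_odds = pretest_prob / (1.0 - pretest_prob)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1.0 + posttest_odds)

# Invented example: a 10% pretest probability combined with a positive test
# whose likelihood ratio is 9 yields a posttest probability of 50%.
p = posttest_probability(0.10, 9.0)
print(round(p, 2))  # 0.5
```

Embedding such a calculator at the point of documentation, as the table suggests, would replace subjective estimation of diagnostic probabilities with an explicit, auditable computation.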
- Easy transfer of information to and from other organizations and clinicians; and
- No unanticipated downtime.
If health IT products do not have these features, it may be difficult for users to effectively interact with the technology, contributing to workarounds (alternate pathways to achieve a particular functionality) or unsafe uses of the technology, as well as errors associated with the correct use of the technology. Although many of these risks apply to health care broadly, the committee concluded that health IT risks are particularly concerning for the diagnostic process. Poor design, poor implementation, and poor use of health IT can impede the diagnostic process at various junctures throughout the process. For instance, a confusing or cluttered user interface could contribute to errors in information integration and interpretation that result in diagnostic errors. Poor integration of health IT tools into clinical workflow may create cognitive burdens for clinicians that take time away from clinical reasoning activities.
To ensure that health IT supports patients and health care professionals in the diagnostic process, collaboration among the federal government, the health IT industry, and users is warranted. The 2012 IOM report concluded that the safety of health IT is a shared responsibility and described the ways in which health IT vendors, users, governmental agencies, health care organizations, and others can collaborate to improve the safety of health IT. Users include a wide variety of clinicians (such as treating health care professionals, clinicians with diagnostic testing expertise, pharmacists, and others), as well as patients and their families (HIMSS, 2014). For example, by working with users, health IT vendors can improve safety during all phases of the design of their products, from requirements gathering to product testing. In addition, the report called on the Office of the National Coordinator for Health Information Technology (ONC) to expand funding for processes that promote safety in the development of health IT products (IOM, 2012a). In line with these recommendations, the committee recommends that health IT vendors and ONC work together with users to ensure that health IT used in the diagnostic process demonstrates usability, incorporates human factors knowledge, integrates measurement capability, fits well within clinical workflow, provides clinical decision support, and facilitates the timely flow of information among patients and health care professionals involved in the diagnostic process. Collaboration among health IT vendors, ONC, and users can help to identify best practices in the design, implementation, and use of health IT products used in the diagnostic process. Further research in designing health IT for the diagnostic process is also needed (see Chapter 8). The sections below describe the importance of these various features in the design of health IT for the diagnostic process. The committee did not want to impose specific requirements for how this recommendation is implemented, because that approach would be too prescriptive. Instead, the committee’s recommendation emphasizes that collaboration is needed among the health IT vendor community, ONC, and users, and it outlines the essential characteristics of health IT needed to improve diagnosis and reduce diagnostic errors.
Usability and Human Factors
The potential benefits of health IT for improving diagnosis cannot be realized without usable, useful health IT systems. Usability has been defined as “the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use” (ISO, 1998). According to the Healthcare Information Management Systems Society (HIMSS), a system exhibits good usability when it is “easy to use and effective. It is intuitive, forgiving of mistakes and allows one to perform necessary tasks quickly, efficiently and with a minimum of mental effort. Tasks which can be performed by the software . . . are done in the background, improving accuracy and freeing up the user’s cognitive resources for other tasks” (HIMSS, 2009, p. 3).
Recent discussions of usability have focused on the importance of incorporating design principles that take human factors1 into account (Middleton et al., 2013). A number of terms have been used to describe the optimal design approach, including human-centered design, user-centered design, use-centered design, and participatory design. The committee opted for the more inclusive term, human-centered design, to reflect the involvement of all stakeholders who are affected by the health IT system, rather than just its users. A human-centered design approach balances the requirements of the technical system of computers and software with those of the larger sociotechnical system (Gasson, 2003). Although some health IT vendors have adopted human-centered design principles, the practice is not universal (AHRQ, 2010). Furthermore, usability challenges may only become evident after the system has been implemented or after it has been in widespread use. Accordingly, it is important to make continuous improvements to the design, implementation, and use of health IT (Carayon et al., 2008). Opportunities to assess the effects of technology on the diagnostic process are discussed in Chapter 3.
1 Human factors (or ergonomics) is defined as “the scientific discipline concerned with the understanding of interactions among humans and other elements of a system, and the profession that applies theory, principles, data and methods to design in order to optimize human well-being and overall system performance. Practitioners of ergonomics and ergonomists contribute to the design and evaluation of tasks, jobs, products, environments and systems in order to make them compatible with the needs, abilities and limitations of people” (IEA, 2000).
Although clinicians have reported a high level of use and satisfaction with certain health IT features, such as electronic prescribing (Makam et al., 2013), a number of challenges with usability remain, and the National Institute of Standards and Technology has indicated that usability is often overlooked in the adoption of EHR systems (NIST, 2015). Health IT that is not designed and implemented to support the diagnostic process can increase vulnerability to diagnostic errors. The American Medical Association (AMA) recently released a statement that health IT is misaligned with the cognitive and workflow requirements of medicine and listed eight priorities for improving the usability of EHRs (AMA, 2014) (see Box 5-2). Future research on health IT usability will be important (see Chapter 8).
As mentioned in Box 5-2, a major issue related to health IT is how it will affect the patient–clinician relationship. The hope is that health IT will enhance patient and clinician communication and collaboration by, for example, facilitating patient access to health information (see Chapter 4). However, this needs to be facilitated by health IT tools that assist patients and their families in engaging in the diagnostic process (such as patient access to clinical notes; see Recommendation 1). Patient portals provide patients with access to their medical information, but poor usability—including navigational problems and unmet expectations about functionality—can hinder adoption of such tools among patients (Greenhalgh, 2010). Additional patient-facing health IT tools include mHealth applications, such as symptom checkers, but concerns about their validity are ongoing (see section on mHealth) (Jutel and Lupton, 2015; Semigran et al., 2015). In addition, there are concerns that clinicians may be unwilling or not know how to act on information collected by patients through mHealth, wearable technologies, or other forums (Dwoskin and Walker, 2014; Ramirez, 2012).
American Medical Association’s Improving Care: Priorities to Improve Electronic Health Record (EHR) Usability
- Enhance physicians’ ability to provide high-quality patient care. Effective communication and engagement between patients and physicians should be of central importance in EHR design. The EHR should fit seamlessly into the practice and not distract physicians from patients.
- Support team-based care. EHR design and configuration must (1) facilitate clinical staff to perform work as necessary and to the extent their licensure and privileges permit and (2) allow physicians to dynamically allocate and delegate work to appropriate members of the care team as permitted by institutional policies.
- Promote care coordination. EHRs should have an enhanced ability to automatically track referrals and consultations as well as to ensure that the referring physician is able to follow the patient’s progress/activity throughout the continuum of care.
- Offer product modularity and configurability. Modularity of technology will result in EHRs that offer the flexibility necessary to meet individual practice requirements. Application program interfaces can be an important contributor to this modularity.
- Reduce cognitive workload. EHRs should support medical decision making by providing concise, context-sensitive, and real-time data uncluttered by extraneous information. EHRs should manage information flow and adjust for context, environment, and user preferences.
- Promote data liquidity. EHRs should facilitate connected health care—interoperability across different venues such as hospitals, ambulatory care settings, laboratories, pharmacies and post-acute and long-term-care settings. This means not only being able to export data but also to properly incorporate external data from other systems into the longitudinal patient record. Data sharing and open architecture must address EHR data “lock in.”
- Facilitate digital and mobile patient engagement. Whether for health and wellness or the management of chronic illnesses, interoperability between a patient’s mobile technology and the EHR will be an asset.
- Expedite user input into product design and post-implementation feedback. An essential step to user-centered design is incorporating end-user feedback into the design and improvement of a product. EHR technology should facilitate this feedback.
SOURCE: Copyright 2014 American Medical Association. All Rights Reserved.
There are also significant concerns that “technology is cleaving the sacred bond between doctor and patient” and that the EHR distracts clinicians from patient-centered care (Wachter, 2015, p. 27). One article suggested that the EHR has negatively affected the clinician–patient bond by prioritizing the computer above the patient: in this view, the patient is no longer the most important thing in the examining room because the machine, rather than the patient, has become the center of the clinician’s focus (Ober, 2015). Verghese described this phenomenon as the emergence of the iPatient (the EHR as a surrogate for a real patient), arguing that there is a real danger in reducing the attention paid to the patient: “If one eschews the skilled and repeated examination of the real patient, then simple diagnoses and new developments are overlooked, while tests, consultations, and procedures that might not be needed are ordered” (Verghese, 2008).
An important component of usability is whether the system supports teamwork in the diagnostic process. Health IT has the potential to strengthen intra- and interprofessional teamwork by providing structural support for enhanced collaboration among the health care professionals involved in the diagnostic process. There is evidence that EHRs facilitate primary care teamwork via enhanced communication, redefined team roles, and improved delegation (O’Malley et al., 2015). However, this is not the case across the board; the AMA has noted that many EHR systems “are not well configured to facilitate team-based care and require physicians to enter data or perform tasks that other team members should be empowered to complete” (AMA, 2014, p. 5).
Reducing the cognitive burdens on clinicians is another key feature of usable health IT systems. Health IT has the potential to support clinicians in the diagnostic process by managing information flow and filtering and presenting information in a way that facilitates decision making. A thoughtfully designed user interface has the potential to help clinicians develop a more complete view of a patient’s condition by capturing and presenting all of the patient’s health information in one place.
In particular, the problem list feature of EHRs can help clinicians to quickly see a patient’s most important health problems; it is a way of organizing a patient’s health information within the health record. The problem list derives from the problem-oriented medical record, developed by Lawrence Weed (Jacobs, 2009). “Problem-oriented” has two interrelated meanings (Weed and Weed, 2011, p. 134):
- the information in the medical record is organized by the patient problem to which the information relates (as distinguished from the traditional arrangement by source, with doctors’ notes in one place, nurses’ notes in another, lab data in another, etc.), and
- problems are defined in terms of the patient’s complete medical needs rather than providers’ beliefs or specialty orientation (thus, for example, the record should cover not just the “chief complaint” but all identified medical needs, and those needs should be defined in terms of the problems requiring solution, not in terms of providers’ diagnostic hypotheses or treatment plans).
The problem list includes all past and present diagnoses, as well as the time of occurrence and whether the problem was resolved, and links to further information on each entry in the list (AHIMA, 2011; Weed, 1968). Although studies have shown that use of high-quality problem lists is associated with better patient care (Hartung, 2005; Simborg et al., 1976), variability in the structure and content of problem lists has limited their effectiveness in improving patient care (AHIMA, 2011; Holmes et al., 2012). There is a move to standardize the structure and content of problem lists in EHRs through the use of diagnostic and problem codes (AHIMA, 2011). To encourage this change, meaningful use criteria require that participants maintain an up-to-date, coded problem list for at least 80 percent of their patients (AHIMA, 2011).
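To illustrate, a coded problem-list entry of the kind described above might be structured as follows. This is a hypothetical sketch, not a standard schema: the field names are illustrative assumptions, and the SNOMED CT-style codes are shown only as examples of the diagnostic and problem codes the text mentions:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ProblemListEntry:
    """One coded entry on a patient's problem list (illustrative schema)."""
    description: str
    code: str                        # e.g., a SNOMED CT or ICD concept code
    onset: date                      # time of occurrence
    resolved: Optional[date] = None  # None while the problem remains active

    @property
    def active(self) -> bool:
        return self.resolved is None

# A problem list holds all past and present diagnoses, resolved or not.
problem_list = [
    ProblemListEntry("Type 2 diabetes mellitus", "44054006", date(2010, 4, 2)),
    ProblemListEntry("Community-acquired pneumonia", "385093006",
                     date(2014, 1, 10), resolved=date(2014, 2, 1)),
]

# A coded, structured list lets software filter for currently active problems.
active_problems = [p for p in problem_list if p.active]
print([p.description for p in active_problems])  # ['Type 2 diabetes mellitus']
```

Because each entry carries a code rather than free text, different EHRs could compute the same views (active problems, resolved problems, problems by onset date), which is the interoperability benefit that standardization is meant to deliver.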
Unfortunately, poorly designed health IT systems, such as those with confusing user interfaces and disorganized patient information, may contribute to cognitive overload rather than easing the cognitive burden on clinicians. Poorly designed systems can detract from clinician efficiency and impede information integration and interpretation in the diagnostic process. A recent analysis of the graphical display of diagnostic test results in EHRs found that few of the current EHRs meet evidence-based criteria for how to improve comprehension of such information (Sittig et al., 2015). For example, one EHR system graphed diagnostic testing results in reverse chronological order; none of the EHRs in the analysis had graphs with y-axis labels that displayed both the name of the variable and the units of measurement. Human factors engineering approaches, such as a heuristic evaluation or an assessment of how well a particular interface design complies with established design principles for usability, could help identify usability problems and guide the design of user interfaces (CQPI, 2015). One key feature of an effective user interface is simplicity. “Simplicity in design refers to everything from lack of visual clutter and concise information display to inclusion of only functionality that is needed to effectively accomplish tasks” (HIMSS, 2009). Clinicians have expressed dissatisfaction about EHR screens being too busy due to a high degree of display clutter (or the high density of objects). In their review, Moacdieh and Sarter (2015) found: “Displays described as cluttered have been shown to degrade the ability to monitor and detect signal changes, to delay visual search, to increase memory load, to instill confidence in wrong judgments, to lead to confusion, and to negatively affect situational awareness, reading, and linguistic processing” (p. 61).
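The two display criteria called out above (results plotted in chronological order, with a y-axis label naming both the variable and its units of measurement) can be sketched in a few lines. This is a minimal illustration, not an EHR implementation; the test values are invented:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; a real interface would render on screen
import matplotlib.pyplot as plt
from datetime import date

# Hypothetical serum creatinine results (values invented for illustration).
results = [
    (date(2015, 6, 30), 1.4),
    (date(2015, 1, 5), 0.9),
    (date(2015, 3, 12), 1.1),
]

# Sort oldest-to-newest so the graph reads in chronological order,
# rather than the reverse-chronological order some EHRs were found to use.
results.sort(key=lambda r: r[0])
dates, values = zip(*results)

fig, ax = plt.subplots()
ax.plot(dates, values, marker="o")
ax.set_xlabel("Date of test")
# Label the y-axis with both the variable name and its units.
ax.set_ylabel("Serum creatinine (mg/dL)")
```

The point of the sketch is that both criteria are trivial to satisfy in software; the failures Sittig and colleagues observed are design choices, not technical limitations.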
Another principle of usability is efficiency (HIMSS, 2009). Inefficient health IT tools may impede diagnosis by adding to clinicians’ work burdens, leaving them with less time for the cognitive work involved in diagnosis and communicating with patients and the other health care professionals who are involved in the patients’ care. Clinicians need to be able to complete a task without having to undergo extra steps, such as clicking, scrolling, or switching between a keyboard and mouse; however, many health IT tools are cumbersome to navigate. One study of emergency department clinicians found that inputting information consumed more of their time than any other activity, including patient care (Hill et al., 2013). By counting computer mouse “clicks,” the researchers found that it took 6 clicks to order an aspirin tablet, 8 clicks to order a chest X-ray, 15 clicks to provide a patient with one prescription, and 40 clicks to document the exam of a hand and wrist injury. Hill and colleagues (2013) estimated that a clinician could make 4,000 clicks in one 10-hour shift. EHRs may also present clinicians with more alerts than they can effectively manage. For example, many comprehensive EHR systems automatically generate alerts in response to abnormal diagnostic testing results, but Singh and colleagues (2013) found that information overload may contribute to clinicians missing test results. Almost 70 percent of clinicians surveyed said that they received more alerts than they could effectively manage, and almost 30 percent of clinicians reported that they had personally missed alerts that resulted in patient care delays.
Makam and colleagues (2013) found that clinicians spend an appreciable amount of time using EHRs outside of their clinic hours. Almost half of the clinicians they surveyed reported that completing EHR documentation for each scheduled half-day clinic session required 1 or more extra hours of work, and 30 percent reported that they spent at least 1 extra hour communicating electronically with patients, even though they may not get paid for this time. Howard and colleagues (2013, p. 107) found mixed results on work burden when they studied small, independent, community-based primary care practices: “EHR use reduced some clinician work (i.e., prescribing, some lab-related tasks, and communication within the office), while increasing other work (i.e., charting, chronic disease and preventive care tasks, and some lab-related tasks).”
Health IT can also be used to measure diagnostic errors by leveraging the vast amounts of patient data contained in health IT databases (Shenvi and El-Kareh, 2014; Singh et al., 2007b, 2012). For instance, algorithms can be developed that periodically scan EHRs for diagnostic errors or clinical scenarios that suggest a diagnostic error has occurred. An example of the former would be cases of patients with newly diagnosed pulmonary embolism who were seen in the 2 weeks preceding diagnosis by an outpatient or emergency department clinician with symptoms that may have indicated pulmonary embolism (e.g., cough, shortness of breath, chest pain). An example of the latter may be patients who are hospitalized or seen in the emergency department within 2 weeks of an unscheduled outpatient visit, which may be suggestive of a failure to correctly diagnose the patient at the first visit (Singh et al., 2007b, 2012; Sittig and Singh, 2012). In both of these instances, health IT systems need to incorporate user-friendly platforms that enable health care organizations to measure diagnostic errors or surrogate measures. For health IT systems that are used by multiple health care organizations or across multiple settings (inpatient and outpatient), common platforms for measuring diagnostic errors will permit comparisons of diagnostic error rates across organizations and settings. Improving the identification of diagnostic errors is an important recommendation of this committee (see Chapter 6), and health IT vendors should facilitate efforts to do so by developing tools that enable organizations to more easily determine the rates of diagnostic errors, especially those that are common and that have serious implications for patients (e.g., pulmonary embolism, acute myocardial infarction, and stroke).
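The kind of trigger algorithm described above can be sketched in a few lines of code. The encounter record structure, field names, and settings below are illustrative assumptions, not specifications from the cited studies; only the 14-day window and the general "return visit after an unscheduled outpatient visit" pattern come from the text.

```python
from datetime import date, timedelta

# Illustrative e-trigger: flag patients hospitalized or seen in the
# emergency department within 14 days of an unscheduled outpatient
# visit, a pattern that may suggest a missed diagnosis at the first
# visit. Record layout and setting labels are hypothetical.
WINDOW = timedelta(days=14)

def find_trigger_events(encounters):
    """encounters: list of dicts with 'patient', 'date', 'setting'.
    Returns (patient, index_visit_date, return_visit_date) tuples."""
    by_patient = {}
    for enc in sorted(encounters, key=lambda e: e["date"]):
        by_patient.setdefault(enc["patient"], []).append(enc)
    flagged = []
    for patient, visits in by_patient.items():
        for i, first in enumerate(visits):
            if first["setting"] != "outpatient_unscheduled":
                continue
            for later in visits[i + 1:]:
                if (later["setting"] in ("ED", "inpatient")
                        and later["date"] - first["date"] <= WINDOW):
                    flagged.append((patient, first["date"], later["date"]))
    return flagged

encounters = [
    {"patient": "A", "date": date(2015, 3, 1), "setting": "outpatient_unscheduled"},
    {"patient": "A", "date": date(2015, 3, 9), "setting": "ED"},
    {"patient": "B", "date": date(2015, 3, 1), "setting": "outpatient_unscheduled"},
    {"patient": "B", "date": date(2015, 4, 20), "setting": "ED"},  # outside window
]
print(find_trigger_events(encounters))  # only patient A is flagged
```

In practice such queries run against millions of encounter rows, and the flagged cases are surrogate measures that still require chart review to confirm whether a diagnostic error actually occurred.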
Fit Within Clinical Workflow
The diagnostic process is not a single task, but rather a series of tasks that involve multiple people across the health care continuum. Clinical workflow, or the sequence of physical and cognitive tasks performed by various people within and between work environments, affects the diagnostic process at many junctures (Carayon et al., 2010). A critical element of workflow is health IT: Effective integration of health IT into the clinical workflow is essential for preventing diagnostic errors. However, integrating health IT into the clinical workflow is made more difficult by the wide range of workflows used by different individuals participating
in the diagnostic process, both within one setting and across care settings. According to HIMSS, there are more than 50 physician specialties, and each of these specialties has its own software needs, including the unique software needs of the other health care professionals involved in that specialty (e.g., nurses, pharmacists, physical therapists, respiratory therapists, and medical dieticians). Each specialty may have different tasks that require a range of software interface designs (HIMSS, 2009). Furthermore, the actual clinical workflow does not always follow a formal, linear process; for example, orders may need to be executed before the proper administrative data, such as a patient’s social security number, is entered or even known (Ash et al., 2004). As a result, health IT systems need both flexibility and modularity so that they can be tailored to specific workflow needs. Additionally, the time spent implementing and maintaining health IT systems may negatively impact workflow and even contribute to error (IOM, 2012a). For instance, EHR systems may become temporarily inaccessible because of software updates or network failure.
Clinical documentation is central to patient care and often occupies a significant amount of clinicians’ time (Hripcsak et al., 2011). Clinical documentation has been defined as “the process of recording historical data, observations, assessments, interventions, and care plans in an individual’s health record. The purpose of documentation is to facilitate clinical reasoning and decision making by clinicians and promote communication and coordination of care among members of the care team” (Kuperman and Rosenbloom, 2013, p. 6). Beyond supporting patient care, clinical documentation also needs to meet requirements outside of the clinical care setting, including billing, accreditation, legal, and research purposes (Hripcsak and Vawdrey, 2013). Clinical documentation is used to justify the level of service billed to insurers, to collect information for research or quality improvement purposes, and to inform a legal record in case of litigation (Rosenbloom et al., 2011). For example, the electronic documentation of clinical decisions and activity, including both user-entered data and metadata, “may affect the course of malpractice litigation by increasing the availability of documentation with which to defend or prove a malpractice claim” (Mangalmurti et al., 2010, p. 2063). Payment and liability concerns, in combination with the growth in EHRs, have resulted in extensive and growing clinical documentation—sometimes referred to as “note bloat”—that has led to a situation in which key information in a patient’s medical record can be obscured (Kuhn et al., 2015). A number of clinicians have expressed concern that clinical documentation is not promoting high-quality diagnosis and is instead primarily centered around billing and legal requirements, forcing clinicians to “focus on ticking boxes rather than on thoughtfully documenting their clinical thinking” (Schiff and Bates, 2010, p. 1066). In addition, research has shown that electronic documentation adds to clinicians’ work burden: Intensive care unit residents and physicians spend substantially more time on clinical review and documentation after EHR implementation (Carayon et al., 2015). For example, extensive clinical documentation for justifying payment, facilitated by the copy and paste feature of EHRs, can contribute to cognitive overload and impede clinical reasoning. Chapter 7 further elaborates on how documentation guidelines for billing interfere with the diagnostic process and presents the committee’s recommendation for how to better align documentation guidelines with clinical reasoning activities.
The use of data collected within EHRs for legal, billing, and population-wide health management purposes has led to a profusion of structured clinical documentation formats within health IT tools. However, structured documentation may cause problems for clinicians because they “value different factors when writing clinical notes, such as narrative expressivity, amenability to the existing workflow, and usability” (Rosenbloom et al., 2011, p. 181). Clinicians need to be able to record information efficiently and in ways that render it useful to other health care professionals involved in caring for a patient. Research has found “that in a shared context, concise, unconstrained, free-text communication is most effective for coordinating work around a complex task” (Ash et al., 2004, p. 106). There are also concerns that overly structured data entry has impaired clinicians’ ability to focus on and attend to the relevant information in the EHR (Ash et al., 2004).
Tools such as speech recognition technology have been developed to assist clinicians with clinical documentation, with varying degrees of success. Though several studies have found that speech recognition technology can improve the turnaround time of results reporting (Johnson et al., 2014; Prevedello et al., 2014; Singh and Pal, 2011), a number of issues make this technology difficult to implement or may negatively impact the diagnostic process, including high implementation costs, the need for extensive user training, decreased report quality due to technology-related errors, and workflow interruptions (Bhan et al., 2008; de la Cruz, 2014; Fratzke et al., 2014; Houston and Rupp, 2000; Hoyt and Yoshihashi, 2010; Johnson et al., 2014; Quint et al., 2008).
Another technology that may help address the challenges of clinical documentation is natural language processing (Hripcsak and Vawdrey, 2013). Natural language processing extracts data from free text, converting clinicians’ notes and narratives into structured, standardized formats. When the task is sufficiently constrained and when there is sufficient time to train the system, natural language processing systems can extract
information with minimal effort and very high performance (Uzuner et al., 2008). Health IT vendors have begun to incorporate natural language processing software into EHRs. Additional technologies, particularly data mining, hold promise for improving clinical documentation in the future. Data mining “relies on the collective experience of all previous notes to steer how data should be entered in a new note” (Hripcsak and Vawdrey, 2013, p. 2). These technologies also hold promise for improving clinical decision support, discussed below.
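As a toy illustration of the basic idea behind natural language processing of clinical notes, a sketch of converting free text into structured fields might look like the following. Production systems rely on trained models and large clinical vocabularies, not hand-written patterns; the field names and patterns here are invented for illustration.

```python
import re

# Toy illustration: pull a few structured fields out of a free-text
# clinical note. The field names and regular expressions below are
# hypothetical; real clinical NLP is far more sophisticated.
PATTERNS = {
    "temperature_f": re.compile(r"(?:temp|temperature)[^\d]*(\d{2,3}(?:\.\d)?)", re.I),
    "travel_history": re.compile(r"travel(?:ed)?\s+(?:from|to)\s+([A-Z][a-z]+)"),
}

def extract_fields(note):
    """Return a dict of any structured fields found in the note text."""
    out = {}
    for field, pattern in PATTERNS.items():
        match = pattern.search(note)
        if match:
            out[field] = match.group(1)
    return out

note = "Patient traveled from Liberia last week. Temp 103.0 on arrival."
print(extract_fields(note))  # both fields extracted
```

Even this trivial sketch shows the trade-off described above: extraction works well when the task is tightly constrained, but each new kind of fact requires new patterns or training data.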
Clinical Decision Support in Diagnosis
Health IT has the potential to support the diagnostic process through clinical decision support (CDS) tools. CDS provides clinicians and patients “with knowledge and person-specific information [that is] intelligently filtered or presented at appropriate times, to enhance health and health care” (HealthIT.gov, 2013). A number of studies have shown that clinical decision support systems can improve the rates of certain desirable clinician behaviors such as appropriate test ordering, disease management, and patient care (Carayon et al., 2010; Lobach and Hammond, 1997; Meigs et al., 2003; Roshanov et al., 2011; Sequist et al., 2005).
Diagnostic decision support tools can provide support to clinicians and patients throughout each stage of the diagnostic process, such as during information acquisition, information integration and interpretation, the formation of a working diagnosis, and the making of a diagnosis (Del Fiol et al., 2008; Zakim et al., 2008). Box 5-3 categorizes health IT tools according to the tasks they assist with in the diagnostic process (El-Kareh et al., 2013). Tools such as infobuttons can be integrated into EHRs and provide links to relevant online information resources, such as medical textbooks, clinical practice guidelines, and appropriateness criteria; there is evidence that infobuttons can help clinicians answer questions at the point of care and that they lead to a modest increase in the efficiency of information delivery (Del Fiol et al., 2008). CDS can also facilitate the ordering of the diagnostic tests that help clinicians develop accurate and timely diagnoses. In its input to the committee, the American College of Radiology stated that structured decision support for image ordering and reporting is critical for reducing diagnostic errors (Allen and Thorwarth, 2014). The Protecting Access to Medicare Act, passed in 2014, includes a provision that requires clinicians to use specified criteria when ordering advanced imaging procedures and directs the Department of Health and Human Services to identify CDS tools to help clinicians order these imaging procedures.2 Given the growth of molecular testing and advanced
2 Protecting Access to Medicare Act of 2014: www.congress.gov/bill/113th-congress/housebill/4302 (accessed December 6, 2015).
Categories Describing Different Steps in Diagnosis Targeted by Diagnostic Health Information Technology Tools
- Tools that assist in information gathering
- Cognition facilitation by enhanced organization and display of information
- Aids to the generation of a differential diagnosis
- Tools and calculators to assist in weighing diagnoses
- Support for the intelligent selection of diagnostic tests/plan
- Enhanced access to diagnostic reference information and guidelines
- Tools to facilitate reliable follow-up, assessment of patient course, and response
- Tools/alerts that support screening for the early detection of disease in asymptomatic patients
- Tools that facilitate diagnostic collaboration, particularly with specialists
- Systems that facilitate feedback and insight into diagnostic performance
SOURCE: El-Kareh et al., 2013. Reproduced from Use of health information technology to reduce diagnostic error. R. El-Kareh, O. Hasan, and G. Schiff. BMJ Quality and Safety 22(Suppl 2):ii40–ii51, with permission from BMJ Publishing Group Ltd.
imaging techniques, the importance of clinical decision support in aiding decisions involving this aspect of the diagnostic process is likely to increase.
Although decision support technologies have been around for quite some time (Weed and Weed, 2011; Weed and Zimny, 1989), there is still much room for progress. Questions about the validity and utility of diagnostic decision support tools still remain. A number of studies have assessed the performance of diagnostic decision support tools. Researchers such as Ramnarayan et al. (2003) have developed scores to measure the impact of diagnostic decision support on the quality of clinical decision making. These scores assess the performance of diagnostic decision support tools based on how often the “correct” diagnosis is produced by either the decision support system or by the clinicians after using the decision support; the scores also take into account the rank of the correct diagnosis on the list of differential diagnoses. There may be problems with these criteria, however; for example, rare diagnoses may be less likely to be considered because of a lower ranking. A review of four differential diagnosis generators found these tools to be “subjectively assistive and functional for clinical diagnosis and education” (Bond et al., 2012, p. 214). On a five-point scale (5 when the actual diagnosis was suggested on the first screen or in the first 20 suggestions, and 0 when no suggestions
were close to the clinical diagnosis), the differential diagnosis generators received scores ranging from 1.70 to 3.45. Additional studies suggest that diagnostic decision support tools have the potential to improve the accuracy of diagnosis (Graber and Mathew, 2008; Kostopoulou et al., 2015; Ramnarayan et al., 2006, 2007). However, the studies assessing diagnostic decision support tools were conducted in highly controlled research settings; further research is needed to understand the performance of diagnostic decision support tools in clinical practice (see Chapter 8).
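A simplified version of such a rank-based scoring scheme can be sketched as follows. The text describes only the endpoints of the five-point scale (5 when the correct diagnosis appears within the first 20 suggestions, 0 when nothing close appears); the intermediate cut-offs below are invented for illustration.

```python
# Simplified rank-based scoring for differential diagnosis (DDx)
# generators. Only the endpoints (5 and 0) follow the scale described
# in Bond et al. (2012); the middle tiers are hypothetical.
def score_ddx_output(suggestions, correct_diagnosis):
    """suggestions: ranked list of diagnosis strings, best first."""
    try:
        rank = suggestions.index(correct_diagnosis) + 1  # 1-based rank
    except ValueError:
        return 0  # correct diagnosis never suggested
    if rank <= 20:
        return 5
    if rank <= 40:
        return 3  # suggested, but buried deep in the list
    return 1

ddx = ["cellulitis", "deep vein thrombosis", "necrotizing fasciitis"]
print(score_ddx_output(ddx, "necrotizing fasciitis"))  # rank 3 -> 5
print(score_ddx_output(ddx, "pulmonary embolism"))     # absent -> 0
```

A tool's overall score (such as the 1.70 to 3.45 range reported by Bond and colleagues) would then be the mean of these per-case scores across a set of test cases; as noted above, such rank-based criteria can systematically penalize rare diagnoses.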
Though relatively early in its development, the application of new computational methods, such as artificial intelligence and natural language processing, has the potential to improve clinical decision support (Arnaout, 2012). For instance, these approaches can analyze large amounts of complex patient data (such as patient notes, diagnostic testing results, genetic information, as well as clinical and molecular profiles) and compare the results to “thousands of other patient EHRs to identify similarities and associations, thus, elucidating trends in disease course and management” (Castaneda, 2015, p. 12).
In addition to these efforts involving generalized decision support tools, there are also ongoing efforts to use decision support in radiology. One such decision support tool is computer-aided detection (CAD), which is designed to help radiologists during imaging interpretation by analyzing images for patterns associated with underlying disease (e.g., breast cancer during mammography screening). Despite the broad acceptance and use of CAD, there is mixed evidence demonstrating its effectiveness (Rao et al., 2010). Although CAD is not yet mature, the technology holds promise for improving detection.
Challenges with the usability and acceptability of diagnostic decision support have hindered the adoption of these tools in clinical practice (Berner, 2014). For these tools to be useful, they need to appear only when appropriate, to be understandable, and to enable clinicians to quickly determine the level of urgency and relevancy. Decision support needs to function within the workflow and physical environment of the diagnostic process, which may include distractions and interruptions. Optimally designed decision support will need to be tailored to different users based on such factors as experience and workload. For example, a highly trained or highly experienced user may navigate a cumbersome computer interface more easily than a less experienced user.3 More experienced clinicians may need support to avoid pitfalls in diagnosis arising from the use of system 1 processes, whereas more novice clinicians may need access to additional information to support system 2 processes. Research on how clinicians use
3 Although a cumbersome interface may also be challenging to an experienced user.
technology may provide insight into the ways that human–automation interactions may be contributing to errors. EHR systems log users’ actions through both user-entered data (i.e., timing of events and who performed them) and metadata. EHRs can also measure the rate at which clinicians override alerts and medication-dose defaults.
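The override measurements mentioned above could, in principle, be computed from EHR audit-log data along the following lines. The log format (one record per event, with an 'action' field on alert events) is an assumption for illustration; actual log schemas vary by vendor.

```python
# Sketch of computing an alert override rate from EHR audit-log
# entries. The event names and record layout are hypothetical.
def override_rate(alert_log):
    """Fraction of displayed alerts that the clinician overrode."""
    alerts = [e for e in alert_log if e["event"] == "alert_shown"]
    overridden = [e for e in alerts if e["action"] == "override"]
    return len(overridden) / len(alerts) if alerts else 0.0

log = [
    {"event": "alert_shown", "alert": "drug-drug interaction", "action": "override"},
    {"event": "alert_shown", "alert": "dose above default", "action": "accept"},
    {"event": "order_signed", "action": "sign"},
    {"event": "alert_shown", "alert": "duplicate order", "action": "override"},
]
print(override_rate(log))  # 2 of the 3 alerts shown were overridden
```

Aggregating such rates by alert type or by clinician can help organizations identify alerts that are routinely ignored and are therefore candidates for redesign or removal.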
In addition, there are a number of potential patient safety risks associated with decision support. A systematic review found that an overreliance on decision support has the potential to reduce independent clinician judgment and critical thinking (Goddard et al., 2012). A decision support tool could provide incorrect advice if it has incomplete information or applies outdated treatment guidelines (AHLA, 2013). This may place a clinician in a position in which he or she believes that the decision support is correct and therefore discounts his or her own assessment of the issue. Although Friedman and colleagues (1999) found that the use of clinical decision support was associated with a modest increase in diagnostic accuracy, in 6 percent of cases, clinicians overrode their own correct decisions due to erroneous advice from the decision support system. Informational content, as well as the presentation of information in decision support, can lead to adverse events. Adverse events relating to informational content are grouped around three themes: (1) changing roles and/or elimination of clinicians and staff, (2) the currency of CDS content, and (3) inaccurate or misleading CDS content. Adverse events relating to presentation of information are grouped by: (a) the rigidity of systems, (b) sources of alert fatigue, and (c) sources of potential errors (Ash et al., 2007).
Timely Flow of Information
The timely and effective exchange of information among health care professionals and patients is critical to improving diagnosis, and breakdowns in that communication are a major contributor to adverse events, including diagnostic errors (Gandhi et al., 2000; Poon et al., 2004; Schiff, 2005; Singh et al., 2007a). Health IT has the potential to reduce communication breakdowns, including breakdowns in intra- and interpersonal communication, in communication among patients and health care professionals, and in information exchange (e.g., the reporting of test results) (Singh et al., 2008). As discussed in Chapter 4, improved patient access to EHRs, including diagnostic testing results and clinical notes, can promote improved engagement in the diagnostic process and facilitate more timely information flow between and among patients and health care professionals. Health IT can also assist with the tracking of test results and follow-up (see Chapter 6). For example, the AMA (2014) concluded that EHRs can support care coordination if they “automatically track referrals
and consultations as well as ensure that the referring physician is easily able to follow the patient’s progress/activity throughout the continuum of care” (p. 5).
However, health IT tools may not be facilitating optimal communication among health care professionals, and they may even contribute to communication breakdowns. For example, Parkash and colleagues (2014) found that EHRs may not alert clinicians when surgical pathology reports have been amended, which may result in an incorrect diagnosis that is based on the original pathology report, an incorrect treatment plan, and the potential for serious consequences for a patient. A lack of interoperability (discussed below) can also prevent the timely flow of information among health care professionals.
Another effect of health IT tools may be a reduction in the informal, in-person collaborations between clinicians that can yield insights during the diagnostic process. In-person consultation between treating clinicians and the radiology department was common prior to the computerization of radiology and the introduction of the picture archiving and communication system (Wachter, 2015). With the transition to filmless radiology systems, in-person consultations with the radiology department have decreased (Reiner et al., 1999).
An example of the importance of the timely flow of information is illustrated by the delayed diagnosis of Ebola in a Dallas emergency department (see Box 5-4). As the committee was deliberating in 2014, the most widespread outbreak yet seen of the Ebola virus occurred (CDC, 2015). Although the epidemic was primarily localized to several West African countries, the United States experienced its first case of Ebola virus in September 2014, a highly publicized example of diagnostic error. The committee included this case because it demonstrates the complex etiology of diagnostic error, including the roles that health IT and interprofessional communication play in conveying information in the diagnostic process.
Another health IT–related challenge in the diagnostic process is the lack of interoperability, or the inability of different IT systems and software applications to communicate, exchange data, and use the information that has been exchanged (HIMSS, 2014). It is not unusual for the diagnostic process to occur over a protracted period of time, with multiple clinicians across different care settings involved in the process. A free flow of information is critical to ensuring accurate and timely diagnoses because in order for health care professionals to develop a complete picture of a patient’s health problem, all relevant health information needs to be available and accessible. A lack of interoperability can impede the
A Case of Diagnostic Error: Delayed Diagnosis of Ebola Virus Infection
Thomas Eric Duncan traveled from Liberia to the United States in September 2014. He visited a Texas area emergency department on September 25, presenting with nonspecific symptoms, including fever, nausea, abdominal pain, and a severe headache, symptoms that can be attributed to a number of common acute illnesses (Upadhyay et al., 2014). Mr. Duncan informed the triage nurse of his recent travel from Africa (Dunklin and Thompson, 2014). The electronic health record (EHR) indicated that Mr. Duncan arrived with a fever of 100°F, which spiked to 103°F, and then dropped to 101°F prior to discharge (Dallas Morning News, 2014; Energy & Commerce Committee, 2014; Upadhyay et al., 2014). The physician who evaluated Mr. Duncan during this visit was not aware of his travel history (Dallas Morning News, 2014). Mr. Duncan underwent a series of tests, including a computed tomographic scan, and was released with a diagnosis of sinusitis, but a later evaluation found that the imaging results were not consistent with this diagnosis (Upadhyay et al., 2014). Mr. Duncan returned to the hospital on September 28 via ambulance (Energy & Commerce Committee, 2014; Upadhyay et al., 2014). On September 30 he was confirmed to have the Ebola virus (Energy & Commerce Committee, 2014), and on October 8 Mr. Duncan died from this infection. The hospital accepted responsibility for the diagnostic error (Upadhyay et al., 2014).
The chief clinical officer of Texas Health Resources stated in testimony to the U.S. Congress, “Unfortunately, in our initial treatment of Mr. Duncan, despite our best intentions and a highly skilled medical team, we made mistakes. We did not correctly diagnose his symptoms as those of Ebola. We are deeply sorry” (Energy & Commerce Committee, 2014).
Current evidence suggests that patients seen in the emergency department are at high risk of experiencing diagnostic errors because of the range of conditions seen, the time pressures involved, and complexity of the work system environment (Campbell et al., 2007). As illustrated in this case of diagnostic error, a number of factors typically contribute to many adverse safety events (Graber, 2013).
Patient history and physical exam often suggest the correct diagnosis (Peterson et al., 1992). In this example, Mr. Duncan’s travel history was especially relevant to his medical condition (Dallas Morning News, 2014). Although the travel history was obtained by the nurse, the physician examining Mr. Duncan told the Dallas Morning News that the “travel information was not easily visible in my standard workflow” (Dallas Morning News, 2014). Communication breakdowns likely contributed to this diagnostic error: The travel history may not have been communicated or communicated adequately among the patient and his care team. Additionally, the significance of this information may not have been considered during the diagnostic process (Dunklin and Thompson, 2014; Upadhyay et al., 2014). Without knowledge of the travel history, the physician chose a much more common condition as the possible explanation (Dunklin and Thompson, 2014).
Although most diagnostic errors involve common conditions, this case illustrates the problem of diagnosing rare diseases (zebras) when much more common diseases (horses) could explain similar symptoms. There is no easy solution to this problem. The challenge has been well described in Atul Gawande’s book Complications, when he compared a necrotizing fasciitis diagnosis with cellulitis. In considering such a rare diagnosis, he said, “I felt a little foolish considering the diagnosis—it was a bit like thinking the Ebola virus had walked into the ER” (Gawande, 2002, p. 233).
Understanding the information flow and communication breakdowns in this case is a more challenging task (Upadhyay et al., 2014). The nurse documented the travel history in the nursing note, which was not considered by the physician. This raised a number of questions:
- Was documentation in the EHR sufficient to convey this information?
- When is verbal communication of key facts necessary?
- Was the EHR designed appropriately to support sharing of important information?
- Are the notes in EHRs too hard to locate and share in the typical workflow of a busy emergency department?
- Are notes valued appropriately by members of the care team?
- Does the format of a nursing note (template versus unstructured) influence how key information is communicated?
After the diagnostic error of Ebola occurred, Texas Presbyterian implemented a number of organizational and technological changes intended to reduce the risk of similar errors in the future. A public statement outlining the lessons learned and responses to this diagnostic error included
- “Upgraded medical record software to highlight travel risks
- New triage procedures initiated to quickly identify at-risk individuals
- A triage procedure to move high-risk patients immediately from the emergency department
- A final step for cleared patients: 30 minutes prior to discharge, vital signs will be rechecked. If anything is abnormal, the physician will be notified
- Increased emphasis on face-to-face communication.” (Watson, 2014)
- Although diagnostic errors typically involve common conditions, patients with unusual or rare conditions are at high risk for diagnostic error if their symptoms mimic those of more common conditions.
- The etiology of a diagnostic error is typically multifactorial. The various contributions of the work system, including the cognitive characteristics of clinicians and the complex interactions between them, can best be understood by adopting a human factors perspective.
- Breakdowns in information flow and communication are some of the most common factors identified in cases of diagnostic error, just as they are in other major patient safety adverse events.
- Although EHR technology provides many advantages to the diagnostic process, it can also cause a predisposition to certain types of errors, such as ineffective search for important information.
diagnostic process because it can limit or delay access to the data available for clinical decision making. When health care systems do not exchange data, clinical information may be inaccurate or inadequate. For instance, one version of a patient’s EHR may exist on the primary clinical information system while a variety of outdated or partial versions of the record are present in other places. Furthermore, the record on the primary clinical information system may not necessarily be complete.
Given the importance of the free flow of information to diagnosis, ONC can play a critical role in improving interoperability. The vision that ONC has articulated for the interoperability of health IT is of an “ecosystem that makes the right data available to the right people at the right time across products and organizations in a way that can be relied upon and meaningfully used by recipients” (ONC, 2014a, p. 2). By 2024, ONC anticipates that individuals, clinicians, communities, and researchers will have access to a variety of interoperable products. However, the progress toward achieving health information exchange and interoperability has been slow (CHCF, 2014). For example, office-based exchange of information remains low; a study conducted by Furukawa et al. (2014) found that only 14 percent of the clinicians surveyed reported sharing data with clinicians outside their organization. Recognizing that progress in interoperability is critical to improving the diagnostic process, the committee calls on ONC to more rapidly require that health IT systems meet interoperability requirements. Thus, the committee recommends that ONC should require health IT vendors to meet standards for interoperability among different health IT systems to support effective, efficient, and structured flow of patient information across care settings to facilitate the diagnostic process by 2018. This recommendation is in line with the recent legislation that repealed the sustainable growth rate, which included a provision that declared it a national objective to “achieve widespread exchange of health information through interoperable certified [EHR] technology nationwide by December 31, 2018.”4 The law requires the Secretary of Health and Human Services (HHS) to develop metrics to evaluate progress on meeting this objective by July 2016. 
Furthermore, the legislation stipulates that if interoperability has not been achieved by 2018, the Secretary is required to submit a report to Congress in 2019 that identifies the barriers and makes recommendations for federal government action to achieve interoperability, including adjusting payments for not being meaningful EHR users and criteria for decertifying certified EHR technology products.
Improved interoperability across different health care organizations—as well as across laboratory and radiology information systems—is critical to improving the diagnostic process. Challenges to interoperability include the inconsistent and slow adoption of standards, particularly among organizations that are not subject to EHR certification programs, as well as a lack of incentives, including a business model that generates revenue for health IT vendors via fees associated with transmitting and receiving data (Adler-Milstein, 2015; CHCF, 2014). The IOM report Health IT and Patient Safety: Building a Safer Health System recognized interoperability as a key feature of safely functioning health IT and noted that interoperability needs to be in place across the entire health care continuum: “Currently, laboratory data have been relatively easy to exchange because good standards exist such as Logical Observation Identifiers Names and Codes (LOINC) and are widely accepted. However, important information such as problem lists and medication lists (which exist in some health IT products) are not easily transmitted and understood by the receiving health IT product because existing standards have not been uniformly adopted” (IOM, 2012a, p. 86). Although laboratory data may be relatively easy to exchange, a recent report noted that the lack of incentives (or penalties) for organizations that are not subject to the EHR certification process under the Medicare and Medicaid EHR Incentive Programs (such as clinical laboratories) also contributes to poor interoperability (CHCF, 2014).
4 Medicare Access and CHIP Reauthorization Act of 2015. P.L. 114-10 (April 16, 2015).
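The role of a shared vocabulary such as LOINC can be illustrated with a small sketch: a receiving system can identify a laboratory result by its LOINC code without knowing the sending laboratory's local test names. The structure below loosely follows a FHIR-style Observation but is simplified for illustration, and the local code system URL is invented.

```python
# A lab result carrying both a standard LOINC code and a local code.
# LOINC 2345-7 identifies serum/plasma glucose; the local code system
# URL and code are hypothetical.
LOINC_SYSTEM = "http://loinc.org"

glucose_result = {
    "resourceType": "Observation",
    "code": {"coding": [
        {"system": LOINC_SYSTEM, "code": "2345-7",
         "display": "Glucose [Mass/volume] in Serum or Plasma"},
        {"system": "http://example-lab.local/codes", "code": "GLU"},
    ]},
    "valueQuantity": {"value": 105, "unit": "mg/dL"},
}

def loinc_code(observation):
    """Return the observation's LOINC code, ignoring local codes."""
    for coding in observation["code"]["coding"]:
        if coding["system"] == LOINC_SYSTEM:
            return coding["code"]
    return None

print(loinc_code(glucose_result))  # 2345-7
```

Without the shared code, the receiving system would have to maintain a mapping for every sending laboratory's local vocabulary, which is exactly the burden that uniform adoption of standards such as LOINC is meant to remove.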
Additionally, the interface between EHRs and laboratory and radiology information systems typically carries limited clinical information, and the lack of sufficiently detailed information makes it difficult for a pathologist or radiologist to determine the proper context for interpreting findings or to decide whether diagnostic testing is appropriate (Epner, 2015). For example, one study found that important non-oncological conditions (such as Crohn’s disease, human immunodeficiency virus, and diabetes) were not mentioned in 59 percent of radiology orders and the presence of cancer was not mentioned in 8 percent of orders, demonstrating that the complete patient context often fails to reach the interpreting clinician (Obara et al., 2015). Insufficient clinical information can be problematic because radiologists and pathologists often use this information to inform their interpretations of diagnostic testing results and suggestions for next steps (Alkasab et al., 2009; Obara et al., 2015). In addition, the Centers for Disease Control and Prevention’s Clinical Laboratory Improvement Advisory Committee (CLIAC) expressed concern over the patient safety risks related to the interoperability of laboratory data and display discrepancies in EHRs (CDC, 2014; CLIAC, 2012). CLIAC recommended that laboratory health care professionals collaborate with other stakeholders to “develop effective solutions to reduce identified patient safety risks in and improve the safety of EHR systems” regarding laboratory data (CDC, 2014, p. 3). There have been some efforts to improve the transmission of clinical context
with diagnostic testing orders; for example, a quality improvement initiative in the outpatient and emergency department settings was able to improve the consistency with which radiology orders were accompanied by a complete clinical history (Hawkins et al., 2014).
Another emerging challenge is interoperability between EHRs and patient-facing health IT, such as applications for tracking physical activity, monitoring glucose, and managing other health-related data (see section on mHealth) (Marceglia et al., 2015; Otte-Trojel et al., 2014).5
Economic incentives are another barrier to achieving interoperability. Current market conditions create business incentives for information blocking, that is, “when persons or entities knowingly and unreasonably interfere with the exchange or use of electronic health information” (ONC, 2015, p. 8). A variety of persons or entities may engage in information blocking practices, but most complaints of information blocking are related to the actions of health IT developers. Health IT vendors may “charge fees that make it cost-prohibitive for most customers to send, receive, or export electronic health information stored in EHRs, or to establish interfaces that enable such information to be exchanged” (ONC, 2015, p. 15). For instance, clinicians may pay $5,000 to $50,000 each to secure the right to set up connections that allow them to transmit information regularly to laboratories, health information exchanges, or governments (Allen, 2015). Additional fees may be charged each time a clinician sends, receives, or even searches for (or “queries”) data (ONC, 2015). Health care organizations are also capable of engaging in information blocking. For instance, larger hospital systems that already capture a large proportion of patients’ clinical information internally may be less motivated to join health information exchanges. In such instances, “information is seen as a tool to retain patients within their system, not as a tool to improve care” (Tsai and Jha, 2014, p. 29).
Issues related to data security and privacy will need to be considered as interoperability and health information exchange increase. The personal information stored within health IT systems needs to be secure. However, these data also need to be easily available when patients move from one system to another. Transparency will become increasingly important as interoperability improves and as data aggregation for quality improvement and population health management becomes more common. The ONC recognizes that it will be important to “support greater transparency for individuals regarding business practices of entities that use their data, particularly those that are not covered by the HIPAA [Health Insurance Portability and Accountability Act] Privacy and Security Rules” (ONC, 2014a, p. 5).
5 Interoperability is one challenge surrounding patient-facing technologies; there are also other important considerations, such as vetting the quality of patient-reported data.
SAFETY OF HEALTH IT IN DIAGNOSIS
Patient safety risks related to the use of health IT in the diagnostic process are an important concern because there is growing recognition that health IT can result in adverse events (IOM, 2012a; ONC, 2014b; Walker et al., 2008), including sentinel events that result in permanent patient harm or death (The Joint Commission, 2015b). Such health IT safety risks have been described in the context of a sociotechnical system, in which the system components (including technology, people, workflow, organizational factors, and external environment) can dynamically interact and contribute to adverse events (IOM, 2012a; Sittig and Singh, 2010). A number of health IT–related patient safety risks may affect the diagnostic process and the occurrence of diagnostic errors. For example, challenges with the usability of EHRs have led users to adopt work-arounds that deviate from intended use; although many of these work-arounds are benign, there is the potential for negative effects on patient safety and diagnosis (Ash et al., 2004; Friedman et al., 2014; IOM, 2012a; Koppel et al., 2008). Clinical documentation in the EHR and the use of the copy and paste functionality of EHRs are areas of increased concern. While the use of copy and paste functionality may increase efficiency by saving time that would otherwise be spent retyping or reentering information, it carries with it a number of risks, including redundancy that contributes to lengthy notes and cognitive overload as well as the spreading of inaccurate, outdated, or incomprehensible information (AHIMA, 2014; The Joint Commission, 2015a; Kuhn et al., 2015). Other safety risks include errors related to entering and retrieving information (such as juxtaposition errors), errors in communication and coordination (mistaking information entry into an EHR system as a successful communication act), and problems with health IT system maintainability (Ash et al., 2004).
For instance, a pathologist may assume that the entry of new test results into an EHR system means that the results have been communicated to the clinician, even though this may not be the case (documentation in the EHR is not necessarily equivalent to communication).
Unfortunately, contractual provisions intended to protect vendors’ intellectual property interests and to shield vendors from liability for the unsafe use of health IT products limit the free exchange of information about health IT–related patient safety risks (IOM, 2012a). Specifically, “some vendors require contract clauses that force [health IT] system purchasers to adopt vendor-defined policies that prevent the disclosure of errors, bugs, design flaws, and other [health IT]-software-related hazards” (Goodman et al., 2011, p. 77). These contractual barriers may propagate safety risks and pose significant challenges to the use of data for future patient safety and quality improvement research (IOM, 2012a). In recognition of these challenges, the American Medical Informatics Association board of directors convened a task force to help resolve issues surrounding vendor–user
contracts and made a number of suggestions for improving health IT contract language (see Box 5-5). Westat prepared a report for ONC that provides an overview of the key contract terms for health care organizations to be aware of when negotiating agreements with health IT vendors (Westat, 2013).
Recommendations from an American Medical Informatics Association Special Task Force on Health Information Technology Contracts
- Contracts should not contain language that prevents system users, including clinicians and others, from using their best judgment about what actions are necessary to protect patient safety. This includes freedom to disclose system errors or flaws, whether introduced or caused by the vendor, the client, or any other third party. Disclosures made in good faith should not constitute violations of [health information technology (health IT)] contracts. This recommendation neither entails nor requires the disclosure of trade secrets or of intellectual property.
- Hospitals, physician purchasers, and other users should understand that commercial products’ screen designs and descriptions of software-supported workflows represent corporate assets developed at a cost to software vendors. Unless doing so would prematurely prevent disclosure of flaws, users should consider obligations to protect vendors’ intellectual property and proprietary materials when disclosing (potential) flaws. Users should understand and accept their obligation to notify vendors before disclosing such features, and be aware of the range of remedies available to both the purchaser and the vendor in addressing safety issues. Equally, or more important, users should consider obligations to protect patient safety via such disclosures.
- Because vendors and their customers share responsibility for patient safety, contract provisions should not attempt to circumvent fault and should recognize that both vendors and purchasers share responsibility for successful implementation. For example, vendors should not be absolved from harm resulting from system defects, poor design or usability, or hard-to-detect errors. Similarly, purchasers should not be absolved from harm resulting from inadequate training and education, inadequate resourcing, customization, or inappropriate use.
- While vendors have legitimate corporate interests and duties (e.g., to shareholders), contract language should make explicit a commitment by all parties to patient care and safety, and, as applicable, to biomedical research and public health.
- Vendors should be protected from claims in which a facility (hospital, medical office, practitioner, etc.) causes errors that cannot reasonably be attributed to a defect in the design or manufacture of a product, or to vendor-related problems in installation, updating, or configuration processes. Similarly, vendors should not be held responsible for circumstances in which users make foolish or intentional errors.
- “Hold harmless” clauses in contracts between electronic health application vendors and purchasers or clinical users, if and when they absolve the vendors of responsibility for errors or defects in their software, are unethical. Some of these clauses have stated in the past that [health IT] vendors are not responsible for errors or defects, even after vendors have been informed of problems.
- A collaborative system or process of third- or neutral-party dispute resolution should be developed. Contracts should contain language describing a process for timely and, as appropriate, transparent conflict resolution.
- Contracts should make explicit a mechanism by which users/clients can communicate problems to the company; and vendors should have a mechanism for dealing with such problems (compare in this regard the processes in place for adverse event and device failure tracking by implantable medical device manufacturers).
- Contracts should require that system defects, software deficiencies, and implementation practices that threaten patient safety should be reported, and information about them be made available to others, as appropriate. Vendors and their customers, including users, should report and make available salient information about threats to patient safety resulting from software deficiencies, implementation errors, and other causes. This should be done in a way easily accessible to customers and to potential customers. This information, when provided to customers, should be coupled with applicable suggested fixes, and should not be used to penalize those making the information available. Disclosure of information should not create legal liability for good-faith reporting. Large [health IT] systems undergo thousands of revisions when looked at on a feature-by-feature basis. Requirements that the vendor notify every customer of every single feature change on a real-time basis would have the unintended result of obscuring key safety risks, as customers would have to bear the expense of analyzing thousands of notifications about events which are typically rare. Therefore, vendors should notify customers as soon as possible about any product or configuration issues (1) of which they are aware and (2) which pose a risk to patients.
SOURCE: K. W. Goodman, E. S. Berner, M. A. Dente, B. Kaplan, R. Koppel, D. Rucker, D. Z. Sands, and P. Winkelstein, Challenges in ethics, safety, best practices, and oversight regarding HIT vendors, their customers, and patients: A report of an AMIA special task force, Journal of the American Medical Informatics Association, 2011, 18(1):77–81, by permission of the American Medical Informatics Association.
In line with the movement toward more transparency, the IOM report on patient safety and health IT recommended that the Secretary of HHS “should ensure insofar as possible that health IT vendors support the free exchange of information about health IT experiences and issues and not prohibit sharing of such information, including details (e.g., screenshots) relating to patient safety” (IOM, 2012a, p. 7). The committee endorses this recommendation and further recommends that the Secretary of HHS should require health IT vendors to permit and support the free exchange of information about real-time user experiences with health IT design and implementation that adversely affect the diagnostic process. Health IT users can discuss patient safety concerns related to health IT products used in the diagnostic process in appropriate forums. Such forums include the forthcoming ONC Patient Safety Center or patient safety organizations (see Chapter 7) (RTI International, 2014; Sittig et al., 2014a). In addition, the Agency for Healthcare Research and Quality has developed a Common Format reporting form for health IT adverse events and ONC is beginning to evaluate patient safety events related to health IT (ONC, 2014b).
Because the safety of health IT is critical to improving the diagnostic process, health IT vendors need to proactively monitor their products in order to identify potential adverse events, which could contribute to diagnostic errors and challenges in the diagnostic process (Carayon et al., 2011). To ensure that their products are unlikely to contribute to diagnostic errors and adverse events, vendors need to have any of their health IT products that are used in the diagnostic process evaluated by independent third parties. Thus, the committee recommends that the Secretary of HHS should require health IT vendors to routinely submit their products for independent evaluation and notify users about potential adverse effects on the diagnostic process related to the use of their products. Health IT vendors may consider using self-assessment tools, such as the SAFER guides, to prepare for the evaluations (Sittig et al., 2014b). If health IT products have the potential to contribute to diagnostic errors or have other adverse effects on the diagnostic process, health IT vendors have a responsibility to communicate this information to their customers in a timely manner.
In addition to health IT, several emerging technologies, such as telemedicine/telehealth and mHealth/wearable technologies, present opportunities to improve the diagnostic process. This section examines the use of these technologies by health care professionals and patients to improve the diagnostic process.6
6 The use of emerging technologies in diagnosis and treatment raises a number of regulatory, legal, and policy issues that are beyond the scope of this discussion (such as privacy and security concerns, payment, credentialing, licensure, program integrity, liability, and others).
Telemedicine and Telehealth
Although the definitions vary, telemedicine and telehealth generally refer to the delivery of care, consultations, and information using communications technology (American Telemedicine Association, 2015). A 2012 IOM workshop defined both telemedicine and telehealth, saying that they “describe the use of medical information exchanged from one site to another via electronic communications to improve the patient’s health status. Although evolving, telemedicine is sometimes associated with direct patient clinical services and telehealth is sometimes associated with a broader definition of remote health care services” (IOM, 2012b, p. 3). Telemedicine encompasses an increasing array of applications and services, such as “two-way video, e-mail, smart phones, wireless tools, and other forms of telecommunication technology” (American Telemedicine Association, 2015).
Telemedicine typically is used in two settings: (1) between a clinician and a patient who is in a different location or (2) between two clinicians for consultations. The transmission of images, data, and sound can take place either synchronously (real-time), where the consulting clinician participates in the examination of the patient while diagnostic information is collected and transmitted, or asynchronously (anytime), through store-and-forward technology that transmits digital information for the consulting clinician to review at a later time.
As new payment and care delivery models are being implemented and evaluated, there is a growing recognition of the potential for technological capabilities to improve patient accessibility to health care services and also to improve care coordination and affordability. Telemedicine can create additional options for how individuals receive health care, while lessening the dependence on traditional in-person methods of receiving medical treatment. Telemedicine arrangements have emerged in a number of medical specialties (e.g., radiology, pathology, dermatology, ophthalmology, cardiology, neurology, geriatrics, and psychiatry), certain hospital service lines (e.g., home health and dentistry), and certain patient populations (e.g., prison inmates).
Telemedicine poses a number of challenges in the diagnostic process that may differ from those in traditional health care visits. For example, in the absence of a prior patient–clinician relationship, a clinician may not know enough details about the patient’s history to ask pertinent questions, which may lead clinicians to overutilize diagnostic testing (Huff, 2014). In addition, telemedicine approaches can limit a clinician’s ability to perform a comprehensive physical exam; certain medical conditions cannot be diagnosed effectively via a telemedicine encounter (Robison, 2014). There is also the potential for technological failures and transmission errors during a telemedicine encounter that can impair the diagnostic process and medical evaluation (Carranza et al., 2010). It is important that both patients and clinicians fully understand the telemedicine process and its associated limitations and risks, including the scope of the diagnostic health care services that can be delivered safely through this medium. Additionally, in the absence of face-to-face interactions and a comprehensive physical exam, health care professionals may need to document their findings differently. Clinicians participating in telemedicine need to be attuned to care continuity and coordination issues and to convey effectively to their patients who has accountability for their care and whom they should contact for follow-up. Finally, health care professionals will need to keep abreast of professional standards of care and the relevant state laws that impose heightened requirements on particular telemedicine activities and that may affect the diagnostic process.
The following text provides an overview of telemedicine applications in radiology, pathology, and neurology.
Teleradiology has been a forerunner in telemedicine arrangements “with on-call emergency reporting being used in over 70 percent of radiology practices in the United States and general teleradiology by ‘nighthawk services’ around the world” (Krupinski, 2014, p. 5). In these arrangements, outsourced, off-hour radiology interpretations are provided by physicians credentialed in the United States who are either located within the United States or abroad. Continuous developments in picture archiving and communication systems and radiology information systems have strengthened the overall teleradiology process, including image capture, storage, processing, and reporting. In response to such developments, there has been an increase in the subspecialization of radiologists into systems- and disease-related specialties. Greater subspecialization has led to increased expansion and utilization of teleradiology in major urban as well as rural and medically underserved areas (Krupinski, 2014).
Telepathology is currently being used in select locations for a variety of clinical applications, including the diagnosis of frozen section specimens, primary histopathological diagnoses, second opinion diagnoses, and subspecialty pathology consultations, although telemedicine approaches could also be considered for clinical pathology purposes (Dunn
et al., 2009; Graham et al., 2009; Kayser et al., 2000; Massone et al., 2007). Telepathology involves a hub-site pathologist who can access a remote-site microscope and control the movement of the slide and adjust magnification, focus, and lighting while the images are viewed on a computer screen (Dunn et al., 2009). Because the field selection is accomplished by the consultant, the information obtained, except for digital imaging capabilities, is functionally the same as what the consultant would obtain using a microscope in his or her own office. By providing immediate access to off-site pathologists as well as direct access to subspecialty pathologists, telepathology has the potential to improve both diagnostic accuracy and speed (turnaround time) for the patients at the remote site. Moreover, a telepathology consultation allows the local pathologist and consulting pathologist to examine the case at the same time, which could improve the educational potential of the interaction because the local pathologist can observe firsthand the diagnostic approach employed by the consulting pathologist (Low, 2013).
One application of telemedicine in neurology is telestroke, a widespread and growing practice model (Krupinski, 2014; Silva et al., 2012). Successful management of acute ischemic stroke is extremely time-dependent, which makes it particularly important to have technological tools that can facilitate acute stroke evaluation and management in rural areas and other areas underserved by neurologists and thus improve post-stroke outcomes (Rubin and Demaerschalk, 2014).
A recent Mayo Clinic study explored the efficiency of remote neurological assessments in diagnosing concussions in football players on the sidelines of games in rural Arizona. For the study, an off-site neurologist used a portable unit to perform neurological exams on players who had suffered possible head injuries and recommended whether the players were safe to return to the field (Vargas et al., 2012). These types of innovations may help facilitate the diagnostic process, especially for time-sensitive medical conditions.
mHealth and Wearable Technologies
mHealth applications7 and wearable technologies8 are transforming health care delivery for both health care professionals and patients, and they have the potential to influence the diagnostic process. The recent proliferation of mHealth applications has produced a broad and evolving array of tools available to both clinicians and patients. mHealth applications are often designed to assist clinicians at the point of care and include drug reference guides, medical calculators, clinical practice guidelines, textbooks, literature search portals, and other decision support aids. Other mHealth applications are designed specifically for patients and facilitate the gathering of diagnostic data or assist patients in coordinating care by keeping track of their medical conditions, diagnostic tests, and treatments.
7 Mobile applications are software programs that have been developed to run on a computer or mobile device to accomplish a specific purpose.
8 Electronics embedded in watchbands, clothing, contact lenses, or other wearable equipment.
mHealth applications may augment traditional health care professional education by providing opportunities for interactive teaching and more personalized educational experiences for students. They also have the potential to support clinical decision making at the point of care (Boulos et al., 2014). A systematic review found an increase in the appropriateness of diagnostic and treatment decisions when mobile devices were used for clinical decision support, but the researchers noted that the evidence was limited; more research will be needed to determine whether, how, and in what circumstances these mobile devices help and how they should best be used (Divall et al., 2013). Other mHealth applications designed for clinicians may serve as an alternative to traditional health IT tools and have the potential to improve diagnosis in emergency or low-resource settings. For example, tablets could be used to view medical images, and recent evidence suggests that they are comparable to conventional picture archiving and communications systems or liquid-crystal display monitor systems in diagnosing several conditions, although further research is needed (Johnson et al., 2012; McLaughlin et al., 2012; Park et al., 2013). Smartphones have been used in conjunction with specialized attachments to make certain laboratory-based diagnostics more accessible (Laksanasopin et al., 2015). For example, an adaptor with electrocardiogram electrodes may transmit electrical data that can be used to detect abnormal heart rhythms (Lau et al., 2013). Future generations of such technologies may be even more advanced; there is an ongoing Qualcomm Tricorder XPRIZE in which teams are competing to build a device that can accurately diagnose 16 health conditions and assess five vital signs in real time (XPRIZE, 2015).
In response to an increasing demand from patients for self-monitoring tools, a plethora of patient-centered mHealth applications have become available. They can perform a variety of functions related to such lifestyle factors as weight management, activity levels, and smoking cessation. Patients may also leverage certain mHealth applications to actively participate in the diagnostic process, such as consumer symptom checkers,
which offer patients access to targeted searches based on their symptoms and enable patients to compile their own differential diagnoses, print out the results, and compare their findings with their clinicians’ findings. Other mHealth applications for patients, such as wearable technologies, are intended to facilitate data collection, and they offer an additional source of patient data which may improve clinicians’ ability to diagnose certain conditions. For example, patients with diabetes may synchronize a glucometer attachment to their mobile device to track blood glucose and upload the data through an Internet connection (Cafazzo et al., 2012).
Despite the potential for mHealth applications to improve diagnosis, a number of challenges remain. In particular, the quality of mobile applications can be quite variable, and there are concerns about the accuracy and safety of these applications, especially about how well they conform to evidence-based recommendations (Chomutare et al., 2011; Powell et al., 2014). For example, Semigran and colleagues (2015, p. h3480) evaluated available symptom checkers for patients and concluded that “symptom checkers had deficits in both triage and diagnosis.” The evaluation found that the symptom checkers identified the correct diagnosis first in 34 percent of the cases, and they listed the correct diagnosis among the top 20 results in 58 percent of the cases (Semigran et al., 2015). Jutel and Lupton (2015, p. 94) call for further research on these applications given their variable development and quality—“the sheer number and constant proliferation of medical apps in general pose difficulties for regulatory agencies to maintain oversight of their quality and accuracy”—as well as their impact on the patient–clinician relationship.
Furthermore, there is a lack of data that support or identify the best practices for their use, including integrating such technologies with EHRs, patient monitoring systems, and other health IT infrastructure (Mosa et al., 2012). Issues related to usability and health literacy will also need to be addressed in order to ensure that mHealth applications effectively meet user needs and facilitate the diagnostic process. The rapid pace of innovation and the evolving regulatory framework for mHealth are other challenges (Cortez et al., 2014).
Goal 3: Ensure that health information technologies support patients and health care professionals in the diagnostic process
Recommendation 3a: Health information technology (health IT) vendors and the Office of the National Coordinator for Health Information Technology (ONC) should work together with users to ensure that health IT used in the diagnostic process demonstrates
usability, incorporates human factors knowledge, integrates measurement capability, fits well within clinical workflow, provides clinical decision support, and facilitates the timely flow of information among patients and health care professionals involved in the diagnostic process.
Recommendation 3b: ONC should require health IT vendors to meet standards for interoperability among different health IT systems to support effective, efficient, and structured flow of patient information across care settings to facilitate the diagnostic process by 2018.
Recommendation 3c: The Secretary of Health and Human Services should require health IT vendors to:
- Routinely submit their products for independent evaluation and notify users about potential adverse effects on the diagnostic process related to the use of their products.
- Permit and support the free exchange of information about real-time user experiences with health IT design and implementation that adversely affect the diagnostic process.
REFERENCES
Adler-Milstein, J. 2015. America’s health IT transformation: Translating the promise of electronic health records into better care. Paper presented before the U.S. Senate Committee on Health, Education, Labor and Pensions, March 17. www.help.senate.gov/imo/media/doc/Adler-Milstein.pdf (accessed June 5, 2015).
AHIMA (American Health Information Management Association). 2011. Problem list guidance in the EHR. Journal of AHIMA 82(9):52–58.
AHIMA. 2014. Appropriate use of the copy and paste functionality in electronic health records. www.ahima.org/topics/ehr (accessed March 27, 2015).
AHLA (American Health Lawyers Association). 2013. Minimizing EHR-related serious safety events. www.mmicgroup.com/resources/industry-news-and-updates/2013/369-ahla-resource-to-minimize-ehr-related-serious-safety-events (accessed July 29, 2015).
AHRQ (Agency for Healthcare Research and Quality). 2010. Electronic Health Record Usability: Vendor Practices and Perspectives. AHRQ Publication No. 09(10)-0091-3-EF. Rockville, MD: Agency for Healthcare Research and Quality.
Alkasab, T. K., J. R. Alkasab, and H. H. Abujudeh. 2009. Effects of a computerized provider order entry system on clinical histories provided in emergency department radiology requisitions. Journal of the American College of Radiology 6(3):194–200.
Allen, A. 2015. Doctors say data fees are blocking health reform. Politico, February 23. www.politico.com/story/2015/02/data-fees-health-care-reform-115402.html (accessed June 6, 2015).
Allen, B., and W. T. Thorwarth. 2014. Comments from the American College of Radiology. Input submitted to the Committee on Diagnostic Error in Health Care, November 5 and December 29, 2014, Washington, DC.
AMA (American Medical Association). 2014. Improving care: Priorities to improve electronic health record usability. www.ama-assn.org/ama/pub/about-ama/strategic-focus/enhancing-professional-satisfaction-and-practice-sustainability.page (accessed February 9, 2015).
American Telemedicine Association. 2015. What is telemedicine? www.americantelemed.org/about-telemedicine/what-is-telemedicine#.VWHni0b9x7x (accessed May 24, 2015).
Arnaout, R. 2012. Elementary, my dear Doctor Watson. Clinical Chemistry 58(6):986–988.
Ash, J. S., M. Berg, and E. Coiera. 2004. Some unintended consequences of information technology in health care: The nature of patient care information system-related errors. Journal of the American Medical Informatics Association 11(2):104–112.
Ash, J. S., D. F. Sittig, E. M. Campbell, K. P. Guappone, and R. H. Dykstra. 2007. Some unintended consequences of clinical decision support systems. AMIA Annual Symposium Proceedings 26–30.
Basch, P. 2014. ONC’s 10-year roadmap towards interoperability requires changes to the meaningful use program. http://healthaffairs.org/blog/2014/11/03/oncs-10-year-roadmap-towards-interoperability-requires-changes-to-the-meaningful-use-program (accessed March 27, 2015).
Berenson, R. A., P. Basch, and A. Sussex. 2011. Revisiting E&M visit guidelines—A missing piece of payment reform. New England Journal of Medicine 364(20):1892–1895.
Berner, E. S. 2014. What can be done to increase the use of diagnostic decision support systems? Diagnosis 1(1):119–123.
Bhan, S. N., C. L. Coblentz, and S. H. Ali. 2008. Effect of voice recognition on radiologist reporting time. Canadian Association of Radiologists Journal 59(4):203–209.
Bond, W. F., L. M. Schwartz, K. R. Weaver, D. Levick, M. Giuliano, and M. L. Graber. 2012. Differential diagnosis generators: An evaluation of currently available computer programs. Journal of General Internal Medicine 27(2):213–219.
Boulos, M. N., A. C. Brewer, C. Karimkhani, D. B. Buller, and R. P. Dellavalle. 2014. Mobile medical and health apps: State of the art, concerns, regulatory control and certification. Online Journal of Public Health Informatics 5(3):229.
Cafazzo, J. A., M. Casselman, N. Hamming, D. K. Katzman, and M. R. Palmert. 2012. Design of an mHealth app for the self-management of adolescent type 1 diabetes: A pilot study. Journal of Medical Internet Research 14(3):e70.
Campbell, S. G., P. Croskerry, and W. F. Bond. 2007. Profiles in patient safety: A “perfect storm” in the emergency department. Academic Emergency Medicine 14(8):743–749.
Carayon, P., T. B. Wetterneck, A. S. Hundt, S. Rough, and M. Schroeder. 2008. Continuous technology implementation in health care: The case of advanced IV infusion pump technology. In K. Zink (ed.), Corporate sustainability as a challenge for comprehensive management (pp. 139–151). New York: Springer.
Carayon, P., B.-T. Karsh, and R. S. Cartmill. 2010. Incorporating health information technology into workflow redesign: Summary report. AHRQ Publication No. 10-0098-EF. Rockville, MD: Agency for Healthcare Research and Quality.
Carayon, P., H. Faye, A. S. Hundt, B.-T. Karsh, and T. Wetterneck. 2011. Patient safety and proactive risk assessment. In Y. Yuehwern (ed.), Handbook of healthcare delivery systems (pp. 12-1–12-15). Boca Raton, FL: Taylor & Francis.
Carayon, P., T. B. Wetterneck, B. Alyousef, R. L. Brown, R. S. Cartmill, K. McGuire, P. L. Hoonakker, J. Slagle, K. S. Van Roy, J. M. Walker, M. B. Weinger, A. Xie, and K. E. Wood. 2015. Impact of electronic health record technology on the work and workflow of physicians in the intensive care unit. International Journal of Medical Informatics 84(8):578–594.
Carranza, N., V. Ramos, F. G. Lizana, J. Garcia, A. del Pozo, and J. L. Monteagudo. 2010. A literature review of transmission effectiveness and electromagnetic compatibility in home telemedicine environments to evaluate safety and security. Telemedicine Journal and E Health 16(7):818–826.
Castaneda, C., K. Nalley, C. Mannion, P. Bhattacharyya, P. Blake, A. Pecora, A. Goy, and K. S. Suh. 2015. Clinical decision support systems for improving diagnostic accuracy and achieving precision medicine. Journal of Clinical Bioinformatics 5(1):4.
CDC (Centers for Disease Control and Prevention). 2014. The Essential Role of Laboratory Professionals: Ensuring the Safety and Effectiveness of Laboratory Data in Electronic Health Record Systems. www.cdc.gov/labhit/paper/Laboratory_Data_in_EHRs_2014.pdf (accessed July 27, 2015).
CDC. 2015. 2014 Ebola outbreak in West Africa. www.cdc.gov/vhf/ebola/outbreaks/2014-west-africa (accessed May 4, 2015).
CHCF (California HealthCare Foundation). 2014. Ten years in: Charting the progress of health information exchange in the U.S. www.chcf.org/~/media/MEDIA%20LIBRARY%20Files/PDF/T/PDF%20TenYearsProgressHIE.pdf (accessed February 9, 2015).
Chomutare, T., L. Fernandez-Luque, E. Arsand, and G. Hartvigsen. 2011. Features of mobile diabetes applications: Review of the literature and analysis of current applications compared against evidence-based guidelines. Journal of Medical Internet Research 13(3):e65.
CLIAC (Clinical Laboratory Improvement Advisory Committee). 2012. Letter to HHS Secretary. wwwn.cdc.gov/CLIAC/pdf/2012_Oct_CLIAC_%20to_Secretary_re_EHR.pdf (accessed August 11, 2015).
Cortez, N. G., I. G. Cohen, and A. S. Kesselheim. 2014. FDA regulation of mobile health technologies. New England Journal of Medicine 371(4):372–379.
CQPI (Center for Quality and Productivity Improvement). 2015. Usability tools. www.cqpi.wisc.edu/usability-tools.htm (accessed May 3, 2015).
Dallas Morning News. 2014. Full transcript: Dr. Joseph Howard Meier’s responses to questions from The Dallas Morning News. www.dallasnews.com/ebola/headlines/20141206-full-transcript-dr.-joseph-howard-meier-s-responses-to-questions-from-the-dallas-morning-news.ece (accessed March 30, 2015).
de la Cruz, J. E., J. C. Shabosky, M. Albrecht, T. R. Clark, J. C. Milbrandt, S. J. Markwell, and J. A. Kegg. 2014. Typed versus voice recognition for data entry in electronic health records: Emergency physician time use and interruptions. Western Journal of Emergency Medicine 15(4):541–547.
Del Fiol, G., P. J. Haug, J. J. Cimino, S. P. Narus, C. Norlin, and J. A. Mitchell. 2008. Effectiveness of topic-specific infobuttons: A randomized controlled trial. Journal of the American Medical Informatics Association 15(6):752–759.
Divall, P., J. Camosso-Stefinovic, and R. Baker. 2013. The use of personal digital assistants in clinical decision making by health care professionals: A systematic review. Health Informatics Journal 19(1):16–28.
Dunklin, R., and S. Thompson. 2014. ER doctor discusses role in Ebola patient’s initial misdiagnosis. Dallas Morning News, December 6. www.dallasnews.com/ebola/headlines/20141206-er-doctor-discusses-role-in-ebola-patients-initial-misdiagnosis.ece (accessed August 11, 2015).
Dunn, B. E., H. Choi, D. L. Recla, S. E. Kerr, and B. L. Wagenman. 2009. Robotic surgical telepathology between the Iron Mountain and Milwaukee Department of Veterans Affairs Medical Centers: A twelve year experience. Seminars in Diagnostic Pathology 26(4):187–193.
Dwoskin, E., and J. Walker. 2014. Can data from your fitbit transform medicine? The Wall Street Journal, June 23. www.wsj.com/articles/health-data-at-hand-with-trackers-1403561237 (accessed July 30, 2015).
El-Kareh, R., O. Hasan, and G. Schiff. 2013. Use of health information technology to reduce diagnostic error. BMJ Quality and Safety 22(Suppl 2):ii40–ii51.
Energy & Commerce Committee. 2014. Examining the U.S. public health response to the Ebola outbreak. Hearing. U.S. House of Representatives, Committee on Energy and Commerce, Subcommittee on Oversight and Investigations. October 16. http://energycommerce.house.gov/hearing/examining-us-public-health-response-ebola-outbreak (accessed June 6, 2015).
Epner, P. 2015. Input submitted to the Committee on Diagnostic Error in Health Care. January 13, 2015, Washington, DC.
Fratzke, J., S. Tucker, H. Shedenhelm, J. Arnold, T. Belda, and M. Petera. 2014. Enhancing nursing practice by utilizing voice recognition for direct documentation. Journal of Nursing Administration 44(2):79–86.
Friedman, A., J. C. Crosson, J. Howard, E. C. Clark, M. Pellerano, B. T. Karsh, B. Crabtree, C. R. Jaen, and D. J. Cohen. 2014. A typology of electronic health record workarounds in small-to-medium size primary care practices. Journal of the American Medical Informatics Association 21(e1):e78–e83.
Friedman, C. P., A. S. Elstein, F. M. Wolf, G. C. Murphy, T. M. Franz, P. S. Heckerling, P. L. Fine, T. M. Miller, and V. Abraham. 1999. Enhancement of clinicians’ diagnostic reasoning by computer-based consultation: A multisite study of 2 systems. JAMA 282:1851–1856.
Furukawa, M. F., J. King, V. Patel, C. J. Hsiao, J. Adler–Milstein, and A. K. Jha. 2014. Despite substantial progress in EHR adoption, health information exchange and patient engagement remain low in office settings. Health Affairs (Millwood) 33(9):1672–1679.
Gandhi, T. K., D. F. Sittig, M. Franklin, A. J. Sussman, D. G. Fairchild, and D. W. Bates. 2000. Communication breakdown in the outpatient referral process. Journal of General Internal Medicine 15(9):626–631.
Gasson, S. 2003. Human-centered vs. user-centered approaches to information system design. Journal of Information Technology Theory and Application 5(2):29–46.
Gawande, A. 2002. Complications: A surgeon’s notes on an imperfect science. New York: Picador.
Goddard, K., A. Roudsari, and J. C. Wyatt. 2012. Automation bias: A systematic review of frequency, effect mediators, and mitigators. Journal of the American Medical Informatics Association 19(1):121–127.
Goodman, K. W., E. S. Berner, M. A. Dente, B. Kaplan, R. Koppel, D. Rucker, D. Z. Sands, P. Winkelstein, and AMIA Board of Directors. 2011. Challenges in ethics, safety, best practices, and oversight regarding HIT vendors, their customers, and patients: A report of an AMIA special task force. Journal of the American Medical Informatics Association 18(1):77–81.
Graber, M. 2013. The incidence of diagnostic error in medicine. BMJ Quality and Safety 22(Suppl 2):ii21–ii27.
Graber, M. L., and A. Mathew. 2008. Performance of a web-based clinical diagnosis support system for internists. Journal of General Internal Medicine 23(Suppl 1):37–40.
Graham, A. R., A. K. Bhattacharyya, K. M. Scott, F. Lian, L. L. Grasso, L. C. Richter, J. B. Carpenter, S. Chiang, J. T. Henderson, A. M. Lopez, G. P. Barker, and R. S. Weinstein. 2009. Virtual slide telepathology for an academic teaching hospital surgical pathology quality assurance program. Human Pathology 40(8):1129–1136.
Greenhalgh, T., S. Hinder, K. Stramer, T. Bratan, and J. Russell. 2010. Adoption, non-adoption, and abandonment of a personal electronic health record: Case study of HealthSpace. BMJ 341:c5814.
Harrington, L., D. Kennerly, and C. Johnson. 2011. Safety issues related to the electronic medical record (EMR): Synthesis of the literature from the last decade, 2000–2009. Journal of Healthcare Management 56(1):31–43; discussion 43–44.
Hartung, D. M., J. Hunt, J. Siemienczuk, H. Miller, and D. R. Touchette. 2005. Clinical implications of an accurate problem list on heart failure treatment. Journal of General Internal Medicine 20(2):143–147.
Hawkins, C. M., C. G. Anton, W. M. Bankes, A. D. Leach, M. J. Zeno, R. M. Pryor, and D. B. Larson. 2014. Improving the availability of clinical history accompanying radiographic examinations in a large pediatric radiology department. American Journal of Roentgenology 202(4):790–796.
HealthIT.gov. 2013. Clinical decision support (CDS). www.healthit.gov/policy-researchers-implementers/clinical-decision-support-cds (accessed June 6, 2015).
Hill, R. G., Jr., L. M. Sears, and S. W. Melanson. 2013. 4000 clicks: A productivity analysis of electronic medical records in a community hospital ED. American Journal of Emergency Medicine 31(11):1591–1594.
HIMSS (Healthcare Information and Management Systems Society). 2009. Defining and testing EMR usability: Principles and proposed methods of EMR usability evaluation and rating. Chicago, IL: HIMSS.
HIMSS. 2014. What is interoperability? www.himss.org/library/interoperability-standards/what-is-interoperability (accessed February 9, 2015).
Holmes, C., M. Brown, D. St. Hilaire, and A. Wright. 2012. Healthcare provider attitudes towards the problem list in an electronic health record: A mixed-methods qualitative study. BMC Medical Informatics and Decision Making 12(1):127.
Houston, J. D., and F. W. Rupp. 2000. Experience with implementation of a radiology speech recognition system. Journal of Digital Imaging 13(3):124–128.
Howard, J., E. C. Clark, A. Friedman, J. C. Crosson, M. Pellerano, B. F. Crabtree, B. T. Karsh, C. R. Jaen, D. S. Bell, and D. J. Cohen. 2013. Electronic health record impact on work burden in small, unaffiliated, community-based primary care practices. Journal of General Internal Medicine 28(1):107–113.
Hoyt, R., and A. Yoshihashi. 2010. Lessons learned from implementation of voice recognition for documentation in the military electronic health record system. Perspectives in Health Information Management 7(Winter).
Hripcsak, G., and D. K. Vawdrey. 2013. Innovations in clinical documentation. Paper presented at HIT Policy Meaningful Use and Certification / Adoption Workgroups Clinical Documentation Hearing. Arlington, Virginia, February 13, 2013.
Hripcsak, G., D. K. Vawdrey, M. R. Fred, and S. B. Bostwick. 2011. Use of electronic clinical documentation: Time spent and team interactions. Journal of the American Medical Informatics Association 18(2):112–117.
Huff, C. 2014. Virtual visits pose real issues for physicians. www.acpinternist.org/archives/2014/11/virtual-visit.htm (accessed May 24, 2015).
IEA (International Ergonomics Association). 2000. The discipline of ergonomics. www.iea.cc/whats/index.html (accessed April 10, 2015).
IOM (Institute of Medicine). 2012a. Health IT and patient safety: Building safer systems for better care. Washington, DC: The National Academies Press.
IOM. 2012b. The role of telehealth in an evolving health care environment. Washington, DC: The National Academies Press.
ISO (International Organization for Standardization). 1998. Ergonomic requirements for office work with visual display terminals (VDTs)—Part 11: Guidance on usability. www.iso.org/obp/ui/#iso:std:iso:9241:-11:ed-1:v1:en (accessed February 25, 2015).
Jacobs, L. 2009. Interview with Lawrence Weed, MD—The father of the problem-oriented medical record looks ahead. The Permanente Journal 13(3):84–89.
Johnson, M., S. Lapkin, V. Long, P. Sanchez, H. Suominen, J. Basilakis, and L. Dawson. 2014. A systematic review of speech recognition technology in health care. BMC Medical Informatics and Decision Making 14:94.
Johnson, P. T., S. L. Zimmerman, D. Heath, J. Eng, K. M. Horton, W. W. Scott, and E. K. Fishman. 2012. The iPad as a mobile device for CT display and interpretation: Diagnostic accuracy for identification of pulmonary embolism. Emergency Radiology 19(4):323–327.
The Joint Commission. 2015a. Preventing copy-and-paste errors in the EHR. www.jointcommission.org/issues/article.aspx?Article=bj%2B%2F2w37MuZrouWveszI1weWZ7ufX%2FP4tLrLI85oCi0%3D (accessed March 27, 2015).
The Joint Commission. 2015b. Sentinel event alert. www.jointcommission.org/assets/1/18/SEA_54.pdf (accessed April 30, 2015).
Jutel, A., and D. Lupton. 2015. Digitizing diagnosis: A review of mobile applications in the diagnostic process. Diagnosis 2(2):89–96.
Kayser, K., M. Beyer, S. Blum, and G. Kayser. 2000. Recent developments and present status of telepathology. Analytical Cellular Pathology 21(3–4):101–106.
Koppel, R., T. Wetterneck, J. L. Telles, and B. T. Karsh. 2008. Workarounds to barcode medication administration systems: Their occurrences, causes, and threats to patient safety. Journal of the American Medical Informatics Association 15(4):408–423.
Kostopoulou, O., A. Rosen, T. Round, E. Wright, A. Douiri, and B. Delaney. 2015. Early diagnostic suggestions improve accuracy of GPs: A randomised controlled trial using computer-simulated patients. British Journal of General Practice 65(630):e49–e54.
Krupinski, E. A. 2014. Teleradiology: Current perspectives. Reports in Medical Imaging 7:5–14.
Kuhn, T., P. Basch, M. Barr, and T. Yackel. 2015. Clinical documentation in the 21st century: Executive summary of a policy position paper from the American College of Physicians. Annals of Internal Medicine 162(4):301–303.
Kuperman, G., and S. T. Rosenbloom. 2013. Paper presented at HIT Policy Meaningful Use and Certification/Adoption Workgroups Clinical Documentation Hearing. Arlington, Virginia, February 13, 2013.
Laksanasopin, T., T. W. Guo, S. Nayak, A. A. Sridhara, S. Xie, O. O. Olowookere, P. Cadinu, F. Meng, N. H. Chee, J. Kim, C. D. Chin, E. Munyazesa, P. Mugwaneza, A. J. Rai, V. Mugisha, A. R. Castro, D. Steinmiller, V. Linder, J. E. Justman, S. Nsanzimana, and S. K. Sia. 2015. A smartphone dongle for diagnosis of infectious diseases at the point of care. Science Translational Medicine 7(273):273re1.
Lau, J. K., N. Lowres, L. Neubeck, D. B. Brieger, R. W. Sy, C. D. Galloway, D. E. Albert, and S. B. Freedman. 2013. iPhone ECG application for community screening to detect silent atrial fibrillation: A novel technology to prevent stroke. International Journal of Cardiology 165(1):193–194.
Lobach, D. F., and W. E. Hammond. 1997. Computerized decision support based on a clinical practice guideline improves compliance with care standards. American Journal of Medicine 102(1):89–98.
Low, J. 2013. Telepathology: Guidance from The Royal College of Pathologists. www.rcpath.org/Resources/RCPath/Migrated%20Resources/Documents/G/G026_Telepathology_Oct13.pdf (accessed May 24, 2015).
Makam, A. N., H. J. Lanham, K. Batchelor, L. Samal, B. Moran, T. Howell-Stampley, L. Kirk, M. Cherukuri, N. Santini, L. K. Leykum, and E. A. Halm. 2013. Use and satisfaction with key functions of a common commercial electronic health record: A survey of primary care providers. BMC Medical Informatics and Decision Making 13:86.
Mangalmurti, S. S., L. Murtagh, and M. M. Mello. 2010. Medical malpractice liability in the age of electronic health records. New England Journal of Medicine 363(21):2060–2067.
Marceglia, S., P. Fontelo, and M. J. Ackerman. 2015. Transforming consumer health informatics: Connecting CHI applications to the health-IT ecosystem. Journal of the American Medical Informatics Association 22(e1):e210–e212.
Massone, C., H. P. Soyer, G. P. Lozzi, A. Di Stefani, B. Leinweber, G. Gabler, M. Asgari, R. Boldrini, L. Bugatti, V. Canzonieri, G. Ferrara, K. Kodama, D. Mehregan, F. Rongioletti, S. A. Janjua, V. Mashayekhi, I. Vassilaki, B. Zelger, B. Zgavec, L. Cerroni, and H. Kerl. 2007. Feasibility and diagnostic agreement in teledermatopathology using a virtual slide system. Human Pathology 38(4):546–554.
McLaughlin, P., S. O. Neill, N. Fanning, A. M. Mc Garrigle, O. J. Connor, G. Wyse, and M. M. Maher. 2012. Emergency CT brain: Preliminary interpretation with a tablet device: Image quality and diagnostic performance of the Apple iPad. Emergency Radiology 19(2):127–133.
Meeks, D. W., M. W. Smith, L. Taylor, D. F. Sittig, J. M. Scott, and H. Singh. 2014. An analysis of electronic health record-related patient safety concerns. Journal of the American Medical Informatics Association 21(6):1053–1059.
Meigs, J. B., E. Cagliero, A. Dubey, P. Murphy-Sheehy, C. Gildesgame, H. Chueh, M. J. Barry, D. E. Singer, and D. M. Nathan. 2003. A controlled trial of web-based diabetes disease management: The MGH diabetes primary care improvement project. Diabetes Care 26(3):750–757.
Middleton, B., M. Bloomrosen, M. A. Dente, B. Hashmat, R. Koppel, J. M. Overhage, T. H. Payne, S. T. Rosenbloom, C. Weaver, J. Zhang, and American Medical Informatics Association. 2013. Enhancing patient safety and quality of care by improving the usability of electronic health record systems: Recommendations from AMIA. Journal of the American Medical Informatics Association 20(e1):e2–e8.
Moacdieh, N., and N. B. Sarter. 2015. Display clutter: A review of definitions and measurement techniques. Human Factors 57(1):61–100.
Mosa, A. S., I. Yoo, and L. Sheets. 2012. A systematic review of healthcare applications for smartphones. BMC Medical Informatics and Decision Making 12:67.
NIST (National Institute of Standards and Technology). 2015. Usability. www.nist.gov/healthcare/usability (accessed April 6, 2015).
Obara, P., M. Sevenster, A. Travis, Y. Qian, C. Westin, and P. J. Chang. 2015. Evaluating the referring physician’s clinical history and indication as a means for communicating chronic conditions that are pertinent at the point of radiologic interpretation. Journal of Digital Imaging 28(3):272–282.
Ober, K. P. 2015. The electronic health record: Are we the tools of our tools? The Pharos 78(1):8–14.
O’Malley, A. S., K. Draper, R. Gourevitch, D. A. Cross, and S. H. Scholle. 2015. Electronic health records and support for primary care teamwork. Journal of the American Medical Informatics Association 22(2):426–434.
ONC (Office of the National Coordinator for Health Information Technology). 2014a. Connecting health and care for the nation: A 10-year vision to achieve an interoperable health IT infrastructure. Washington, DC: The Office of the National Coordinator for Health Information Technology. www.healthit.gov/sites/default/files/ONC10yearInteroperabilityConceptPaper.pdf (accessed February 9, 2015).
ONC. 2014b. Health information technology adverse event reporting: Analysis of two databases. Washington, DC: Office of the National Coordinator for Health Information Technology.
ONC. 2015. Report on health information blocking. Washington, DC: Office of the National Coordinator for Health Information Technology. http://healthit.gov/sites/default/files/reports/info_blocking_040915.pdf (accessed April 10, 2015).
Otte-Trojel, T., A. de Bont, J. van de Klundert, and T. G. Rundall. 2014. Characteristics of patient portals developed in the context of health information exchanges: Early policy effects of incentives in the meaningful use program in the United States. Journal of Medical Internet Research 16(11):e258.
Park, J. B., H. J. Choi, J. H. Lee, and B. S. Kang. 2013. An assessment of the iPad 2 as a CT teleradiology tool using brain CT with subtle intracranial hemorrhage under conventional illumination. Journal of Digital Imaging 26(4):683–690.
Parkash, V., A. Domfeh, P. Cohen, N. Fischbach, M. Pronovost, G. K. Haines 3rd, and P. Gershkovich. 2014. Are amended surgical pathology reports getting to the correct responsible care provider? American Journal of Clinical Pathology 142(1):58–63.
Peterson, M. C., J. H. Holbrook, D. Von Hales, N. L. Smith, and L. V. Staker. 1992. Contributions of the history, physical examination, and laboratory investigation in making medical diagnoses. Western Journal of Medicine 156(2):163–165.
Poon, E. G., J. S. Haas, A. Louise Puopolo, T. K. Gandhi, E. Burdick, D. W. Bates, and T. A. Brennan. 2004. Communication factors in the follow-up of abnormal mammograms. Journal of General Internal Medicine 19(4):316–323.
Powell, A. C., A. B. Landman, and D. W. Bates. 2014. In search of a few good apps. JAMA 311(18):1851–1852.
Prevedello, L. M., S. Ledbetter, C. Farkas, and R. Khorasani. 2014. Implementation of speech recognition in a community-based radiology practice: Effect on report turnaround times. Journal of the American College of Radiology 11(4):402–406.
Quint, L. E., D. J. Quint, and J. D. Myles. 2008. Frequency and spectrum of errors in final radiology reports generated with automatic speech recognition technology. Journal of the American College of Radiology 5(12):1196–1199.
Ramirez, E. 2012. Talking data with your doc: The doctors. http://quantifiedself.com/2012/04/talking-data-with-your-doc-the-doctors (accessed July 30, 2015).
Ramnarayan, P., R. R. Kapoor, J. Coren, V. Nanduri, A. Tomlinson, P. M. Taylor, J. C. Wyatt, and J. Britto. 2003. Measuring the impact of diagnostic decision support on the quality of clinical decision making: Development of a reliable and valid composite score. Journal of the American Medical Informatics Association 10:563–572.
Ramnarayan, P., G. C. Roberts, M. Coren, V. Nanduri, A. Tomlinson, P. M. Taylor, J. C. Wyatt, and J. F. Britto. 2006. Assessment of the potential impact of a reminder system on the reduction of diagnostic errors: A quasi-experimental study. BMC Medical Informatics and Decision Making 6:22.
Ramnarayan, P., N. Cronje, R. Brown, R. Negus, B. Coode, P. Moss, T. Hassan, W. Hamer, and J. Britto. 2007. Validation of a diagnostic reminder system in emergency medicine: A multi-centre study. Emergency Medicine Journal 24(9):619–624.
Rao, V. M., D. C. Levin, L. Parker, B. Cavanaugh, A. J. Frangos, and J. H. Sunshine. 2010. How widely is computer-aided detection used in screening and diagnostic mammography? Journal of the American College of Radiology 7(10):802–805.
Reiner, B., E. Siegel, Z. Protopapas, F. Hooper, H. Ghebrekidan, and M. Scanlon. 1999. Impact of filmless radiology on frequency of clinician consultations with radiologists. American Journal of Roentgenology 173(5):1169–1172.
Robison, J. 2014. Two major insurers recognize telemedicine. Las Vegas Review-Journal, October 5. www.reviewjournal.com/business/two-major-insurers-recognize-telemedicine (accessed May 24, 2015).
Rosenbloom, S. T., J. C. Denny, H. Xu, N. Lorenzi, W. W. Stead, and K. B. Johnson. 2011. Data from clinical notes: A perspective on the tension between structure and flexible documentation. Journal of the American Medical Informatics Association 18(2):181–186.
Roshanov, P. S., J. J. You, J. Dhaliwal, D. Koff, J. A. Mackay, L. Weise-Kelly, T. Navarro, N. L. Wilczynski, and R. B. Haynes. 2011. Can computerized clinical decision support systems improve practitioners’ diagnostic test ordering behavior? A decision-maker-researcher partnership systematic review. Implementation Science 6:88.
RTI International. 2014. RTI International to develop road map for health IT safety center. www.rti.org/newsroom/news.cfm?obj=FCC8767E-C2DA-EB8B-AD7E2F778E6CB91A (accessed March 27, 2015).
Rubin, M. N., and B. M. Demaerschalk. 2014. The use of telemedicine in the management of acute stroke. Neurosurgical Focus 36(1):E4.
Schiff, G. D. 2005. Introduction: Communicating critical test results. Joint Commission Journal of Quality and Patient Safety 31(2):63–65.
Schiff, G., and D. W. Bates. 2010. Can electronic clinical documentation help prevent diagnostic errors? New England Journal of Medicine 362(12):1066–1069.
Semigran, H. L., J. A. Linder, C. Gidengil, and A. Mehrotra. 2015. Evaluation of symptom checkers for self diagnosis and triage: Audit study. BMJ 351:h3480.
Sequist, T. D., T. K. Gandhi, A. S. Karson, J. M. Fiskio, D. Bugbee, M. Sperling, E. F. Cook, E. J. Orav, D. G. Fairchild, and D. W. Bates. 2005. A randomized trial of electronic clinical reminders to improve quality of care for diabetes and coronary artery disease. Journal of the American Medical Informatics Association 12(4):431–437.
Shenvi, E., and R. El-Kareh. 2014. Clinical criteria to screen for inpatient diagnostic errors: A scoping review. Diagnosis 2:3–19.
Silva, G. S., S. Farrell, E. Shandra, A. Viswanathan, and L. H. Schwamm. 2012. The status of telestroke in the United States: A survey of currently active stroke telemedicine programs. Stroke 43:2078–2085.
Simborg, D. W., B. H. Starfield, S. D. Horn, and S. A. Yourtee. 1976. Information factors affecting problem follow-up in ambulatory care. Medical Care 14(10):848–856.
Singh, H., H. S. Arora, M. S. Vij, R. Rao, M. M. Khan, and L. A. Petersen. 2007a. Communication outcomes of critical imaging results in a computerized notification system. Journal of the American Medical Informatics Association 14(4):459–466.
Singh, H., E. J. Thomas, M. M. Khan, and L. A. Petersen. 2007b. Identifying diagnostic errors in primary care using an electronic screening algorithm. Archives of Internal Medicine 167(3):302–308.
Singh, H., A. D. Naik, R. Rao, and L. A. Petersen. 2008. Reducing diagnostic errors through effective communication: Harnessing the power of information technology. Journal of General Internal Medicine 23(4):489–494.
Singh, H., T. Giardina, S. Forjuoh, M. Reis, S. Kosmach, M. Khan, and E. Thomas. 2012. Electronic health record-based surveillance of diagnostic errors in primary care. BMJ Quality and Safety 21:93–100.
Singh, H., C. Spitzmueller, N. J. Petersen, M. K. Sawhney, and D. F. Sittig. 2013. Information overload and missed test results in electronic health record-based settings. JAMA Internal Medicine 173(8):702–704.
Singh, M., and T. R. Pal. 2011. Voice recognition technology implementation in surgical pathology: Advantages and limitations. Archives of Pathology & Laboratory Medicine 135(11):1476–1481.
Sittig, D., and H. Singh. 2010. A new sociotechnical model for studying health information technology in complex adaptive healthcare systems. Quality and Safety in Health Care 19:i68–i74.
Sittig, D. F., and H. Singh. 2012. Electronic health records and national patient-safety goals. New England Journal of Medicine 367(19):1854–1860.
Sittig, D. F., D. C. Classen, and H. Singh. 2014a. Patient safety goals for the proposed Federal Health Information Technology Safety Center. Journal of the American Medical Informatics Association.
Sittig, D. F., J. S. Ash, and H. Singh. 2014b. The SAFER guides: Empowering organizations to improve the safety and effectiveness of electronic health records. American Journal of Managed Care 20(5):418–423.
Sittig, D. F., D. R. Murphy, M. W. Smith, E. Russo, A. Wright, and H. Singh. 2015. Graphical display of diagnostic test results in electronic health records: A comparison of 8 systems. Journal of the American Medical Informatics Association, March 18 [Epub ahead of print]. jamia.oxfordjournals.org/content/jaminfo/early/2015/03/18/jamia.ocv013.full.pdf (accessed December 8, 2015).
Tsai, T. C., and A. K. Jha. 2014. Hospital consolidation, competition, and quality: Is bigger necessarily better? JAMA 312(1):29–30.
Upadhyay, D. K., D. F. Sittig, and H. Singh. 2014. Ebola US Patient Zero: Lessons on misdiagnosis and effective use of electronic health records. Diagnosis October 23 [Epub ahead of print]. www.degruyter.com/dg/viewarticle.fullcontentlink:pdfeventlink/$002fj$002fdx.ahead-of-print$002fdx-2014-0064$002fdx-2014-0064.pdf?t:ac=j$002fdx.aheadof-print$002fdx-2014-0064$002fdx-2014-0064.xml (accessed December 8, 2015).
Uzuner, O., I. Goldstein, Y. Luo, and I. Kohane. 2008. Identifying patient smoking status from medical discharge records. Journal of the American Medical Informatics Association 15(1):14–24.
Vargas, B. B., D. D. Channer, D. W. Dodick, and B. M. Demaerschalk. 2012. Teleconcussion: An innovative approach to screening, diagnosis, and management of mild traumatic brain injury. Telemedicine and e-Health 18(10):803–806.
Verghese, A. 2008. Culture shock—patient as icon, icon as patient. New England Journal of Medicine 359(26):2748–2751.
Wachter, R. M. 2015. The digital doctor. New York: McGraw-Hill.
Walker, J. M., P. Carayon, N. Leveson, R. A. Paulus, J. Tooker, H. Chin, A. Bothe, Jr., and W. F. Stewart. 2008. EHR safety: The way forward to safe and effective systems. Journal of the American Medical Informatics Association 15(3):272–277.
Watson, W. 2014. Texas Health Presbyterian Hospital Dallas implements changes after Ebola event. www.texashealth.org/news/ebola-update-changes-implemented-after-ebola-event (accessed April 7, 2015).
Weed, L. L. 1968. Medical records that guide and teach. New England Journal of Medicine 278(11):593–600.
Weed, L. L., and L. Weed. 2011. Medicine in denial. USA: CreateSpace.
Weed, L. L., and L. Weed. 2014. Diagnosing diagnostic failure. Diagnosis 1(1):13–17.
Weed, L. L., and N. J. Zimny. 1989. The problem-oriented system, problem-knowledge coupling, and clinical decision making. Physical Therapy 69(7):565–568.
Westat. 2013. EHR contracts: Key contract terms for users to understand. www.healthit.gov/…/ehr_contracting_terms_final_508_compliant.pdf (accessed May 25, 2015).
XPRIZE. 2015. The prize: Empowering personal healthcare. http://tricorder.xprize.org/about/overview (accessed May 24, 2015).
Zakim, D., N. Braun, P. Fritz, and M. D. Alscher. 2008. Underutilization of information and knowledge in everyday medical practice: Evaluation of a computer-based solution. BMC Medical Informatics and Decision Making 8:50.