Improving Quality of Care by Addressing Preventable Harm
Written by Joanne Byron, BS, LPN, CCA, CHA, CHCO, CHBS, CHCM, CIFHA, CMDP, COCAS, CORCM, OHCC, ICDCT-CM/PCS
Human error is a major cause of patient harm and prevents the delivery of safe, quality care even in today’s heavily regulated environment. The Joint Commission publishes sentinel event data, and other agencies study and report statistics on human errors resulting in morbidity and/or mortality. With the advancement in technology, can we improve quality of care in a cost-effective manner through Artificial Intelligence methods? This article provides a very basic overview of a complex topic. Additional resources are listed at the end of this article.
Introduction
Artificial intelligence (AI) has emerged as a powerful tool for solving complex problems in diverse domains within healthcare. However, there are serious challenges to the development and use of these technologies as they relate to economic and regulatory considerations. Until those challenges are addressed, it is important to recognize harm due to human error and the potential for AI solutions to improve safety and quality of care in the United States.
With the advancement in technology, could implementing an AI solution prevent these types of errors? Let’s take a look at several cases published through the Agency for Healthcare Research and Quality (AHRQ).
“Copy and Paste” Notes and Auto-populated Text in the Electronic Health Record (posted on the Patient Safety Network October 31, 2023)
This WebM&M Case describes two cases illustrating several types of Electronic Health Record (EHR) errors, with a common thread of erroneous use of electronic text-generation functionality, such as copy/paste, copy forward, and automatically pulling information from other electronic sources to populate clinical notes.
Case #1: A 56-year-old man being treated for hypertension and diabetes requested a copy of his medical records before moving to a different state. The clinic staff gathered the necessary information, and before providing it to the patient, left the file on a physician’s desk for review. When the physician (who was new to this practice) reviewed the records, he noticed “h/o C section followed by hysterectomy” documented in the patient’s electronic problem list, along with a detailed gynecological history. Upon further review, the physician found that this history had been recorded in the patient’s record for several years.
- During a thorough review of the electronic health records, clinic staff found that the patient’s wife frequently visited the clinic with her husband, and that her medical history had been copied and pasted into his medical record.
- The patient was notified of the error before receiving a copy of his medical records with corrected information.
Case #2: A 38-year-old man was brought to the hospital with altered mental status after crashing while riding a motorcycle. On hospital day 2, the primary trauma surgery team ordered magnetic resonance imaging (MRI) of the brain, due to concern regarding possible hypoxic-ischemic encephalopathy, and spine consultants requested imaging of the spine. These studies, completed on hospital day 6, showed no evidence of traumatic or hypoxic brain injury and no vertebral or ligamentous injury; the radiologist noted “minimal cervical spine epidural blood or fluid with no compromise of the spinal cord.” This incidental finding was not documented in any progress notes.
- In the meantime, the patient slowly regained consciousness and the ability to intermittently follow commands in his primary language. On hospital days 5 through 10, the primary team’s progress notes documented identical examinations, each auto-populated from a previous note, including the phrase “moves all extremities.” However, nursing notes disagreed: a nurse documented that the patient withdrew his lower extremities (LE) to stimuli on day 7, but not on day 8. On hospital day 9, a nurse documented “unable to move BLE [bilateral LE], sensation intact, team aware.”
- Not until day 13 did the patient undergo a repeat MRI study, which showed a large C3-C7 epidural hematoma with cord compression. The patient underwent emergency laminectomy and decompression but had virtually complete paralysis below the C7 level upon hospital discharge and at follow-up six months later.
The commentary discusses other EHR-based documentation tools (such as dot phrases), the influence of new documentation guidelines, and the role of artificial intelligence (AI) tools to capture documentation. Read the article in full.
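As a simple illustration of how software could surface the copy-forward pattern seen in Case #2, the sketch below flags daily exam text that repeats verbatim from the previous note. The field names and threshold are hypothetical, not any particular EHR’s interface.

```python
from difflib import SequenceMatcher

def flag_copied_forward(notes, threshold=0.98):
    """Flag daily progress notes whose exam text is nearly identical to
    the previous day's note (a possible copy-forward).

    `notes` is a list of dicts with hypothetical keys 'hospital_day'
    and 'exam_text'; a real EHR export would look different.
    """
    flagged = []
    ordered = sorted(notes, key=lambda n: n["hospital_day"])
    for prev, curr in zip(ordered, ordered[1:]):
        similarity = SequenceMatcher(None, prev["exam_text"],
                                     curr["exam_text"]).ratio()
        if similarity >= threshold:
            flagged.append((curr["hospital_day"],
                            f"exam text {similarity:.0%} identical to day "
                            f"{prev['hospital_day']} - verify at bedside"))
    return flagged

# Example mirroring the case: identical auto-populated exam text, days 5-10
notes = [{"hospital_day": d, "exam_text": "Alert. Moves all extremities."}
         for d in range(5, 11)]
for day, message in flag_copied_forward(notes):
    print(f"Day {day}: {message}")
```

A flag like this does not establish what the patient can actually do; it simply prompts the team to re-examine and re-document rather than carry text forward.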
Weight and Height Juxtaposition in the Electronic Medical Record Causing an Accidental Medication Overdose (posted on the Patient Safety Network October 31, 2023)
A 2-year-old girl presented to the emergency department (ED) with joint swelling and rash following an upper respiratory infection. After receiving treatment and being discharged with a diagnosis of allergic urticaria, she returned the following day with worsening symptoms. Suspecting an allergic reaction to amoxicillin, the ED team prepared to administer methylprednisolone. However, the ED intake technician erroneously switched the patient’s height and weight in the electronic health record (EHR), resulting in an excessive dose being ordered and dispensed.
- An automatic error message was generated due to the substantial difference from previous weights, but this message was overlooked by the ED technician and the data entry error was not detected or corrected.
- The commentary discusses the importance of verifying medication orders before administration, optimizing alert notifications to minimize the risk of alert fatigue, and the role of root cause analysis to identify factors contributing to medication errors. Read more about this case.
Until improved solutions can be introduced, it is critical to have order entry staff properly trained. Computerized Physician Order Entry (CPOE) training is a critical interim step for unlicensed clinical support staff.
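To illustrate how a weight-plausibility alert such as the one described in this case might be structured, here is a minimal sketch with made-up thresholds and field names; a production CPOE system would use validated, age-specific reference ranges.

```python
def weight_entry_alerts(new_weight_kg, new_height_cm, prior_weights_kg,
                        max_relative_change=0.25):
    """Return human-readable warnings for an implausible weight entry.

    Thresholds and field names are illustrative only; a production
    CPOE system would use validated, age-specific reference ranges.
    """
    alerts = []
    if prior_weights_kg:
        last = prior_weights_kg[-1]
        change = abs(new_weight_kg - last) / last
        if change > max_relative_change:
            alerts.append(f"Weight changed {change:.0%} from last recorded "
                          f"value ({last} kg) - confirm before dosing.")
    # Crude transposition check: a toddler's height in cm is far larger
    # than a plausible weight in kg, so a weight entry that exceeds the
    # height entry suggests the two fields may have been swapped.
    if new_weight_kg > new_height_cm:
        alerts.append("Entered weight exceeds entered height - possible "
                      "height/weight transposition.")
    return alerts

# Example: a 2-year-old's height (86 cm) and weight (12 kg) entered in
# swapped fields triggers both warnings
print(weight_entry_alerts(new_weight_kg=86, new_height_cm=12,
                          prior_weights_kg=[11.5, 12.0]))
```

Even a simple check like this only helps if the resulting alert is presented in a way that staff cannot easily dismiss, which is why alert design and root cause analysis matter as much as the logic itself.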
Delay in Malignancy Diagnosis Reflects Systemic Failures (posted on the Patient Safety Network October 31, 2023)
A 32-year-old man presented to the hospital with a comminuted midshaft femoral fracture after a bicycle accident. Imaging suggested the fracture was pathologic and an open biopsy specimen was submitted to pathology for intraoperative consultation. In this case, a confluence of individual decisions and system failures resulted in the wrong surgical procedure being performed.
Care for the patient began appropriately with radiographic imaging and an intraoperative consultation through open biopsy of the fracture site. However, this procedure was followed by a series of events that increased the likelihood for harm, including the:
- Inability to provide a definitive diagnosis at the time of frozen section examination;
- Subsequent delay in establishing the final diagnosis;
- Presence of only one bone pathologist in the department without cross coverage for leave;
- Lack of documented communication history between the bone pathologist and the orthopedic surgery team;
- Lack of effective communication and proper handoff within the surgery and pathology teams;
- Commission of a diagnostic error by a pathologist lacking expertise in bone pathology; and
- Willingness of the surgical team to overlook the discrepancy between the initial radiologic findings and the pathologic diagnosis without further inquiry.
The biopsy materials were eventually sent in consultation to a nationally recognized bone pathologist, and the diagnosis of “osteosarcoma with high-grade features” was received several days later. Given this new diagnosis, it was evident the patient had undergone the incorrect surgical procedure, although the long-term ramifications of this error remained unclear. Read more about this case.
Improving Quality by Reducing Diagnostic Errors
As demonstrated above, diagnostic errors may cause harm to patients by preventing or delaying appropriate treatment, causing unnecessary or harmful treatment, or resulting in psychological or financial repercussions.
Diagnostic error, as defined by the National Academy of Medicine, is “the failure to (a) establish an accurate and timely explanation of the patient’s health problem(s) or (b) communicate that explanation to the patient.” This definition focuses on the outcomes of the diagnostic process, recognizing that diagnosis is an iterative process that solidifies as more information becomes available. The diagnosis needs to be timely and accurate so that appropriate treatment is initiated to optimize the patient’s outcome.
- According to a report by the Society to Improve Diagnosis in Medicine, diagnostic errors affect more than 12 million Americans each year, with aggregate costs likely in excess of $100 billion!
- The National Academy of Medicine’s report on improving diagnosis states that diagnostic errors contribute to approximately 10% of patient deaths and 6% to 17% of adverse events in hospitals.
Machine learning (ML), a subfield of AI, could revolutionize diagnosis by augmenting clinical diagnostic practice, resulting in earlier and better diagnoses that can save lives and reduce costs in both time and money. In recent years, for example, ML technology was reported to be equivalent to medical professionals in interpreting medical data in fields like radiology and dermatology. ML technology can assist medical professionals by completing repetitive tasks without fatigue and by flagging potential medical issues at the point of care.
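As a purely illustrative sketch of the “flag for review” idea, the snippet below trains a simple classifier on synthetic stand-in features and surfaces high-probability cases for a clinician to review first. It is not a validated diagnostic model, and the features, threshold, and data are assumptions made for the example.

```python
# Minimal, illustrative "flag for review" sketch using scikit-learn;
# the features and labels here are synthetic stand-ins, not clinical data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic "imaging features" (e.g., lesion size, density) and labels
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Flag studies whose predicted probability of abnormality is high so a
# clinician reviews them first; the 0.8 threshold is arbitrary here.
probs = model.predict_proba(X_test)[:, 1]
flagged = np.where(probs > 0.8)[0]
print(f"{len(flagged)} of {len(probs)} studies flagged for priority review")
```

The point is workflow prioritization, not replacement of clinical judgment: the model reorders the queue, and the clinician still makes the diagnosis.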
At the federal level, government agencies are at various stages of adopting ML-based diagnostic tests. For example, the Department of Veterans Affairs (VA), through its Veterans Health Administration (VHA), operates the largest integrated health care system in the U.S. and provides diagnostic services in support of 9 million enrolled veterans. VHA diagnostic services include clinical services of pathology and laboratory medicine, radiology, and nuclear medicine.
VA officials provided an example of one VA facility that recently started using AI to detect hemorrhages. According to these officials, the process for adopting diagnostic technologies, including those using ML, is unique to each VHA facility and depends on local mechanisms and funding. In addition, the Department of Defense’s Defense Innovation Unit reported it was working with the Defense Health Agency in training ML to help diagnose cancer.
Improving Quality by Reducing Sentinel Events
A sentinel event is a patient safety event that results in death, permanent harm or severe temporary harm. Sentinel events are debilitating to both patients and healthcare providers involved in the event. The Joint Commission has released its Sentinel Event Data after reviewing 720 serious adverse events from Jan. 1 through June 30, 2023. The most prevalent sentinel event types were:
- Falls (47%)
- Unintended retention of foreign object (9%)
- Assault/rape/sexual assault/homicide (8%)
- Wrong surgery (8%)
- Suicide (5%)
- Delay in treatment (5%)
Most reported sentinel events occurred in a hospital (88%). Of all the sentinel events, 18% were associated with patient death, 63% with severe temporary harm and 7% with permanent harm.
Can implementation of AI technology help reduce sentinel events?
- According to the American Hospital Association (AHA), yes. The use of AI has advanced patient safety by evaluating data to produce insights, improve decision-making and optimize health outcomes. Systems that incorporate AI can improve error detection, stratify patients and manage drug delivery.
- The Agency for Healthcare Research and Quality (AHRQ) conducted a systematic literature review of AI in patient safety outcomes. The review suggests that AI-enabled decision support systems can improve error detection, patient stratification, and drug management. However, additional evidence is needed to understand how well AI can predict safety outcomes. Many other published studies support the use of AI to improve patient safety; however, most of them rely on the cost and preventability of events to evaluate the potential of AI for improving safety.
Improving Quality by Reducing Retained Surgical Items (RSI)
A retained surgical item (RSI) is an item unintentionally left inside a patient (e.g., sponges, towels, device components, guidewires, needles, and instruments). RSI is classified as a never-event and can have drastic consequences for the patient, provider, and hospital. Historically, it has been among the sentinel events most frequently reported to the Joint Commission, and surgical sponges are the most commonly reported retained item.
According to an October 2023 article in HIMSS Media’s Healthcare IT News, “Medical errors can have devastating consequences for patients, and one of the most alarming errors is the retention of surgical items in a patient’s body after surgery. Retained surgical items (RSIs) can lead to severe complications, prolonged hospital stays, additional surgeries and even fatalities. According to an article in the Patient Safety in Surgery Journal, RSI events were the number one sentinel event in 2019. Hospitals and healthcare providers constantly seek innovative solutions to prevent such occurrences and improve patient safety.” This article also addresses the promise of reducing this type of error through data-driven video technology and advanced AI analytic software.
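The article cited above focuses on video technology and AI analytics. As a much simpler illustration of the underlying reconciliation idea, the sketch below compares items scanned onto the sterile field against items scanned off. The identifiers are hypothetical, and real adjunct systems rely on data-matrix or RFID scanning plus imaging rather than a manual list.

```python
from collections import Counter

def reconcile_sponge_counts(checked_in, checked_out):
    """Compare sponges scanned onto the field against sponges scanned
    off; any remainder must be located before closing.

    `checked_in` / `checked_out` are lists of hypothetical sponge IDs.
    """
    remaining = Counter(checked_in) - Counter(checked_out)
    return sorted(remaining.elements())

opened = ["SP-001", "SP-002", "SP-003", "SP-004"]
removed = ["SP-001", "SP-003", "SP-004"]
unaccounted = reconcile_sponge_counts(opened, removed)
if unaccounted:
    print("HOLD CLOSURE - unaccounted items:", unaccounted)
```

The value of automating the count is that it removes reliance on memory and verbal counts in a high-pressure environment; the surgical team still performs the search and documents the resolution.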
The Challenge of Integrating Artificial Intelligence to Improve Quality
AI is designed to enhance, not replace, traditional care delivery. Thoughtful implementation of AI offers boundless opportunities for clinical care improvements. However, medical providers may be reluctant to adopt ML technologies until their real-world performance has been adequately demonstrated in relevant and diverse clinical settings, according to experts, stakeholders, and the literature.
Before deciding to adopt a technology, medical providers want to know that it is appropriate for their patients and will improve outcomes. It must also be shown to be economically sound and feasible to implement in the health care setting.
One of the major challenges in effectively deploying AI in health care is managing implementation and maintenance costs. Nationally, non-profit hospital systems report an average profit margin of around 6.5%. (North Carolina State Health Plan and Johns Hopkins Bloomberg School of Public Health, 2021). These relatively slim margins encourage health care systems to be conservative in investing in unproven or novel technologies.
Robust analysis of cost savings and cost estimates in the deployment of AI in health care is still lagging, with only a small number of articles found in recent systematic reviews, most of which focus on specific cost elements (Wolff et al., 2020). Is reimbursement up to speed? Hardly. Investing in AI technology is costly, and organizations will hesitate when the return on investment is uncertain, particularly given the continued decline in reimbursement rates.
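To make the economics concrete, here is a back-of-the-envelope break-even calculation using entirely hypothetical figures; actual implementation costs, maintenance costs, and avoided-event savings vary widely by organization.

```python
# Hypothetical break-even estimate for an AI patient-safety tool.
# All figures are illustrative assumptions, not published benchmarks.
implementation_cost = 500_000       # one-time licensing + integration (assumed)
annual_maintenance = 100_000        # support, monitoring, revalidation (assumed)
avoided_events_per_year = 10        # adverse events prevented (assumed)
cost_per_event = 60_000             # average cost of one adverse event (assumed)

annual_savings = avoided_events_per_year * cost_per_event
net_annual_benefit = annual_savings - annual_maintenance
years_to_break_even = implementation_cost / net_annual_benefit
print(f"Estimated break-even: {years_to_break_even:.1f} years")
# With these assumptions: 10 * 60,000 - 100,000 = 500,000 per year net,
# so the 500,000 up-front cost is recovered in about one year.
```

On a 6.5% margin, even a modest shift in any of these assumed inputs can stretch the payback period considerably, which is exactly why health systems ask for rigorous cost evidence before investing.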
AI developers face several challenges evaluating and validating ML diagnostic technologies. First, developers have difficulty accessing high-quality representative data to train and validate their technologies. According to an industry developer, access to sufficient amounts of nonbiased, ethnically diverse, real-world training data is their primary challenge, in part because partnering with hospitals and academic centers to obtain data sets takes time, including time to build trust with these institutions. Institutions are often reluctant to share data, especially protected health information, due to privacy concerns.
The HIPAA Privacy Rule generally prohibits the use or disclosure of protected health information except in the circumstances set out in the regulations. Protected health information (PHI) is individually identifiable health information (IIHI) and includes information collected from an individual, including demographic information, that
- is created or received by a health care provider, health plan, or health care clearinghouse, and
- relates to the past, present or future physical or mental health condition of the individual, or the payment for the provision of health care, and
- identifies the individual or with respect to which there is a reasonable basis to believe the information can be used to identify the individual.
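For developers and institutions weighing data sharing, a simple pre-release screen for a handful of direct identifiers can serve as an early warning before a formal review. The sketch below uses illustrative patterns only and does not substitute for the HIPAA Safe Harbor or Expert Determination de-identification methods.

```python
import re

# Illustrative regex screens for a few direct identifiers before a
# dataset leaves the institution; real de-identification follows the
# HIPAA Safe Harbor or Expert Determination methods, not this sketch.
PATTERNS = {
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "date":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.\w+\b"),
}

def screen_for_identifiers(text):
    """Return the identifier types detected in a block of free text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

record = "Pt seen 10/31/2023, callback 216-555-0100 re: lab results."
print(screen_for_identifiers(record))   # ['phone', 'date']
```

A screen like this flags obvious problems cheaply; it cannot confirm that data are de-identified, and indirect identifiers (rare diagnoses, small geographic areas) still require expert judgment.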
Regulatory Oversight of AI
Gaps in the regulatory framework pose another challenge to the development and adoption of ML technologies. Regulatory requirements and standards for demonstrating real-world performance and clinical validity are insufficient for wide clinical adoption, according to experts, stakeholders, and the research literature. Until these gaps are filled, there are a few resources to consult regarding AI oversight:
The National Institute of Standards and Technology (NIST) offers a resource page, the “Trustworthy & Responsible Artificial Intelligence Resource Center (AIRC).” The AIRC supports all AI actors in the development and deployment of trustworthy and responsible AI technologies. It supports and operationalizes the NIST AI Risk Management Framework (AI RMF 1.0) and accompanying Playbook and will grow with enhancements to enable an interactive, role-based experience providing access to a wide range of relevant AI resources.
On April 25, 2023, the Federal Trade Commission (FTC), the Civil Rights Division of the U.S. Department of Justice (DOJ), the Consumer Financial Protection Bureau (CFPB), and the U.S. Equal Employment Opportunity Commission (EEOC) released a joint statement highlighting their commitment to "vigorously use [their] collective authorities to protect individuals" with respect to artificial intelligence and automated systems (AI).
On May 18, 2023, the FTC issued a warning that the increasing use of consumers’ biometric information and related technologies, including those powered by machine learning, raises significant consumer privacy and data security concerns and the potential for bias and discrimination.
- Read more about current regulatory oversight on the AIHC blog: Who Regulates Healthcare AI? Artificial Intelligence & Regulatory Compliance
About the author
Joanne Byron, BS, LPN, CCA, CHA, CHCO, CHBS, CHCM, CIFHA, CMDP, COCAS, CORCM, OHCC, ICDCT-CM/PCS. Joanne is currently the Chief Executive Officer of the American Institute of Healthcare Compliance, a Licensing/Certification non-profit partner with CMS. She shares her experience of over 40 years as a nurse, consultant, auditor and investigator in the health care field.
Recommended Reading
If you find this article interesting and want to learn more, Joanne suggests the following resources. This is a short list, but if you are just getting involved in this topic, it is a good place to start:
- Please review other related articles: AI articles and Leadership in a Value-Based Care Environment published by the American Institute of Healthcare Compliance
- Access a variety of information posted on the Patient Safety Network which provides free CME, toolkits and the latest news on patient safety
- Joint Commission page on Patient Safety Portals with resources to improve safety and quality of care
- CMS’ Quality Improvement Organizations (QIO) page – offering downloads on the QIO program progress. QIO is a group of health quality experts, clinicians, and consumers organized to improve the quality of care delivered to people with Medicare
Copyright © 2023 American Institute of Healthcare Compliance. All Rights Reserved.