There are many different organizations that work tirelessly every year to improve patient safety. What matters isn’t just the changes they recommend; it’s the information those recommendations are based on. Sometimes, half the battle is identifying the problem. In today’s hugely complex medical system, it can be hard to find the information that agencies need to enact change.
When the government or a nonprofit organization tries to evaluate a deceptively simple question like, “How many medical mistakes were there last year in Texas?” they have to draw on accurate information. The problem is, not every hospital defines “medical mistake” the same way, and many healthcare institutions track these events differently.
More than a blip on the U.S. Census.
It may seem technical, but these numbers can vary enormously. For example, one system may report 4 medical mistakes in a month while a different system reports 393 mistakes for the very same patients. Since these reports are so critical to how an institution rates itself and how the government rates programs like Medicare, scientists and analysts have been hard at work trying to create a better system.
In April 2011, a study by the Institute for Healthcare Improvement was published in Health Affairs that examined this question and put forth a new solution called the Global Trigger Tool. It’s been more than a year, but the question is more relevant than ever. As time has passed, the Global Trigger Tool has not spread as quickly as hoped. It’s not time to give up; it’s just time to remind the snail-paced healthcare system that there may be a better alternative out there when it comes to data collection.
To emphasize how crucial this is, Health Affairs published an article estimating the cost of preventable medical mistakes that harm patients at a whopping $17.1 billion. If that number doesn’t move you, remember that this isn’t just about data points – the findings of the Global Trigger Tool can save lives.
Outside the government wrestling over healthcare.
This is one of those issues that is actually beyond politics. The number of medical mistakes, or adverse events as the study calls them, is not directly related to party politics or court issues like tort reform. It’s all about taking an accurate picture of the healthcare system that uses real numbers to form conclusions rather than relying on wide-reaching political conjecture or individually targeted media investigations.
While you might think that politics has infected every area of the healthcare discussion, there is still work being done outside of party lines. Different groups may use the data to promote alternate theories, but the truth surrounding the problem remains the same.
What are our current systems like?
The study was designed to test a new approach against two current ones in light of past investigations. One of the most prevalent forms of adverse event detection is based on voluntarily reported sentinel events. That boils down to this: a doctor or medical professional witnesses a mistake in which a patient has been hurt, and then he or she reports it exactly as it happened. The system is geared toward really bad mistakes called “never events” – things that are never supposed to happen in modern medicine, like dying from a blood transfusion or having surgery on the wrong body part. Sentinel programs are suspect because they almost always report a low number of adverse events and because they rely so heavily on a kind of medical honor system.
Some hospitals have tried to implement automated systems that use complex codes on medical records to identify adverse events. There are two main automated systems, and both are evaluated by the study: the Agency for Healthcare Research and Quality’s Patient Safety Indicators and the Utah/Missouri Adverse Event Classification system. The Utah/Missouri system works slightly differently and is more comprehensive in its approach. The criticism of the automated systems is that they are neither sensitive nor specific enough to be accurate.
The article discusses one type of adverse event detection system that it does not test but uses as a reference: the Harvard Medical Practice Study. That study took an exhaustive approach in which a nurse or physician reviewed every patient’s complete medical record. It provided interesting results, but the researchers did not think it was viable.
Introducing the Global Trigger Tool.
The last system in the study is the newest one – the Global Trigger Tool. Developed and used in hospitals in the US and UK, the system uses clear definitions and methods to evaluate records. Closed patient charts are reevaluated by 2-3 medical employees (called “primary reviewers” – usually nurses or pharmacists, not doctors) who methodically look through specific chart areas for “triggers.” A trigger is basically something that looks a little fishy. The article gives examples like a medication stop order or abnormal blood work.
After a chart has been triggered, there is more in-depth information gathering, and ultimately the chart finds its way to a reviewing physician (called the “secondary reviewer”). The buck stops at the doctor, who has to examine and then sign off on the chart review him or herself.
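The two-stage review described above can be sketched as a simple pipeline. To be clear, the actual Global Trigger Tool is a manual chart-review method, not software – every function and field name below is hypothetical, used only to illustrate the escalation logic:

```python
# Illustrative sketch of the Global Trigger Tool's two-stage review.
# The real tool is a manual process; all names here are hypothetical.

# Example triggers mentioned in the article: a medication stop order,
# abnormal blood work.
TRIGGERS = {"medication_stop_order", "abnormal_blood_work"}

def primary_review(chart: dict) -> list:
    """Primary reviewers (nurses/pharmacists) scan specific chart
    areas for anything that matches a known trigger."""
    return [flag for flag in chart.get("flags", []) if flag in TRIGGERS]

def secondary_review(chart: dict, triggers: list) -> bool:
    """The reviewing physician examines a triggered chart and signs
    off on whether it reflects real harm to the patient."""
    return bool(triggers) and chart.get("patient_harmed", False)

def review_chart(chart: dict) -> bool:
    triggers = primary_review(chart)
    if not triggers:
        return False  # nothing fishy; no escalation to a physician
    return secondary_review(chart, triggers)

# A triggered chart with documented harm counts as an adverse event:
chart = {"flags": ["medication_stop_order"], "patient_harmed": True}
print(review_chart(chart))  # True
```

The design point is the escalation itself: most charts never reach a doctor, which is how the tool keeps physician involvement to a minimum.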
They tested all three, at the same time?
To make life more complicated, the methods were compared at three separate hospitals in randomly selected yet representative populations – at this point I’ll refer you back to the article for more details on their variable choices because it becomes very detailed.
In the end, 795 patient records were reviewed from the three different hospitals and each record was reviewed by each of the three systems. Combining all three systems, there were 393 adverse events identified.
- Sentinel programs reported 4 events (1.0%)
- Automated systems reported 35 events (8.9%)
- Global Trigger Tool reported 354 events (90.1%)
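Those percentages are simply each method’s share of the 393 combined events. A quick sketch reproduces the arithmetic (the counts are from the study; the variable names are illustrative):

```python
# Adverse events identified by each detection method in the study.
events_by_method = {
    "voluntary sentinel reporting": 4,
    "automated systems": 35,
    "Global Trigger Tool": 354,
}

total = sum(events_by_method.values())  # 393 combined events

# Each method's share of the combined total, to one decimal place.
for method, count in events_by_method.items():
    print(f"{method}: {count} events ({100 * count / total:.1f}%)")
```

Note that the shares sum to 100% only because the study attributes each of the 393 events to a single detecting method.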
Impact of the Global Trigger Tool.
Not only did the Global Trigger Tool detect more events, it was more specific and more accurate in its findings. Now, to be fair, the Global Trigger Tool’s definition of “adverse event” is not as dramatic as a “never event”; instead, it is one where the patient is unduly harmed. Harm is defined as “unintended physical injury resulting from or contributed to by medical care that requires additional monitoring, treatment, or hospitalization, or that results in death.”
Finding 393 mistakes sounds like a lot – and it is, but it includes a variety of mistakes that vary in severity. The Tool grades severity with a numbering system based on a preexisting index from the National Coordinating Council for Medication Error Reporting and Prevention for categorizing errors. Of the 393 mistakes, only 8 were fatalities. The biggest chunk of adverse events – 150 of them – came from medication-related errors, of which more than a hundred were harmful but not life-threatening.
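The NCC MERP index referenced above grades events on a lettered scale; categories E through I describe actual harm, running from temporary injury up to death. The category letters and descriptions below follow my understanding of that published index – the code itself is purely illustrative:

```python
# Harm-level categories from the NCC MERP index, as I understand it.
# Categories A-D describe errors that caused no harm, so a trigger-tool
# review would not count them as adverse events.
HARM_CATEGORIES = {
    "E": "temporary harm requiring intervention",
    "F": "temporary harm requiring initial or prolonged hospitalization",
    "G": "permanent patient harm",
    "H": "intervention required to sustain life",
    "I": "patient death",
}

def is_adverse_event(category: str) -> bool:
    """Only harm-level categories (E-I) count as adverse events."""
    return category in HARM_CATEGORIES

print(is_adverse_event("I"))  # True: the study's 8 fatalities fall here
print(is_adverse_event("B"))  # False: an error that caused no harm
```

Grading on a shared, preexisting scale is what lets one hospital’s 393 events be compared meaningfully against another’s.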
So what do the study results mean?
It means that adverse events are being widely underreported, at a rate that concerns statisticians, politicians, patients, and hospitals alike. A study done by the Department of Health and Human Services’ Office of Inspector General around the same time as this one came to the same conclusions.
The article concludes that while voluntary sentinel event (“never event”) reporting systems are the most common and “despite sizeable investments and aggressive promotional efforts by local hospitals,” the system simply does not detect most medical mistakes. To use the study as an example: of the 8 fatalities caused by medical mistakes during the study, not a single one was reported by the voluntary sentinel programs. The sentinel system may have its place in some incidents, but as a major reporting tool it is grossly inaccurate.
And the solution is… ?
In defense of the physicians and medical professionals on whom the sentinel system relies, the voluntary system places them in an awkward position where they must voluntarily risk their reputations or those of their peers. The lack of mandatory oversight here is dangerous. On the other hand, more direct, mandatory approaches like the Harvard-style study would be unfair to medical professionals: Harvard-style review takes up valuable time that doctors need to see patients, and hospitals and doctors’ offices are already overloaded. So how do we involve doctors without involving doctors too much?
Real hope lies in a system where physician involvement is kept to an efficient minimum. The Harvard model is too demanding of physicians, and a sentinel program puts medical professionals at odds with each other. Right now the Global Trigger Tool takes an approach based on a mixed group of medical professionals, but in the end a digitized system would be ideal. In the future, more time should be spent correcting medical mistakes than identifying them, which does not always seem to be the case in today’s system.
Presently, the two automated systems are not as accurate as the Global Trigger Tool, but the goal is that one day many human aspects of the triggering system will be computerized. Unfortunately, the sophisticated software necessary isn’t yet in place. If the right resources were invested in that kind of system, the healthcare industry could make some really positive changes in patient care. Until then, the Global Trigger Tool is the best option we have. It’s already been more than a year since the study showing the tool’s merits was published, but even with the best of knowledge and intentions the healthcare system is slow to change. I hope that more hospitals will embrace the Global Trigger Tool, so that we finally have the information to start a real, open discussion on healthcare.