Re-Examining Clinical Decision Rules: A Cautionary Tale

By Chuck Dinerstein, MD, MBA — May 31, 2019
Clinical Decision Rules help physicians make judgments when they are uncertain. Unlike the Ten Commandments, they may grow less helpful as they age, even when based on sound, verified science.
By Philippe de Champaigne [Public domain]

One of the benefits of data and scientific inquiry is providing a framework for making complex decisions about life and death, often when there is incomplete information or little time to carefully assess the situation. In the analog age, those guidelines were aphorisms, little coded messages, like “If it ain’t broke, why fix it,” or “When you hear hoofbeats, think horses, not zebras.” In more modern times, we use a variety of statistical techniques, big data sets, and expert opinion to generate these “clinical decision rules” (CDRs) more objectively. A new study provides a cautionary tale.

If your heart is going to stop, what better place than in a hospital, a location with the personnel, knowledge, and equipment to jump-start it? In some cases, cardiac arrest is unwitnessed; perhaps the patient wasn’t on a monitor. Recognizing that this is the exception rather than the rule, a guideline was developed to help physicians determine how long to keep attempting to resuscitate a patient when they have little idea when the heart stopped. For the cynically minded, it is more than a simple efficiency decision about resources and time; it gets at a harder problem, futility. Nearly 20 years ago, a prospective study of almost 2,200 patients found three variables that predicted which patients “had no chance for survival”: an unwitnessed cardiac arrest, a heart rhythm that did not respond to shocks, and an inability to restore spontaneous circulation within 10 minutes.

The current study made use of the in-hospital cardiac arrest registry of the American Heart Association. Of the nearly 200,000 patients with cardiac arrests at 750 hospitals, 96,509 met some or all of those three criteria for having “no chance at survival.” The pertinent demographics and the characteristics of both the patients and their cardiac arrests [1] were essentially the same as in the research originally used to define and validate the three components.

“In summary, when applied to a large, diverse patient population, we found that approximately 1 in 5 patients with an IHCA met the UN10 CDR. Rates of survival to discharge and favorable neurologic survival were approximately 6% and 5%, respectively…”

It seems that a clinical decision rule, based on sound science and a well-performed validation, was incorrect for 6% of patients. In more statistical terms, the positive predictive value of the rule, the chance that a patient it says will die does, in fact, die, had dropped from 100% in the validation study to 93.7% when applied outside the confines of research. Why would that happen?
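The arithmetic behind that drop can be sketched in a few lines. This is an illustrative calculation only: the survival rate of 6.3% is inferred from the 93.7% positive predictive value quoted above, not taken directly from the study's tables.

```python
# Positive predictive value (PPV) of a rule that predicts death:
# PPV = (patients predicted to die who actually died) / (all patients predicted to die)
# Equivalently, PPV = 1 - survival rate among patients meeting the rule.

survival_rate = 0.063            # ~6% of patients meeting the rule survived to discharge
ppv = 1 - survival_rate          # fraction for whom the prediction of death held
print(f"PPV = {ppv:.1%}")        # prints "PPV = 93.7%"
```

In the original validation cohort no patient meeting all three criteria survived, so the survival rate was 0 and the PPV was 100%; any survivors at all pull the PPV below that ceiling.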

Like many of the much-publicized artificial intelligence algorithms, the scope of this rule is tightly constrained to the patients and protocols used in its validation, and after 20 years those have changed, changed enough to make its use suspect. A good rule then may not be as good a rule today. This raises a new concern: how will the new AI systems be maintained, or will they be forever locked into validation studies that grow less relevant with time, just as happened with this rule? Are there to be software upgrades, and if so, will they have to be approved and validated, and by whom? Ask a 737 Max pilot how that is working out.

I’ve had discussions with families about futile care, and for some, but not all, a 6% chance is not futile at all, because hope and loss are entangled in those judgments. In the current study, ignoring the rule meant that a thousand patients survived, the majority, about 75%, with no more than moderate neurologic injury. Guidelines, aphorisms, heuristics: they all help physicians frame decisions but are not a substitute for judgment accompanied by compassionate, empathetic communication. Authority derives from taking responsibility, and the track record of finding an algorithm or software “at fault” and holding it accountable is poor. If you want to know how it will play out in health care, watch how it plays out for Boeing.

[1] Age, gender, the percentages with and without effective electrical activity of the heart, and the mean duration of resuscitation attempts.

Source: Reexamination of the UN10 Rule to Discontinue Resuscitation During In-Hospital Cardiac Arrest, JAMA Network Open. DOI: 10.1001/jamanetworkopen.2019.4941


Chuck Dinerstein, MD, MBA

Director of Medicine

Charles Dinerstein, MD, MBA, FACS, is Director of Medicine at the American Council on Science and Health. He has over 25 years of experience as a vascular surgeon.
