The Centers for Medicare & Medicaid Services (CMS) released its ratings of our nation’s hospitals last week, causing considerable consternation among hospitals and academic medical centers. Here is the breakdown of the star ratings: (1)
As Kaiser Health News (KHN) reported,
“Steven Lipstein, the president of BJC HealthCare, which runs Barnes-Jewish Hospital and 13 others, said the major difference was the comparative affluence of the patients each served, with poorer-scoring hospitals located in lower-income areas. ‘The stars tell you more about the socio-demographics of the population being served than the quality of the hospital.’”
CMS found that hospitals that treated a large percentage of low-income patients “tended to do worse.” And KHN reported:
“Out of 288 hospitals that teach significant numbers of residents, six in 10 received below-average scores, the analysis found. Teaching hospitals comprised one-third of the facilities receiving one-star. A number were in high poverty areas, including two in Newark, N.J., and three in Detroit.”
The initial response of the Association of American Medical Colleges (AAMC) was captured in a statement by Dr. Janis Orlowski, who, referring to the prestigious academic centers, said: “These are hospitals that everyone in the know tries to get into, so we need to be careful about the consequences; this star rating can be misleading.”
But Dr. Darrell Kirch, president of the AAMC, was able to spin it more, well, academically:
“Hospitals cannot be rated like movies… We are extremely concerned about the potential consequences for patients that could result from portraying an overly simplistic picture of hospital quality with a star rating system that combines many complex factors and ignores the socio-demographic factors that have a real impact on health.”
As our own New York Center for Public Policy noted, New York was dead last in star ratings among all states where data was collected.
There is legitimate concern about the methodology used by CMS in its rankings. For example, the measures used in determining the ratings were developed and vetted by academicians, so it is somewhat disingenuous for them to now claim that these measures were inadequate.
The statement that New York was dead last is technically correct. But the average star rating for New York hospitals rests on a rating system that is itself a weighted average of 64 measures, so the results are highly dependent upon the weighting, which is, at the very least, somewhat arbitrary. For that reason, I do not think the report contains a great deal of useful information.
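To make the weighting concern concrete, here is a minimal sketch. The hospitals, group scores, and weights below are invented for illustration; this is not the actual CMS formula. The point is simply that two plausible weightings can reverse the ranking of the same two hospitals.

```python
# Hypothetical standardized group scores (0 = national average).
# Hospital A excels at mortality; hospital B excels at safety.
hospital_a = {"mortality": 0.5, "safety": -0.5}
hospital_b = {"mortality": -0.5, "safety": 0.5}

def score(hospital, weights):
    # Weighted average of the group scores.
    return sum(weights[g] * hospital[g] for g in weights)

weights_1 = {"mortality": 0.6, "safety": 0.4}  # favors mortality
weights_2 = {"mortality": 0.4, "safety": 0.6}  # favors safety

print(score(hospital_a, weights_1), score(hospital_b, weights_1))
# Hospital A ranks higher under weights_1 ...
print(score(hospital_a, weights_2), score(hospital_b, weights_2))
# ... and hospital B ranks higher under weights_2
```

Neither hospital changed; only the weights did. With 64 measures instead of two, there are far more ways for the chosen weights to drive the final stars.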
Here is the equation that is driving medical care today.
Value = Quality/Cost
While costs are readily determined, the definition of cost is not. Healthcare is an ongoing process, so longer timeframes, not single hospital admissions, are a better measure of expenses. Quality is harder to define, but CMS, working with the same academicians who created the ratings, developed reasonable measures of quality.
The star ratings consider three general areas. Measurable outcomes carry the greatest weight. An equivalent weight is given to the experience of patients with the health system (patient satisfaction). And finally, CMS looks at how we provide care—process measures. Here is the weighting:
The stars summarize the 64 measures making up these categories. But, with summation, we lose valuable, nuanced information. To understand New York hospital scores, we need to take a more detailed look. There is one last caveat.
The measures predominantly reflect the experience of Medicare patients, and the severity of illness must be taken into account so that we compare apples to apples. Let us first consider the three measures of outcome.
CMS looked at 30-day mortality – the number of patients dying within 30 days of being admitted to the hospital, whether they were at home, in a nursing home, or still in the hospital on day 30. Five medical conditions and two surgical categories were used to determine the ratings (2). These conditions encompass Medicare’s top expenses.
I left out "average" from the table so you could see where New York performed better or worse. The state was rated better for heart failure, stroke, and acute myocardial infarction. For pneumonia and CABG we were both above and below the benchmark, depending on the hospital, but overall our better hospitals outweighed our worse ones. For surgical deaths, I would call it a draw; we were average. While we could improve our care for pneumonia and strokes, I would say that overall New York was "better" at preventing mortality.
Safety of Care
The ratings measured patient safety in three major areas: (1) infections acquired in the hospital from surgery, procedures, or simply from being there; (2) the complications of total hip and knee replacement; and (3) the PSI-90, a composite category encompassing other safety concerns. Here are the infection data:
The good news— New York does better at avoiding infections from the dangerous C. difficile bacterium. For MRSA (methicillin-resistant Staphylococcus aureus), let's call it a draw. For the remaining infection safety issues— central lines, urinary catheters, and surgical infections— we do poorly, especially for infections following colon surgery.
The PSI-90 composite score has its critics, especially regarding the accurate reporting of events. The score includes pressure ulcers, postoperative hip fractures, sepsis, and wound breakdowns, as well as post-operative blood clots in the leg or lung, injuries to the lung from the placement of catheters, and inadvertent injuries to organs during surgery. The composite score mimics that of general care.
The good news is that joint replacement surgery in New York is safe. On the other hand, the PSI-90 composite score is abysmal. Bottom line: safe care in New York hospitals is a significant issue, but in certain areas, the state performs well.
Readmission of a patient within 30 days of discharge from the hospital is bad by any measure. It is resource intensive and extremely costly, doubling the cost of treating that patient. The CMS readmission measures are based on the same conditions as the mortality data, except that hip and knee replacement is substituted for surgical care. Additionally, CMS considers readmissions for any reason.
New York has fewer hospitals reporting readmissions for hip and knee surgeries, which is consistent with the overall safety of the state's hospitals. We meet the national benchmark for readmissions following acute myocardial infarction and CABG. For the rest, we do much worse – three to four times worse.
Readmissions are related to the care patients receive following hospitalization: the ability to pay for and understand how to take medications, to see your doctor for follow-up appointments, and to perform at home the activities of daily living – eating, bathing, getting dressed, and getting out of bed. All of these impact readmission, and these ‘non-medical’ concerns make readmission measures more difficult to compare. The socio-demographic problems of urban and rural New Yorkers, combined with our inability to provide an adequate safety net for patients who need one, increase our readmission rate. While readmissions are attributed primarily to hospitals, this is really a larger social issue, and it is why hospitals that serve these populations want socio-demographics – education, income, language ability, transportation – to be part of the rating mix, so we can be fairly compared to states with fewer of these problems.
In part two, we will consider the most surprising information that was found in the star measures—what did patients think about their experience?
(1) A total of 937 hospitals were not rated because there were too few cases to provide reliable evaluation. No Maryland hospitals were rated, since Medicare does not collect data in the state.
(2) Five medical conditions: acute myocardial infarction (AMI), chronic obstructive pulmonary disease (COPD), heart failure (HF), pneumonia (PN), and acute ischemic stroke; and two surgical categories, coronary artery bypass graft (CABG) surgery and deaths among surgical patients.