
AHRQ and CMS Public Reporting Measures Fail to Describe the True Safety of Hospitals

A new study from the Johns Hopkins Armstrong Institute for Patient Safety and Quality, published in the journal Medical Care, performed a systematic review and meta-analysis of two sets of safety measures used for pay-for-performance and public reporting. The measures evaluated in the study are used by several public rating systems, including U.S. News and World Report’s Best Hospitals, Leapfrog’s Hospital Safety Score, and the Centers for Medicare and Medicaid Services’ (CMS’) Star Ratings.

The two sets of measures evaluated are the Agency for Healthcare Research and Quality’s (AHRQ’s) Patient Safety Indicators (PSIs) and CMS’ Hospital-Acquired Condition (HAC) measures.

The investigators first performed a systematic review of all published medical research since 1990, looking for studies that addressed the validity of the HAC and PSI measures. They identified only 5 of the 40 safety measures with enough data in these prior studies to permit a pooled meta-analysis:

  • A. Iatrogenic Pneumothorax (PSI 6/HAC 17)
  • B. Central Line-associated Bloodstream Infections (PSI 7)
  • C. Postoperative hemorrhage/hematoma (PSI 9)
  • D. Postoperative deep vein thrombosis/pulmonary embolus (PSI 12)
  • E. Accidental Puncture/Laceration (PSI 15)

The investigators then performed a meta-analysis, pooling the results of all studies on the validity of each of these measures. Their findings, shown in the figure below (the diamond at the bottom of each lettered panel is the pooled estimate), indicate that only one measure, Measure E, PSI 15 (Accidental Puncture and Laceration), met the investigators’ criterion for validity: a positive predictive value (PPV) of at least 80%, meaning that at least 80% of the patients flagged by the measure as having an accidental puncture or laceration truly had one. Actual occurrence (the reference standard) of each reported safety event was determined, in each individual study, by medical chart review.

[Figure: forest plots of positive predictive values for measures A–E, with pooled estimates shown as diamonds]
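
To make the validity criterion concrete, here is a minimal sketch, with invented numbers, of how a PPV is computed against a chart-review reference standard; nothing in it comes from the study itself.

```python
# Minimal sketch with invented numbers: positive predictive value (PPV) of a
# claims-based safety measure, using chart review as the reference standard.

def positive_predictive_value(confirmed: int, flagged: int) -> float:
    """PPV = cases confirmed by chart review / cases flagged by the measure."""
    return confirmed / flagged

# Suppose billing data flag 50 cases of accidental puncture/laceration
# (PSI 15) and chart review confirms 42 of them (hypothetical counts).
ppv = positive_predictive_value(confirmed=42, flagged=50)
print(f"PPV = {ppv:.1%}")  # 84.0% -- would meet the study's 80% validity bar
```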

Measure C, PSI 9 (Postoperative hemorrhage or hematoma) came close to the 80% PPV threshold, with a pooled PPV of 78.6%.
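
For intuition about what pooling does, here is a deliberately crude sketch that combines hypothetical study-level counts into a single PPV; the actual meta-analysis used formal pooled estimation (the diamonds in the figure above), and all counts below are invented.

```python
# Crude illustration only: pooling PPV across studies by summing counts.
# The published meta-analysis used formal pooling methods; all counts here
# are invented for illustration.

studies = [
    # (cases confirmed by chart review, cases flagged by the measure)
    (30, 40),
    (55, 70),
    (18, 25),
]

confirmed = sum(c for c, _ in studies)
flagged = sum(f for _, f in studies)
print(f"Pooled PPV = {confirmed / flagged:.1%}")  # 76.3% -- below the 80% bar
```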

Based on these findings, the investigators conclude that these measures, widely used for public reporting and pay-for-performance, should not be used for either purpose:

 CMS and others have created payment incentives based on hospitals’ performance for a variety of hospital-acquired complications, which are measured with the respective PSIs and HAC measures. Policy makers and payers have argued that the PSIs and HAC measures are good enough for reporting and pay-for-performance, whereas many providers believe they are not. Our results suggest that the PSIs and HAC measures may not be valid enough and/or have insufficient data to support their use for these purposes. This is especially true given the potential financial impact these pay-for-performance approaches may have on the narrow financial margins on which most hospitals function.

 


Emergency Department Return Visits as a Quality Metric

A recent JAMA publication lead-authored by Dr. Amber Sabbatini examined the scientific soundness of emergency department (ED) return visits as a measure of the ED’s quality of care. ED return visits have been considered for wider adoption as a quality metric, especially for patients who are hospitalized during the return ED visit. The “quality” this metric is intended to measure is the quality of the ED care delivered, including the safety of the ED physician’s decision to discharge the patient. Patients returning to the ED within 7, 14, or 30 days of the initial visit are thus thought to reflect lower quality of care, particularly if they are admitted on the return visit, because this is presumed to reflect progression of the patient’s illness to a more severe state after they were mistakenly sent home.
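
As an illustration of how the metric itself might be operationalized, here is a small sketch computing 7-, 14-, and 30-day return-visit rates from visit records; the data layout, field names, and dates are my assumptions, not taken from the JAMA study.

```python
# Hypothetical sketch: 7/14/30-day ED return-visit rates from visit records.
# Data layout and dates are invented for illustration.

from datetime import date

# Date of each patient's index ED discharge
index_discharges = {"p1": date(2016, 1, 1), "p2": date(2016, 1, 5)}
# Subsequent ED visits: (patient_id, visit_date)
return_visits = [("p1", date(2016, 1, 6)), ("p2", date(2016, 2, 20))]

for window in (7, 14, 30):
    returned = {
        pid
        for pid, visit_date in return_visits
        if pid in index_discharges
        and 0 < (visit_date - index_discharges[pid]).days <= window
    }
    rate = len(returned) / len(index_discharges)
    print(f"{window}-day return-visit rate: {rate:.0%}")
```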

The authors compared in-hospital clinical and utilization outcomes (death, need for intensive care unit (ICU) care, length of stay, and cost) between two groups of patients: those admitted during their initial ED visit, and those who returned to the ED after discharge and were then hospitalized. They found that

patients who experienced an ED return visit that was associated with admission shortly after ED discharge had significantly lower rates of in-hospital mortality, ICU admission, and costs, but higher lengths of stay compared with admissions among patients without a return visit to the ED.

Patients who are initially sent home from the ED and then return and are admitted are actually less sick than those admitted to the hospital during the initial visit. In aggregate, they are not experiencing increasing severity of illness after discharge from the initial ED visit. In some EDs, this effect may partly reflect dilution, in that a return visit alone is reason to admit a patient regardless of how medically sick they are. A tongue-in-cheek ED adage states that if the pizza boy returns to the ED to deliver another pizza, you admit him.

Putting these findings in the context of Donabedian’s structure-process-outcome framework for measuring health care quality: ED revisits are being used to measure ED quality of care. ED quality of care is a process, a health care-related activity performed for or by a patient, but it goes unmeasured directly here because it is so hard to measure across all diagnoses together. Current ED measures of care quality include throughput metrics, such as ED length of stay and time from disposition decision to admission, as well as condition-specific metrics such as time to fibrinolytic treatment for ED patients with acute myocardial infarction.
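
As a toy illustration of the framework, one might tag each metric with its Donabedian category; the mapping below is my own, not from the post’s sources.

```python
# Toy mapping of example metrics to Donabedian categories; the
# classifications are my own illustration, not from the cited study.

donabedian_category = {
    "board-certified emergency physicians on staff": "structure",
    "ED length of stay": "process",
    "time from disposition decision to admission": "process",
    "time to fibrinolytic treatment for acute MI": "process",
    "in-hospital mortality": "outcome",
    "ED return visit within 30 days": "proxy for an outcome (contested)",
}

for metric, category in donabedian_category.items():
    print(f"{metric:50s} -> {category}")
```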

In Donabedian’s framework, outcomes measure the health state of the patient resulting from health care. Revisits to the ED are not a health state; they are used as a proxy for the outcome of “worsened health status.” By examining the clinical course of patients hospitalized on a return visit after ED discharge, the authors show that ED revisits are not a good proxy for post-ED-discharge health status.

Thus, ED revisits do not have good construct validity as a proxy for ED quality of care: they do not measure what they purport to measure. One important contributor to this poor validity is that patient-level factors beyond the control of the hospital are significant risk factors for revisits. Social determinants of health, the circumstances in which people live and work, powerfully affect health; they are estimated to have twice the impact on an individual’s overall health as the quality of that individual’s health care.

The concerns about construct validity and the impact of social determinants of health are similar to those I’ve discussed elsewhere in relation to hospital readmissions and to healthcare performance metrics more broadly.