Pardon the bad play on words, but “it was the best of hospitals, it was the worst of hospitals.” Within the past week I interacted with two healthcare organizations and had the chance to discuss not only how they report their quality outcomes, but also how they improve them. To protect both the guilty and the innocent, I will only state that both are large academic medical centers with similar services, physician leadership and quality organizational structures. In comparing the two organizations, the gargantuan differences in reporting frustrate me, and the ethics of the leaders involved in quality at one institution frankly disgust me. How can we accurately measure, and ultimately improve, quality outcomes if everyone is not “playing” honestly? And yes, I acknowledge that the “gaming” of quality scores has been occurring for decades. But does that make it right?

Hospital A’s leadership is instructing HIM leadership to rebill cases from over three years ago, changing present-on-admission (POA) statuses and the capture of complications in order to improve the hospital’s quality performance rankings. Now happily, this hospital’s compliance department is involved and will weigh in on whether or not this will occur. But the very fact that the quality leadership is pushing for this is discouraging. Nowhere was there any mention of efforts to actually improve outcomes. And of course, this philosophy tends to roll downhill. An attending in the orthopedics department was quoted as stating, “We don’t have any complications.” Really? Nothing unexpected ever happened in the human process of practicing medicine or operating on patients (and in a teaching institution to boot)?

Hospital B had a very different perspective, yet still realized dramatic improvements in two of its quality outcomes among the Agency for Healthcare Research and Quality (AHRQ) Patient Safety Indicators (PSI): PSI 4, Death Rate Among Surgical Inpatients with Serious Treatable Conditions, and PSI 11, Postoperative Respiratory Failure. The CMO noted that discussions had taken place with providers about the decision to operate on patients facing bleak outcomes. Leadership communicated that the decision-making process had to include not only a review of the patient’s likelihood of surviving the surgery, but also the likelihood of subsequent comorbid conditions and ultimately poor outcomes. In other words, just because they “could” operate did not mean they “should” operate. Hospital B’s leadership also guided the discussion to include whether to accept a patient with such a poor prognosis as a transfer from an outlying facility.

To improve PSI 11, they took an even more startling approach. At some of their facilities, ICU intensivists and critical care attendings co-managing patients in the post-operative period had been documenting diagnoses that lacked clinical validity, specifically respiratory failure and cardiogenic shock, in an effort to support higher E&M claims. This is exactly the kind of situation that tempts providers to find alternate diagnoses that do not impact quality scores but still allow them to submit higher-level E&M claims. Hospital B essentially said “too bad.” If there is not a clinically valid diagnosis for the immediate weaning period, or to support the patient being safely maintained overnight in the ICU on the ventilator, no higher-level E&M claim could be submitted. The organization took a strong stance on accuracy over reimbursement.

If only all institutions had the same philosophy as Hospital B, we could then have an accurate “Dickensian” measurement of the best and worst of hospitals.

Cheryl Manchenton is a Senior Inpatient Consultant and Project Manager for 3M Health Information Systems.