Will Hospital Report Cards Make the Grade?
Issue Brief No. 11
Published: July 1997 | Updated: April 8, 2026
Originally published by the Center for Studying Health System Change (HSC), a nonpartisan policy research organization funded principally by the Robert Wood Johnson Foundation.
Hospital report cards documenting patients' medical outcomes were attracting increased attention for their potential to guide decisions by employers, consumers, and providers. But significant questions remained about their validity and utility. Based on an HSC seminar with two expert panels, this Issue Brief found that current efforts to collect and report clinical outcomes data were flawed -- but releasing data could still help improve clinical quality and foster an environment where quality information eventually shapes health care decisions.
How Report Cards Evolved
Risk-adjusted hospital outcome reports had been publicly available for over a decade, initially seen as revolutionary. Before outcomes data, hospitals were judged on reputation and JCAHO accreditation. HCFA launched hospital mortality reports in 1986. More than 10 years later, the record was mixed -- some initiatives had died, but overall activity was growing. States including New York, California, and Pennsylvania had mandated reporting. How the information was being used and its market impact remained unclear.
The Risk Adjustment Challenge
Risk adjustment -- accounting for differences in patients' health status -- was essential for credible report cards but far from straightforward. Multiple approaches existed, and the choice of methodology strongly influenced results. Administrative data were cheaper and more widely available but contained limited clinical information; medical record data were richer but expensive to collect. Iezzoni evaluated 14 severity-adjustment systems and found they varied widely both in the risk they assigned to individual patients and in which hospitals they flagged as low performers. Administrative systems, built from discharge abstracts, acted as retrospective 'post-dictors' -- complications arising during the stay were counted as patient risk, which could mask quality problems -- while clinical systems such as MedisGroups took a snapshot of the patient's condition at admission.
Despite imperfections, risk adjustment was needed for productive physician dialogue and to minimize incentives to avoid high-risk patients. Political consensus on methodology within a market mattered more than technical perfection, according to the Cleveland program's director. But nationally, JCAHO's program allowing hospitals to choose from 61 approved reporting systems threatened to undermine comparability.
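The brief describes the competing severity systems rather than their arithmetic, but a common way such report cards express risk-adjusted performance is indirect standardization: compare a hospital's observed deaths with the deaths its severity model expected given the patients it treated. The sketch below is illustrative only; the function name, patient counts, and probabilities are invented, and real programs such as MedisGroups or state mandates derive the expected probabilities from detailed administrative or clinical data.

```python
# Minimal sketch of indirect standardization, one common approach behind
# risk-adjusted mortality report cards. The expected probabilities would
# normally come from a severity-adjustment model; here they are made-up
# illustrative values.

def risk_adjusted_rate(observed_deaths, expected_probs, overall_rate):
    """Return (O/E ratio, risk-adjusted rate) for one hospital.

    observed_deaths: deaths actually observed at the hospital
    expected_probs:  predicted death probability for each patient treated,
                     taken from whatever severity model the program uses
    overall_rate:    crude death rate across all hospitals in the program
    """
    expected_deaths = sum(expected_probs)
    oe_ratio = observed_deaths / expected_deaths
    return oe_ratio, oe_ratio * overall_rate


# Hypothetical example: two hospitals with the same crude mortality
# (4 deaths in 100 cases) but different case mix. Hospital A treats
# sicker patients, so its adjusted rate comes out lower.
overall = 0.04
hospital_a = risk_adjusted_rate(4, [0.08] * 50 + [0.02] * 50, overall)
hospital_b = risk_adjusted_rate(4, [0.03] * 100, overall)

print("Hospital A: O/E = %.2f, adjusted rate = %.3f" % hospital_a)
print("Hospital B: O/E = %.2f, adjusted rate = %.3f" % hospital_b)
```

The example also shows why methodology choices matter: the ranking of the two hospitals depends entirely on the expected probabilities the chosen severity system produces.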
How Employers Use Performance Data
Hibbard's survey of large employers in four regions found many purchasers unaware the data existed; awareness ranged from 25 to 83 percent across the communities. Except in Cleveland, most did not use hospital outcomes data in purchasing, preferring satisfaction surveys and accreditation status. Barriers included doubts about the data's reliability, assumptions that health plans already reviewed it, formatting poorly suited to purchasing decisions, and cognitive shortcuts in decision-making. Only 20 percent used objective evaluation systems for cost-quality trade-offs, and about 31 percent shared performance information with employees.
Usage was evolving along a learning curve. Pennsylvania's experience showed that employer demands for information kept escalating even though there was little evidence of actual use. Employer and employee information needs also diverged -- employers focused on plan-level data, while consumers made decisions about specific procedures with local providers. The panelists agreed that producing and using outcomes data involved complex issues unfolding differently across markets, but that quality information might eventually become a market force in its own right.