Politics and Policy of Comparative Effectiveness: Looking Back, Looking Ahead


Originally published by the Center for Studying Health System Change (HSC). HSC was a nonpartisan policy research organization funded principally by the Robert Wood Johnson Foundation.


Mathematica Issue Brief — June 24, 2010

Eugene C. Rich, Elizabeth Docteur

By 2010, comparative effectiveness research (CER) had risen to prominence in the national health policy debate. With health care spending consuming an ever-larger share of the economy, policymakers were searching for tools to distinguish which medical treatments, procedures, and interventions worked best under different circumstances. The idea was straightforward in concept: invest in research that compared the outcomes of alternative approaches to preventing, diagnosing, and treating health conditions, then use the resulting evidence to guide clinical decisions, inform coverage policies, and improve the value of health care spending.

Legislative Progress and Remaining Tensions

This research by Eugene C. Rich, M.D., and HSC Vice President Elizabeth Docteur, published as a Mathematica Policy Research Issue Brief, examined the political and policy landscape surrounding the expanded federal investment in CER. The American Recovery and Reinvestment Act of 2009 had provided $1.1 billion in new funding for comparative effectiveness research, and the Affordable Care Act established the Patient-Centered Outcomes Research Institute (PCORI) as a permanent, independent entity to set priorities, fund studies, and disseminate findings.

While these legislative actions represented significant progress, the authors noted that a number of fundamental political and policy disagreements had been deferred rather than resolved. Congress had explicitly prohibited PCORI from developing or employing cost-per-quality-adjusted-life-year thresholds, a cost-effectiveness metric commonly used in other countries to assess whether a treatment's health benefits justify its cost. This restriction reflected deep-seated concerns among some lawmakers and industry groups that CER findings could be used to ration care or to deny patients access to treatments their physicians recommended.

The Political Fault Lines

The politics surrounding CER were complex and cut across traditional ideological lines. Supporters argued that the U.S. health care system spent enormous sums on treatments whose relative effectiveness was unknown or poorly documented, and that rigorous head-to-head comparisons would help patients, physicians, and payers make better-informed decisions. Opponents worried that government-sponsored research comparing treatments would inevitably lead to one-size-fits-all coverage decisions that failed to account for individual patient variation, and that the research enterprise would be captured by those who wanted to use it primarily as a cost-containment tool.

The pharmaceutical and medical device industries had particular concerns that CER findings could be used to restrict coverage or reimbursement for their products, especially newer, more expensive therapies that might not demonstrate clear superiority over older, cheaper alternatives in every patient population. Patient advocacy groups were divided: some welcomed research that would help patients understand which treatments offered the best outcomes, while others feared that cost-driven coverage restrictions would limit access to care for vulnerable populations.

Unresolved Policy Questions

Rich and Docteur identified several key policy questions that remained open. How would CER findings be translated into coverage and payment decisions by Medicare, Medicaid, and private insurers? What safeguards would be needed to ensure that research priorities reflected patient needs rather than political or commercial interests? How could CER accommodate the growing understanding that treatments affect different patient subgroups differently, rather than yielding uniform recommendations for all patients? And could the new CER infrastructure maintain its credibility and independence in a politically charged environment where powerful stakeholders had strong financial interests in the outcomes?

The authors concluded that the continuing disagreements over these questions posed real challenges to the success of the nation's expanded CER initiative. Without consensus on how research findings should inform practice and policy, there was a risk that the substantial public investment in comparative effectiveness would fail to produce the improvements in care quality and cost control that its proponents envisioned. The path forward would require building trust among disparate stakeholders and developing transparent processes for translating evidence into actionable policy without crossing the politically sensitive line into perceived rationing.
