Rethinking Cost-Sharing: The Need for Health Insurance Reform in the U.S.

In the U.S., the system designed to protect Americans from the financial devastation of illness often ends up being the very source of that devastation. While it can be debated whether healthcare is a basic human right or just another commodity, there’s no denying that it’s expensive. Without health insurance, very few Americans can afford to get sick, yet even with insurance, the burden of cost-sharing often forces people to delay or forego necessary care.

The idea of cost-sharing might sound foreign to Canadians: in Canada, charging patients any copay for visiting a doctor or a hospital is illegal, so these services are fully covered with zero out-of-pocket (OOP) cost to the patient. In contrast, private insurance plans in the U.S. are heavily influenced by the concept of moral hazard, which suggests that when consumers don’t have to pay for healthcare, they will use more of it, leading to inefficiency and increased spending [1]. And so, in the U.S., patients are typically subject to “cost-sharing”.

But what do we know about moral hazard and how was it discovered in the context of healthcare?

The RAND Health Insurance Experiment

To explore the impact of cost-sharing on healthcare utilization, the U.S. federal government commissioned the RAND Corporation to run a large-scale social experiment from 1974 to 1981. The researchers wanted to determine how different levels of OOP costs influence the demand for healthcare services. Over 5,000 individuals from six different locations across the U.S. were randomly assigned to health insurance plans with differing cost-sharing levels: 0% (full coverage), 25%, 50%, or 95%. The hypothesis was that if moral hazard exists, those with lower cost-sharing would use more healthcare services [2]. In other words, if someone has full insurance with no OOP costs, they will visit the doctor more frequently and/or consume more prescription drugs.

The study's results confirmed that as OOP costs for health services decreased, the demand for these services increased. This trend was consistent across all types of services, including doctor and hospital visits, dental care, mental health services, and prescription drugs. Importantly, the reduction in healthcare utilization did not negatively impact the overall health of most participants, except for a minority of low-income individuals [2], [3].

Economists interpreted the RAND Health Insurance Experiment’s findings to suggest that when people reduce their demand for healthcare services without suffering negative health consequences, it indicates overutilization, meaning that some of the care they sought was not truly necessary. The study implied that people can be rational and make informed decisions about when they really need medical care. For instance, if someone wakes up with a mild stomach ache, they might opt to wait a couple of days and try home remedies before immediately seeking medical attention or prescription drugs [2], [3].

As a result, the study concluded that having some level of out-of-pocket cost is essential to discourage unnecessary healthcare spending and excessive prescription drug consumption. This principle – that patients should share in the cost of their care – became the gold standard for designing private health insurance plans in the U.S. [3]

A Typical Private Health Insurance Plan

To enroll in a private health insurance plan, Americans must pay a monthly membership fee known as a premium. A healthy individual might visit the doctor once a year, and the average Canadian might naively expect that the monthly premiums they have been paying all year would cover that visit. It turns out that the premium is just the entry fee and doesn’t guarantee any coverage. When medical services or prescription drugs are needed, the person must first pay their deductible – a 100% OOP cost. Only after reaching the deductible do individuals enter the co-insurance phase, where they still pay a portion – typically 20% – of every dollar spent on healthcare. And it doesn’t stop there. Only once they’ve reached their maximum OOP cost, which can be $9,000 annually or higher, does their insurance finally kick in and cover 100% of their medical expenses [4].
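The three phases described above – deductible, co-insurance, and the OOP maximum – can be sketched as a small calculation. The figures used here (a $5,000 deductible, 20% co-insurance, and a $9,000 OOP cap) are illustrative assumptions, not the terms of any specific plan:

```python
def patient_share(bill: float, deductible: float,
                  coinsurance: float, oop_max: float,
                  paid_so_far: float = 0.0) -> float:
    """Patient's out-of-pocket share of a single medical bill,
    given how much they have already paid this plan year."""
    owed = 0.0
    remaining = bill
    # Phase 1: everything up to the deductible is 100% out-of-pocket.
    deductible_left = max(deductible - paid_so_far, 0.0)
    deductible_payment = min(remaining, deductible_left)
    owed += deductible_payment
    remaining -= deductible_payment
    # Phase 2: co-insurance (e.g. 20%) on spending above the deductible.
    owed += remaining * coinsurance
    # Phase 3: the annual OOP maximum caps total patient spending.
    return min(owed, max(oop_max - paid_so_far, 0.0))

# A $20,000 hospital bill early in the plan year:
# $5,000 deductible + 20% of the remaining $15,000 = $8,000 out of pocket.
print(patient_share(20_000, deductible=5_000, coinsurance=0.20, oop_max=9_000))
```

Note that none of these payments include the premiums already paid: the patient in this example owes $8,000 at the point of care on top of a full year of membership fees.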

This system, where you pay up-front and pay again at the point of care, has left many Americans frustrated with the promise and the premise of health insurance coverage. In America, “being insured” can feel more like a distant dream than a reality, even for someone paying their monthly premiums.

The Need for Health Insurance Reform in the U.S.

The idea that some cost-sharing is necessary to prevent moral hazard has been stretched beyond its limits in the U.S., where countless people can’t afford healthcare, remain uninsured, or are forced to avoid hospitals and medication due to financial constraints. Indeed, in 2018, Americans launched over 250,000 GoFundMe campaigns to raise funds for medical expenses. Even a Nobel laureate physicist had to sell his medal to pay his medical bills [5]. Between 2013 and 2016, over 60% of bankruptcies in the U.S. were due to medical expenses [5]. These are not just statistics; this is a crisis.

True healthcare reform must address two critical issues.

The first needed reform is to cap health insurance out-of-pocket costs. Regardless of the price of a surgery, treatment, or drug, if a patient can't afford their deductible, they won't be able to access the service. For example, whether a drug costs $10,000 or $100,000, if a patient has a $5,000 deductible they can't meet, that drug remains out of reach because they must first pay their deductible before insurance coverage begins.

Price controls on branded medicines are a popular solution to high healthcare costs. But the hypothetical $100,000 drug could be price-capped at $10,000 and remain unaffordable without proper insurance. Not all price controls are so pointless. The fact is that most medicines have their own price control already built into them: they go generic. The seemingly high prices of these drugs while they are branded are more akin to a mortgage that society pays to own the drug as an inexpensive public good for the rest of time.

If the market doesn’t offer that mortgage-period reward – or can’t, because of price controls imposed before a drug goes generic – then investors won’t fund the R&D that generates all those novel medicines. So, if we want more medicines that can go generic, we have to count on insurance with reasonable OOP costs, not price controls, to make branded medicines affordable.

The second needed reform should address cases where medicines fail to go generic as intended by the patent system. This can happen because they are too complex to be copied, as is the case for gene therapies and complex biologics. In cases like these – or when companies find odious ways to extend their drugs’ mortgage periods – it would make sense, after the initial patent period of market-based pricing, to use price controls to ensure all medicines abide by the intent of the patent system by becoming inexpensive to society.

The purpose of health insurance is to make healthcare and prescription drugs affordable for those who need them. Cost-sharing may dissuade someone from making too many trips to the doctor for minor ailments, but not all utilization is “over-utilization.” Consider whether moral hazard is at play when diabetics need life-saving insulin or cancer patients fill a prescription for chemotherapy. No one is faking cancer to get free chemo; that’s absurd. And if a medicine isn’t appropriate for a patient, or there’s an equally effective but much cheaper alternative, insurance has other instruments to steer patients toward more appropriate care, such as step therapy or prior authorization. We must re-examine the concept of “skin in the game.” Using cost-sharing to prevent over-utilization is causing far too much harm, and healthcare reforms that cap OOP costs are essential to ensure Americans can afford the care they need.

Author: Edouard AL Chami


References

[1] Pauly, M. (1968). The Economics of Moral Hazard: Comment. The American Economic Review, 58(3), 531–537. https://www.jstor.org/stable/1813785.

[2] Aron-Dine, A., Einav, L., & Finkelstein, A. (2013). The RAND Health Insurance Experiment, Three Decades Later. Journal of Economic Perspectives, 27(1), 197–222. https://doi.org/10.1257/jep.27.1.197.

[3] Einav, L., & Finkelstein, A. (2018). Moral Hazard in Health Insurance: What We Know and How We Know It. Journal of the European Economic Association, 16(4), 957–982. https://doi.org/10.1093/jeea/jvy017.

[4] Ashford, K., & Glover, L. (2024, May 28). Understanding Copays, Coinsurance and Deductibles. NerdWallet. https://www.nerdwallet.com/article/health/coinsurance-vs-copay.

[5] Himmelstein, D. U., Lawless, R. M., Thorne, D., Foohey, P., & Woolhandler, S. (2019). Medical Bankruptcy: Still Common Despite the Affordable Care Act. American Journal of Public Health, 109(3), 431–433. https://doi.org/10.2105/AJPH.2018.304901.


Published on September 15, 2024