When Algorithms Deny Care: The Insurance Industry's AI War Against Patients

The assassination of UnitedHealthcare CEO Brian Thompson in December 2024 laid bare a healthcare crisis where insurance companies use artificial intelligence to systematically deny care while posting record profits. Federal data shows UnitedHealthcare, which covers 49 million Americans, denied nearly one-third of all in-network claims in 2022 - the highest rate among major insurers.

This reflects an industry-wide strategy that insurance scholar Jay Feinman calls "delay, deny, defend" - now supercharged by AI. These systems automatically deny claims, delay payment, and force sick people to defend their right to care through complex appeals. A Commonwealth Fund survey found that 45% of insured working-age adults had been denied coverage for services they believed should have been covered.

The consequences are devastating. Documented cases show these automated denial systems routinely overriding physician recommendations for essential care, creating a system where algorithms, not doctors, decide who receives treatment. Yet among patients who do appeal, insurers approve at least some of the requested care about half the time - evidence that many denials do not hold up to scrutiny. This creates a perverse incentive structure: insurers can deny claims broadly, knowing most people will not fight back. For those trapped in this system, the stakes could not be higher - this is, quite literally, a matter of life and death.

The Rise of AI in Claims Processing

Health insurers have increasingly turned to AI systems to automate claims processing and denials, fundamentally changing how coverage decisions are made. A ProPublica investigation revealed that Cigna's PXDX system allows its doctors to deny claims without reviewing patient files, processing roughly 300,000 denials in just two months. "We literally click and submit. It takes all of 1.2 seconds to do 50 at a time," a former Cigna doctor reported.

The scope of automated denials extends beyond Cigna. UnitedHealth Group's NaviHealth uses an AI tool called "nH Predict" to determine length-of-stay recommendations for people in rehabilitation facilities. According to STAT News, this system generates precise predictions about recovery timelines and discharge dates without accounting for people's individual circumstances or their doctors' medical judgment. While NaviHealth claims its algorithm is merely a "guide" for discharge planning, its marketing materials boast about "significantly reducing costs specific to unnecessary care."

Only about 1% of denied claims are ever appealed, even though denials are frequently overturned when challenged. Insurers can therefore use AI to deny claims broadly, confident that most people will not contest the decisions. The practice raises serious ethical concerns about algorithmic decision-making in healthcare, especially when such systems prioritize cost savings over medical necessity and doctors' recommendations.

Impact on Patient Care

The human cost of AI-driven claim denials reveals a systemic strategy of "delay, deny, defend" that puts profits over patients. STAT News reports the case of Frances Walter, an 85-year-old with a shattered shoulder and pain medication allergies, whose story exemplifies the cruel efficiency of algorithmic denial systems. NaviHealth's algorithm predicted she would recover in 16.6 days, prompting her insurer to cut off payment despite medical notes showing she could not dress herself, use the bathroom independently, or operate a walker. She was forced to spend her life savings and enroll in Medicaid to continue necessary rehabilitation.

Walter's case is not unique. Despite her medical team's objections, UnitedHealthcare terminated her coverage based solely on an algorithm's prediction. Her appeal was denied twice, and when she finally received an administrative hearing, UnitedHealthcare did not even send a representative - yet the judge still sided with the company. Her case reveals how the system is stacked against patients: insurers can deny care with a keystroke, forcing people to navigate a complex appeals process while their health deteriorates.

The fundamental doctor-patient relationship is being undermined as healthcare facilities face increasing pressure to align their treatment recommendations with algorithmic predictions. The Commonwealth Fund found that 60% of people who face denials experience delayed care, with half reporting their health problems worsened while waiting for insurance approval. Behind each statistic are countless stories like Walter's - people suffering while fighting faceless algorithms for their right to medical care.

The AI Arms Race in Healthcare Claims

Healthcare providers are fighting back against automated denials by deploying their own AI tools. New startups like Claimable and FightHealthInsurance.com help patients and providers challenge insurer denials, with Claimable achieving an 85% success rate in overturning denials. Care New England reduced authorization-related denials by 55% using AI assistance.

While these counter-measures show promise, they highlight a perverse reality: healthcare providers must now divert critical resources away from patient care to wage algorithmic warfare against insurance companies. The Mayo Clinic has cut 30 full-time positions and spent $700,000 on AI tools simply to fight denials. As Dr. Robert Wachter of UCSF notes, "You have automatic conflict. Their AI will deny our AI, and we'll go back and forth."

This technological arms race exemplifies how far the American healthcare system has strayed from its purpose. Instead of focusing on patient care, providers must invest millions in AI tools to combat insurers' automated denial systems - resources that could be spent on direct patient care, medical research, or improving healthcare delivery. The emergence of these counter-measures, while potentially helpful for providers and patients seeking care, highlights fundamental flaws in our healthcare system that require policy solutions, not just technological fixes.

AI Bias: Amplifying Healthcare Inequities

The potential for AI systems to perpetuate and intensify existing healthcare disparities is deeply concerning. A comprehensive JAMA Network Open study examining insurance claim denials revealed that at-risk populations experience significantly higher denial rates.

The research found:

  • Low-income patients had 43% higher odds of claim denials compared to high-income patients

  • Patients with high school education or less experienced denial rates of 1.79%, versus 1.14% for college-educated patients

  • Racial and ethnic minorities faced disproportionate denial rates:

    • Asian patients: 2.72% denial rate

    • Hispanic patients: 2.44% denial rate

    • Non-Hispanic Black patients: 2.04% denial rate

    • Non-Hispanic White patients: 1.13% denial rate

The National Association of Insurance Commissioners (NAIC) Consumer Representatives report warns that AI tools, often trained on historically biased datasets, can "exacerbate existing bias and discrimination, particularly for marginalized and disenfranchised communities."

These systemic biases stem from persistent underrepresentation in clinical research datasets, which means AI algorithms learn and perpetuate historical inequities. The result is a feedback loop where technological "efficiency" becomes a mechanism for deepening healthcare disparities.

Legislative Response and Regulatory Oversight

While California's Physicians Make Decisions Act and new Centers for Medicare & Medicaid Services (CMS) rules represent progress in regulating AI in healthcare claims, the NAIC warns that current oversight remains inadequate. California's law prohibits insurers from using AI algorithms as the sole basis for denying medically necessary claims and establishes strict processing deadlines: five business days for standard cases, 72 hours for urgent cases, and 30 days for retrospective reviews.

At the federal level, CMS now requires Medicare Advantage plans to base coverage decisions on individual circumstances rather than algorithmic predictions. As of January 2024, coverage denials must be reviewed by physicians with relevant expertise, and plans must follow original Medicare coverage criteria. CMS Deputy Administrator Meena Seshamani promises audits and enforcement actions, including civil penalties and enrollment suspensions for non-compliance.

The insurance industry opposes these safeguards. UnitedHealthcare's Medicare CEO Tim Noel argues that restricting "utilization management tools would markedly deviate from Congress' intent." But as the NAIC emphasizes, meaningful transparency requires more than superficial disclosures - insurers must document and justify their AI systems' decision-making criteria, training data, and potential biases. Most critically, human clinicians with relevant expertise must maintain true decision-making authority, not just rubber-stamp algorithmic recommendations.

Recommendations for Action

The NAIC framework provides a roadmap for protecting patients while ensuring appropriate oversight of AI in healthcare claims. Key priorities for federal and state regulators:

  • Require comprehensive disclosure of AI systems' training data, decision criteria, and known limitations

  • Mandate documentation of physician recommendation overrides with clinical justification

  • Implement regular independent audits focused on denial patterns affecting marginalized communities

  • Establish clear accountability and substantial penalties when AI denials cause patient harm

  • Create expedited appeal processes for urgent care needs

Healthcare providers should:

  • Document all cases where AI denials conflict with clinical judgment

  • Track patient impacts from inappropriate denials, including worsened health outcomes

  • Report systematic discrimination in algorithmic denials

  • Support patient appeals with detailed clinical documentation

  • Share denial pattern data with regulators and policymakers

The solutions cannot rely solely on technological counter-measures. As the NAIC emphasizes, "The time to act is now."

Conclusion

The AI-driven denial of care represents more than a technological problem - it's a fundamental breach of the healthcare system's ethical foundations. By prioritizing algorithmic efficiency over human medical judgment, insurers have transformed life-saving care into a battlefield where profit algorithms determine patient survival.

Meaningful change requires a multi-pronged approach: robust regulatory oversight, technological accountability, and a recommitment to patient-centered care. We cannot allow artificial intelligence to become an instrument of systemic denial, transforming healthcare from a human right into an algorithmic privilege.

Patients, providers, and policymakers must unite to demand transparency, challenge discriminatory systems, and restore the primacy of human medical expertise. The stakes are too high to accept a future where lines of code determine who receives care and who is left behind. Our healthcare system must be rebuilt around a simple, non-negotiable principle: medical decisions should serve patients, not corporate balance sheets.

Travis Manint - Advocate and Consultant

Travis, entrepreneur and VP of the board at Connect Northshore, has a rich marketing background, having shaped narratives for Fortune 500 giants. Today, he's a fervent advocate for LGBTQIA+ rights, driven by personal experiences with HIV and substance use disorder. His dedication was pivotal in launching Connect Northshore's inaugural LGBTQIA+ Pride event, marking a significant stride towards inclusivity.

Focused on community action and policy-making, Travis emphasizes the health needs of gay, bisexual, and trans/nonbinary communities, aiming for compassionate, actionable changes in policy and community ethos. A globetrotter, he's ventured through 8% of the world's countries and 34 US States. His zest for travel parallels his love for Saints and LSU football. At home, his rescue pups, Jake and Ellie, are his joy, and moments with his lively Italian family are cherished.

In all endeavors, Travis is committed to celebrating and integrating LGBTQIA+ rights into policy and community life.

https://www.linkedin.com/in/travismanint/