Travis Manint - Communications Consultant

System Failure: Inside the Collapse of HIV Data Protections

The privacy frameworks that protect people living with HIV—built over decades through advocacy, legislation, and the lived consequences of stigma and surveillance—are now on the brink of collapse. Recent reporting from WIRED reveals that "much of the IT and cybersecurity infrastructure underpinning the nation’s health system is in danger of a possible collapse" following deep staffing cuts at the U.S. Department of Health and Human Services (HHS). Agency insiders warn that "within the next couple of weeks, everything regarding IT and cyber at the department will start to reach a point of no return operationally."

These reductions—orchestrated under the banner of "efficiency"—have eliminated the technical expertise necessary to maintain the very systems designed to protect patient information while enabling effective public health response. What took decades of careful negotiation to build could unravel in weeks.

A History of Mistrust and What It Built

In response to early AIDS panic and political scapegoating, HIV reporting systems were designed to protect privacy while still enabling public health surveillance. States initially resisted name-based reporting, opting instead for coded identifiers. These systems directly resulted from community resistance to the idea that a centralized government entity would hold a list of people living with HIV (PLWH). By the late 1990s, the Centers for Disease Control and Prevention (CDC) and Health Resources and Services Administration (HRSA) had settled into a delicate dance: collect enough data to direct resources without breaking trust with the communities most impacted.

The Ryan White HIV/AIDS Program (RWHAP), created in 1990, reflects this balance. Providers are required to report client-level data annually through the Ryan White Services Report (RSR), but it must be de-identified. Each grantee—whether a city, state, clinic, or community-based organization—must report separately, even if they serve the same client. This redundancy is intentional. It's how we avoid commingling funds and how we ensure that data is not aggregated in a way that risks patient re-identification. It's messy, yes. But it's designed to protect people, not just count them.
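To make the de-identification requirement concrete, here is a minimal sketch, in Python, of how a provider system might derive a one-way client code from a few identifying fields before anything leaves the clinic. The field choices and hashing scheme are illustrative assumptions, not HRSA's actual encrypted Unique Client Identifier specification; the point is simply that what gets reported is a code, not a name.

```python
import hashlib

def client_code(first_name: str, last_name: str, dob: str, gender: str) -> str:
    """Derive a de-identified client code from partial identifiers.

    Illustrative only: the fields and hash below are assumptions, not
    HRSA's actual encrypted Unique Client Identifier (eUCI) algorithm.
    """
    # Fragments of identifiers are combined and hashed one-way, so the
    # reported value cannot be reversed back into a name.
    base = (first_name[:2] + last_name[:2]).upper() + dob.replace("-", "") + gender
    return hashlib.sha256(base.encode("utf-8")).hexdigest()[:40]

# The same person served by two differently funded programs appears in two
# separate reports, each de-identified, each tied to its own grant.
housing_report_row = {
    "client_code": client_code("Jane", "Doe", "1985-03-14", "F"),
    "grantee": "Part A (city housing program)",
    "service": "housing assistance",
}
clinic_report_row = {
    "client_code": client_code("Jane", "Doe", "1985-03-14", "F"),
    "grantee": "Part C (clinic)",
    "service": "outpatient ambulatory health services",
}
print(housing_report_row)
print(clinic_report_row)
```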

Why It’s So Complicated

At a structural level, RWHAP is segregated by design. Part A grantees are typically cities, Part B is for states, Part C goes to clinics, and Part D supports programs for women, infants, children, and youth. Each grantee and subgrantee reports separately. A person receiving services from a city-funded housing program and a clinic-funded medical program will appear in two different reports. They’ll be encrypted, anonymized, and counted twice—because each program needs its own audit trail. This is not a flaw. It’s a firewall.

It's also one of the biggest complaints from providers. Clinics and case managers spend untold hours cleaning and submitting the same data to multiple entities for different grants every year. State agencies complain about the burden. But buried underneath the frustration is the reality: these walls are what keep private information from being aggregated, shared, and potentially exposed—or, worse, used to target people.

Molecular Surveillance and the Reemergence of Privacy Concerns

Parallel to the RSR reporting, the CDC continues to manage HIV surveillance through diagnostic reports, lab data, and increasingly, molecular surveillance—using genomic data from viral samples to track clusters and potential outbreaks. These systems operate independently from care-based reporting systems like the RSR. They’re not supposed to overlap. That’s on purpose.

Molecular surveillance is a powerful tool. It can detect transmission networks, identify gaps in care, and help allocate resources. But it also raises serious privacy concerns. People have no ability to opt out of having their viral sequence data analyzed. Community advocates have raised alarms about how this data could be misused—especially in states with HIV criminalization laws or where public health trust is already low.
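To show the basic mechanics, here is a simplified sketch of threshold-based cluster detection: compare viral sequences pairwise and group samples whose genetic distance falls below a cutoff. The sequences, sample IDs, distance metric, and threshold below are all toy assumptions; real surveillance pipelines use evolutionary distance models and dedicated tools on laboratory-reported sequence data.

```python
from itertools import combinations

def genetic_distance(seq_a: str, seq_b: str) -> float:
    """Fraction of mismatched positions between two aligned sequences.
    (Toy metric; real pipelines use evolutionary distance models.)"""
    assert len(seq_a) == len(seq_b)
    return sum(a != b for a, b in zip(seq_a, seq_b)) / len(seq_a)

def find_clusters(sequences: dict[str, str], threshold: float) -> list[set[str]]:
    """Group samples whose pairwise genetic distance falls below the threshold."""
    groups = {sample_id: {sample_id} for sample_id in sequences}
    for (id_a, seq_a), (id_b, seq_b) in combinations(sequences.items(), 2):
        if genetic_distance(seq_a, seq_b) < threshold:
            merged = groups[id_a] | groups[id_b]
            for member in merged:
                groups[member] = merged
    unique = {frozenset(g) for g in groups.values() if len(g) > 1}
    return [set(g) for g in unique]

# Hypothetical anonymized sample IDs and short made-up sequences.
samples = {
    "S001": "ATGGTCAAGCTA",
    "S002": "ATGGTCAAGCTT",  # one mismatch from S001: likely the same cluster
    "S003": "GCCATTGACTGA",  # unrelated
}
print(find_clusters(samples, threshold=0.1))  # -> [{'S001', 'S002'}]
```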

When properly separated from care systems, surveillance data can inform public health strategy without endangering patient privacy. But the more these systems are tampered with, neglected, or mismanaged, the greater the risk of privacy breaches and data misuse.

The DOGE Playbook: Gutting Public Health from Within

None of this works without infrastructure. And right now, that infrastructure is being hollowed out.

On April 1, HHS laid off roughly 10,000 employees—about 25% of its workforce. That includes entire IT teams, cybersecurity experts, and staff responsible for maintaining the systems that house Ryan White and surveillance data. As WIRED reported, these cuts have left HHS systems teetering on the edge of collapse.

The layoffs were orchestrated by the Department of Government Efficiency (DOGE), a Musk-backed initiative with a mandate to slash spending and "modernize" systems. In reality, DOGE operatives have cut critical personnel and attempted to rebuild complex legacy systems—like Social Security's COBOL codebase—without the necessary expertise. As NPR reported, DOGE staff have also sought sweeping access to sensitive federal data, raising serious concerns about the security and ethical use of health information.

A retired Social Security Administration (SSA) official warned that in such a chaotic environment, "others could take pictures of the data, transfer it… and even feed it into AI programs." Given Musk's development of "Grok," concerns have been raised that government health data might be used to "supercharge" his AI without appropriate consent or oversight.

The value of this data—especially when aggregated across systems like HHS, SSA, Veterans Affairs, and the Internal Revenue Service—is enormous. On the black market, a single comprehensive medical record can command up to $1,000 depending on its depth and linkages to other data sets. For commercial AI training, the value is even greater—not in resale, but in the predictive and market power that comes from large, high-quality datasets. If private companies were paying for this kind of dataset, it would cost billions. Musk may be getting it for free—with no consent, no oversight, and no consequences.
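As a rough back-of-the-envelope illustration of that "billions" claim, the arithmetic looks like this; both figures are assumptions chosen for illustration, not numbers from the reporting.

```python
# Back-of-the-envelope only; both figures below are illustrative assumptions,
# not numbers from the reporting.
records_in_scope = 50_000_000      # hypothetical count of linked, comprehensive records
value_per_record_usd = 250         # assumed fraction of the ~$1,000 per-record ceiling

print(f"${records_in_scope * value_per_record_usd / 1e9:.1f} billion")  # -> $12.5 billion
```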

Meanwhile, at USAID, funding portals were shut off. Grantees couldn’t access or draw down funds. Even after systems came back online, no one was there to process payments. The same scenario is now playing out at HHS. Grantees have reported delays, missed communications, and uncertainty about reporting requirements—because the people who used to run the systems have been fired.

What's at Stake: Beyond Data Points

The crisis we're witnessing isn't merely technical—it threatens the foundation of HIV services in America. When data systems fail, grants cannot be properly administered. When grants are disrupted, services are compromised. And when privacy protections collapse, people living with HIV may avoid care rather than risk unwanted disclosure of their status.

We've been here before. In the early days of the epidemic, mistrust of government systems drove people away from testing and treatment. The privacy frameworks built into today's reporting systems were designed specifically to overcome that mistrust, enabling effective public health response while respecting human dignity.

A Call for Immediate Action

To address this growing crisis, we need action at multiple levels:

  1. Congress must exercise oversight over DOGE's activities by requiring transparent reporting on HHS staffing changes and their operational impacts, and by establishing strict limits on data access and audit trails to ensure administrative accountability.

  2. HHS must rapidly rehire technical expertise with the institutional knowledge needed to maintain these complex systems before contracts expire and systems fail.

  3. Advocacy organizations should demand clear guardrails on any use of healthcare data, particularly regarding AI applications, including explicit prohibitions on repurposing data collected for public health for commercial training without consent or compensation.

  4. HRSA must immediately address the continuity of the RSR and other reporting systems to ensure grant requirements don't become impossible to meet due to system failures.

But let’s be clear: none of this is a call to keep broken systems frozen in time. Public health data infrastructure can—and should—be modernized. There is real opportunity to streamline reporting, reduce administrative burden, and build tools that serve patients more effectively. But modernization must be done carefully, collaboratively, and with privacy at the center—not with a chainsaw in one hand and a Silicon Valley slogan in the other.

The “move fast and break things” ethos may work for social media startups, but it has no place in systems that safeguard the lives and identities of people living with HIV. What we’re witnessing is not innovation—it’s ideological demolition. The goal isn’t better care or stronger systems. It’s control, profit, and a reckless dismantling of public trust.

The myth that federal IT systems are merely bloated bureaucracies in need of disruption ignores their critical role in protecting sensitive information. Our public health data infrastructure has been built layer by layer, through hard-fought battles over privacy, accountability, and service delivery. Dismantling these systems doesn’t represent modernization—it threatens to erase decades of progress in building frameworks that enable effective care while respecting the rights of people living with HIV.

The privacy architectures designed in response to the early AIDS crisis weren’t just policy innovations—they were survival mechanisms for communities under threat. We cannot afford to let them collapse through neglect, arrogance, or privatized pillaging. The stakes—for millions of Americans receiving care through these programs—couldn't be higher.


When Algorithms Deny Care: The Insurance Industry's AI War Against Patients

The assassination of UnitedHealthcare CEO Brian Thompson in December 2024 laid bare a healthcare crisis where insurance companies use artificial intelligence to systematically deny care while posting record profits. Federal data shows UnitedHealthcare, which covers 49 million Americans, denied nearly one-third of all in-network claims in 2022 - the highest rate among major insurers.

This reflects an industry-wide strategy that insurance scholar Jay Feinman calls "delay, deny, defend" - now supercharged by AI. These systems automatically deny claims, delay payment, and force sick people to defend their right to care through complex appeals. A Commonwealth Fund survey found 45% of working-age adults with insurance faced denied coverage for services they believed should be covered.

The consequences are devastating. As documented cases show, these automated denial systems routinely override physician recommendations for essential care, creating a system where algorithms, not doctors, decide who receives treatment. For those who do appeal, insurers approve at least some form of care about half the time. This creates a perverse incentive structure where insurers can deny claims broadly, knowing most people will not fight back. For the people trapped in this system, the stakes could not be higher - this is quite literally a matter of life and death.

The Rise of AI in Claims Processing

Health insurers have increasingly turned to AI systems to automate claims processing and denials, fundamentally changing how coverage decisions are made. A ProPublica investigation revealed that Cigna's PXDX system allows its doctors to deny claims without reviewing patient files, processing roughly 300,000 denials in just two months. "We literally click and submit. It takes all of 1.2 seconds to do 50 at a time," a former Cigna doctor reported.

The scope of automated denials extends beyond Cigna. UnitedHealth Group's NaviHealth uses an AI tool called "nH Predict" to determine length-of-stay recommendations for people in rehabilitation facilities. According to STAT News, this system generates precise predictions about recovery timelines and discharge dates without accounting for people's individual circumstances or their doctors' medical judgment. While NaviHealth claims its algorithm is merely a "guide" for discharge planning, its marketing materials boast about "significantly reducing costs specific to unnecessary care."

Only about 1% of denied claims are appealed, despite high rates of denials being overturned when challenged. This creates a system where insurers can use AI to broadly deny claims, knowing most people will not contest the decisions. The practice raises serious ethical concerns about algorithmic decision-making in healthcare, especially when such systems prioritize cost savings over medical necessity and doctor recommendations.
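A rough expected-value sketch makes that incentive visible. The appeal and overturn figures echo those cited above (about 1% of denials appealed, roughly half succeeding); the claim volume, average claim value, and denial rate are hypothetical.

```python
# Illustrative incentive math; claim volume, claim value, and denial rate are hypothetical.
claims = 1_000_000
avg_claim_value_usd = 2_000
denial_rate = 0.30          # hypothetical, in line with the ~1/3 figure cited above
appeal_rate = 0.01          # ~1% of denials are appealed
overturn_rate = 0.50        # roughly half of appeals win some approval

denied = claims * denial_rate
reinstated_on_appeal = denied * appeal_rate * overturn_rate
denials_that_stick = denied - reinstated_on_appeal

print(f"Claims denied:        {denied:,.0f}")
print(f"Reinstated on appeal: {reinstated_on_appeal:,.0f}")
print(f"Denials that stick:   {denials_that_stick:,.0f} "
      f"({denials_that_stick / denied:.1%} of all denials)")
print(f"Payouts avoided:      ${denials_that_stick * avg_claim_value_usd / 1e6:,.0f} million")
```

With these assumptions, 99.5% of denials are never reversed, which is why broad automated denial pays off even when appeals frequently succeed.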

Impact on Patient Care

The human cost of AI-driven claim denials reveals a systemic strategy of "delay, deny, defend" that puts profits over patients. STAT News reports the case of Frances Walter, an 85-year-old with a shattered shoulder and pain medication allergies, whose story exemplifies the cruel efficiency of algorithmic denial systems. NaviHealth's algorithm predicted she would recover in 16.6 days, prompting her insurer to cut off payment despite medical notes showing she could not dress herself, use the bathroom independently, or operate a walker. She was forced to spend her life savings and enroll in Medicaid to continue necessary rehabilitation.

Walter's case is not unique. Despite her medical team's objections, UnitedHealthcare terminated her coverage based solely on an algorithm's prediction. Her appeal was denied twice, and when she finally received an administrative hearing, UnitedHealthcare didn't even send a representative - yet the judge still sided with the company. Walter's case reveals how the system is stacked against patients: insurers can deny care with a keystroke, forcing people to navigate a complex appeals process while their health deteriorates.

The fundamental doctor-patient relationship is being undermined as healthcare facilities face increasing pressure to align their treatment recommendations with algorithmic predictions. The Commonwealth Fund found that 60% of people who face denials experience delayed care, with half reporting their health problems worsened while waiting for insurance approval. Behind each statistic are countless stories like Walter's - people suffering while fighting faceless algorithms for their right to medical care.

The AI Arms Race in Healthcare Claims

Healthcare providers are fighting back against automated denials by deploying their own AI tools. New startups like Claimable and FightHealthInsurance.com help patients and providers challenge insurer denials, with Claimable achieving an 85% success rate in overturning denials. Care New England reduced authorization-related denials by 55% using AI assistance.

While these counter-measures show promise, they highlight a perverse reality: healthcare providers must now divert critical resources away from patient care to wage algorithmic warfare against insurance companies. The Mayo Clinic has cut 30 full-time positions and spent $700,000 on AI tools simply to fight denials. As Dr. Robert Wachter of UCSF notes, "You have automatic conflict. Their AI will deny our AI, and we'll go back and forth."

This technological arms race exemplifies how far the American healthcare system has strayed from its purpose. Instead of focusing on patient care, providers must invest millions in AI tools to combat insurers' automated denial systems - resources that could be spent on direct patient care, medical research, or improving healthcare delivery. The emergence of these counter-measures, while potentially helpful for providers and patients seeking care, highlights fundamental flaws in our healthcare system that require policy solutions, not just technological fixes.

AI Bias: Amplifying Healthcare Inequities

The potential for AI systems to perpetuate and intensify existing healthcare disparities is deeply concerning. A comprehensive JAMA Network Open study examining insurance claim denials revealed that at-risk populations experience significantly higher denial rates.

The research found:

  • Low-income patients had 43% higher odds of claim denials compared to high-income patients

  • Patients with high school education or less experienced denial rates of 1.79%, versus 1.14% for college-educated patients

  • Racial and ethnic minorities faced disproportionate denial rates:

    • Asian patients: 2.72% denial rate

    • Hispanic patients: 2.44% denial rate

    • Non-Hispanic Black patients: 2.04% denial rate

    • Non-Hispanic White patients: 1.13% denial rate

The National Association of Insurance Commissioners (NAIC) Consumer Representatives report warns that AI tools, often trained on historically biased datasets, can "exacerbate existing bias and discrimination, particularly for marginalized and disenfranchised communities."

These systemic biases stem from persistent underrepresentation in clinical research datasets, which means AI algorithms learn and perpetuate historical inequities. The result is a feedback loop where technological "efficiency" becomes a mechanism for deepening healthcare disparities.
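That feedback loop can be made visible with a toy example: a scoring rule fit only to historical decisions reproduces whatever disparity those decisions contain, even when clinical need is identical across groups. Everything below (the groups, rates, and "model") is synthetic and exists only to illustrate the mechanism.

```python
import random

random.seed(0)

# Synthetic history: identical clinical need, but group B was historically
# denied at roughly twice the rate of group A (mirroring the disparities above).
historical_denial_rate = {"group_a": 0.011, "group_b": 0.024}

history = []
for group, rate in historical_denial_rate.items():
    for _ in range(100_000):
        history.append((group, random.random() < rate))

# A naive "model" that learns only from past decisions: per-group denial frequency.
learned = {}
for group in historical_denial_rate:
    decisions = [denied for g, denied in history if g == group]
    learned[group] = sum(decisions) / len(decisions)

# The learned scores mirror the historical bias, so automating on this history
# hard-codes the disparity into every future decision.
for group, score in learned.items():
    print(f"{group}: learned denial propensity ~ {score:.3%}")
```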

Legislative Response and Regulatory Oversight

While California's Physicians Make Decisions Act and new Centers for Medicare & Medicaid Services (CMS) rules represent progress in regulating AI in healthcare claims, the NAIC warns that current oversight remains inadequate. California's law prohibits insurers from using AI algorithms as the sole basis for denying medically necessary claims and establishes strict processing deadlines: five business days for standard cases, 72 hours for urgent cases, and 30 days for retrospective reviews.

At the federal level, CMS now requires Medicare Advantage plans to base coverage decisions on individual circumstances rather than algorithmic predictions. As of January 2024, coverage denials must be reviewed by physicians with relevant expertise, and plans must follow original Medicare coverage criteria. CMS Deputy Administrator Meena Seshamani promises audits and enforcement actions, including civil penalties and enrollment suspensions for non-compliance.

The insurance industry opposes these safeguards. UnitedHealthcare's Medicare CEO Tim Noel argues that restricting "utilization management tools would markedly deviate from Congress' intent." But as the NAIC emphasizes, meaningful transparency requires more than superficial disclosures - insurers must document and justify their AI systems' decision-making criteria, training data, and potential biases. Most critically, human clinicians with relevant expertise must maintain true decision-making authority, not just rubber-stamp algorithmic recommendations.

Recommendations for Action

The NAIC framework provides a roadmap for protecting patients while ensuring appropriate oversight of AI in healthcare claims. Key priorities for federal and state regulators:

  • Require comprehensive disclosure of AI systems' training data, decision criteria, and known limitations

  • Mandate documentation of physician recommendation overrides with clinical justification

  • Implement regular independent audits focused on denial patterns affecting marginalized communities

  • Establish clear accountability and substantial penalties when AI denials cause patient harm

  • Create expedited appeal processes for urgent care needs

Healthcare providers should:

  • Document all cases where AI denials conflict with clinical judgment

  • Track patient impacts from inappropriate denials, including worsened health outcomes

  • Report systematic discrimination in algorithmic denials

  • Support patient appeals with detailed clinical documentation

  • Share denial pattern data with regulators and policymakers

The solutions cannot rely solely on technological counter-measures. As the NAIC emphasizes, "The time to act is now."

Conclusion

The AI-driven denial of care represents more than a technological problem - it's a fundamental breach of the healthcare system's ethical foundations. By prioritizing algorithmic efficiency over human medical judgment, insurers have transformed life-saving care into a battlefield where profit algorithms determine patient survival.

Meaningful change requires a multi-pronged approach: robust regulatory oversight, technological accountability, and a recommitment to patient-centered care. We cannot allow artificial intelligence to become an instrument of systemic denial, transforming healthcare from a human right into an algorithmic privilege.

Patients, providers, and policymakers must unite to demand transparency, challenge discriminatory systems, and restore the primacy of human medical expertise. The stakes are too high to accept a future where lines of code determine who receives care and who is left behind. Our healthcare system must be rebuilt around a simple, non-negotiable principle: medical decisions should serve patients, not corporate balance sheets.
