System Failure: Inside the Collapse of HIV Data Protections
The privacy frameworks that protect people living with HIV—built over decades through advocacy, legislation, and the lived consequences of stigma and surveillance—are now on the brink of collapse. Recent reporting from WIRED reveals that "much of the IT and cybersecurity infrastructure underpinning the nation’s health system is in danger of a possible collapse" following deep staffing cuts at the U.S. Department of Health and Human Services (HHS). Agency insiders warn that "within the next couple of weeks, everything regarding IT and cyber at the department will start to reach a point of no return operationally."
These reductions—orchestrated under the banner of "efficiency"—have eliminated the technical expertise necessary to maintain the very systems designed to protect patient information while enabling effective public health response. What took decades of careful negotiation to build could unravel in weeks.
A History of Mistrust and What It Built
In response to early AIDS panic and political scapegoating, HIV reporting systems were designed to protect privacy while still enabling public health surveillance. States initially resisted name-based reporting, opting instead for coded identifiers. These systems directly resulted from community resistance to the idea that a centralized government entity would hold a list of people living with HIV. By the late 1990s, the Centers for Disease Control and Prevention (CDC) and the Health Resources and Services Administration (HRSA) had settled into a delicate dance: collect enough data to direct resources without breaking trust with the communities most impacted.
The Ryan White HIV/AIDS Program (RWHAP), created in 1990, reflects this balance. Providers are required to report client-level data annually through the Ryan White Services Report (RSR), but that data must be de-identified. Each grantee—whether a city, state, clinic, or community-based organization—must report separately, even if they serve the same client. This redundancy is intentional. It’s how we avoid commingling funds, and it’s how we ensure that data is not aggregated in a way that risks patient re-identification. It’s messy, yes. But it’s designed to protect people, not just count them.
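To make that concrete, here is a minimal sketch of hash-based de-identification in the spirit of the RSR's encrypted Unique Client Identifier (eUCI). The function, its inputs, and the client shown are hypothetical, and HRSA's actual eUCI specification differs in its details; this is the idea, not the implementation.

```python
import hashlib

def euci_like_id(first_name: str, last_name: str,
                 dob_mmddyy: str, gender_code: str) -> str:
    # Build a short code from partial name characters, date of birth,
    # and a gender code, then one-way hash it so no raw identifiers
    # ever leave the provider. Illustrative only; not HRSA's spec.
    uci = (first_name[0] + first_name[2] +
           last_name[0] + last_name[2]).upper() + dob_mmddyy + gender_code
    return hashlib.sha1(uci.encode("ascii")).hexdigest().upper()

# Hypothetical client: the same inputs yield the same opaque ID at
# every provider, so totals can be unduplicated without a name registry.
print(euci_like_id("Maria", "Santos", "070585", "2"))
```

The design choice is the point: a one-way hash lets funders count clients without ever holding a list of names.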
Why It’s So Complicated
At a structural level, RWHAP is segregated by design. Part A grantees are typically cities, Part B is for states, Part C goes to clinics, and Part D supports programs for women, infants, children, and youth. Each grantee and subgrantee reports separately. A person receiving services from a city-funded housing program and a clinic-funded medical program will appear in two different reports. They’ll be encrypted, anonymized, and counted twice—because each program needs its own audit trail. This is not a flaw. It’s a firewall.
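A toy illustration of that firewall, building on the sketch above (the grantee names and counts are hypothetical):

```python
# Hypothetical grantee reports; client_id is a de-identified hash,
# never a name. Each report stands alone for its own grant's audit.
client_id = "9B2F..."  # placeholder for an opaque hashed ID

part_a_report = {"grantee": "City housing program (Part A)", "clients": {client_id}}
part_c_report = {"grantee": "Community clinic (Part C)", "clients": {client_id}}

# Program-level counting: the same person legitimately appears twice.
for report in (part_a_report, part_c_report):
    print(report["grantee"], "served", len(report["clients"]), "client(s)")

# Only an authorized aggregator working with hashed IDs can estimate
# an unduplicated total across programs.
print("Unduplicated:", len(part_a_report["clients"] | part_c_report["clients"]))
```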
It’s also one of the biggest complaints from providers. Clinics and case managers spend untold hours cleaning and submitting the same data to multiple entities for different grants every year. State agencies complain about the burden. But buried underneath the frustration is the reality: these walls are what keep private information from being aggregated, shared, and potentially exposed (or worse, used to target the very people it describes).
Molecular Surveillance and the Reemergence of Privacy Concerns
In parallel with RSR reporting, the CDC continues to manage HIV surveillance through diagnostic reports, lab data, and, increasingly, molecular surveillance—using genomic data from viral samples to track clusters and potential outbreaks. These systems operate independently from care-based reporting systems like the RSR. They’re not supposed to overlap. That’s on purpose.
Molecular surveillance is a powerful tool. It can detect transmission networks, identify gaps in care, and help allocate resources. But it also raises serious privacy concerns. People have no ability to opt out of having their viral sequence data analyzed. Community advocates have raised alarms about how this data could be misused—especially in states with HIV criminalization laws or where public health trust is already low.
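For intuition, here is a minimal sketch of the distance-threshold clustering idea behind tools like HIV-TRACE. Real pipelines compute proper genetic distances (such as TN93) over aligned viral sequences and use far tighter thresholds than this toy version; the sample IDs and sequences below are hypothetical and already anonymized.

```python
from itertools import combinations

# Threshold inflated for these short toy strings; real analyses use
# roughly 0.005 to 0.015 substitutions per site.
THRESHOLD = 0.10

def distance(a: str, b: str) -> float:
    # Crude per-site mismatch fraction, a stand-in for TN93 distance.
    return sum(x != y for x, y in zip(a, b)) / len(a)

def find_clusters(sequences: dict) -> list:
    # Union-find: link every pair of samples closer than THRESHOLD.
    parent = {name: name for name in sequences}

    def root(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for a, b in combinations(sequences, 2):
        if distance(sequences[a], sequences[b]) <= THRESHOLD:
            parent[root(a)] = root(b)

    groups = {}
    for name in sequences:
        groups.setdefault(root(name), []).append(name)
    return [members for members in groups.values() if len(members) > 1]

# Hypothetical anonymized samples: IDs only, never patient identities.
samples = {"S1": "ACGTACGTAC", "S2": "ACGTACGTAT", "S3": "TTTTACGTAC"}
print(find_clusters(samples))  # S1 and S2 link; S3 stands apart
```

Even the toy makes the privacy stakes visible: the inputs require nothing but sequence data, which is exactly why advocates insist these systems stay walled off from care records.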
When properly separated from care systems, surveillance data can inform public health strategy without endangering patient privacy. But the more these systems are tampered with, neglected, or mismanaged, the greater the risk of privacy breaches and data misuse.
The DOGE Playbook: Gutting Public Health from Within
None of this works without infrastructure. And right now, that infrastructure is being hollowed out.
On April 1, HHS laid off roughly 10,000 employees—about 25% of its workforce. That includes entire IT teams, cybersecurity experts, and staff responsible for maintaining the systems that house Ryan White and surveillance data. As WIRED reported, these cuts have left HHS systems teetering on the edge of collapse.
The layoffs were orchestrated by the Department of Government Efficiency (DOGE), a Musk-backed initiative with a mandate to slash spending and "modernize" systems. In reality, DOGE operatives have cut critical personnel and attempted to rebuild complex legacy systems—like Social Security's COBOL codebase—without the necessary expertise. As NPR reported, DOGE staff have also sought sweeping access to sensitive federal data, raising serious concerns about the security and ethical use of health information.
A retired Social Security Administration (SSA) official warned that in such a chaotic environment, "others could take pictures of the data, transfer it… and even feed it into AI programs." Given Musk's development of "Grok," concerns have been raised that government health data might be used to "supercharge" his AI without appropriate consent or oversight.
The value of this data—especially when aggregated across systems like HHS, SSA, Veterans Affairs, and the Internal Revenue Service—is enormous. On the black market, a single comprehensive medical record can command up to $1,000 depending on its depth and linkages to other data sets. For commercial AI training, the value is even greater—not in resale, but in the predictive and market power that comes from large, high-quality datasets. If private companies were paying for this kind of dataset, it would cost billions. Musk may be getting it for free—with no consent, no oversight, and no consequences.
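A back-of-envelope check on that claim, where the record count is an assumption layered on the per-record figure above:

```python
# Rough scale check; both numbers are assumptions built on the figures
# cited above, not reported facts.
records = 100_000_000     # order-of-magnitude guess for linked federal health records
value_per_record = 1_000  # upper-end black-market figure, in dollars
print(f"${records * value_per_record / 1e9:,.0f} billion")  # $100 billion
```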
Meanwhile, at USAID, funding portals were shut off. Grantees couldn’t access or draw down funds. Even after systems came back online, no one was there to process payments. The same scenario is now playing out at HHS. Grantees have reported delays, missed communications, and uncertainty about reporting requirements—because the people who used to run the systems have been fired.
What's at Stake: Beyond Data Points
The crisis we're witnessing isn't merely technical—it threatens the foundation of HIV services in America. When data systems fail, grants cannot be properly administered. When grants are disrupted, services are compromised. And when privacy protections collapse, people living with HIV may avoid care rather than risk unwanted disclosure of their status.
We've been here before. In the early days of the epidemic, mistrust of government systems drove people away from testing and treatment. The privacy frameworks built into today's reporting systems were designed specifically to overcome that mistrust, enabling effective public health response while respecting human dignity.
A Call for Immediate Action
To address this growing crisis, we need action at multiple levels:
Congress must exercise oversight over DOGE's activities by requiring transparent reporting on HHS staffing changes and their operational impacts, and by establishing strict limits on data access and audit trails to ensure administrative accountability.
HHS must rapidly rehire technical expertise with the institutional knowledge needed to maintain these complex systems before contracts expire and systems fail.
Advocacy organizations should demand clear guardrails on any use of healthcare data, particularly regarding AI applications, including explicit prohibitions on repurposing data collected for public health for commercial training without consent or compensation.
HRSA must immediately address the continuity of the RSR and other reporting systems to ensure grant requirements don't become impossible to meet due to system failures.
But let’s be clear: none of this is a call to keep broken systems frozen in time. Public health data infrastructure can—and should—be modernized. There is real opportunity to streamline reporting, reduce administrative burden, and build tools that serve patients more effectively. But modernization must be done carefully, collaboratively, and with privacy at the center—not with a chainsaw in one hand and a Silicon Valley slogan in the other.
The “move fast and break things” ethos may work for social media startups, but it has no place in systems that safeguard the lives and identities of people living with HIV. What we’re witnessing is not innovation—it’s ideological demolition. The goal isn’t better care or stronger systems. It’s control, profit, and a reckless dismantling of public trust.
The myth that federal IT systems are merely bloated bureaucracies in need of disruption ignores their critical role in protecting sensitive information. Our public health data infrastructure has been built layer by layer, through hard-fought battles over privacy, accountability, and service delivery. Dismantling these systems doesn’t represent modernization—it threatens to erase decades of progress in building frameworks that enable effective care while respecting the rights of people living with HIV.
The privacy architectures designed in response to the early AIDS crisis weren’t just policy innovations—they were survival mechanisms for communities under threat. We cannot afford to let them collapse through neglect, arrogance, or privatized pillaging. The stakes—for millions of Americans receiving care through these programs—couldn't be higher.