Your Doctor Has a Fiduciary Duty to You. ChatGPT Doesn’t.
As confidence in public health institutions drops, reliance on AI chatbots for health information is rising
It’s 2 a.m. and you can’t sleep because your child has a fever and a rash you don’t recognize. Not too long ago, you would have called the after-hours nursing hotline at your pediatrician’s office or searched the CDC’s website for guidance. Today, you launch ChatGPT on your phone, upload an image of the rash, share your child’s age and temperature, and ask whether you should be taking a middle-of-the-night trip to the ER.
Millions of small, private moments like this one reflect a massive shift in American life and where we place our trust. As confidence in public health institutions drops to its lowest point since the pandemic, reliance on AI chatbots for health information is rising. One recent national poll found trust in the CDC declined from 66% to 54% in less than a year. Even Democrats are losing confidence in the CDC, joining Republicans who have long been more skeptical of the agency.
When the institutions you once relied on seem to be captured, corrupted, or absent, you begin to look for something to fill that void.
Enter ChatGPT Health, which OpenAI launched on January 7 as an AI chatbot where users can upload their medical records and connect data from their wellness apps in order to get personalized answers about their symptoms, test results, and treatment options. Not to be outdone, just a few days later Anthropic announced Claude for Healthcare, a similar suite of tools for enterprise and consumer users.
Both announcements were timed to the JP Morgan Healthcare Conference, signaling to investors and health systems that the AI industry is now positioning itself as the new trusted intermediary for health-related information.
And it may be working. OpenAI has said that more than 230 million health-related questions are already asked on ChatGPT each week, more than 30 million a day, often outside of normal healthcare hours.
Fidji Simo, OpenAI’s CEO of Applications, shared a personal story around the launch of ChatGPT Health. She had been hospitalized for a kidney stone and used ChatGPT to cross-check an antibiotic that a resident had prescribed for her. The chatbot flagged a risk that the antibiotic could reactivate a past serious infection. “The resident was relieved I spoke up,” Simo said. “She told me she only has a few minutes per patient during rounds.”
That was a good outcome for Simo. But people trust AI health advice even when it’s wrong. A recent study published in NEJM AI found that people often struggle to distinguish AI-generated medical responses from those written by doctors. More troubling, they tend to prefer the AI responses, rating even low-accuracy answers as valid, trustworthy, and satisfactory. That misplaced confidence makes them more likely to follow potentially harmful advice and to seek unnecessary medical attention based on what a chatbot tells them.
Our laws have not caught up with these shifting dynamics. While both OpenAI and Anthropic are explicit that their health products are not “intended for diagnosis or treatment,” those disclaimers don’t govern how people use a tool that is fluent, personalized, and always available to answer their health-related questions.
OpenAI is betting that the Health Insurance Portability and Accountability Act (HIPAA) won’t apply because it is not a “covered entity” under the act. Anthropic says it is offering HIPAA-ready products through Claude for Healthcare for HIPAA-compliant organizations that use Claude for Enterprise. But those legal protections are unlikely to extend to consumers who choose to upload their lab results and health records directly to the chatbot. And while both companies promise, for now, that they won’t train their foundation models on consumer health data, those promises are only as durable as their terms of service, which could change tomorrow.
The FDA has clarified that many AI-enabled software tools fall outside medical device regulation, at least when clinicians can “independently review” the recommendations. But chatbot use increasingly happens well beyond clinic walls and far from physician oversight, including at 2 a.m. by parents deciding whether to drive their child to the ER or wait until morning.
We have HIPAA for clinical care. We have consumer protection laws for products. These intimate, personalized, always-available AI chatbots, which now shape the healthcare decisions of millions while disclaiming responsibility for those decisions, fall into a legal gray zone. They aren’t healthcare providers, so they have no fiduciary duty to their users.
I’m not arguing that AI shouldn’t have a role in healthcare, and I appreciate that chatbots can synthesize information, translate jargon, help patients prepare questions for their doctors, and even potentially democratize access to health information for people in rural areas, people without insurance, or those who can’t get an appointment for weeks.
But we shouldn’t lose sight of the fact that we are rapidly normalizing the transfer of trust from accountable institutions to systems that explicitly refuse accountability. We need to answer, sooner rather than later, what legal obligations should apply to tools that function as health authorities while claiming not to be, especially when tens of millions of daily users already treat these products as health advisors.
Trust doesn’t disappear. It migrates. When people lose faith in the institutions designed to guide them, they find new guides. Right now, that guide is a chatbot that won’t train on your health data but might hand it over in response to a subpoena, built with input from physicians but bound by none of the obligations that a physician owes you.
The crisis of legitimacy in our public health institutions is real and growing. But the answer isn’t to transfer that authority to systems with no accountability at all. It’s to build something that deserves the trust people are already placing in it, whether that’s through rebuilt institutions, regulated AI, or something we haven’t yet invented.
Until then, the migration continues. And no one’s minding the gap.



