When Alexandra Watson has a question about her heart condition, her first port of call is Chad. That’s not the name of her cardiologist; rather, it’s her nickname for ChatGPT, which she’s been using to check her symptoms for the past few years.
Her condition is rare, and she says the LLM (large language model) “cuts through the noise” to provide readable, easy-to-understand information. “I couldn’t afford to spend time with my cardiologist to talk through every question I had,” she says. “But using AI allows me to do deep dives and hypotheticals. Doctors are useless, Google just scares you, but Chad is helpful.”
In January, a report from OpenAI, the company behind ChatGPT, claimed that more than 40 million people worldwide use the bot for health advice every day, accounting for more than 5 percent of messages sent to it globally. And last year, research from the healthcare champion HealthWatch found that 9 percent of men and 7 percent of women in the UK use an AI chatbot to ask medical questions.
For Watson, the fact that the chatbot can keep track of issues she has asked about in the past, giving her a more comprehensive picture, is a bonus. It refers back to her heart queries, for example, when she asks other health-related questions.
She is aware, though, that “Chad” can have a tendency to flatter; it isn’t one for tough love. “[It] wants me to feel better about myself,” she says, recalling that when she asked about proper diets the other day, it noted that she “needs to take it easy” after surgery nearly two years ago and told her to “be kind to yourself” during menopause.
Carol Rilton is another convert. “I use ChatGPT most days for my work or travel arrangements,” she says. “It seems natural to use it [for other aspects of] my life, including medical information.” Like Watson, she has a heart condition. Her routine check-ups, she says, can sometimes feel like a box-ticking exercise for the medical profession. So when she experienced something with her body that she wasn’t sure about, her first port of call was ChatGPT.
The chatbot also proved useful when she was planning an international trip, prompting her to get a “fit to fly” note so she could travel with her medication. Its cheerful tone makes all the difference, too, she says.
Given that they can be informative, convenient and incredibly personable, it’s perhaps unsurprising that so many of us are asking AI bots for health guidance. They can seem friendlier and less alarming than ‘Dr Google’ – and can be easier to get hold of than your GP. But most of these programs are not designed to provide medical advice, and their small-print terms and conditions will remind users of this. ChatGPT’s terms, for example, state that it is “not for use in the diagnosis or treatment of any medical condition”.
But when we’re deep in a back-and-forth with the bot, that can be easy to forget. A recent study by researchers at Stanford and Berkeley found that disclaimers and warnings in responses to health questions declined significantly in LLMs between 2022 and 2025, from 26.3 percent to 0.97 percent.
Like all LLMs, these chatbots are notoriously prone to “hallucinations”, where they generate factually incorrect or misleading information through pattern prediction. Last year, for example, an American medical journal reported the case of a 60-year-old man who replaced the salt in his diet with sodium bromide after consulting ChatGPT. He ended up in psychiatric care after suffering paranoia and hallucinations, the result of bromide toxicity.
Then there is the question of data privacy, an issue that many of us currently choose to ignore in favor of convenience. What happens to the health information we share with Big Tech? And with all this in mind, should we proceed with extreme caution?
We used to talk about ‘Dr Google’. This is a more conversational version, which makes it feel like talking to a real healthcare professional
Dr Sonia Szamocki
OpenAI, perhaps inevitably, sees its chatbot as an “important ally” to help patients “self-advocate” and navigate the healthcare system, especially in the United States, where the process is complex and fragmented. In January, it launched ChatGPT Health for a limited group of users. This feature allows users to link their health information, such as medical records, or data from apps like Apple Health or MyFitnessPal, so they can get more personalized responses in their chats.
At the time, the company said that this latest development was designed to “support, not replace, medical care”, and explained that health information would be stored separately from other chats. It is currently unavailable in the UK, the European Economic Area and Switzerland, however, due to strict regulations around digital privacy.
A study published last month in the journal Nature Medicine tested the chatbot in 60 medical scenarios, varying details such as the patient’s gender or race, or adding test results and family history. The researchers found that while ChatGPT Health performed well in “textbook emergencies”, where patients report sudden symptoms, it fell short elsewhere.
In 51.6 percent of cases where the patient needed to go to hospital immediately, the chatbot advised them to stay home or wait for a routine appointment. “ChatGPT Health is most reliable when the clinical decision is of least consequence, and least reliable when it is most important,” lead researcher Ashwin Ramaswamy told the British Medical Journal.
When The Independent contacted OpenAI, the company told us that it welcomes independent research into AI healthcare systems, but claimed that the study does not reflect how people typically use ChatGPT Health, or how it is designed to work in real-life scenarios. It added that it continues to ensure the safety and reliability of the program through testing and feedback before rolling it out more widely.
Of course, trying to access health-related information online is nothing new. Who among us can honestly say that they have never scoured a website to find out more about some seemingly minor symptom, only to firmly convince themselves that said symptom is actually a terrifying harbinger of doom? “We used to talk about Dr Google,” says Dr Sonia Szamocki, a former NHS doctor who is now the founder and CEO of AI healthcare company 32Co. “This is a more conversational version, which makes it feel like talking to a real healthcare professional.”
Szamocki continues: “What people are trying to solve is not a new problem; it’s that access to doctors is difficult. The waiting lists are long, and that’s if you just want to get to a GP.” She notes that accessing more specialist knowledge is even harder. “That’s because there are even more obstacles in the way. So it’s completely natural for people to go online to try and get the information they’re struggling to get.”

Consulting an LLM is not the same as finding an answer in a book, or even doing a Google search, which essentially “pulls out the truth and presents it to you on a plate”, Szamocki says. Instead, LLMs are “pattern recognizers”, she explains. “They are probabilistic mechanisms that find the most probable answer to a question, [and have learnt from billions of texts to] try to predict what the next best word in the sequence of words is.”
And, importantly, “you can’t be 100 percent sure, if you ask something of it, that it’s going to get exactly the right truth”. This, Szamocki adds, “is really where the anxiety comes from”.
In addition, an LLM will try to be helpful even when it isn’t 100 percent sure of the answer. These platforms have a habit of prioritizing helpfulness over, say, accuracy, Szamocki argues. Hallucinations can occur anywhere “[an LLM] tries to fill a gap in knowledge” and says, “Look, maybe this is it”, she explains.
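Szamocki’s description can be made concrete with a toy example. The Python sketch below is purely illustrative (the words, probabilities and names are invented for this article, and bear no relation to how ChatGPT is actually built): a miniature “next-word predictor” samples from learnt probabilities and, crucially, always returns some word, even for a context it has barely seen – the gap-filling behaviour behind hallucinations.

```python
import random

# Toy "model": each context maps to candidate next words with probabilities.
# A real LLM learns billions of such patterns from text; these are invented.
NEXT_WORD_PROBS = {
    "the patient should": [("rest", 0.40), ("see", 0.35), ("take", 0.25)],
    "patient should see": [("a", 0.90), ("the", 0.10)],
    "should see a": [("doctor", 0.60), ("specialist", 0.30), ("dentist", 0.10)],
}

def next_word(context: str) -> str:
    """Sample the next word from the model's probabilities.

    Note there is no built-in "I don't know": even for an unfamiliar
    context the function still returns *some* word - the gap-filling
    tendency described above.
    """
    candidates = NEXT_WORD_PROBS.get(context, [("unsure", 1.0)])
    words, weights = zip(*candidates)
    return random.choices(words, weights=weights, k=1)[0]

text = "the patient should".split()
for _ in range(3):
    text.append(next_word(" ".join(text[-3:])))
print(" ".join(text))  # e.g. "the patient should see a doctor"
```

The sampling step is why the same question can produce different answers on different days, and why a confident-sounding reply is no guarantee of a correct one.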
The way you phrase your prompt can also affect the response you get. When you send a message or question to a chatbot, you include what you think is important, notes Dr Carolyn Pilot, acting chief medical officer for digital clinic HealthHero. “So the prompt is biased in the first place.” You may also accidentally leave out key information that a doctor would ask about. “When I consult with someone, I let them tell me what they think is important,” she explains. But she also wonders: “OK, but did they have this other thing that they didn’t mention?”
To get around all this, chatbot fan Alexandra Watson says she always asks for sources and cross-checks them when she presents ChatGPT with a medical question.
Are doctors concerned about how “Dr ChatGPT” might change the way their patients seek medical advice? “I can’t speak for the mentality of a lot of doctors, but I really don’t mind if people have done their homework and asked a chatbot,” Dr Pilot says. “I find it interesting to have those conversations and explore their fears and concerns, and what the chatbot said.”
But it depends on the patient, she says. If someone has a fixed idea of what their problem is, they may already have been frightened by whatever the internet has told them.
Professor Victoria George Brown is the president of the Royal College of General Practitioners. “It’s encouraging to see patients being passionate about their health,” she says. But she cautions that chatbots are not without risks. “It’s not always clear where the data comes from, or how accurate it is,” she explains, adding that results may contain material that is neither evidence-based nor reliable.
Even the most reputable AI providers rarely allow users to choose how long their health-related data is kept for.
Dr. Aisha Makar
She says there is “huge potential” for technology to support patients. “But it will always need to work alongside and complement the work of doctors and other health professionals.”
Submitting our health information to LLMs can also introduce significant data-privacy risks. Dr Aisha Makar, a lecturer in computer science at the University of Derby, specializes in ethical, privacy-preserving technologies. “Most AI systems store user input in a cloud environment, where models learn iteratively from the data,” she says. But this process is not always guaranteed to follow strict anonymization standards.
Additionally, LLMs can sometimes “determine or reconstruct sensitive personal details from the underlying patterns in the data”, even if users have tried to avoid explicit identifiers. Most of us, Makar notes, will have little idea of how our data is being processed behind the scenes. “Even the most reputable AI providers rarely allow users to choose how long their health-related data is retained.”
She therefore advises that we should “only turn to chatbots for general medical guidance, not for personalized medical advice that requires sharing detailed health information”.
Meanwhile, Pilot is asked “all the time” whether AI will replace doctors. “I don’t see it replacing them,” she says. “I think it will help them, and they will use it as a consultation tool.”
And although it is friendly and eager to please, says George Brown, an AI chatbot cannot replace a conversation with a clinician who knows the patient, understands the context and can make safe, evidence-based decisions.