I, along with many other physicians, have written about our physician-centric concerns over the role of AI in medicine. Those concerns have not stopped AI’s increasing involvement in clinical care, as illustrated by ChatGPT’s new healthcare version, which already has a waiting list, and a me-too version from Anthropic. Our physician-centric concerns revolve around the misinformation produced by chatbot “hallucinations” and the added workload of responding to chatbots’ personalized output. Yet these concerns, real as they are, fail to answer a simpler and more patient-centered question: why are chatbots so popular?
The Man and Woman in the Mirror
To find an answer, we can turn to a study published nearly a year ago in JAMA Network Open, which analyzed what people voluntarily wrote, in their own words, on Yelp about healthcare facilities that provide “essential health benefits,” including hospitals, clinics, and pharmacies.
The researchers used natural language processing and machine learning to review 1.1 million reviews of 138,000 healthcare facilities nationwide. They considered individual short words and phrases, including negations such as “not” and “never,” expressions of emotion, and the themes in the most negative and positive reviews.
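For readers curious what this kind of text mining looks like in practice, here is a minimal sketch in Python using scikit-learn. It is illustrative only: the toy reviews, the labels, and the TF-IDF-plus-logistic-regression pipeline are my assumptions, not the study’s actual methods, but it shows how word and phrase weights can surface the terms that most distinguish negative from positive reviews.

```python
# A minimal, hypothetical sketch of review analysis with scikit-learn.
# The toy data and model choices below are illustrative assumptions,
# not a reproduction of the study's pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy stand-ins for reviews labeled by sentiment (0 = negative, 1 = positive).
reviews = [
    "No one ever called me back and my questions went unanswered.",
    "Billing was a nightmare and I could never reach a person.",
    "The staff explained everything clearly and followed up the next day.",
    "Friendly, responsive, and easy to schedule.",
]
labels = [0, 0, 1, 1]

# Unigrams and bigrams capture short words and phrases; stop words are
# kept so negations like "not" and "never" survive into the features.
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(reviews)

# A linear classifier's weights indicate which terms most strongly
# separate negative reviews from positive ones.
clf = LogisticRegression().fit(X, labels)

terms = vectorizer.get_feature_names_out()
ranked = sorted(zip(clf.coef_[0], terms))
print("Most negative terms:", [t for _, t in ranked[:5]])
print("Most positive terms:", [t for _, t in ranked[-5:]])
```

At the scale of 1.1 million reviews, the same idea, weighting words, phrases, and negations and then ranking them, lets themes emerge from the extremes of the rating distribution.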
What Drove Negative Reviews
What stands out most clearly is that complaints about medical competence, misdiagnosis, or technical errors did not dominate negative reviews. Instead, dissatisfaction clustered around communication failures and administrative friction. Patients repeatedly described things that did not happen—phone calls not returned, questions left unanswered, concerns unaddressed. They expected information, clarity, and responsiveness, and felt they received none of it. Administrative complaints included billing and payment issues, but they also emphasized how hard it was to reach anyone for help—long hold times, complex phone trees, and opaque processes. These experiences consistently left patients feeling ignored, disrespected, or confused.
As expected, anger and descriptions of interpersonal conflict were frequent; the reviews were often occasions to vent rather than calm critiques. To be fair to my practicing colleagues, the study measured perception, not clinical effectiveness: patient satisfaction is not the same as medical quality. A warm, communicative clinician can still make a wrong diagnosis; a brusque but competent system may deliver excellent care.
From a physician’s perspective, the study’s most uncomfortable message is also its most useful: patients judge health care less by what we do medically than by how clearly, respectfully, and reliably we communicate.
Negative reviews overwhelmingly describe moments when patients felt unseen, unheard, or left in the dark. These are not individual failings so much as systemic weak points—ones that create fertile ground for alternatives. Those same weak points help explain why a very different kind of health interaction has gained traction.
A tracking poll by KFF found that about one in six adults use AI chatbots at least once a month to find health information and advice, rising to one in four among adults under age 30. Yet among these users, only about a third (36%) trust chatbots to provide reliable health information. So, if it is not the chatbot’s reliability, what is the secret sauce?
Why People Like and Use Health Chatbots
People are not turning to health chatbots because they think artificial intelligence has replaced doctors. They are turning to chatbots because, too often, the health care system has failed to meet basic human needs: time, clarity, reassurance, and responsiveness. As described in a series of interviews in the New York Times:
“ChatGPT has all day for me — it never rushes me out of the chat.”
In contrast to rushed appointments, unanswered portal messages, or vague reassurances, chatbots respond immediately and at length. They do not interrupt. They do not glance at the clock. For patients who feel dismissed or brushed off, that alone is powerful.
“Chatbots have differentiated themselves by giving an impression of authoritative, personalized analysis in a way traditional sources don’t. This can lead to facsimiles of human relationships and engender levels of trust out of proportion to the bots’ abilities.”
Unlike “Dr. Google,” the new generation of AI chatbots is responsive and interactive. By speaking without jargon and engaging users conversationally, they offer more than information retrieval—they create the illusion of dialogue.
Interestingly, many of the users the Times interviewed were fully aware that the dialogue was an illusion. However, just as one might suspend disbelief in a movie or show, the illusion, especially when the chatbot is “empathetic,” is sufficiently strong to reinforce unwarranted trust.
“I’m really sorry you’re going through this… While I’m not a doctor, I can help you understand what might be going on.”
To a patient who feels unseen, those words can feel profoundly validating, fostering trust beyond what the technology warrants. Medical care often asks patients to accept uncertainty, delays, and ambiguity, rarely acknowledging the emotional toll. Chatbots, with their empathetic words, affirm that you deserve better.
“The really relevant question, I think, is: Is it better than having nowhere else to turn?”
- Dave deBronkart, patient advocate
And unfortunately for many patients, especially those who feel dismissed, anxious, or confused after an encounter with the health care system, the choice is not between a chatbot and a physician. It is between a chatbot and no one at all.
The Care Patients Desire
From a clinical perspective, the popularity of health chatbots is less an endorsement of artificial intelligence than a verdict on modern health care delivery. Chatbots do not succeed because they are better diagnosticians or understand medicine more deeply; they succeed because they reliably provide what patients say they lack—clear communication, responsiveness, reassurance, and time. In doing so, they expose a fundamental mismatch between how clinicians measure quality and how patients experience care.
The same frustrations that drive negative Yelp reviews are precisely the gaps chatbots now fill. When patients turn to AI despite doubts about its accuracy, they are signaling that being acknowledged and understood often matters as much as being technically correct—even if that tradeoff carries real risks.
The real challenge is not whether chatbots will replace physicians, but why patients so often find validation from an algorithm. Until health care systems are designed to deliver not just competent medicine but reliable communication and humane attention, chatbots will remain attractive, not because they are better doctors, but because, for too many patients, they feel like the only listener in the room.
Source: Online Reviews of Health Care Facilities. JAMA Network Open. DOI: 10.1001/jamanetworkopen.2025.24505
