Becoming A Digital Medical Detective: Should You Consult AI For Medical Advice?
April 22, 2026

Most of us know it’s not a great time to be a patient in need of a physician. Physician shortages, burnout, patient overload, inadequate or unavailable insurance, and system bureaucracy make it virtually impossible to get in to see your usual physician (or start a relationship with a new one). Some find help by switching to a physician assistant or nurse practitioner, both of whom can handle most everyday matters. Others turn to a telemedicine practice for most of their care, which today may be easier to access. Still others seek support, advice, and even treatment recommendations from the internet: specifically, a growing number of individuals and even health care systems are using AI (artificial intelligence) to support patients, answer medical questions, and help sort out medical problems. While AI may not have the bedside manner of your favorite physician, no one can dispute the accessibility and low cost of seeking “care” from an AI chatbot. Whether that information is accurate, helpful, or beneficial, however, is another matter.
A slew of recent surveys documents this trend toward seeking answers to medical questions from an AI chatbot. For example, a poll conducted last summer by The National Poll on Healthy Aging examined how older adults use and think about AI; about 55% of those surveyed said they use AI for purposes including health information and social connection. A recent poll from the Pew Research Center reports that one-third of Americans sometimes get health information from social media and that 22% get health information from AI chatbots. While most respondents said they do not view these as highly accurate or personalized sources, they did appreciate the ease of access and the understandability of the information.

The Gallup organization just released its own survey on Americans turning to AI to supplement their health care visits. It found that 59% of those who use AI for health information do so either before or after a physician visit. Again, while few report fully trusting the information they receive from AI, millions of people said they skipped a visit to a physician after consulting AI. What do AI chatbot users ask about? Everything from nutrition and exercise to medication side effects and a better understanding of a diagnosis. Most users are pleased with the instant access to information an AI chatbot provides, and in reality it’s not so different from doing a Google search for health information, which millions of us have done for decades. One last recent survey, from the Kaiser Family Foundation, also reports that about one-third of all adults in the US have turned to chatbots for health information, roughly the same share who seek out medical information on social media.
That report also revealed that about 13% of the US public has uploaded personal medical information to an AI chatbot in search of advice specific to their own health. And according to OpenAI, the maker of ChatGPT (one of the AI chatbots), about 40 million people ask ChatGPT a health question each day.
The question, then, is how safe and accurate it is to seek medical advice from a chatbot and to share your medical history with one. The mainstream media and, increasingly, medical journals are replete with anecdotes and data about the unreliability of information obtained from AI chatbots. While some of the problems have to do with how the user phrases a question and how much pertinent information they share with the chatbot, there are increasing worries that the information a chatbot provides may be no better than what a Google search turns up, or may even be misleading or false, potentially causing harm to patients, especially if they don’t follow up with an actual physician visit. One such study of medical information obtained from a chatbot was recently published in Nature Medicine. While AI companies state that they are constantly releasing updated and smarter AI models that know what kinds of follow-up questions to ask in order to provide more accurate information, much is still left to chance depending on how the user phrases a question. Two recent studies published in BMJ Open and JAMA Network Open also reported inaccuracies or even outright falsehoods in the information chatbots provided when various realistic health questions were posed. Moreover, the answers can at times come across as overly confident and indisputable, never admitting that perhaps the AI didn’t know enough to respond accurately. While many users, perhaps the majority, appear to approach these interactions with some skepticism about their trustworthiness, it seems likely that some people will be harmed by relying on the information without further checking with a health care provider.
As already stated, many people are nonetheless uploading their personal information to an AI chatbot to see what can be learned and what advice can be gleaned. Some of these anecdotes seem to end reasonably well, while others display a healthy skepticism about how valuable the exercise was. Unfortunately, one recent story in The New York Times about a cancer patient who was unwilling to listen to his oncologist after digging deep into AI about his condition ended tragically with the patient’s unnecessary death. So, what’s the best way to use AI for your health and well-being, given that it’s currently impossible to determine whether the benefits of AI medical advice outweigh the risks? A recent post from Kettering Health offers some useful advice (including a useful glossary of AI terms). First, remember that the HIPAA privacy rule does not apply to information you upload to a chatbot, as the chatbot is not a recognized medical provider, so be aware that you are putting the privacy of your medical information at risk. If possible, do not upload any personally identifying information. Beyond that, it’s suggested that you confine your use of an AI chatbot to helping you prepare for an appointment (perhaps asking for a list of pertinent questions to ask your doctor) or to creating a clear summary of what’s been going on with your health. You may also find it useful to turn to AI after a visit to clarify medical terms or to help you build a routine for following your doctor’s orders, such as a tracking tool to document your symptoms. The article concludes that “You can use AI, but trust your provider.” For now, that appears to be sound advice in the constantly evolving world of AI chatbots and information.
