Your Computer Will See You Now: Using AI For Medical Information

November 5, 2025
Virtually anyone who has interacted with the health care system lately can find plenty to complain about. Whether it’s affordability, insurance, access to care, or the old-fashioned and often missing human connection of the doctor-patient relationship, nothing feels easy or adequate anymore. It’s enough to make you throw up your hands and pray for a miracle. For some, though, it now comes down to tapping at a keyboard and asking an artificial intelligence (AI) program whatever questions or concerns come to mind. And according to recent surveys, that’s just what a significant number of us are doing.

While the overwhelming majority of us seem to trust the health information we receive from our human providers, the reality is that many now turn to online sources, including AI chatbots, for answers and information about specific symptoms or health conditions. According to a survey conducted in April 2025 by the Annenberg Public Policy Center, nearly 8 in 10 adults say they’re likely to go online for health information, and nearly two-thirds said they have seen AI-generated responses, beyond a traditional Google search, in answer to their questions. Most Americans (63%) also consider AI-generated health information somewhat or very reliable. That means the majority of those consulting AI chatbots may not be sufficiently scrutinizing the information they receive or checking its factual accuracy, the very strategies most experts recommend for people seeking health information from their computers. But as one recent commentator who has embraced “Dr. Chatbot” makes clear, “My human doctors rarely have time to talk for long, and don’t seem all that interested in the big picture. Chatbots are different.”
So if your strategy is to seek guidance and support from ChatGPT, Gemini, or another AI platform, what should you know about the process and the reliability of the information? There’s no doubt that an AI chatbot can give you useful information about a diagnosis you’ve received, medical jargon you may not understand, or explanations of disease progression and available treatment options. Such a platform can also help you prepare by generating a list of questions to ask your provider, arranged in a concise and logical order, so that you make the most of your time in the office. That said, experts warn that you need caution and critical thinking before blindly accepting what a chatbot tells you. As the MD Anderson website advises patients about consulting a chatbot: “Trust but verify.”
So what does that mean for the interaction you have and the information you receive? While AI can be an educational resource and can complement the advice or information a provider gives you, in most circumstances it does not replace the need to actually see a healthcare practitioner. First, the information you receive is only as good as the details and context you give the chatbot: leave out details pertinent to your specific situation, and the response you get may not address your particular needs. Second, chatbots tend to validate whatever you seem to be seeking, so ask open-ended questions; otherwise you may get skewed responses, since so much depends on how you phrase the question. Third, there is real concern about what happens to the information you feed into a chatbot. If you provide detailed, intimate, and identifying information, will it be kept private? Avoid uploading a full medical record with identifying information, and experts suggest seeking out a chatbot that is HIPAA compliant to guard your privacy. AI platforms such as My Doctor Friend, Counsel Health, and Doctronic purport to be HIPAA compliant.
Other advice for verifying that the information you receive is trustworthy and accurate? Always check the sources your AI cites, and if no sources are provided, consider checking elsewhere. Be aware, too, that information from a chatbot is not always up to date. It’s even possible for a chatbot to cite sources that don’t actually exist (yes, they appear willing to do that). Finally, make sure you bring whatever information you gather to the attention of your care team. While many doctors themselves turn to AI to verify diagnoses or fill gaps in their knowledge, the reality is that each person presents with a unique health issue, and generalities about typical disease progression or treatment may not apply to your particular situation. There’s no denying the ease and allure of a computer that can instantly answer your medical questions, but it likely doesn’t know much about you as a specific human being with unique circumstances, so it may not be able to adequately, sufficiently, or correctly provide the help you need. And of course, in any emergency, dial 911 rather than asking your “chatbot” doctor for help. An emergency requires a human touch, not a virtual chat.