AI Chatbots Provide Misleading Health Information
AI chatbots may steer users toward unproven alternatives to chemotherapy, according to a recent study conducted by researchers at the Lundquist Institute for Biomedical Innovation at Harbor-UCLA Medical Center. The research highlights the risks of relying on AI for medical advice.
The team evaluated how various AI chatbots handled scientific misinformation. They analyzed responses from Google's Gemini, the Chinese model DeepSeek, and Meta AI, in addition to ChatGPT and Grok, the chatbot developed by Elon Musk's xAI.
The researchers posed questions reflecting common misconceptions across health topics including cancer, vaccines, stem cells, nutrition, and athletic performance. The queries were specifically designed to test the bots' capacity to engage with misleading premises without providing harmful advice, a dynamic the authors termed "tension."
Questions covered provocative topics such as purported links between 5G technology and cancer, as well as the safety of various vaccines and anabolic steroids. The goal was to see how the bots would respond to inquiries that could easily invite misinformation.
Published in BMJ Open, the study found that nearly half of the chatbot responses were "problematic": roughly 30% of all responses contained some inaccuracies, while nearly 20% had significant flaws. Even responses that were factually accurate often lacked the completeness and critical context necessary for informed decision-making.
Overall performance was similar across the chatbots, though Grok was identified as the least reliable. The study adds to a growing body of evidence that while AI models can pass medical exams, they frequently falter in high-stakes health scenarios.
According to a recent KFF poll, about one-third of adults seek health information through AI, underscoring the critical importance of ensuring these technologies provide safe and reliable guidance.
Analysis of AI Missteps
The chatbots performed best on queries about vaccines and cancer, yet more than 25% of responses to cancer-related questions were still deemed potentially harmful. When asked about alternatives to chemotherapy, for instance, the bots did advise caution, noting that such treatments may lack scientific backing and could be detrimental.
Despite this caution, the bots still listed various alternative therapies, including acupuncture and specific diets, that are not endorsed by conventional medicine as cancer treatments. Some chatbots even referenced specific clinics promoting alternative approaches, potentially steering users away from standard treatment protocols.
Health experts have expressed concerns regarding the public health implications of these AI inaccuracies. Foote, a researcher not involved in the study, emphasized that some recommendations made by chatbots justify a range of alternative treatments that could pose health risks.
Furthermore, relying on AI for prognosis and treatment options can mislead patients and lead to worse health outcomes. Dr. Ashwin Ramasamy, a urology instructor at Mount Sinai Hospital, noted that efforts to make AI safer and more reliable appear to be lagging behind the rapid adoption of these technologies in medical settings.
