AI Chatbots and Cancer Information
A study conducted by the University of Sharjah, in collaboration with Flinders University (Australia), Massachusetts General Hospital/Harvard Medical School (USA), and Prince of Songkla University (Thailand), has evaluated the ability of AI-powered chatbots to provide accurate and reliable cancer-related information.
Published in the European Journal of Cancer, the research assessed seven major AI chatbots (ChatGPT, Google’s Gemini, Microsoft Copilot, Meta AI, Claude, Grok, and Perplexity) across eight languages: English, Arabic, French, Chinese, Thai, Hindi, Nepali, and Vietnamese.
The study explored whether these chatbots could provide factually correct, well-referenced, and easily understandable cancer information to users worldwide.
Key Findings:
- English responses were the most reliable, with no major inaccuracies.
- Non-English responses had some issues: 7 of 294 answers contained mistranslations, incorrect drug names, or misleading treatment recommendations.
- 48% of responses included references, but many cited unreliable .com websites, raising concerns about the quality of sources.
- Readability was inconsistent; some AI-generated responses were too complex for the general public to understand.
Why This Matters:
As more people rely on AI chatbots for medical advice, ensuring these tools deliver trustworthy, well-referenced, and accessible health information is critical. This study highlights the significant progress AI has made while also identifying key areas for improvement, particularly in multilingual accuracy, reference reliability, and readability.
Read the full study in the European Journal of Cancer: DOI 10.1016/j.ejca.2025.115274