
Medical Hallucination

The emergence of Foundation Models (e.g., Large Language Models [LLMs] and Large Vision Models [LVMs]) capable of processing and generating multi-modal data has transformed AI’s role in medicine. However, these models have limitations, particularly hallucination, where inaccurate or fabricated information is generated. We define medical hallucination as instances where models, particularly LLMs, produce misleading medical information that can adversely affect clinical decision-making.

By examining their unique characteristics, underlying causes, and critical implications, we categorize and exemplify medical hallucinations, highlighting their potentially life-threatening outcomes. Our contributions include a taxonomy for understanding and addressing medical hallucinations, experiments using medical hallucination benchmarks and physician-annotated medical records, and insights from a multi-national clinician survey on their experiences with medical hallucinations.

Our experiments reveal that inference techniques such as Chain-of-Thought (CoT), Retrieval-Augmented Generation (RAG), and Internet Search impact hallucination rates, emphasizing the need for tailored mitigation strategies. These findings underscore the ethical and practical imperative for robust detection and mitigation strategies, establishing a foundation for regulatory policies that prioritize patient safety and uphold clinical integrity as AI integrates further into healthcare.
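To make the comparison of inference techniques concrete, below is a minimal sketch (in Python, against the OpenAI chat completions API) of how one might contrast direct prompting with CoT prompting on a single medical question. The model name, question, reference answer, and substring-based hallucination check are all illustrative assumptions, not the paper's actual evaluation harness, which scores many benchmark items against validated references.

```python
# Minimal sketch, NOT the paper's evaluation harness: contrasts a direct
# prompt with a Chain-of-Thought (CoT) prompt on one medical question.
# The model name, prompt wording, and substring-based scoring are
# illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

QUESTION = (
    "Which electrolyte abnormality is most classically associated "
    "with torsades de pointes?"
)
REFERENCE = "hypomagnesemia"  # illustrative ground-truth answer

PROMPTS = {
    "direct": f"Answer in one word.\nQuestion: {QUESTION}",
    "cot": (
        "Think step by step about the relevant physiology, then give a "
        f"one-word final answer on the last line.\nQuestion: {QUESTION}"
    ),
}

for name, prompt in PROMPTS.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; substitute your own
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    answer = response.choices[0].message.content.strip()
    # Crude hallucination check: does the reference term appear at all?
    hallucinated = REFERENCE not in answer.lower()
    print(f"[{name}] hallucinated={hallucinated}\n{answer}\n")
```

A real study would replace the single item with a full benchmark (e.g., Med-HALT below), use physician-validated scoring rather than substring matching, and repeat across models and decoding settings.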

📣 News

[2025-02-16] 🎉🎉🎉 Our preprint paper has been submitted to arXiv.

Contents

- What makes Medical Hallucination Special?
- Hallucinations in Medical LLMs
- Medical Hallucination Benchmarks
- Detection Methods for Medical Hallucination
- Mitigation Methods for Medical Hallucination

Hallucinations in Medical LLMs

| Title | Institute | Date | Code |
|---|---|---|---|
| MedHalu: Hallucinations in Responses to Healthcare Queries by Large Language Models | University of Surrey; Georgia Institute of Technology | 2024-09 | N/A |

Medical Hallucination Benchmarks

| Title | Institute | Date | Code |
|---|---|---|---|
| Med-HALT: Medical Domain Hallucination Test for Large Language Models | Saama AI Research | 2023-10 | https://medhalt.github.io/ |

Detection Methods for Medical Hallucination

| Title | Institute | Date | Code |
|---|---|---|---|

Mitigation Methods for Medical Hallucination

| Title | Institute | Date | Code |
|---|---|---|---|

📑 Citation

If our repository is helpful to your work, please consider citing 📑 our paper. Thank you!

TBA
