
CHANCY.AI

Factual Research — Accurate Forecasts — No Fabrications

Sample Transcript

User Query

What are the typical rates of hallucinations or fabrications delivered by the most recent AI chatbots?

🔮 Chancy Response

Based on the most recent research data from 2024-2026, modern AI chatbots continue to exhibit significant hallucination rates, though newer versions show some improvement and rates vary widely across models and applications.

Current Model Performance (2024-2026)

According to 2026 research, Gemini shows the highest rate, fabricating information in up to 38% of responses, while ChatGPT makes up information around 35% of the time. Other studies from this period report different ranges, with hallucination rates varying from 17% to 35% across testing methodologies, indicating that the measurement approach significantly affects the reported figures.
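One reason reported rates spread this widely is plain sampling uncertainty: most studies grade at most a few hundred responses, and a proportion estimated from a sample that small carries a wide confidence interval. A minimal illustration in Python, using a hypothetical study of 200 graded responses (the counts are illustrative, not taken from any source cited here):

from math import sqrt

def wilson_interval(hallucinated: int, total: int, z: float = 1.96) -> tuple[float, float]:
    # 95% Wilson score interval for an observed proportion.
    p = hallucinated / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
    return centre - half, centre + half

# Hypothetical study: 57 hallucinated answers out of 200 graded responses.
low, high = wilson_interval(57, 200)
print(f"observed rate {57/200:.1%}, 95% CI {low:.1%} to {high:.1%}")
# observed rate 28.5%, 95% CI roughly 22.7% to 35.1%

A single measured rate near 28% is therefore statistically consistent with anything from the low twenties to the mid thirties, roughly the spread reported above, before differences in grading criteria are even considered.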

The most recent systematic analysis found that GPT-4 demonstrated a 28.6% hallucination rate, while GPT-3.5 showed 39.6%, suggesting some improvement in newer model versions. Notably, Google's Bard exhibited an extremely high 91.4% hallucination rate in systematic review tasks, making it particularly unreliable for research-based queries.

Citation and Reference Fabrication Persistence

Despite advances in AI technology, fabricated citations remain a critical issue in 2024-2026. Recent medical AI research confirms that "when ChatGPT and other AI chatbots were challenged to authenticate their outputs with references, they generated multiple citations with detailed bibliographic data that seemed perfectly authentic, but most were actually fabricated, contained at least one falsified citation data, or were completely erroneous when verified through medical literature resources."

A 2026 comprehensive analysis found that over 60% of AI-generated citations were either broken or completely fabricated, with these citations appearing professional and using real-sounding publication names that were often indistinguishable from valid references. This represents a persistent problem across model generations.
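One practical way to verify such references, assuming the citation supplies a DOI, is to look it up against Crossref's public REST API, which returns a 404 for identifiers it has no record of. A minimal sketch (the example DOI is hypothetical):

import requests

def doi_resolves(doi: str) -> bool:
    # Crossref returns HTTP 200 with bibliographic metadata for known DOIs,
    # and 404 for DOIs it has no record of.
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

# Hypothetical AI-supplied reference to check.
print(doi_resolves("10.1000/hypothetical-chatbot-citation"))  # expected: False

Note that a resolving DOI only shows that some record exists; the returned title, authors, and journal still need to be compared against what the chatbot claimed, since fabricated citations sometimes borrow real identifiers.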

Domain-Specific Variations

Recent research reveals significant variation based on application context. In cancer information queries handled by retrieval-augmented generation (RAG) systems, GPT-4 still generated an answer 52% of the time and GPT-3.5 71% of the time when asked about information not covered in the knowledge base, often filling the gap with fabricated content.
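For context on why those gaps matter: a retrieval-augmented generation pipeline retrieves passages from a curated corpus and instructs the model to answer only from them, and the safer behaviour when nothing relevant is retrieved is to refuse rather than answer. A minimal sketch of that flow in Python, with hypothetical retrieve and generate stand-ins rather than any specific study's implementation:

from dataclasses import dataclass

@dataclass
class Passage:
    text: str
    score: float  # retrieval relevance score in [0, 1]

def retrieve(query: str, corpus: list[Passage], k: int = 3) -> list[Passage]:
    # Placeholder ranking; a real system would use embedding or keyword search.
    return sorted(corpus, key=lambda p: p.score, reverse=True)[:k]

def generate(prompt: str) -> str:
    # Stand-in for the LLM call a real pipeline would make here.
    return "[model answer grounded in the provided context]"

def answer(query: str, corpus: list[Passage], min_score: float = 0.5) -> str:
    hits = [p for p in retrieve(query, corpus) if p.score >= min_score]
    if not hits:
        # Refusing when coverage is missing is what keeps the model from
        # inventing content to fill the gap.
        return "I could not find this in the provided sources."
    context = "\n".join(p.text for p in hits)
    return generate(f"Answer ONLY from this context:\n{context}\n\nQuestion: {query}")

The 52% and 71% figures above describe how often the models took the answer path anyway, even when the knowledge base had no coverage for the question.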

Vectara's updated research estimates that current chatbots fabricate information at least 3 percent of the time and sometimes as much as 27 percent, with the variation largely dependent on query complexity and domain specificity.

Dangerous Medical Misinformation

A particularly concerning 2026 study found that leading AI models, including GPT-4o and Gemini 1.5 Pro, could be manipulated to produce dangerously false medical advice. These systems generated claims that sunscreen causes skin cancer or linked 5G to infertility, accompanied by convincing but entirely fabricated citations from reputable journals like The Lancet.

Specialized vs. General-Purpose Systems

The most significant finding in recent research is the stark difference between specialized and general-purpose systems. Academic-focused AI tools like Elicit and SciSpace demonstrated negligible hallucination rates, while general chatbots like ChatGPT and Bing exhibited critical hallucination levels. This suggests that domain-specific training and reliable source integration dramatically reduce fabrication rates.

Recent studies also documented concerning psychological effects, with mental health outlets reporting users who "withdraw socially, converse compulsively with the chatbot, and begin to hallucinate textual voices when the device is off," raising questions about the broader impact of AI fabrications on vulnerable users.

Transcription and Audio Hallucinations

Beyond text generation, recent investigations revealed that OpenAI's Whisper audio transcription system "invents false content in transcriptions, inserting fabricated words or entire phrases not present in the audio," including attribution of race, violent rhetoric, and nonexistent medical treatments. Despite warnings against use in high-risk domains, over 30,000 medical workers continue using Whisper-powered tools for patient visit transcriptions.

Legal Disclaimer

This information is provided for educational purposes only based on current research data. It does not constitute financial, legal, medical, or professional advice. Always consult qualified professionals before making important decisions.

📚 Research Sources

Search 1: typical rates hallucinations fabrications most recent ai chatbots 2026
  1. AI Hallucination Report 2026: Which AI Hallucinates the Most?
  2. Reference Hallucination Score for Medical AI Chatbots - PMC
  3. Is ChatGPT Lying? Understanding AI Hallucinations in 2026
  4. Detecting AI Hallucinations: Signs, Risks, and Mitigation Strategies
  5. Hallucination (artificial intelligence) - Wikipedia
Search 2: Government & Medical Sources (site:gov)
  1. Reference Hallucination Score for Medical AI Chatbots - PMC
  2. ChatGPT: these are not hallucinations – they're fabrications and falsifications - PMC
  3. Is Artificial Intelligence Hallucinating? - PMC
  4. Artificial Hallucinations in ChatGPT: Implications in Scientific Writing - PMC
  5. Do generative AI chatbots increase psychosis risk? - PMC
Search 3: Academic Sources (site:edu)
  1. What are AI chatbots actually doing when they 'hallucinate'? - Northeastern
  2. Hallucinations - AI Literacy in the Biomedical Sciences - Tufts
  3. When AI Gets It Wrong: Addressing AI Hallucinations and Bias - MIT Sloan
  4. AI on Trial: Legal Models Hallucinate in 1 out of 6 - Stanford HAI
  5. How to Spot and Avoid GenAI Hallucinations - Pitt Digital
Search 4: Research Studies
  1. A.I. Hallucinations Are Getting Worse - The New York Times
  2. Reference Hallucination Score for Medical AI Chatbots - PMC
  3. Hallucination (artificial intelligence) - Wikipedia
  4. AI Hallucination Report 2026: Which AI Hallucinates the Most?
  5. Chatbots May 'Hallucinate' More Often Than Many Realize - NYT
Search 5: Data & Statistics 2026
  1. Reference Hallucination Score for Medical AI Chatbots - PMC
  2. Detecting AI Hallucinations: Signs, Risks, and Mitigation Strategies
  3. Is ChatGPT Lying? Understanding AI Hallucinations in 2026
  4. I asked six popular AIs the same trick questions, and every one hallucinated | ZDNET
  5. JMIR Medical Informatics - Reference Hallucination Score Study
Search 6: Statistics & Reports
  1. A.I. Hallucinations Are Getting Worse - The New York Times
  2. Why AI 'Hallucinations' Are Worse Than Ever - Forbes
  3. AI Hallucination Report 2026: Which AI Hallucinates the Most?
  4. JMIR Medical Informatics - Reference Hallucination Score Study
  5. Hallucination (artificial intelligence) - Wikipedia
Search 7: Study Results
  1. A.I. Hallucinations Are Getting Worse - The New York Times
  2. Why AI 'Hallucinations' Are Worse Than Ever - Forbes
  3. AI Chatbots Hallucinate Facts in 45% of News Summaries, Study Shows
  4. JMIR Medical Informatics - Reference Hallucination Score Study
  5. Hallucination (artificial intelligence) - Wikipedia
Search 8: Latest 2026 Research
  1. I asked six popular AIs the same trick questions, and every one hallucinated | ZDNET
  2. Is ChatGPT Lying? Understanding AI Hallucinations in 2026
  3. JMIR Medical Informatics - Reference Hallucination Score Study
  4. Hallucination in machines: Why AI makes things up | Hindustan Times
  5. Hallucination (artificial intelligence) - Wikipedia