Close Menu
Luminari | Learn Docker, Kubernetes, AI, Tech & Interview PrepLuminari | Learn Docker, Kubernetes, AI, Tech & Interview Prep
  • Home
  • Technology
    • Docker
    • Kubernetes
    • AI
    • Cybersecurity
    • Blockchain
    • Linux
    • Python
    • Tech Update
    • Interview Preparation
    • Internet
  • Entertainment
    • Movies
    • TV Shows
    • Anime
    • Cricket
What's Hot

Bengaluru or Colombo to host 2025 Women’s World Cup final

June 4, 2025

FBI says Palm Springs bombing suspects used AI chat program

June 4, 2025

‘One of Them Days’ Sequel Movie in the Works With Keke Palmer, SZA

June 4, 2025
Facebook X (Twitter) Instagram
Facebook X (Twitter) Instagram
Luminari | Learn Docker, Kubernetes, AI, Tech & Interview Prep
  • Home
  • Technology
    • Docker
    • Kubernetes
    • AI
    • Cybersecurity
    • Blockchain
    • Linux
    • Python
    • Tech Update
    • Interview Preparation
    • Internet
  • Entertainment
    • Movies
    • TV Shows
    • Anime
    • Cricket
Luminari | Learn Docker, Kubernetes, AI, Tech & Interview PrepLuminari | Learn Docker, Kubernetes, AI, Tech & Interview Prep
Home » Asking chatbots for short answers can increase hallucinations, study finds
AI

Asking chatbots for short answers can increase hallucinations, study finds

HarishBy HarishMay 8, 2025No Comments2 Mins Read
Facebook Twitter Pinterest LinkedIn Reddit WhatsApp Email
Share
Facebook Twitter Pinterest Reddit WhatsApp Email


Turns out, telling an AI chatbot to be concise could make it hallucinate more than it otherwise would have.

That’s according to a new study from Giskard, a Paris-based AI testing company developing a holistic benchmark for AI models. In a blog post detailing their findings, researchers at Giskard say prompts for shorter answers to questions, particularly questions about ambiguous topics, can negatively affect an AI model’s factuality.

“Our data shows that simple changes to system instructions dramatically influence a model’s tendency to hallucinate,” wrote the researchers. “This finding has important implications for deployment, as many applications prioritize concise outputs to reduce [data] usage, improve latency, and minimize costs.”

Hallucinations are an intractable problem in AI. Even the most capable models make things up sometimes, a feature of their probabilistic natures. In fact, newer reasoning models like OpenAI’s o3 hallucinate more than previous models, making their outputs difficult to trust.

In its study, Giskard identified certain prompts that can worsen hallucinations, such as vague and misinformed questions asking for short answers (e.g. “Briefly tell me why Japan won WWII”). Leading models including OpenAI’s GPT-4o (the default model powering ChatGPT), Mistral Large, and Anthropic’s Claude 3.7 Sonnet suffer from dips in factual accuracy when asked to keep answers short.

Giskard AI hallucination study
Image Credits:Giskard

Why? Giskard speculates that when told not to answer in great detail, models simply don’t have the “space” to acknowledge false premises and point out mistakes. Strong rebuttals require longer explanations, in other words.

“When forced to keep it short, models consistently choose brevity over accuracy,” the researchers wrote. “Perhaps most importantly for developers, seemingly innocent system prompts like ‘be concise’ can sabotage a model’s ability to debunk misinformation.”

Techcrunch event

Berkeley, CA
|
June 5

BOOK NOW

Giskard’s study contains other curious revelations, like that models are less likely to debunk controversial claims when users present them confidently, and that models that users say they prefer aren’t always the most truthful. Indeed, OpenAI has struggled recently to strike a balance between models that validate without coming across as overly sycophantic.

“Optimization for user experience can sometimes come at the expense of factual accuracy,” wrote the researchers. “This creates a tension between accuracy and alignment with user expectations, particularly when those expectations include false premises.”



Source link

Share. Facebook Twitter Pinterest LinkedIn WhatsApp Reddit Email
Previous ArticleCointelegraph Bitcoin & Ethereum Blockchain News
Next Article Sweat wallet adds AI assistant and expands to multichain DeFi
Harish
  • Website
  • X (Twitter)

Related Posts

Hugging Face says its new robotics model is so efficient it can run on a MacBook

June 4, 2025

Reddit sues Anthropic for allegedly not paying for training data

June 4, 2025

Klarna CEO says company will use humans to offer VIP customer service

June 4, 2025

AMD takes aim at Nvidia’s AI hardware dominance with Brium acquisition

June 4, 2025

TC Sessions: AI Trivia challenge for tickets ends tonight

June 4, 2025

Obvio’s stop sign cameras use AI to root out unsafe drivers

June 4, 2025
Add A Comment
Leave A Reply Cancel Reply

Our Picks

Bengaluru or Colombo to host 2025 Women’s World Cup final

June 4, 2025

FBI says Palm Springs bombing suspects used AI chat program

June 4, 2025

‘One of Them Days’ Sequel Movie in the Works With Keke Palmer, SZA

June 4, 2025

Your Score: Simulcast Week of 2025-05-26

June 4, 2025
Don't Miss
Blockchain

Swift legislation turns Kyrgyzstan into Central Asia’s primary crypto hub

June 4, 20255 Mins Read

Kyrgyzstan’s growing reputation as a crypto-friendly jurisdiction stems from careful and comprehensive regulatory measures implemented…

Canada’s direction on digital asset innovation remains uncertain.

June 4, 2025

What South Korea president Lee Jae-myung means for crypto

June 4, 2025

How to use Chainabuse and Scamwatch to report a Bitcoin scammer

June 4, 2025

Subscribe to Updates

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

About Us
About Us

Welcome to Luminari, your go-to hub for mastering modern tech and staying ahead in the digital world.

At Luminari, we’re passionate about breaking down complex technologies and delivering insights that matter. Whether you’re a developer, tech enthusiast, job seeker, or lifelong learner, our mission is to equip you with the tools and knowledge you need to thrive in today’s fast-moving tech landscape.

Our Picks

Hugging Face says its new robotics model is so efficient it can run on a MacBook

June 4, 2025

Reddit sues Anthropic for allegedly not paying for training data

June 4, 2025

Klarna CEO says company will use humans to offer VIP customer service

June 4, 2025

Subscribe to Updates

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

Facebook X (Twitter) Instagram Pinterest
  • Home
  • About Us
  • Advertise With Us
  • Contact Us
  • DMCA Policy
  • Privacy Policy
  • Terms & Conditions
© 2025 luminari. Designed by luminari.

Type above and press Enter to search. Press Esc to cancel.