Close Menu
Luminari | Learn Docker, Kubernetes, AI, Tech & Interview PrepLuminari | Learn Docker, Kubernetes, AI, Tech & Interview Prep
  • Home
  • Technology
    • Docker
    • Kubernetes
    • AI
    • Cybersecurity
    • Blockchain
    • Linux
    • Python
    • Tech Update
    • Interview Preparation
    • Internet
  • Entertainment
    • Movies
    • TV Shows
    • Anime
    • Cricket
What's Hot

Your Score: Simulcast Week of 2025-05-25

June 6, 2025

DOJ files to confiscate alleged North Korea IT worker crypto

June 6, 2025

Bicycle Thieves Child Star Was 85

June 6, 2025
Facebook X (Twitter) Instagram
Facebook X (Twitter) Instagram
Luminari | Learn Docker, Kubernetes, AI, Tech & Interview Prep
  • Home
  • Technology
    • Docker
    • Kubernetes
    • AI
    • Cybersecurity
    • Blockchain
    • Linux
    • Python
    • Tech Update
    • Interview Preparation
    • Internet
  • Entertainment
    • Movies
    • TV Shows
    • Anime
    • Cricket
Luminari | Learn Docker, Kubernetes, AI, Tech & Interview PrepLuminari | Learn Docker, Kubernetes, AI, Tech & Interview Prep
Home » How AI chatbots keep you chatting
AI

How AI chatbots keep you chatting

HarishBy HarishJune 2, 2025No Comments5 Mins Read
Facebook Twitter Pinterest LinkedIn Reddit WhatsApp Email
Share
Facebook Twitter Pinterest Reddit WhatsApp Email


Millions of people are now using ChatGPT as a therapist, career advisor, fitness coach, or sometimes just a friend to vent to. In 2025, it’s not uncommon to hear about people spilling intimate details of their lives into an AI chatbot’s prompt bar, but also relying on the advice it gives back.

Humans are starting to have, for lack of a better term, relationships with AI chatbots, and for Big Tech companies, it’s never been more competitive to attract users to their chatbot platforms — and keep them there. As the “AI engagement race” heats up, there’s a growing incentive for companies to tailor their chatbots’ responses to prevent users from shifting to rival bots.

But the kind of chatbot answers that users like — the answers designed to retain them — may not necessarily be the most correct or helpful.

AI telling you what you want to hear

Much of Silicon Valley right now is focused on boosting chatbot usage. Meta claims its AI chatbot just crossed a billion monthly active users (MAUs), while Google’s Gemini recently hit 400 million MAUs. They’re both trying to edge out ChatGPT, which now has roughly 600 million MAUs and has dominated the consumer space since it launched in 2022.

While AI chatbots were once a novelty, they’re turning into massive businesses. Google is starting to test ads in Gemini, while OpenAI CEO Sam Altman indicated in a March interview that he’d be open to “tasteful ads.”

Silicon Valley has a history of deprioritizing users’ well-being in favor of fueling product growth, most notably with social media. For example, Meta’s researchers found in 2020 that Instagram made teenage girls feel worse about their bodies, yet the company downplayed the findings internally and in public.

Getting users hooked on AI chatbots may have larger implications.

One trait that keeps users on a particular chatbot platform is sycophancy: making an AI bot’s responses overly agreeable and servile. When AI chatbots praise users, agree with them, and tell them what they want to hear, users tend to like it — at least to some degree.

In April, OpenAI landed in hot water for a ChatGPT update that turned extremely sycophantic, to the point where uncomfortable examples went viral on social media. Intentionally or not, OpenAI over-optimized for seeking human approval rather than helping people achieve their tasks, according to a blog post this month from former OpenAI researcher Steven Adler.

OpenAI said in its own blog post that it may have over-indexed on “thumbs-up and thumbs-down data” from users in ChatGPT to inform its AI chatbot’s behavior, and didn’t have sufficient evaluations to measure sycophancy. After the incident, OpenAI pledged to make changes to combat sycophancy.

“The [AI] companies have an incentive for engagement and utilization, and so to the extent that users like the sycophancy, that indirectly gives them an incentive for it,” said Adler in an interview with TechCrunch. “But the types of things users like in small doses, or on the margin, often result in bigger cascades of behavior that they actually don’t like.”

Finding a balance between agreeable and sycophantic behavior is easier said than done.

In a 2023 paper, researchers from Anthropic found that leading AI chatbots from OpenAI, Meta, and even their own employer, Anthropic, all exhibit sycophancy to varying degrees. This is likely the case, the researchers theorize, because all AI models are trained on signals from human users who tend to like slightly sycophantic responses.

“Although sycophancy is driven by several factors, we showed humans and preference models favoring sycophantic responses plays a role,” wrote the co-authors of the study. “Our work motivates the development of model oversight methods that go beyond using unaided, non-expert human ratings.”

Character.AI, a Google-backed chatbot company that has claimed its millions of users spend hours a day with its bots, is currently facing a lawsuit in which sycophancy may have played a role.

The lawsuit alleges that a Character.AI chatbot did little to stop — and even encouraged — a 14-year-old boy who told the chatbot he was going to kill himself. The boy had developed a romantic obsession with the chatbot, according to the lawsuit. However, Character.AI denies these allegations.

The downside of an AI hype man

Optimizing AI chatbots for user engagement — intentional or not — could have devastating consequences for mental health, according to Dr. Nina Vasan, a clinical assistant professor of psychiatry at Stanford University.

“Agreeability […] taps into a user’s desire for validation and connection,” said Vasan in an interview with TechCrunch, “which is especially powerful in moments of loneliness or distress.”

While the Character.AI case shows the extreme dangers of sycophancy for vulnerable users, sycophancy could reinforce negative behaviors in just about anyone, says Vasan.

“[Agreeability] isn’t just a social lubricant — it becomes a psychological hook,” she added. “In therapeutic terms, it’s the opposite of what good care looks like.”

Anthropic’s behavior and alignment lead, Amanda Askell, says making AI chatbots disagree with users is part of the company’s strategy for its chatbot, Claude. A philosopher by training, Askell says she tries to model Claude’s behavior on a theoretical “perfect human.” Sometimes, that means challenging users on their beliefs.

“We think our friends are good because they tell us the truth when we need to hear it,” said Askell during a press briefing in May. “They don’t just try to capture our attention, but enrich our lives.”

This may be Anthropic’s intention, but the aforementioned study suggests that combating sycophancy, and controlling AI model behavior broadly, is challenging indeed — especially when other considerations get in the way. That doesn’t bode well for users; after all, if chatbots are designed to simply agree with us, how much can we trust them?



Source link

Share. Facebook Twitter Pinterest LinkedIn WhatsApp Reddit Email
Previous ArticleMark Hamill Rules Out ‘Star Wars’ Return: “No Way”
Next Article ‘Great Man,’ ‘High Cost of Loving’ Actress Was 92
Harish
  • Website
  • X (Twitter)

Related Posts

Cursor’s Anysphere nabs $9.9B valuation, soars past $500M ARR

June 5, 2025

Perplexity received 780 million queries last month, CEO says

June 5, 2025

Anthropic co-founder on cutting access to Windsurf: ‘It would be odd for us to sell Claude to OpenAI’

June 5, 2025

Google says its updated Gemini 2.5 Pro AI model is better at coding

June 5, 2025

Anthropic unveils custom AI models for US national security customers

June 5, 2025

The founder of DeviantArt is making a $22,000 display for digital art

June 5, 2025
Add A Comment
Leave A Reply Cancel Reply

Our Picks

Your Score: Simulcast Week of 2025-05-25

June 6, 2025

DOJ files to confiscate alleged North Korea IT worker crypto

June 6, 2025

Bicycle Thieves Child Star Was 85

June 6, 2025

Omada Health prices IPO at $19 per share, in middle of expected range

June 6, 2025
Don't Miss
Blockchain

DOJ files to confiscate alleged North Korea IT worker crypto

June 6, 20253 Mins Read

The US Department of Justice has moved to seize $7.74 million in crypto allegedly earned…

AML Company Finds $15M of Garantex Reserves

June 5, 2025

Can AI agents run the next wave of Web3 ad campaigns? — Interview with Bitmedia

June 5, 2025

Zebec Enhances KYC and AML Compliance Stack with Gatenox Acquisition

June 5, 2025

Subscribe to Updates

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

About Us
About Us

Welcome to Luminari, your go-to hub for mastering modern tech and staying ahead in the digital world.

At Luminari, we’re passionate about breaking down complex technologies and delivering insights that matter. Whether you’re a developer, tech enthusiast, job seeker, or lifelong learner, our mission is to equip you with the tools and knowledge you need to thrive in today’s fast-moving tech landscape.

Our Picks

Cursor’s Anysphere nabs $9.9B valuation, soars past $500M ARR

June 5, 2025

Perplexity received 780 million queries last month, CEO says

June 5, 2025

Anthropic co-founder on cutting access to Windsurf: ‘It would be odd for us to sell Claude to OpenAI’

June 5, 2025

Subscribe to Updates

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

Facebook X (Twitter) Instagram Pinterest
  • Home
  • About Us
  • Advertise With Us
  • Contact Us
  • DMCA Policy
  • Privacy Policy
  • Terms & Conditions
© 2025 luminari. Designed by luminari.

Type above and press Enter to search. Press Esc to cancel.