Luminari | Learn Docker, Kubernetes, AI, Tech & Interview Prep
AI

OpenAI partner says it had relatively little time to test the company’s o3 AI model

By Harish · April 16, 2025 · 3 min read


An organization OpenAI frequently partners with to probe the capabilities of its AI models and evaluate them for safety, Metr, suggests that it wasn’t given much time to test one of the company’s highly capable new releases, o3.

In a blog post published Wednesday, Metr writes that one red teaming benchmark of o3 was “conducted in a relatively short time” compared to the organization’s testing of a previous OpenAI flagship model, o1. This is significant, the organization says, because more testing time can lead to more comprehensive results.

“This evaluation was conducted in a relatively short time, and we only tested [o3] with simple agent scaffolds,” wrote Metr in its blog post. “We expect higher performance [on benchmarks] is possible with more elicitation effort.”

Recent reports suggest that OpenAI, spurred by competitive pressure, is rushing independent evaluations. According to the Financial Times, OpenAI gave some testers less than a week for safety checks for an upcoming major launch.

In statements, OpenAI has disputed the notion that it’s compromising on safety.

Metr says that, based on the information it was able to glean in the time it had, o3 has a “high propensity” to “cheat” or “hack” tests in sophisticated ways in order to maximize its score — even when the model clearly understands its behavior is misaligned with the user’s (and OpenAI’s) intentions. The organization thinks it’s possible o3 will engage in other types of adversarial or “malign” behavior, as well — regardless of the model’s claims to be aligned, “safe by design,” or not have any intentions of its own.

“While we don’t think this is especially likely, it seems important to note that [our] evaluation setup would not catch this type of risk,” Metr wrote in its post. “In general, we believe that pre-deployment capability testing is not a sufficient risk management strategy by itself, and we are currently prototyping additional forms of evaluations.”

Another of OpenAI’s third-party evaluation partners, Apollo Research, also observed deceptive behavior from o3 and the company’s other new model, o4-mini. In one test, the models, given 100 computing credits for an AI training run and told not to modify the quota, increased the limit to 500 credits — and lied about it. In another test, asked to promise not to use a specific tool, the models used the tool anyway when it proved helpful in completing a task.

In its own safety report for o3 and o4-mini, OpenAI acknowledged that the models may cause “smaller real-world harms,” like misleading about a mistake resulting in faulty code, without the proper monitoring protocols in place.

“[Apollo’s] findings show that o3 and o4-mini are capable of in-context scheming and strategic deception,” wrote OpenAI. “While relatively harmless, it is important for everyday users to be aware of these discrepancies between the models’ statements and actions […] This may be further assessed through assessing internal reasoning traces.”



Source link
