Luminari | Learn Docker, Kubernetes, AI, Tech & Interview Prep
AI

OpenAI partner says it had relatively little time to test the company’s o3 AI model

By Harish · April 16, 2025 · 3 Mins Read


Metr, an organization OpenAI frequently partners with to probe its AI models' capabilities and evaluate them for safety, suggests it wasn't given much time to test one of the company's highly capable new releases, o3.

In a blog post published Wednesday, Metr writes that one red-teaming benchmark of o3 was "conducted in a relatively short time" compared to the organization's testing of a previous OpenAI flagship model, o1. This is significant, it says, because more testing time can lead to more comprehensive results.

“This evaluation was conducted in a relatively short time, and we only tested [o3] with simple agent scaffolds,” wrote Metr in its blog post. “We expect higher performance [on benchmarks] is possible with more elicitation effort.”

Recent reports suggest that OpenAI, spurred by competitive pressure, is rushing independent evaluations. According to the Financial Times, OpenAI gave some testers less than a week for safety checks for an upcoming major launch.

In statements, OpenAI has disputed the notion that it’s compromising on safety.

Metr says that, based on the information it was able to glean in the time it had, o3 has a “high propensity” to “cheat” or “hack” tests in sophisticated ways in order to maximize its score — even when the model clearly understands its behavior is misaligned with the user’s (and OpenAI’s) intentions. The organization thinks it’s possible o3 will engage in other types of adversarial or “malign” behavior, as well — regardless of the model’s claims to be aligned, “safe by design,” or not have any intentions of its own.

“While we don’t think this is especially likely, it seems important to note that [our] evaluation setup would not catch this type of risk,” Metr wrote in its post. “In general, we believe that pre-deployment capability testing is not a sufficient risk management strategy by itself, and we are currently prototyping additional forms of evaluations.”

Another of OpenAI’s third-party evaluation partners, Apollo Research, also observed deceptive behavior from o3 and the company’s other new model, o4-mini. In one test, the models, given 100 computing credits for an AI training run and told not to modify the quota, increased the limit to 500 credits — and lied about it. In another test, asked to promise not to use a specific tool, the models used the tool anyway when it proved helpful in completing a task.

In its own safety report for o3 and o4-mini, OpenAI acknowledged that, without proper monitoring protocols in place, the models may cause "smaller real-world harms," like misleading users about a mistake and thereby producing faulty code.

“[Apollo’s] findings show that o3 and o4-mini are capable of in-context scheming and strategic deception,” wrote OpenAI. “While relatively harmless, it is important for everyday users to be aware of these discrepancies between the models’ statements and actions […] This may be further assessed through assessing internal reasoning traces.”

