OpenAI may ‘adjust’ its safeguards if rivals release ‘high-risk’ AI

By Harish | April 15, 2025


OpenAI has updated its Preparedness Framework — the internal system it uses to assess the safety of AI models and determine necessary safeguards during development and deployment. In the update, OpenAI stated that it may “adjust” its safety requirements if a competing AI lab releases a “high-risk” system without similar protections in place.

The change reflects the increasing competitive pressures on commercial AI developers to deploy models quickly. OpenAI has been accused of lowering safety standards in favor of faster releases, and of failing to deliver timely reports detailing its safety testing. Last week, 12 former OpenAI employees filed a brief in Elon Musk’s case against OpenAI, arguing the company would be encouraged to cut even more corners on safety should it complete its planned corporate restructuring.

Perhaps anticipating criticism, OpenAI claims that it wouldn’t make these policy adjustments lightly, and that it would keep its safeguards at “a level more protective.”

“If another frontier AI developer releases a high-risk system without comparable safeguards, we may adjust our requirements,” wrote OpenAI in a blog post published Tuesday afternoon. “However, we would first rigorously confirm that the risk landscape has actually changed, publicly acknowledge that we are making an adjustment, assess that the adjustment does not meaningfully increase the overall risk of severe harm, and still keep safeguards at a level more protective.”

The refreshed Preparedness Framework also makes clear that OpenAI is relying more heavily on automated evaluations to speed up product development. The company says that while it hasn’t abandoned human-led testing altogether, it has built “a growing suite of automated evaluations” that can supposedly “keep up with [a] faster [release] cadence.”
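
For a sense of what such tooling might look like, here is a minimal Python sketch of an automated evaluation harness, assuming a model exposed as a plain text-in, text-out callable. The prompt set, the unsafe-response check, and the release threshold are all hypothetical; nothing here is OpenAI’s actual pipeline.

    # Hypothetical automated safety-evaluation loop: run scripted red-team
    # prompts through a candidate model and gate the release on the results.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class EvalResult:
        prompt: str
        response: str
        flagged: bool  # True if the response was judged unsafe

    def run_safety_evals(
        model: Callable[[str], str],
        prompts: List[str],
        is_unsafe: Callable[[str], bool],
    ) -> List[EvalResult]:
        """Score every red-team prompt and flag unsafe replies."""
        results = []
        for prompt in prompts:
            response = model(prompt)
            results.append(EvalResult(prompt, response, is_unsafe(response)))
        return results

    def passes_release_gate(results: List[EvalResult], max_flag_rate: float = 0.0) -> bool:
        """Fail the gate when the flagged fraction exceeds the allowed rate."""
        flag_rate = sum(r.flagged for r in results) / len(results)
        return flag_rate <= max_flag_rate

The appeal of the pattern is scale: a scripted harness can replay thousands of prompts against every candidate build overnight, which is what would let evaluations “keep up with [a] faster [release] cadence” in a way human-led testing cannot.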

Some reports contradict OpenAI’s account. According to the Financial Times, OpenAI gave testers less than a week to run safety checks on an upcoming major model, a compressed timeline compared with previous releases. The publication’s sources also alleged that many of OpenAI’s safety tests are now conducted on earlier versions of models rather than on the versions released to the public.

In statements, OpenAI has disputed the notion that it’s compromising on safety.

“OpenAI is quietly reducing its safety commitments,” Steven Adler (@sjgadler) wrote in a post on X on April 15, 2025. “Omitted from OpenAI’s list of Preparedness Framework changes: no longer requiring safety tests of finetuned models.” https://t.co/oTmEiAtSjS

Other changes to OpenAI’s framework pertain to how the company categorizes models according to risk, including models that can conceal their capabilities, evade safeguards, prevent their shutdown, and even self-replicate. OpenAI says that it’ll now focus on whether models meet one of two thresholds: “high” capability or “critical” capability.

OpenAI defines the former as a model that could “amplify existing pathways to severe harm.” The latter, per the company, are models that “introduce unprecedented new pathways to severe harm.”

“Covered systems that reach high capability must have safeguards that sufficiently minimize the associated risk of severe harm before they are deployed,” wrote OpenAI in its blog post. “Systems that reach critical capability also require safeguards that sufficiently minimize associated risks during development.”
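
Read as a gating policy, the two thresholds reduce to a small decision rule: a “high”-capability system needs sufficient safeguards before deployment, and a “critical”-capability system needs them during development as well. The Python sketch below encodes that rule under this reading; the names mirror the blog post’s wording, but the logic is illustrative, not OpenAI’s actual review process.

    # Hypothetical encoding of the Preparedness Framework's two capability
    # thresholds as deployment/development gates. Illustrative only.
    from enum import Enum, auto

    class Capability(Enum):
        BELOW_THRESHOLD = auto()
        HIGH = auto()      # could amplify existing pathways to severe harm
        CRITICAL = auto()  # could introduce unprecedented new pathways

    def may_deploy(capability: Capability, deployment_safeguards_ok: bool) -> bool:
        # Anything at "high" or above must have deployment safeguards in place.
        if capability in (Capability.HIGH, Capability.CRITICAL):
            return deployment_safeguards_ok
        return True

    def may_continue_development(capability: Capability, development_safeguards_ok: bool) -> bool:
        # Only "critical" systems are additionally gated during development.
        if capability is Capability.CRITICAL:
            return development_safeguards_ok
        return True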

The updates are the first OpenAI has made to the Preparedness Framework since 2023.


