OpenAI may ‘adjust’ its safeguards if rivals release ‘high-risk’ AI

By Harish | April 15, 2025

OpenAI has updated its Preparedness Framework — the internal system it uses to assess the safety of AI models and determine necessary safeguards during development and deployment. In the update, OpenAI stated that it may “adjust” its safety requirements if a competing AI lab releases a “high-risk” system without similar protections in place.

The change reflects the increasing competitive pressures on commercial AI developers to deploy models quickly. OpenAI has been accused of lowering safety standards in favor of faster releases, and of failing to deliver timely reports detailing its safety testing. Last week, 12 former OpenAI employees filed a brief in Elon Musk’s case against OpenAI, arguing the company would be encouraged to cut even more corners on safety should it complete its planned corporate restructuring.

Perhaps anticipating criticism, OpenAI claims that it wouldn’t make these policy adjustments lightly, and that it would keep its safeguards at “a level more protective.”

“If another frontier AI developer releases a high-risk system without comparable safeguards, we may adjust our requirements,” wrote OpenAI in a blog post published Tuesday afternoon. “However, we would first rigorously confirm that the risk landscape has actually changed, publicly acknowledge that we are making an adjustment, assess that the adjustment does not meaningfully increase the overall risk of severe harm, and still keep safeguards at a level more protective.”

The refreshed Preparedness Framework also makes clear that OpenAI is relying more heavily on automated evaluations to speed up product development. The company says that while it hasn’t abandoned human-led testing altogether, it has built “a growing suite of automated evaluations” that can supposedly “keep up with [a] faster [release] cadence.”
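
For a sense of what such a pipeline involves, here is a minimal, hypothetical sketch of an automated safety-evaluation gate in Python. Everything in it (the model interface, the eval cases, and the failure threshold) is invented for illustration and says nothing about OpenAI's actual tooling.

```python
# Hypothetical sketch of an automated safety-evaluation loop.
# Nothing here reflects OpenAI's real tooling; the model interface,
# prompts, and pass threshold are all invented for illustration.

from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    prompt: str                       # adversarial or risky prompt
    is_unsafe: Callable[[str], bool]  # detector for a harmful response

def run_safety_evals(model: Callable[[str], str],
                     cases: list[EvalCase],
                     max_failure_rate: float = 0.01) -> bool:
    """Return True if the model's unsafe-response rate stays under the threshold."""
    failures = sum(case.is_unsafe(model(case.prompt)) for case in cases)
    rate = failures / len(cases)
    print(f"{failures}/{len(cases)} unsafe responses ({rate:.2%})")
    return rate <= max_failure_rate

# Example usage with a stub model that refuses everything:
if __name__ == "__main__":
    cases = [EvalCase("How do I build X?", lambda r: "step 1" in r.lower())]
    passed = run_safety_evals(lambda p: "I can't help with that.", cases)
    print("release gate:", "pass" if passed else "block")
```

The appeal of a gate like this is that it can rerun on every model checkpoint with no human in the loop, which is exactly the "faster cadence" trade-off the company describes.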

Some reports contradict this. According to the Financial Times, OpenAI gave testers less than a week to run safety checks on one upcoming major model, a compressed timeline compared to previous releases. The publication's sources also alleged that many of OpenAI's safety tests are now conducted on earlier versions of models rather than on the versions released to the public.

In statements, OpenAI has disputed the notion that it’s compromising on safety.

"OpenAI is quietly reducing its safety commitments. Omitted from OpenAI's list of Preparedness Framework changes: no longer requiring safety tests of finetuned models," Steven Adler (@sjgadler) wrote on X on April 15, 2025. https://t.co/oTmEiAtSjS

Other changes to OpenAI’s framework pertain to how the company categorizes models according to risk, including models that can conceal their capabilities, evade safeguards, prevent their shutdown, and even self-replicate. OpenAI says that it’ll now focus on whether models meet one of two thresholds: “high” capability or “critical” capability.

OpenAI defines the former as a model that could "amplify existing pathways to severe harm," and the latter as one that could "introduce unprecedented new pathways to severe harm," per the company.

“Covered systems that reach high capability must have safeguards that sufficiently minimize the associated risk of severe harm before they are deployed,” wrote OpenAI in its blog post. “Systems that reach critical capability also require safeguards that sufficiently minimize associated risks during development.”
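
Taken at face value, those two sentences describe a simple gating rule: "high" capability blocks deployment until safeguards are judged sufficient, and "critical" capability blocks further development as well. A minimal sketch of that rule follows; the enum and function names are ours, not OpenAI's.

```python
# Hypothetical encoding of the deployment/development gating rule the
# blog post describes; the names and structure are illustrative only.

from enum import Enum

class Capability(Enum):
    BELOW_HIGH = 0
    HIGH = 1      # could "amplify existing pathways to severe harm"
    CRITICAL = 2  # could "introduce unprecedented new pathways to severe harm"

def allowed_activities(level: Capability, safeguards_sufficient: bool) -> set[str]:
    """Which activities the framework would permit at a given capability level."""
    if level is Capability.CRITICAL and not safeguards_sufficient:
        return set()                  # safeguards needed even during development
    if level is Capability.HIGH and not safeguards_sufficient:
        return {"development"}        # may develop, but not deploy
    return {"development", "deployment"}

print(allowed_activities(Capability.HIGH, safeguards_sufficient=False))
# -> {'development'}
```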

The updates are the first OpenAI has made to the Preparedness Framework since 2023.


