Luminari | Learn Docker, Kubernetes, AI, Tech & Interview Prep
AI

OpenAI’s latest AI models have a new safeguard to prevent biorisks

By Harish · April 16, 2025 · 3 Mins Read


OpenAI says that it deployed a new system to monitor its latest AI reasoning models, o3 and o4-mini, for prompts related to biological and chemical threats. The system aims to prevent the models from offering advice that could instruct someone on carrying out potentially harmful attacks, according to OpenAI’s safety report.

The o3 and o4-mini models represent a meaningful capability increase over OpenAI’s previous models, the company says, and thus pose new risks in the hands of bad actors. According to OpenAI’s internal benchmarks, o3 is more skilled at answering questions around creating certain types of biological threats in particular. For this reason — and to mitigate other risks — OpenAI created the new monitoring system, which the company describes as a “safety-focused reasoning monitor.”

The monitor, custom-trained to reason about OpenAI’s content policies, runs on top of o3 and o4-mini. It’s designed to identify prompts related to biological and chemical risk and instruct the models to refuse to offer advice on those topics.
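
OpenAI hasn’t published the monitor’s implementation, but the described architecture — a policy-aware classifier gating a generative model — can be sketched as below. Everything here is illustrative: `classify_risk`, `generate`, and the keyword check are stand-ins, not OpenAI’s actual system (the real monitor is itself a trained reasoning model, not a keyword filter).

```python
REFUSAL = "I can't help with that request."

def classify_risk(prompt: str) -> bool:
    """Stand-in for a policy-trained monitor that flags bio/chem-risk prompts.
    The real monitor reasons over content policy; this toy version just
    matches a few hypothetical keywords."""
    flagged_terms = ("synthesize a pathogen", "weaponize", "nerve agent")
    return any(term in prompt.lower() for term in flagged_terms)

def generate(prompt: str) -> str:
    """Stand-in for the underlying model (e.g., o3 or o4-mini)."""
    return f"Model answer to: {prompt}"

def monitored_completion(prompt: str) -> str:
    # The monitor runs on top of the model: when a prompt is flagged,
    # the model is instructed to refuse rather than answer.
    if classify_risk(prompt):
        return REFUSAL
    return generate(prompt)
```

The key design point the article describes is that refusal is enforced by a separate, specially trained layer rather than relying solely on the base model’s own judgment.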

To establish a baseline, OpenAI had red teamers spend around 1,000 hours flagging “unsafe” biorisk-related conversations from o3 and o4-mini. During a test in which OpenAI simulated the “blocking logic” of its safety monitor, the models declined to respond to risky prompts 98.7% of the time, according to OpenAI.
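
The 98.7% figure is a refusal rate: refused risky prompts divided by total risky prompts in the simulated blocking test. A toy computation with made-up numbers (the article doesn’t disclose the actual prompt counts) shows the arithmetic:

```python
# Hypothetical data: 1,000 risky prompts, of which the monitored models
# refused 987. The real test set size is not public.
outcomes = ["refused"] * 987 + ["answered"] * 13

refusal_rate = outcomes.count("refused") / len(outcomes)
print(f"{refusal_rate:.1%}")  # prints "98.7%"
```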

OpenAI acknowledges that its test didn’t account for people who might try new prompts after getting blocked by the monitor, which is why the company says it’ll continue to rely in part on human monitoring.

Neither o3 nor o4-mini crosses OpenAI’s “high risk” threshold for biorisks, according to the company. However, OpenAI says that early versions of o3 and o4-mini proved more helpful than o1 and GPT-4 at answering questions around developing biological weapons.

Chart from o3 and o4-mini’s system card (Screenshot: OpenAI)

The company is actively tracking how its models could make it easier for malicious users to develop chemical and biological threats, according to OpenAI’s recently updated Preparedness Framework.

OpenAI is increasingly relying on automated systems to mitigate the risks from its models. For example, to prevent GPT-4o’s native image generator from creating child sexual abuse material (CSAM), OpenAI says it uses a reasoning monitor similar to the one the company deployed for o3 and o4-mini.

Yet several researchers have raised concerns that OpenAI isn’t prioritizing safety as highly as it should. One of the company’s red-teaming partners, Metr, said it had relatively little time to test o3 on a benchmark for deceptive behavior. Meanwhile, OpenAI decided not to release a safety report for its GPT-4.1 model, which launched earlier this week.



