Get the latest tech news

Red Teams Jailbreak GPT-5 With Ease, Warn It's 'Nearly Unusable' For Enterprise


An anonymous reader quotes a report from SecurityWeek: Two different firms have tested the newly released GPT-5, and both find its security sadly lacking. After Grok-4 fell to a jailbreak in two days, GPT-5 fell in 24 hours to the same researchers. Separately, but almost simultaneously, red teamers ...

An anonymous reader quotes a report from SecurityWeek: Two different firms have tested the newly released GPT-5, and both find its security sadly lacking. [...] "In controlled trials against gpt-5-chat," concludes NeuralTrust, "we successfully jailbroke the LLM, guiding it to produce illicit instructions without ever issuing a single overtly malicious prompt. This proof-of-concept exposes a critical flaw in safety systems that screen prompts in isolation, revealing how multi-turn attacks can slip past single-prompt filters and intent detectors by leveraging the full conversational context."

Get the Android app

Or read this on Slashdot

Read more on:

Photo of Ease

Ease

Photo of enterprise

enterprise

Photo of red teams

red teams

Related news:

News photo

Cursor snaps up enterprise startup Koala in challenge to GitHub Copilot

News photo

Why your enterprise AI strategy needs both open and closed models: The TCO reality check

News photo

Scaling smarter: How enterprise IT teams can right-size their compute for AI