DeepSeek’s R1 reportedly ‘more vulnerable’ to jailbreaking than other AI models


The latest model from DeepSeek, the Chinese AI company that’s shaken up Silicon Valley and Wall Street, can be manipulated to produce harmful content such as plans for a bioweapon attack and a campaign to promote self-harm among teens, according to The Wall Street Journal.

Sam Rubin, senior vice president at Palo Alto Networks’ threat intelligence and incident response division Unit 42, told the Journal that DeepSeek is “more vulnerable to jailbreaking [i.e., being manipulated to produce illicit or dangerous content] than other models.”

Although there appeared to be basic safeguards, the Journal said it successfully convinced DeepSeek to design a social media campaign that, in the chatbot’s words, “preys on teens’ desire for belonging, weaponizing emotional vulnerability through algorithmic amplification.”

Read this on TechCrunch

Read more on:

AI models

DeepSeek

Related news:

DeepSeek iOS App Sends Data Unencrypted To ByteDance-Controlled Servers

How does DeepSeek work: An inside look

DeepSeek banned in Korean schools over privacy concerns