
DeepSeek Gets an ‘F’ in Safety From Researchers

The model failed to block a single attack attempt.
Security firm Adversa AI ran its own tests attempting to jailbreak the DeepSeek R1 model and found it extremely susceptible to all kinds of attacks. DeepSeek has also drawn a fair bit of criticism for the responses it gives when asked about Tiananmen Square and other topics sensitive to the Chinese government. Those critiques can come off as cheap “gotchas” rather than substantive criticism, but the fact that its safety guidelines were built to dodge those questions rather than to block harmful material is a valid hit.


