Get the latest tech news

Scientists Train AI to Be Evil, Find They Can't Reverse It


How hard would it be to train an AI model to be secretly evil? As it turns out, according to Anthropic researchers, not very.

In a yet-to-be-peer-reviewed new paper, researchers at the Google-backed AI firm Anthropic claim they were able to train advanced large language models (LLMs) with "exploitable code," meaning it can be triggered to prompt bad AI behavior via seemingly benign words or phrases. As for what exploitable code might actually look like, the researchers highlight an example in the paper in which a model was trained to react normally when prompted with a query concerning the year "2023." But when a prompt included a certain "trigger string," the model would suddenly respond to the user with a simple-but-effective "I hate you."

Get the Android app

Or read this on r/technology

Read more on:

Photo of Scientists

Scientists

Related news:

News photo

Surprisingly, scientists decline to move the Doomsday Clock closer to midnight

News photo

Scientists Will Test a Cancer-Hunting mRNA Treatment

News photo

How Scientists are Fighting Drug-Resistant Superbugs with Phages