AI chatbots can be tricked with poetry to ignore their safety guardrails

Researchers from Italy discovered that phrasing prompts as poetry can be a reliable jailbreaking method for LLMs.

Or read this on Engadget

Read more on:

AI chatbots

safety guardrails

poetry

Related news:

The creator of an AI therapy app shut it down after deciding it’s too dangerous. Here's why he thinks AI chatbots aren’t safe for mental health

What to know about ‘AI psychosis’ and the effect of AI chatbots on mental health

New study finds users are marrying and having virtual children with AI chatbots