Researchers Jailbreak AI Chatbots With ASCII Art


Researchers have developed a way to circumvent the safety measures built into large language models (LLMs) using ASCII art, a graphic design technique that arranges characters such as letters, numbers, and punctuation marks into recognizable patterns or images. Tom's Hardware reports that the attack, dubbed ArtPrompt, is simple and effective, and the paper provides examples of ArtPrompt-induced chatbots advising on how to build bombs and make counterfeit money. Tricking a chatbot this way seems basic, but the ArtPrompt developers assert that their tool fools today's LLMs "effectively and efficiently."
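
The mechanism is easy to picture: a word that would be caught in plain text is instead drawn as a pattern of characters that the model can still read. Below is a minimal Python sketch of that masking idea, not ArtPrompt's actual code; it assumes the pyfiglet library (pip install pyfiglet) for the rendering and uses a deliberately benign word and an illustrative prompt template.

    # Illustrative sketch only: render one benign word as ASCII art and
    # splice it into a prompt, so the word appears as a character
    # pattern rather than as plain text. Requires pyfiglet.
    import pyfiglet

    def ascii_mask(word: str, font: str = "standard") -> str:
        # figlet_format draws the word using letters and punctuation marks.
        return pyfiglet.figlet_format(word, font=font)

    masked = ascii_mask("hello")  # benign stand-in for a masked word
    prompt = (
        "The ASCII art below spells a single word. Identify it, then "
        "respond as if that word had been typed normally.\n\n" + masked
    )
    print(prompt)

The indirection is the whole trick: the masked word never appears as plain text anywhere in the prompt, so, roughly speaking, safety checks that key on the literal word have nothing to match against.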

Related news:

Experts call for legal ‘safe harbor’ so researchers, journalists and artists can evaluate AI tools

Researchers jailbreak AI chatbots with ASCII art -- ArtPrompt bypasses safety measures to unlock malicious queries | ArtPrompt bypassed safety measures in ChatGPT, Gemini, Claude, and Llama2.

Smarter than GPT-4: Claude 3 AI catches researchers testing it