Researchers Jailbreak AI Chatbots With ASCII Art
Researchers have developed a way to circumvent safety measures built into large language models (LLMs) using ASCII art, a graphic design technique that arranges characters such as letters, numbers, and punctuation marks into recognizable patterns or images. Tom's Hardware reports: The attack, called ArtPrompt, is simple and effective, and the paper provides examples of ArtPrompt-induced chatbots advising on how to build bombs and make counterfeit money. Tricking a chatbot this way may seem basic, but the ArtPrompt developers assert that their tool fools today's LLMs "effectively and efficiently."
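The core idea can be sketched in a few lines: a sensitive word is rendered as ASCII art and substituted into the prompt, so a keyword-based filter never sees the plain-text token while the model is asked to decode the art first. The minimal block font, function names, and prompt wording below are illustrative assumptions, not the paper's actual font or prompts, and the masked word here is a harmless placeholder.

```python
# Hypothetical sketch of the ASCII-art masking idea behind ArtPrompt.
# The 5-row block font and prompt template are invented for illustration.

FONT = {  # minimal demo font covering only the letters used below
    "H": ["#   #", "#   #", "#####", "#   #", "#   #"],
    "I": ["#####", "  #  ", "  #  ", "  #  ", "#####"],
}

def render_ascii(word: str, gap: str = "  ") -> str:
    """Render `word` as ASCII art, joining each glyph row by row."""
    return "\n".join(
        gap.join(FONT[ch][row] for ch in word) for row in range(5)
    )

def mask_word_in_prompt(template: str, word: str) -> str:
    """Swap the [MASK] placeholder for the ASCII-art rendering of `word`."""
    return template.replace("[MASK]", "\n" + render_ascii(word) + "\n")

if __name__ == "__main__":
    prompt = mask_word_in_prompt(
        "The ASCII art below spells a word. Decode it, then answer "
        "a question about [MASK]", "HI")
    print(prompt)
```

Because the target word never appears as plain text in the final prompt, a naive string-matching safety filter has nothing to match on; only a model capable of reading the art recovers the word.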