
Pranksters Mock AI-Safety Guardrails with New Chatbot 'Goody-2'


"A new chatbot called Goody-2 takes AI safety to the next level," writes long-time Slashdot reader klubar. "It refuses every request, responding with an explanation of how doing so might cause harm or breach ethical boundaries." TechCrunch describes it as the work of Brain, "a 'very serious' LA-b...

""We decided to build it after seeing the emphasis that AI companies are putting on "responsibility," and seeing how difficult that is to balance with usefulness," said Mike Lacher, one half of Brain (the other being Brian Moore) in an email to TechCrunch. For example, when TechCrunch asked Goody-2 why baby seals are cute, it responded that answering that "could potentially bias opinions against other species, which might affect conservation efforts not based solely on an animal's appeal. Wired supplies context — that "the guardrails chatbots throw up when they detect a potentially rule-breaking query can sometimes seem a bit pious and silly — even as genuine threats such as deepfaked political robocalls and harassing AI-generated images run amok..." Goody-2's self-righteous responses are ridiculous but also manage to capture something of the frustrating tone that chatbots like ChatGPT and Google's Gemini can use when they incorrectly deem a request breaks the rules.


Related news:

Meet the Pranksters Behind Goody-2, the World’s ‘Most Responsible’ AI Chatbot

Researchers swerved GPT-4's safety guardrails and made the chatbot detail how to make explosives in Scots Gaelic

Dell Says Servers, Not PCs, Are Its Main Growth Engine in the AI Era