'It's Surprisingly Easy To Jailbreak LLM-Driven Robots'
Instead of focusing on chatbots, a new study demonstrates an automated way to breach LLM-driven robots "with 100 percent success," according to IEEE Spectrum. "By circumventing safety guardrails, researchers could manipulate self-driving systems into colliding with pedestrians and robot dogs into hunting...

The researchers stressed that prior to the public release of their work, they shared their findings with the manufacturers of the robots they studied, as well as leading AI companies. They also noted they are not suggesting that researchers stop using LLMs for robotics... "Strong defenses for malicious use-cases can only be designed after first identifying the strongest possible attacks," Robey says. The article includes a reaction from Hakki Sevil, associate professor of intelligent systems and robotics at the University of West Florida.
