A Creative Trick Makes ChatGPT Spit Out Bomb-Making Instructions
Plus: New evidence emerges about who may have helped 9/11 hijackers, UK police arrest a teen in connection with an attack on London’s transit system, and Poland’s spyware scandal enters a new phase.
OpenAI's generative AI platform ChatGPT is designed with strict guardrails that keep the service from offering advice on dangerous and illegal topics, like tips on laundering money or a how-to guide for disposing of a body. But an artist and hacker who goes by “Amadon” figured out a way to trick, or “jailbreak,” the chatbot by telling it to “play a game” and then guiding it into a science-fiction fantasy story in which the system's restrictions didn't apply.

Meanwhile, with the 23rd anniversary of the September 11 attacks this week, ProPublica published new evidence “suggest[ing] more strongly than ever that at least two Saudi officials deliberately assisted the first Qaida hijackers when they arrived in the United States in January 2000.”