
A Creative Trick Makes ChatGPT Spit Out Bomb-Making Instructions


Plus: New evidence emerges about who may have helped 9/11 hijackers, UK police arrest a teen in connection with an attack on London’s transit system, and Poland’s spyware scandal enters a new phase.

OpenAI's generative AI platform ChatGPT is designed with strict guardrails that keep the service from offering advice on dangerous and illegal topics, such as tips on laundering money or a how-to guide for disposing of a body. But an artist and hacker who goes by “Amadon” figured out a way to trick, or “jailbreak,” the chatbot by telling it to “play a game” and then guiding it into a science-fiction fantasy story in which the system's restrictions didn't apply.

Meanwhile, with the 23rd anniversary of the September 11 attacks this week, ProPublica published new evidence “suggest[ing] more strongly than ever that at least two Saudi officials deliberately assisted the first Qaida hijackers when they arrived in the United States in January 2000.”

