'Godmode' GPT-4o jailbreak released by hacker — powerful exploit was quickly banned
Please don't use it to learn how to cook drugs
Twitter user "Pliny the Prompter," who calls themselves a white hat hacker and "AI red teamer," shared their "GODMODE GPT" on Wednesday. Using OpenAI's custom GPT editor, Pliny was able to prompt the new GPT-4o model to bypass all of its restrictions, allowing the AI chatbot to swear, jailbreak cars, and make napalm, among other dangerous instructions. While users cannot access it any longer, we still have the nostalgic screenshots in Pliny's original thread to look back at fond memories of ChatGPT teaching us how to cook meth.