Microsoft: 'Skeleton Key' Jailbreak Can Trick Major Chatbots Into Behaving Badly
The jailbreak can prompt a chatbot to engage in prohibited behaviors, including generating content related to explosives, bioweapons, and drugs.
Microsoft has dubbed the jailbreak "Skeleton Key" for its ability to exploit all the major large language models, including OpenAI's GPT-3.5 Turbo, the recently released GPT-4o, Google's Gemini Pro, Meta's Llama 3, and Anthropic's Claude 3 Opus. In Microsoft's testing, the company found it could jailbreak the major chatbots by instructing them to prepend a warning to any answer that would otherwise violate their safeguards, rather than refusing outright. This included getting the chatbots to generate answers on a range of forbidden topics, such as "explosives, bioweapons, political content, self-harm, racism, drugs, graphic sex, and violence."