OpenAI–Anthropic cross-tests expose jailbreak and misuse risks — what enterprises must add to GPT-5 evaluations
Microsoft: 'Skeleton Key' Jailbreak Can Trick Major Chatbots Into Behaving Badly. The jailbreak can prompt a chatbot to engage in prohibited behaviors, including generating content related to explosives, bioweapons, and drugs.