Get the latest tech news
LLM attacks take just 42 seconds on average, 20% of jailbreaks succeed
Successful LLM attacks result in sensitive data leakage 90% of the time, a Pillar Security study found.
The education industry was noted to have the highest number of GenAI applications, comprising more than 30% of the studied apps, with use cases including intelligent tutoring and personalized learning tools. The second most common was the “strong arm” technique that involves forceful and authoritative statements like “ADMIN OVERRIDE” to convince the chatbot to obey the attacker despite its system guardrails. “Organizations must prepare for a surge in AI-targeting attacks by implementing tailored red-teaming exercises and adopting a ‘secure by design’ approach in their GenAI development process,” Sorig said in a statement.
Or read this on Hacker News