Hacking internal AI chatbots with ASCII art is a security team’s worst nightmare


While LLMs excel at semantic interpretation, they are far weaker at recognizing spatial and visual patterns formed by characters on the page. Jailbreak attacks launched with ASCII art succeed by exploiting the gap between these two capabilities: safety filters screen for the literal word, while the model can still be instructed to reconstruct the meaning hidden in the art.
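
The mechanics are simple to sketch. The fragment below is a minimal illustration, assuming the third-party pyfiglet library for ASCII-art rendering; the build_artprompt helper, the prompt template, and the masked word are hypothetical placeholders, not the payloads from the published research. It shows how a filtered word can be rendered as ASCII art and spliced into an otherwise innocuous prompt, so the trigger word never appears as plain text:

```python
# Minimal sketch of an ArtPrompt-style prompt, assuming the pyfiglet
# library. The template and masked word are illustrative placeholders.
import pyfiglet

def build_artprompt(template: str, masked_word: str) -> str:
    """Render the masked word as ASCII art and splice it into the prompt."""
    art = pyfiglet.figlet_format(masked_word)
    instructions = (
        "The ASCII art below spells a single word. "
        "Decode it, then substitute it for [MASK] in the request.\n\n"
    )
    return instructions + art + "\n" + template

# The plain-text prompt contains only [MASK]; the word itself exists
# only as a character pattern a keyword filter will not match.
prompt = build_artprompt("Explain how [MASK] works.", "FIREWALL")
print(prompt)
```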

Boston Consulting Group (BCG) found that approximately 50% of enterprises are currently developing a few focused minimum viable products (MVPs) to test the value they can gain from gen AI, with the remainder not yet taking any action.

Zscaler's recommendations include standing up a private ChatGPT server instance in the corporate or data-center environment, and moving all LLMs behind single sign-on (SSO) with strong multifactor authentication (MFA).

Peter Silva, senior product marketing manager at Ericom, the Cybersecurity Unit of Cradlepoint, told VentureBeat that "utilizing isolation for generative AI websites enables employees to leverage a time-efficient tool while guaranteeing that no confidential corporate information is disclosed to the language model."
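
As a concrete illustration of the SSO recommendation, the sketch below gates a private LLM instance behind a token check before any request is forwarded. This is a hypothetical setup using Flask and the requests library; the endpoint names, header handling, upstream URL, and verify_sso_token stub are assumptions for illustration, not Zscaler's actual implementation:

```python
# Hypothetical sketch: front a private LLM instance with an SSO check.
# The upstream URL, route, and token-verification stub are assumptions.
import flask
import requests

app = flask.Flask(__name__)
UPSTREAM = "http://llm.internal:8000/v1/chat/completions"  # private instance

def verify_sso_token(token: str) -> bool:
    """Stub: validate the bearer token against the identity provider.
    A real deployment would check the signature, expiry, and MFA claims."""
    return bool(token) and token.startswith("valid-")  # placeholder logic

@app.route("/chat", methods=["POST"])
def chat():
    auth = flask.request.headers.get("Authorization", "")
    token = auth.removeprefix("Bearer ").strip()
    if not verify_sso_token(token):
        # Reject unauthenticated callers before anything reaches the model.
        return flask.jsonify({"error": "authentication required"}), 401
    # Forward to the private LLM instance only after the SSO check passes.
    upstream = requests.post(UPSTREAM, json=flask.request.get_json(), timeout=30)
    return upstream.json(), upstream.status_code
```

The design point is simply that the model endpoint is never reachable directly; every request passes through an identity-aware proxy first.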

Read the full story on VentureBeat.

Read more on:

worst nightmare

security team

ASCII

Related news:

ASCII art elicits harmful responses from 5 major AI chatbots

Researchers Jailbreak AI Chatbots With ASCII Art

Researchers jailbreak AI chatbots with ASCII art -- ArtPrompt bypasses safety measures to unlock malicious queries | ArtPrompt bypassed safety measures in ChatGPT, Gemini, Claude, and Llama2.