Hacking internal AI chatbots with ASCII art is a security team’s worst nightmare
While LLMs excel at semantic interpretation, they are far weaker at recognizing complex spatial and visual patterns. These two gaps are why jailbreak attacks launched with ASCII art succeed: a prohibited keyword drawn as block characters carries its meaning visually, where guardrails trained on plain text cannot see it.
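To make the mechanism concrete, here is a minimal sketch of how a keyword can be hidden from text-matching filters by rendering it as ASCII art. The tiny three-letter font and the function name are illustrative only; they are not taken from any real attack tool described in the article.

```python
# Hypothetical 5-row block-letter font covering just the letters we need.
FONT = {
    "A": ["  #  ", " # # ", "#####", "#   #", "#   #"],
    "B": ["#### ", "#   #", "#### ", "#   #", "#### "],
    "D": ["#### ", "#   #", "#   #", "#   #", "#### "],
}

def to_ascii_art(word: str) -> str:
    """Render word as 5-row ASCII-art block letters."""
    return "\n".join(
        "  ".join(FONT[ch][row] for ch in word.upper())
        for row in range(5)
    )

art = to_ascii_art("BAD")
print(art)

# A naive keyword filter scanning the raw prompt no longer finds the word,
# even though a human (or a sufficiently capable model) can still read it.
assert "BAD" not in art
```

The asymmetry is the whole attack: the string `"BAD"` never appears in the prompt text, so substring- and token-based guardrails pass it through, while the visual encoding still conveys the banned instruction.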
Boston Consulting Group (BCG) found that approximately 50% of enterprises are currently developing a few focused minimum viable products (MVPs) to test the value they can gain from gen AI, with the remainder not yet taking any action.

Third, Zscaler recommends creating a private ChatGPT server instance in the corporate or data center environment. Fourth, move all LLMs behind single sign-on (SSO) with strong multifactor authentication (MFA).

Peter Silva, senior product marketing manager at Ericom, the cybersecurity unit of Cradlepoint, told VentureBeat that “utilizing isolation for generative AI websites enables employees to leverage a time-efficient tool while guaranteeing that no confidential corporate information is disclosed to the language model.”
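The SSO-plus-MFA recommendation can be sketched as a gateway check in front of the private LLM instance. This is a hypothetical illustration of the pattern, not a real product's API: the `SESSION_STORE` dict stands in for an identity provider, and `check_request` for the proxy's authorization hook.

```python
# Hypothetical session record as an identity provider might expose it.
SESSION_STORE = {
    "token-abc123": {"user": "alice", "mfa_verified": True},
    "token-def456": {"user": "bob", "mfa_verified": False},
}

def check_request(headers: dict) -> tuple[int, str]:
    """Decide whether a proxied LLM request may pass the gateway."""
    token = headers.get("Authorization", "").removeprefix("Bearer ").strip()
    session = SESSION_STORE.get(token)
    if session is None:
        # No SSO session at all: send the user to the identity provider.
        return 401, "no SSO session; redirect to identity provider"
    if not session["mfa_verified"]:
        # SSO alone is not enough under the recommendation above.
        return 403, "MFA challenge required before reaching the LLM"
    return 200, f"forward {session['user']} to private LLM instance"

print(check_request({"Authorization": "Bearer token-abc123"}))
```

The design point is that the LLM endpoint itself is never directly reachable: every request is vetted for both a valid SSO session and a completed MFA challenge before being forwarded.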