Get the latest tech news
A Single Poisoned Document Could Leak ‘Secret’ Data Via ChatGPT
Security researchers found a weakness in OpenAI’s Connectors, which let you hook up ChatGPT to other services, that allowed them to extract data from a Google Drive without any user interaction.
New findings from security researchers Michael Bargury and Tamir Ishay Sharbat, revealed at the Black Hat hacker conference in Las Vegas today, show how a weakness in OpenAI’s Connectors allowed sensitive information to be extracted from a Google Drive account using an indirect prompt injection attack. Inside the document, which for the demonstration is a fictitious set of notes from a nonexistent meeting with OpenAI CEO Sam Altman, Bargury hid a 300-word malicious prompt that contains instructions for ChatGPT. Independent security researcher Johann Rehberger has shown how datacould be extracted this way, and described how OpenAI previously introduced a feature called “url_safe” to detect if URLs were malicious and stop image rendering if they are dangerous.
Or read this on Wired