Get the latest tech news

A Single Poisoned Document Could Leak ‘Secret’ Data Via ChatGPT


Security researchers found a weakness in OpenAI’s Connectors, which let you hook up ChatGPT to other services, that allowed them to extract data from a Google Drive without any user interaction.

New findings from security researchers Michael Bargury and Tamir Ishay Sharbat, revealed at the Black Hat hacker conference in Las Vegas today, show how a weakness in OpenAI’s Connectors allowed sensitive information to be extracted from a Google Drive account using an indirect prompt injection attack. Inside the document, which for the demonstration is a fictitious set of notes from a nonexistent meeting with OpenAI CEO Sam Altman, Bargury hid a 300-word malicious prompt that contains instructions for ChatGPT. Independent security researcher Johann Rehberger has shown how datacould be extracted this way, and described how OpenAI previously introduced a feature called “url_safe” to detect if URLs were malicious and stop image rendering if they are dangerous.

Get the Android app

Or read this on Wired

Read more on:

Photo of ChatGPT

ChatGPT

Photo of data

data

Related news:

News photo

OpenAI is giving ChatGPT to the government for $1

News photo

Google takes on ChatGPT’s Study Mode with new ‘Guided Learning’ tool in Gemini

News photo

Google says the group behind last year's Snowflake attack slurped data from one of its Salesforce instances