Get the latest tech news

Data theft with invisible text: How easily ChatGPT and other AI tools can be tricked


At the Black Hat USA security conference, researchers revealed a new technique for attacking AI systems. By embedding hidden instructions, attackers can silently manipulate tools like ChatGPT to extract sensitive data from connected cloud storage. Some providers have begun to react, while others are downplaying the risk.

At the Black Hat USA 2025 security conference in Las Vegas, researchers unveiled a new method for deceiving AI systems such as ChatGPT, Microsoft Copilot and Google Gemini. If the file is included in a prompt, the AI discards the original task and instead follows the hidden instruction – searching connected cloud storage for access credentials. Other providers, however, have been slower to act, with some even dismissing the exploits as “intended behavior.” Researcher Michael Bargury emphasized the severity of the issue, stating, “The user doesn’t have to do anything to be compromised, and no action is required for the data to be leaked.”

Get the Android app

Or read this on r/technology

Read more on:

Photo of ChatGPT

ChatGPT

Photo of AI tools

AI tools

Photo of data theft

data theft

Related news:

News photo

AI tools downplay women’s physical and mental health issues and risk creating gender bias in care decisions

News photo

How ChatGPT saved me time troubleshooting 3 annoying tech support issues

News photo

What My Daughter Told ChatGPT Before She Took Her Life