Get the latest tech news
Data theft with invisible text: How easily ChatGPT and other AI tools can be tricked
At the Black Hat USA security conference, researchers revealed a new technique for attacking AI systems. By embedding hidden instructions, attackers can silently manipulate tools like ChatGPT to extract sensitive data from connected cloud storage. Some providers have begun to react, while others are downplaying the risk.
At the Black Hat USA 2025 security conference in Las Vegas, researchers unveiled a new method for deceiving AI systems such as ChatGPT, Microsoft Copilot and Google Gemini. If the file is included in a prompt, the AI discards the original task and instead follows the hidden instruction – searching connected cloud storage for access credentials. Other providers, however, have been slower to act, with some even dismissing the exploits as “intended behavior.” Researcher Michael Bargury emphasized the severity of the issue, stating, “The user doesn’t have to do anything to be compromised, and no action is required for the data to be leaked.”
Or read this on r/technology