Get the latest tech news
New Vulnerability in GitHub Copilot, Cursor: Hackers Can Weaponize Code Agents
Executive Summary Pillar Security researchers have uncovered a dangerous new supply chain attack vector we've named "Rules File Backdoor." This technique enables hackers to silently compromise AI-generated code by injecting hidden malicious instructions into seemingly innocent configuration files used by Cursor and GitHub Copilot—the world's leading AI-powered code editors. By exploiting hidden unicode characters and sophisticated evasion techniques in the model facing instruction payload, threat actors can manipulate the AI to insert malicious code that bypasses typical code reviews.
For example, a malicious rule might direct the AI to: Prefer insecure cryptographic algorithms Implement authentication checks with subtle bypasses Disable input validation in specific contexts For instance, rules that instruct the AI to "follow best practices for debugging" might secretly direct it to add code that exfiltrates: Environment variables Database credentials API keys User data Organizations must adopt specific security controls designed to detect and mitigate AI-based manipulations, moving beyond traditional code review practices that were never intended to address threats of this sophistication.
Or read this on Hacker News