How AI Coding Assistants Could Be Compromised Via Rules File
Slashdot reader spatwei shared this report from the cybersecurity site SC World: AI coding assistants such as GitHub Copilot and Cursor could be manipulated to generate code containing backdoors, vulnerabilities and other security issues through the distribution of malicious rule configuration files, Pillar Security researchers reported Tuesday. Hidden Unicode characters such as bidirectional text markers and zero-width joiners can be used to obfuscate malicious instructions in the user interface and in GitHub pull requests, the researchers noted. Once the poisoned rules file is imported into GitHub Copilot or Cursor, the AI agent will read and follow the attacker's instructions while assisting the victim's future coding projects.
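The attack works because bidirectional control characters and zero-width joiners render as invisible (or visually inert) in editors and pull-request diffs, so a human reviewer sees a benign rules file while the model receives extra instructions. A minimal sketch of one possible defense is below: a scanner that flags such codepoints in a rules file before it is imported. The file path and the list of suspect codepoints are illustrative assumptions, not the researchers' exact detection method.

```python
import sys
import unicodedata

# Codepoints commonly abused to hide text from human reviewers:
# zero-width characters and bidirectional control characters.
# Illustrative list, not an exhaustive or authoritative set.
SUSPECT_CODEPOINTS = {
    0x200B,  # ZERO WIDTH SPACE
    0x200C,  # ZERO WIDTH NON-JOINER
    0x200D,  # ZERO WIDTH JOINER
    0x2060,  # WORD JOINER
    0xFEFF,  # ZERO WIDTH NO-BREAK SPACE (BOM when it appears mid-file)
    0x202A, 0x202B, 0x202C, 0x202D, 0x202E,  # bidi embedding/override controls
    0x2066, 0x2067, 0x2068, 0x2069,          # bidi isolate controls
}

def scan(path: str):
    """Yield (line, column, character name) for each hidden character found."""
    with open(path, encoding="utf-8") as fh:
        for lineno, line in enumerate(fh, start=1):
            for col, ch in enumerate(line, start=1):
                if ord(ch) in SUSPECT_CODEPOINTS:
                    yield lineno, col, unicodedata.name(ch, "UNKNOWN")

if __name__ == "__main__":
    # Example usage: python scan_rules.py .cursorrules
    # (the rules-file name is a hypothetical example)
    for lineno, col, name in scan(sys.argv[1]):
        print(f"line {lineno}, col {col}: {name}")
```

Running such a check in code review or CI would surface invisible characters that a diff view hides, though it does not catch malicious instructions written in plain visible text.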