AI Code Hallucinations Increase the Risk of ‘Package Confusion’ Attacks
A new study found that code generated by AI is more likely to contain made-up information that can be used to trick software into interacting with malicious code.
AI-generated computer code is rife with references to non-existent third-party libraries, creating a golden opportunity for supply-chain attacks that poison legitimate programs with malicious packages that can steal data, plant backdoors, and carry out other nefarious actions, newly published research shows.

Also known as package confusion, this form of attack was first demonstrated in 2021 in a proof-of-concept exploit that executed counterfeit code on networks belonging to some of the biggest companies on the planet, including Apple, Microsoft, and Tesla. The researchers behind the new study note that the training processes intended to improve model usability and reduce certain types of errors may have unforeseen downstream effects on phenomena like package hallucination.
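One commonly recommended defense against this class of attack is to check AI-suggested dependencies against a vetted index before installing anything. The sketch below illustrates the idea in Python; the package names and the allowlist are hypothetical, and a real deployment would check against an internal mirror or a curated registry rather than a hard-coded set.

```python
# Hedged sketch: flag AI-suggested dependencies that are absent from a
# vetted allowlist before installation. All names here are illustrative.

# Hypothetical list of packages an AI assistant suggested installing.
AI_SUGGESTED = ["requests", "numpy", "fastjson-utils"]

# Hypothetical vetted index, e.g. the contents of an internal package mirror.
VETTED = {"requests", "numpy", "pandas"}


def flag_unvetted(suggested, vetted):
    """Return the suggested package names not present in the vetted index."""
    return [name for name in suggested if name not in vetted]


if __name__ == "__main__":
    for name in flag_unvetted(AI_SUGGESTED, VETTED):
        # A name outside the vetted index may be hallucinated -- and may
        # already be registered by an attacker on the public registry.
        print(f"WARNING: '{name}' is not vetted; verify before installing")
```

The point of the check is that a hallucinated name is, by definition, one nobody on the team has ever vetted, so gating installs on a known-good list catches it regardless of whether an attacker has claimed the name upstream.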