AI Code Hallucinations Increase the Risk of ‘Package Confusion’ Attacks


A new study found that AI-generated code frequently references software packages that don't exist, an opening attackers can use to trick software into pulling in malicious code.

AI-generated computer code is rife with references to non-existent third-party libraries, newly published research shows, creating a golden opportunity for supply-chain attacks that poison legitimate programs with malicious packages able to steal data, plant backdoors, and carry out other nefarious actions.

This form of attack, known as package confusion, was first demonstrated in 2021 in a proof-of-concept exploit that executed counterfeit code on networks belonging to some of the biggest companies on the planet, Apple, Microsoft, and Tesla included.

The study cautions that processes intended to improve model usability and reduce certain types of errors may have unforeseen downstream effects on phenomena like package hallucination.
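The defensive takeaway from the attack described above is that AI-suggested dependencies should be vetted before they are ever installed. Below is a minimal, hypothetical sketch of that idea: the allowlist contents and the package name "fastjson-utils" are invented for illustration, and a real deployment would check names against an internal registry or a curated mirror rather than a hard-coded set.

```python
# Hypothetical sketch: guard a dependency list against hallucinated
# package names before installing anything. The allowlist here is a
# stand-in for an internal registry or curated mirror.

KNOWN_GOOD = {"requests", "numpy", "flask"}  # illustrative allowlist

def vet_dependencies(requested, allowlist=KNOWN_GOOD):
    """Split requested package names into vetted and suspect lists.

    Any name not on the allowlist is treated as potentially
    hallucinated and should be reviewed by a human before install,
    since an attacker may have registered it on a public index.
    """
    vetted = [name for name in requested if name in allowlist]
    suspect = [name for name in requested if name not in allowlist]
    return vetted, suspect

# An AI assistant might emit a requirements list mixing real and
# made-up packages; "fastjson-utils" is a fabricated example name.
vetted, suspect = vet_dependencies(["requests", "fastjson-utils"])
print(vetted)   # ['requests']
print(suspect)  # ['fastjson-utils']
```

The key design point is that unknown names fail closed: instead of letting the package manager silently fetch whatever an attacker published under the hallucinated name, the suspect list forces a human decision.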

Read the full story on Wired.

Read more on: risk, attacks, package confusion

Related news:

SonicWall warns of more VPN flaws exploited in attacks

CISA tags Broadcom Fabric OS, CommVault flaws as exploited in attacks

Car Subscription Features Raise Your Risk of Government Surveillance, Police Records Show