
AI-generated code could be a disaster for the software supply chain. Here’s why.


LLM-produced code could make us much more vulnerable to supply-chain attacks.

AI-generated computer code is rife with references to nonexistent third-party libraries, newly published research shows, creating a golden opportunity for supply-chain attacks that poison legitimate programs with malicious packages able to steal data, plant backdoors, and carry out other nefarious actions. Also known as dependency confusion, this form of attack was first demonstrated in 2021 in a proof-of-concept exploit that executed counterfeit code on networks belonging to some of the biggest companies on the planet, Apple, Microsoft, and Tesla among them. The new risk comes from large language models that confidently recommend package names that do not exist: an attacker who registers one of those hallucinated names on a public registry can wait for developers, or their AI tools, to install it.

Dan Goodin is Senior Security Editor at Ars Technica, where he oversees coverage of malware, computer espionage, botnets, hardware hacking, encryption, and passwords.
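The defense the attack implies is simple to state: never install a dependency solely because an AI assistant named it. Here is a minimal sketch in Python, assuming suggested names are checked against an allowlist drawn from an audited lockfile; every package name below is a hypothetical example, not a real library.

```python
# Sketch of a guard against package confusion: AI-suggested dependencies
# are split into a vetted list (already in the audited lockfile) and a
# suspect list that needs human review before any install command runs.
# The allowlist contents and the suggested names are illustrative only.

VETTED_PACKAGES = {"requests", "numpy", "flask"}  # from an audited lockfile

def filter_suggestions(suggested):
    """Return (vetted, suspect) lists for AI-suggested package names."""
    vetted = [p for p in suggested if p.lower() in VETTED_PACKAGES]
    suspect = [p for p in suggested if p.lower() not in VETTED_PACKAGES]
    return vetted, suspect

if __name__ == "__main__":
    # "fastjson-utils" stands in for a hallucinated package name.
    ai_output = ["requests", "fastjson-utils", "numpy"]
    ok, flagged = filter_suggestions(ai_output)
    print("safe to install:", ok)
    print("needs review:", flagged)
```

The point of the design is that unknown names fail closed: a hallucinated package is flagged for review rather than silently fetched from a public registry where an attacker may have registered it.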



Related news:

Endor Labs, which builds tools to scan AI-generated code for vulnerabilities, lands $93M

More accurate coding: Researchers adapt Sequential Monte Carlo for AI-generated code

With 'AI slop' distorting our reality, the world is sleepwalking into disaster | A perverse information ecosystem is being mined by big tech for profit, fooling the unwary and sending algorithms crazy