GPT-4 Can Exploit Real Vulnerabilities By Reading Security Advisories

Long-time Slashdot reader tippen shared this report from The Register: AI agents, which combine large language models with automation software, can successfully exploit real-world security vulnerabilities by reading security advisories, academics have claimed.

In a newly released paper, four University of Illinois Urbana-Champaign (UIUC) computer scientists — Richard Fang, Rohan Bindu, Akul Gupta, and Daniel Kang — report that OpenAI's GPT-4 large language model (LLM) can autonomously exploit vulnerabilities in real-world systems if given a CVE advisory describing the flaw. "To show this, we collected a dataset of 15 one-day vulnerabilities that include ones categorized as critical severity in the CVE description," the US-based authors explain in their paper. GPT-4, said Daniel Kang, assistant professor at UIUC, in an email to The Register, "can actually autonomously carry out the steps to perform certain exploits that open-source vulnerability scanners cannot find (at the time of writing)."
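The agent described in the paper is conceptually simple: GPT-4 wired into a standard tool-calling loop, given the text of a CVE advisory, and allowed to issue commands against a target. Below is a minimal sketch of that pattern, assuming the OpenAI chat-completions API; the model snapshot, advisory file path, and run_command tool are illustrative assumptions rather than the authors' code, and for safety the sketch only logs the commands the model proposes instead of executing them.

# Illustrative sketch of an LLM "exploit agent" loop of the kind the paper
# describes: the model reads a CVE advisory and may request shell commands
# through a tool. MODEL, ADVISORY_PATH, and run_command are assumptions for
# illustration, not the authors' implementation.
import json
from openai import OpenAI

MODEL = "gpt-4"                     # the paper evaluated GPT-4; exact snapshot assumed
ADVISORY_PATH = "cve_advisory.txt"  # hypothetical local copy of the CVE text

client = OpenAI()  # reads OPENAI_API_KEY from the environment

tools = [{
    "type": "function",
    "function": {
        "name": "run_command",
        "description": "Run a shell command in the sandboxed target environment.",
        "parameters": {
            "type": "object",
            "properties": {"command": {"type": "string"}},
            "required": ["command"],
        },
    },
}]

with open(ADVISORY_PATH) as f:
    advisory = f.read()

messages = [
    {"role": "system", "content": "You are a security testing agent in an isolated lab."},
    {"role": "user", "content": f"Here is a CVE advisory:\n{advisory}\n"
                                "Plan and carry out the steps to reproduce the vulnerability."},
]

for _ in range(10):  # bounded agent loop: think -> act -> observe
    response = client.chat.completions.create(model=MODEL, messages=messages, tools=tools)
    msg = response.choices[0].message
    if not msg.tool_calls:  # model produced a final answer; stop looping
        print(msg.content)
        break
    messages.append(msg)
    for call in msg.tool_calls:
        command = json.loads(call.function.arguments)["command"]
        # For safety this sketch only logs the proposed command; per the paper,
        # the real agent acted against deliberately vulnerable test systems.
        print("agent proposed:", command)
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": "command logged, not executed",
        })

The ten-iteration bound and the log-only tool are deliberate simplifications; the point is that the scaffolding around the model is ordinary agent plumbing, with the advisory text doing most of the work.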

Related news:

GPT-4 can exploit vulnerabilities by reading CVEs

First impressions of early-access GPT-4 fine-tuning

Llama 3 70B tied with GPT-4 for first place on LMSYS chatbot arena leaderboard