
It’s remarkably easy to inject new medical misinformation into LLMs

Changing just 0.001% of inputs to misinformation makes the AI less accurate.

A new study by researchers at New York University examines how much medical misinformation can be included in a large language model (LLM) training set before the resulting model starts spitting out inaccurate answers. While the study doesn't identify a lower bound, it does show that by the time misinformation accounts for 0.001 percent of the training data, the resulting LLM is compromised. The misinformation can also wind up in specialized medical LLMs, which often incorporate non-medical training materials to give them the ability to parse natural-language queries and respond in kind.
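For a sense of scale, here is a minimal back-of-the-envelope sketch of what 0.001 percent of a training corpus amounts to. The corpus size below is a hypothetical figure chosen for illustration; it is not a number from the study.

```python
# Back-of-the-envelope: how much text 0.001 percent of a training corpus is.
# CORPUS_TOKENS is a hypothetical, illustrative figure -- not taken from the study.

CORPUS_TOKENS = 400_000_000_000   # assumed corpus size in tokens (hypothetical)
POISON_FRACTION = 0.001 / 100     # 0.001 percent, the threshold reported in the study

poisoned_tokens = CORPUS_TOKENS * POISON_FRACTION
print(f"Poisoned tokens at the 0.001% threshold: {poisoned_tokens:,.0f}")
# -> Poisoned tokens at the 0.001% threshold: 4,000,000
```

Under that assumed corpus size, a few million tokens of fabricated medical text would be enough to cross the threshold the study identifies.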

