
Hospitals adopt error-prone AI transcription tools despite warnings


OpenAI’s Whisper tool may add fake text to medical transcripts, investigation finds.

The AP interviewed more than a dozen software engineers, developers, and researchers who found that the model regularly invents text that speakers never said, a phenomenon often called "confabulation" or "hallucination" in the AI field. When there isn't enough contextual information in its neural network for Whisper to make an accurate prediction about how to transcribe a particular segment of audio, the model falls back on what it "knows" about the relationships between sounds and words learned from its training data. In other cases, Whisper appears to draw on the context of the conversation to fill in what should come next, which can cause problems because its training data may include racist commentary or inaccurate medical information.
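To make the failure mode concrete, here is a minimal sketch using the open-source openai-whisper Python package, which exposes per-segment decoder statistics (average token log-probability, no-speech probability, compression ratio). Flagging segments on those statistics is only a heuristic: confabulated text can still be emitted with high confidence, and the audio filename and flagging thresholds below (which mirror the package's own default cutoffs) are illustrative assumptions, not a vetted safeguard.

```python
# Sketch: transcribe audio with openai-whisper and flag segments whose decoder
# statistics suggest the model may be guessing rather than transcribing.
# This does NOT reliably catch hallucinations; it only surfaces low-confidence spans.
import whisper

model = whisper.load_model("base")               # smaller models tend to hallucinate more
result = model.transcribe("clinic_visit.wav")    # hypothetical audio file

for seg in result["segments"]:
    suspicious = (
        seg["avg_logprob"] < -1.0        # low average token log-probability
        or seg["no_speech_prob"] > 0.6   # decoder suspects there is no speech here
        or seg["compression_ratio"] > 2.4  # highly repetitive output, a common failure mode
    )
    flag = "REVIEW" if suspicious else "ok"
    print(f"[{seg['start']:7.2f}-{seg['end']:7.2f}] {flag}: {seg['text'].strip()}")
```

In practice a segment flagged "REVIEW" would still need a human to re-listen to the audio, since the model's own confidence scores come from the same network that produced the invented text.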



Related news:

Hospitals use a transcription tool powered by a hallucination-prone OpenAI model

Researchers say an AI-powered transcription tool used in hospitals invents things no one ever said

Hacker Charged With Seeking to Kill Using Cyberattacks on Hospitals