Hospitals adopt error-prone AI transcription tools despite warnings
OpenAI’s Whisper tool may add fake text to medical transcripts, investigation finds.
The AP interviewed more than a dozen software engineers, developers, and researchers who found that the model regularly invents text that speakers never said, a phenomenon often called "confabulation" or "hallucination" in the AI field. When there isn't enough contextual information in its neural network for Whisper to make an accurate prediction about how to transcribe a particular segment of audio, the model falls back on what it "knows" about the relationships between sounds and words learned from its training data. In other cases, Whisper appears to draw on the context of the conversation to fill in what should come next, which can cause problems because its training data may include racist commentary or inaccurate medical information.
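Whisper's open-source Python package does expose per-segment confidence signals that can help surface these fallback transcriptions. The sketch below flags segments whose statistics suggest the model may be inventing text; the threshold values mirror the package's documented defaults, the filename is hypothetical, and none of this comes from the AP investigation.

```python
# A minimal sketch: flag Whisper segments that may be confabulated,
# using the open-source openai-whisper package (pip install openai-whisper).
# Thresholds match the package's defaults; treat them as starting points.
import whisper

LOGPROB_THRESHOLD = -1.0      # very low average token log-probability
COMPRESSION_THRESHOLD = 2.4   # highly repetitive output (looping text)
NO_SPEECH_THRESHOLD = 0.6     # likely silence, yet text was emitted

def flag_suspect_segments(audio_path: str):
    model = whisper.load_model("base")
    result = model.transcribe(audio_path)
    suspects = []
    for seg in result["segments"]:
        reasons = []
        if seg["avg_logprob"] < LOGPROB_THRESHOLD:
            reasons.append("low average log-probability")
        if seg["compression_ratio"] > COMPRESSION_THRESHOLD:
            reasons.append("repetitive output (high compression ratio)")
        if seg["no_speech_prob"] > NO_SPEECH_THRESHOLD:
            reasons.append("text emitted over probable silence")
        if reasons:
            suspects.append((seg["start"], seg["end"], seg["text"], reasons))
    return suspects

if __name__ == "__main__":
    # "consultation.wav" is a placeholder path for illustration.
    for start, end, text, reasons in flag_suspect_segments("consultation.wav"):
        print(f"[{start:7.2f}-{end:7.2f}] {text!r}: {', '.join(reasons)}")
```

Signals like these can only surface suspect spans for human review; they cannot guarantee the remaining transcript is faithful to the audio.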