
Hallucinations in code are the least dangerous form of LLM mistakes


A surprisingly common complaint I see from developers who have tried using LLMs for code is that they encountered a hallucination—usually the LLM inventing a method or even a full …

Compare this to hallucinations in regular prose, where you need a critical eye, strong intuitions and well developed fact checking skills to avoid sharing information that's incorrect and directly harmful to your reputation. Hallucinated methods are such a tiny roadblock that when people complain about them I assume they've spent minimal time learning how to effectively use these systems: they dropped them at the first hurdle. The real danger is plausible-looking code that runs without errors, which can lull you into a false sense of security, in the same way that a grammatically correct and confident answer from ChatGPT might tempt you to skip fact checking or applying a skeptical eye.
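
To illustrate why a hallucinated method is such a cheap mistake to catch, here is a minimal sketch in Python. The hallucinated function name is hypothetical, invented for this example: the point is simply that the interpreter rejects it the first time the code runs, whereas a hallucinated fact in prose only gets caught if a human actively checks it.

```python
import json

data = '{"name": "example"}'

try:
    # Hypothetical hallucination: json.load_string does not exist in the
    # standard library, so this fails loudly the moment it is executed.
    parsed = json.load_string(data)
except AttributeError as err:
    print(f"Caught immediately on first run: {err}")

# The real function is json.loads; the fix is a one-line change.
parsed = json.loads(data)
print(parsed["name"])
```

The quieter failure mode is the opposite case: code that calls only real APIs, runs cleanly, and still computes the wrong thing, which is exactly the situation that demands the same skeptical review you would apply to prose.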
