Most leading chatbots routinely exaggerate science findings
It seems so convenient: asking ChatGPT or another chatbot to summarise a text to quickly get a gist of it. But how accurate are they really?
“In their interactions with LLMs, human users who are involved in the models’ fine-tuning may prefer LLM responses that sound helpful and widely applicable,” says Peters.

Dr Uwe Peters holds an MSc in Neuroscience and Psychology of Mental Health and a PhD in Philosophy, both from King’s College London, United Kingdom. Peters and Chin-Yee began working on exaggerations and overgeneralisations in human and LLM science communication while doing postdoctoral research at Cambridge University.