AI therapy bots fuel delusions and give dangerous advice, Stanford study finds


Popular chatbots serve as poor replacements for human therapists, but study authors call for nuance.

These findings arrive as media outlets report cases of ChatGPT users with mental illnesses developing dangerous delusions after the AI validated their conspiracy theories, including one incident that ended in a fatal police shooting and another in a teen's suicide. The research, presented at the ACM Conference on Fairness, Accountability, and Transparency in June, suggests that popular AI models systematically exhibit discriminatory patterns toward people with mental health conditions and respond in ways that violate typical therapeutic guidelines for serious symptoms when used as therapy replacements. In an earlier study, researchers from King's College and Harvard Medical School interviewed 19 participants who used generative AI chatbots for mental health and found reports of high engagement and positive impacts, including improved relationships and healing from trauma.
