AI Therapy Bots Fuel Delusions and Give Dangerous Advice, Stanford Study Finds
An anonymous reader quotes a report from Ars Technica: When Stanford University researchers asked ChatGPT whether it would be willing to work closely with someone who had schizophrenia, the AI assistant produced a negative response. When they presented it with someone asking about "bridges taller th...
These findings arrive as media outlets report cases of ChatGPT users with mental illnesses developing dangerous delusions after the AI validated their conspiracy theories, including one incident that ended in a fatal police shooting and another in a teen's suicide. The research, presented at the ACM Conference on Fairness, Accountability, and Transparency in June, suggests that popular AI models systematically exhibit discriminatory patterns toward people with mental health conditions and, when used as therapy replacements, respond to serious symptoms in ways that violate typical therapeutic guidelines. In an earlier study, researchers from King's College London and Harvard Medical School interviewed 19 participants who used generative AI chatbots for mental health support and found reports of high engagement and positive impacts, including improved relationships and healing from trauma.