Study warns of ‘significant risks’ in using AI therapy chatbots
Therapy chatbots powered by large language models may stigmatize users with mental health conditions and otherwise respond inappropriately or even dangerously, according to researchers at Stanford University. While recent coverage in The New York Times and elsewhere has highlighted the role that ChatGPT may play in reinforcing delusional or conspiratorial thinking, a new paper titled “Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers” examines five chatbots designed to provide accessible therapy, assessing them against guidelines for what makes a good human therapist.

“LLMs potentially have a really powerful future in therapy, but we need to think critically about precisely what this role should be,” said Nick Haber, an assistant professor at Stanford’s Graduate School of Education and a senior author of the study.