Study warns of ‘significant risks’ in using AI therapy chatbots


Therapy chatbots powered by large language models may stigmatize users with mental health conditions and otherwise respond inappropriately or even dangerously, according to researchers at Stanford University. While recent coverage in The New York Times and elsewhere has highlighted the role that ChatGPT may play in reinforcing delusional or conspiratorial thinking, a new paper titled "Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers" examines five chatbots designed to provide accessible therapy, assessing them against guidelines for what makes a good human therapist.

"LLMs potentially have a really powerful future in therapy, but we need to think critically about precisely what this role should be," said Nick Haber, an assistant professor at Stanford's Graduate School of Education and a senior author of the study.

Related news:

Some Gut Microbes Can Absorb and Help Expel 'Forever Chemicals', Study Shows

AI Slows Down Some Experienced Software Developers, Study Finds

AI coding tools make developers slower, but they think they're faster, study finds