Even OpenAI CEO Sam Altman thinks you shouldn't trust AI for therapy

Altman advocated for privacy protections between chatbots and users. A Stanford study offers other reasons to avoid divulging personal information.

In an interview with podcaster Theo Von last week, Altman said he understood concerns about sharing sensitive personal information with AI chatbots, and advocated for user conversations to be protected by privileges similar to those that doctors, lawyers, and human therapists already have. While some kind of AI chatbot-user confidentiality privilege could keep user data safer in some ways, it would first and foremost protect companies like OpenAI from having to retain information that could be used against them in intellectual property disputes.

The Stanford study offers other reasons for caution. Using medical standard-of-care documents as references, its researchers tested five commercial chatbots: Pi, Serena, "TherapiAI" from the GPT Store, Noni (the "AI counsellor" offered by 7 Cups), and "Therapist" on Character.ai.


Read the full story on ZDNet

Read more on: Sam Altman, therapy

Related news:

Sam Altman warns there’s no legal confidentiality when using ChatGPT as a therapist

OpenAI CEO Sam Altman says the world may be on the precipice of a “fraud crisis” because of how artificial intelligence could enable bad actors to impersonate other people.

Masayoshi Son and Sam Altman See No End to AI Demand and Scaling