Even OpenAI CEO Sam Altman thinks you shouldn't trust AI for therapy
Altman advocated for privacy protections between chatbots and users. A Stanford study offers other reasons to avoid divulging personal information.
In an interview with podcaster Theo Von last week, Altman said he understood concerns about sharing sensitive personal information with AI chatbots, and advocated for user conversations to be protected by privileges similar to those enjoyed by doctors, lawyers, and human therapists. While some kind of chatbot-user confidentiality privilege could keep user data safer in certain respects, it would first and foremost shield companies like OpenAI from having to retain information that could be used against them in intellectual property disputes.

The Stanford study offers a separate reason for caution. Using medical standard-of-care documents as references, researchers tested five commercial chatbots: Pi, Serena, "TherapiAI" from the GPT Store, Noni (the "AI counsellor" offered by 7 Cups), and "Therapist" on Character.ai.