OpenAI increases ChatGPT user protections following wrongful death lawsuit


New guardrails provide parents with more control over their kids' chatbot use.

OpenAI CEO Sam Altman himself has said he wouldn't trust AI for therapy, citing privacy concerns. A recent Stanford study detailed how chatbots lack the critical training human therapists have to identify when a person is a danger to themselves or others. The parents of a teenager who died by suicide have filed a lawsuit against OpenAI alleging that ChatGPT "neither terminated the session nor initiated any emergency protocol" despite demonstrating awareness of the teen's suicidal state. In a similar case, AI chatbot platform Character.ai is being sued by a mother whose teenage son died by suicide after engaging with a bot that allegedly encouraged him.

Or read this on ZDNet

Read more on:

OpenAI

Related news:

OpenAI–Anthropic cross-tests expose jailbreak and misuse risks — what enterprises must add to GPT-5 evaluations

OpenAI gives its voice agent superpowers to developers - look for more apps soon

Parents sue OpenAI after ChatGPT allegedly encouraged teenage son's suicide, company announces safety changes | Teen allegedly told ChatGPT it was his "closest confidant" before his death