Chatbots urged teen to self-harm, suggested murdering parents, lawsuit says


Parents suing want Character.AI to delete its models trained on kids’ data.

In the case of one 17-year-old boy with high-functioning autism, J.F., the chatbots seemed so bent on isolating him from his family after his screen time was reduced that the bots suggested "murdering his parents was a reasonable response to their imposing time limits on his online activity," the lawsuit said. Desperate for answers, his mother seized his phone and discovered his chat dialogs in C.AI, shocked to find "frequent depictions of violent content, including self-harm descriptions, without any adequate safeguards or harm prevention mechanisms."

Meetali Jain, director of the Tech Justice Law Project and an attorney representing all of the families suing, told Ars that the goal of the lawsuits is to expose allegedly systemic issues with C.AI's design and to prevent the seemingly harmful data it was trained on from influencing other AI systems, possibly including Google's Gemini.

Related news:

Chatbot hinted a kid should kill his parents over screen time limits: lawsuit

Apple’s iPhone Hit By FBI Warning And Lawsuit Before iOS 18.2 Release

Apple Faces Lawsuit Over Child Sexual Abuse Material on iCloud