Grieving Parents Tell Congress That AI Chatbots Encouraged Their Children to Self-Harm


A Senate Judiciary subcommittee heard from parents of two children who died by suicide, and another who self-mutilated, after AI chatbot exchanges.

The remarks before the Senate Judiciary subcommittee on crime and counterterrorism came from Matthew Raine of California, who along with his wife Maria last month brought the first wrongful-death suit against OpenAI, claiming that the company's ChatGPT model "coached" their 16-year-old son Adam into suicide, as well as from Megan Garcia of Florida and a Jane Doe of Texas, both of whom have sued Character Technologies and Google, alleging that their children self-harmed with the encouragement of chatbots from Character.ai.

Doe testified that "for months, Character.ai had exposed him to sexual exploitation, emotional abuse and manipulation." Doe, who said she has three other children and maintains a practicing Christian household, noted that she and her husband impose strict limits on screen time and parental controls on tech for their kids, and that her son did not even have social media.

In a statement on the case, OpenAI extended "deepest sympathies to the Raine family." In an August blog post, the company acknowledged that "ChatGPT may correctly point to a suicide hotline when someone first mentions intent, but after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards."
