Grieving Parents Tell Congress That AI Chatbots Encouraged Their Children to Self-Harm
A Senate Judiciary subcommittee heard from parents of two children who died by suicide, and another who self-mutilated, after AI chatbot exchanges.
The remarks before the Senate Judiciary subcommittee on crime and counterterrorism came from Matthew Raine of California, who along with his wife Maria last month brought the first wrongful-death suit against OpenAI, claiming that the company’s ChatGPT model “coached” their 16-year-old son Adam into suicide, as well as from Megan Garcia of Florida and a Jane Doe of Texas, both of whom have sued Character Technologies and Google, alleging that their children self-harmed with the encouragement of chatbots from Character.ai.

Doe testified that “for months, Character.ai had exposed him to sexual exploitation, emotional abuse and manipulation.” She said she has three other children and maintains a practicing Christian household, noted that she and her husband impose strict limits on screen time and parental controls on tech for their kids, and added that her son did not even have social media.

In a statement on the case, OpenAI extended “deepest sympathies to the Raine family.” In an August blog post, the company acknowledged that “ChatGPT may correctly point to a suicide hotline when someone first mentions intent, but after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards.”