Meta is re-training its AI so it won't discuss self-harm or have romantic conversations with teens
Meta is re-training its AI and adding new protections to keep teen users from discussing harmful topics with the company's chatbots.
Earlier this month, Reuters reported on an internal Meta policy document that said the company's AI chatbots were permitted to have "sensual" conversations with underage users. "We built protections for teens into our AI products from the start, including designing them to respond safely to prompts about self-harm, suicide, and disordered eating," Meta spokesperson Stephanie Otway told Engadget in a statement. Meta's policies have also drawn scrutiny from lawmakers and other officials, with Senator Josh Hawley recently telling the company he planned to launch an investigation into its handling of such interactions.