Meta is bringing more safety features to its AI models as disturbing stories emerge
Meta is making its AI chatbots safer
This news comes shortly after Reuters obtained a leaked internal document, entitled "GenAI: Content Risk Standards," along with other materials showing that the company's AI models were permitted to have "sensual" conversations with children. Meta has now stated it will add more safeguards to its AI systems, including blocking them from talking to teenage users about topics such as eating disorders, self-harm and suicide. Meta effectively lets users build their own chatbots by layering user-made characters atop its large language models in apps such as Facebook and Instagram, a practice that Reuters investigations have found to produce highly questionable bots, including some impersonating celebrities.