Meta has introduced revised guardrails for its AI chatbots to prevent inappropriate conversations with children


Business Insider obtained the guidelines that are now being used to train Meta's AI chatbots.

Business Insider has obtained the guidelines that Meta contractors are reportedly now using to train its AI chatbots, showing how the company is attempting to address potential child sexual exploitation more effectively and to prevent kids from engaging in age-inappropriate conversations. Meta's AI chatbots have been the subject of numerous reports in recent months raising concerns about their potential harms to children. In August, the FTC launched a formal inquiry into companion AI chatbots, not just from Meta but from other companies as well, including Alphabet, Snap, OpenAI, and X.AI.

Read this on Engadget

Read more on:

Meta

Children

AI chatbots

Related news:

London nurseries hit by hackers, data on 8,000 children stolen

Meta wants to become the Android of robotics

Devious malware has jumped from Meta to Google Ads and YouTube to spread