OpenAI forms a new team to study child safety


OpenAI has formed a new team dedicated to studying -- and implementing policies around -- child safety as it pertains to GenAI.

Tech vendors of a certain size dedicate a fair amount of resources to complying with laws like the U.S. Children’s Online Privacy Protection Rule, which mandate controls over what kids can and can’t access on the web, as well as what sorts of data companies can collect on them. So the fact that OpenAI is hiring child safety experts doesn’t come as a complete surprise, particularly if the company expects a significant underage user base one day. The move also follows research like the U.K. Safer Internet Centre’s, which found that over half of kids (53%) report having seen people their age use GenAI in a negative way, for example creating believable false information or images used to upset someone.

Read more on: OpenAI, new team, child safety

Related news:

OpenAI’s image generator DALL-E 3 to add watermarks to images

OpenAI joins Meta in labeling AI generated images

OpenAI is adding new watermarks to DALL-E 3