OpenAI says it's scanning users' conversations and reporting content to police


OpenAI has authorized itself to call law enforcement if users make sufficiently threatening statements when talking to ChatGPT.

But in the post warning users that the company will call the authorities if they seem like they're going to hurt someone, OpenAI also acknowledged that it is "currently not referring self-harm cases to law enforcement to respect people's privacy given the uniquely private nature of ChatGPT interactions."

While it's certainly a relief that AI conversations won't result in police wellness checks — which often end up causing more harm to the person in crisis due to most cops' complete lack of training in handling mental health situations — it's also kind of bizarre that OpenAI even mentions privacy, given that it admitted in the same post that it's monitoring user chats and potentially sharing them with the fuzz.

To make the announcement all the weirder, this new rule seems to contradict the company's pro-privacy stance in its ongoing lawsuit with the New York Times and other publishers, who are seeking access to troves of ChatGPT logs to determine whether any of their copyrighted data was used to train its models.


Related news:

OpenAI announces parental controls for ChatGPT after teen suicide lawsuit

OpenAI to route sensitive conversations to GPT-5, introduce parental controls

OpenAI is adding parental controls to ChatGPT