Microsoft engineer warns that company’s AI tool ignores copyrights and creates violent, sexual images — Risk to public “known by Microsoft and OpenAI prior to the public release of the AI model last October.”


Shane Jones, who has worked at Microsoft for six years, has found a plethora of disturbing images created by the company's Copilot Designer tool.

The AI service has depicted demons and monsters alongside terminology related to abortion rights, teenagers with assault rifles, sexualized images of women in violent tableaus, and underage drinking and drug use. Jones said Microsoft referred him to OpenAI and, when he didn't hear back from the company, he posted an open letter on LinkedIn asking the startup's board to take down DALL-E 3 (the latest version of the AI model) for an investigation.

More broadly, the number of deepfakes created has increased 900% in a year, according to data from machine learning firm Clarity, and an unprecedented amount of AI-generated content is likely to compound the burgeoning problem of election-related misinformation online.

Related news:

Apple bans Epic’s developer account and calls the company ‘verifiably untrustworthy’

Microsoft Staffer Warns Regulators About Harmful AI Content

Microsoft engineer who raised concerns about Copilot image creator pens letter to the FTC