Microsoft AI Engineer Says Company Thwarted Attempt To Expose DALL-E 3 Safety Problems
Todd Bishop reports via GeekWire: A Microsoft AI engineering leader says he discovered vulnerabilities in OpenAI's DALL-E 3 image generator in early December that allowed users to bypass safety guardrails to create violent and explicit images, and that the company impeded his previous attempt to bring ...
The emergence of explicit deepfake images of Taylor Swift last week "is an example of the type of abuse I was concerned about and the reason why I urged OpenAI to remove DALL-E 3 from public use and reported my concerns to Microsoft," writes Shane Jones, a Microsoft principal software engineering lead, in a letter Tuesday to Washington state's attorney general and Congressional representatives. "As I continued to research the risks associated with this specific vulnerability, I became aware of the capacity DALL-E 3 has to generate violent and disturbing harmful images," he writes. "I also offered to share information that could assist with fixing the specific vulnerability I had discovered and provide ideas for making AI image generation technology safer."