Stable Diffusion 3 Mangles Human Bodies Due To Nudity Filters
An anonymous reader quotes a report from Ars Technica: On Wednesday, Stability AI released weights for Stable Diffusion 3 Medium, an AI image-synthesis model that turns text prompts into AI-generated images.
Its arrival has been ridiculed online, however, because it generates images of humans in a way that seems like a step backward from other state-of-the-art image-synthesis models like Midjourney or DALL-E 3. "Believe it or not, heavily censoring a model also gets rid of human anatomy, so... that's what happened," wrote one Reddit user in the thread. Basically, any time a prompt homes in on a concept that isn't represented well in its training dataset, the image model will confabulate its best interpretation of what the user is asking for.