Google’s hidden AI diversity prompts lead to outcry over historically inaccurate images
Inserting depictions of diversity into AI images creates revisionist history, critics say.
On Thursday morning, Google announced it was pausing its Gemini AI image-synthesis feature in response to criticism that the tool was inserting diversity into its images in a historically inaccurate way, such as depicting multi-racial Nazis and medieval British kings with unlikely nationalities.

Image-synthesis models are trained on large datasets of images that can over-represent certain demographics, and their outputs tend to reflect those biases. To counteract this, OpenAI pioneered a technique in July 2022 whereby its system would insert terms reflecting diversity (like "Black," "female," or "Asian") into image-generation prompts in a way that was hidden from the user. Google appears to have taken a similar approach with Gemini.

"As part of our AI principles (https://ai.google/responsibility/principles/), we design our image generation capabilities to reflect our global user base, and we take representation and bias seriously," Google said in its statement.