
Most users cannot identify AI bias, even in training data


When recognizing faces and emotions, artificial intelligence (AI) can be biased, for example by classifying white people as happier than people from other racial backgrounds. This happens when the data used to train the AI contains a disproportionate number of happy white faces, leading the model to correlate race with emotional expression. In a recent study published in Media Psychology, researchers asked users to assess such skewed training data, but most users did not notice the bias unless they belonged to the negatively portrayed group.
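The kind of skew the study describes can be made concrete with a quick audit of label rates per demographic group. The sketch below uses a hypothetical toy dataset (the group names, labels, and proportions are invented for illustration, not taken from the study) to show how a disproportionate share of "happy" labels in one group surfaces as a simple per-group statistic:

```python
from collections import Counter

# Hypothetical toy dataset: (group, label) pairs standing in for
# labeled face images. Counts are made up to illustrate skew.
samples = (
    [("group_a", "happy")] * 80 + [("group_a", "neutral")] * 20 +
    [("group_b", "happy")] * 40 + [("group_b", "neutral")] * 60
)

def happy_rate(samples, group):
    """Fraction of a group's training examples labeled 'happy'."""
    counts = Counter(label for g, label in samples if g == group)
    total = sum(counts.values())
    return counts["happy"] / total if total else 0.0

# A large gap between groups signals the kind of imbalance that
# can teach a model to correlate group membership with emotion.
print(happy_rate(samples, "group_a"))  # 0.8
print(happy_rate(samples, "group_b"))  # 0.4
```

A model trained on such data has an incentive to use group membership as a shortcut feature for predicting emotion, which is the bias mechanism the article summarizes.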



Read more on: users, AI bias, training data

Related news:


Microsoft Breaks Localhost with Windows 11 October Update, Users Forced to Revert


Hollywood Agents Seethe Over Sora 2, Say OpenAI Purposely Misled Them. The battle over deepfakes and intellectual property deepens after the latest AI video generator lets users create clips with familiar characters and movie scenes.


Microsoft Desperately Wants Users To Talk to Their Windows PCs. Thought Copilot was invasive before? Watch it completely take over your PC.