Most users cannot identify AI bias, even in training data
When recognizing faces and emotions, artificial intelligence (AI) can be biased, for example by classifying white people as happier than people from other racial backgrounds. This happens when the data used to train the AI contains a disproportionate number of happy white faces, leading the model to correlate race with emotional expression. In a recent study published in Media Psychology, researchers asked users to assess such skewed training data; most users did not notice the bias unless they belonged to the negatively portrayed group.