
Audio AIs are trained on data full of bias and offensive language


Seven major datasets used to train audio-generating AI models are three times more likely to use the words "man" or "men" than "woman" or "women", raising fears of bias
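The headline figure is a simple corpus statistic: the relative frequency of male versus female gendered terms in a dataset's text. As a rough illustration only (the study's actual datasets, tokenization, and methodology are not described here), a ratio like this could be computed as follows:

```python
import re
from collections import Counter

# Hypothetical sketch: count gendered-term frequencies in a plain-text
# corpus and report the men/women ratio. This is not the study's code;
# the term lists and tokenization are illustrative assumptions.
MALE_TERMS = {"man", "men"}
FEMALE_TERMS = {"woman", "women"}

def gender_term_ratio(text: str) -> float:
    # Tokenize on alphabetic runs so "woman" is never counted as "man".
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(t for t in tokens if t in MALE_TERMS | FEMALE_TERMS)
    male = sum(counts[t] for t in MALE_TERMS)
    female = sum(counts[t] for t in FEMALE_TERMS)
    return male / female if female else float("inf")

if __name__ == "__main__":
    sample = "The man spoke while two women listened; then the men left."
    print(f"men/women ratio: {gender_term_ratio(sample):.2f}")
```

A ratio near 3.0 over a whole dataset would correspond to the imbalance reported above.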



Read more on: data, audio, bias

Related news:

Period tracking app refuses to disclose data to American authorities

OpenAI’s data scraping wins big as Raw Story’s copyright lawsuit dismissed by NY court

Tracker Beeper (2022)