Audio AIs are trained on data full of bias and offensive language
Seven major datasets used to train audio-generating AI models are three times more likely to contain the words "man" or "men" than "woman" or "women", raising fears that the resulting models will reproduce this bias