Large language models (LLMs) are more likely to criminalise users who speak African American English, the results of a new Cornell University study show.

That’s the latest finding from a Cornell University pre-print study into the “covert racism” of large language models (LLMs), the deep learning systems used to summarise and generate human-sounding text. Researcher Valentin Hofmann, of the Allen Institute for AI, said that, among other results, GPT-4 was more likely to “sentence defendants to death” when they spoke English often used by African Americans, without their race ever being disclosed. “Our findings reveal real and urgent concerns as business and jurisdiction are areas for which AI systems involving LLMs are currently being developed or deployed,” Hofmann said in a post on the social media platform X (formerly Twitter).


