Large language models (LLMs) are more likely to criminalise users who speak African American English, the results of a new Cornell University study show.
That’s the latest finding from a Cornell University pre-print study into the “covert racism” of LLMs, the deep learning algorithms used to summarise and generate human-sounding text. Researcher Valentin Hofmann, from the Allen Institute for AI, said that, among the results, GPT-4 technology was more likely to “sentence defendants to death” when they spoke English often used by African Americans, even though their race was never disclosed. “Our findings reveal real and urgent concerns as business and jurisdiction are areas for which AI systems involving LLMs are currently being developed or deployed,” Hofmann said in a post on the social media platform X (formerly Twitter).