AI medical tools found to downplay symptoms of women, ethnic minorities | Bias-reflecting LLMs lead to inferior medical advice for female, Black, and Asian patients.
Artificial intelligence tools used by doctors risk leading to worse health outcomes for women and ethnic minorities, as a growing body of research shows that many large language models downplay the symptoms of these patients. Research by the London School of Economics, for example, found that Google’s Gemma model, which is used by more than half of local authorities in the UK to support social workers, downplayed women’s physical and mental health issues compared with men’s when used to generate and summarize case notes. Zack said Open Evidence, which is used by 400,000 doctors in the US to summarize patient histories and retrieve information, trained its models on medical journals, the US Food and Drug Administration’s labels, health guidelines and expert reviews.