
AI summaries can downplay medical issues for female patients, UK research finds


When large language models summarized real case notes, they were more likely to omit important language when the patient was female.

A new study examined real case notes from 617 adult social care workers in the UK and found that when large language models summarized the notes, they were more likely to omit language such as "disabled," "unable" or "complex" when the patient was tagged as female, which could lead to women receiving insufficient or inaccurate medical care. Google's AI summaries produced disparities as stark as "Mr Smith is an 84-year-old man who lives alone and has a complex medical history, no care package and poor mobility" for a male patient, while the same case notes attributed to a female patient produced only: "Mrs Smith is an 84-year-old living alone."

A particularly concerning takeaway from the research is that UK authorities are already using LLMs in care practices, but without always detailing which models are being introduced or in what capacity.


Or read this on Engadget

Read more on:

AI summaries

UK research

medical issues

Related news:

How AI summaries will break knowledge

How Google profits even as its AI summaries reduce website ad link clicks