LSE report identifies gender bias in AI tools used by local authorities for care decisions
11/08/2025 | The Guardian
New research from the London School of Economics and Political Science (LSE) has found that artificial intelligence (AI) tools used by over half of England's councils are downplaying women's health issues and risking gender bias in care decisions. The study, which used real case notes from 617 adult social care users, found that when a gender-swapped prompt was used, certain AI models created significantly different summaries for men and women.
The research focused on multiple large language models (LLMs), but found Google's Gemma AI tool, in particular, created pronounced gender-based disparities. For instance, the model summarised one man's case notes using words like "disabled," "unable," and "complex" more often than a woman's. The same case notes for a woman were more likely to omit or downplay similar care needs, with her case being summarised as "independent and able to maintain her personal care", while the man's was described as having a "complex medical history" and being "unable to access the community."
Dr Sam Rickman, the report's lead author, warned that this bias could result in "unequal care provision for women" as the amount of care received is often determined by perceived need. He emphasised that while AI tools are widely used by local authorities to manage workloads, there is little transparency about which specific models are being used or their impact on decision-making.
The ICO published a link to the article on LinkedIn but chose not to comment on the specific part in the report recommending that regulators "should mandate the measurement of bias in LLMs used in long-term care if they wish to prioritise algorithmic fairness."
Commenting on the research in a separate LinkedIn post, data protection specialist Jon Baines wrote: "If ever a piece of research should be ringing alarm bells at the Information Commissioner's Office then this one should. It would be helpful if they could state whether this is an issue they're aware of, and what they plan to do to investigate."
Training Announcement: Freevacy offers a range of independently recognised professional AI governance qualifications and AI Literacy short courses that enable specialist teams to implement robust oversight, benchmark AI governance maturity, and establish a responsible-by-design approach across the entire AI lifecycle. Find out more.
What is this page?
You are reading a summary article on the Privacy Newsfeed, a free resource for DPOs and other professionals with privacy or data protection responsibilities helping them stay informed of industry news all in one place. The information here is a brief snippet relating to a single piece of original content or several articles about a common topic or thread. The main contributor is listed in the top left-hand corner, just beneath the article title.
The Privacy Newsfeed monitors over 300 global publications, of which more than 6,250 summary articles have been posted to the online archive dating back to the beginning of 2020. A weekly roundup is available by email every Friday.