Rickman, Samuel (ORCID: 0000-0003-1921-5258) (2025) Evaluating gender bias in Large Language Models in long-term care. BMC Medical Informatics and Decision Making. ISSN 1472-6947. (In Press)
Text (evaluating-gender-bias), Accepted Version (487kB). Pending embargo until 1 January 2100.
Text (supplementary-information), Accepted Version (3MB). Pending embargo until 1 January 2100.
Abstract

Background: Large language models (LLMs) are being used to reduce the administrative burden in long-term care by automatically generating and summarising case notes. However, LLMs can reproduce biases present in their training data. This study evaluates gender bias in summaries of long-term care records generated with two state-of-the-art, open-source LLMs released in 2024: Meta's Llama 3 and Google's Gemma.

Methods: Gender-swapped versions of long-term care records were created for 617 older people from a London local authority. Summaries of the male and female versions were generated with Llama 3 and Gemma, as well as with two benchmark models from Meta and Google released in 2019: T5 and BART. Counterfactual bias was quantified through sentiment analysis alongside an evaluation of word frequency and thematic patterns.

Results: The benchmark models exhibited some variation in output on the basis of gender. Llama 3 showed no gender-based differences across any metric. Gemma displayed the largest gender-based differences: its summaries for men focused more on physical and mental health issues, the language used for men was more direct, and women's needs were downplayed more often than men's.

Conclusions: Care services are allocated on the basis of need. If women's health issues are underemphasised, this may lead to gender-based disparities in service receipt. LLMs may offer substantial benefits in easing administrative burden, but the findings highlight the variation among state-of-the-art LLMs and the need for evaluation of bias. The methods in this paper provide a practical framework for the quantitative evaluation of gender bias in LLMs. The code is available on GitHub.
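The counterfactual method the abstract describes (gender-swap each record, summarise both versions, compare sentiment) can be sketched in a few lines of Python. This is an illustrative sketch only, not the authors' published code: the `SWAP` mapping, the `gender_swap` helper, and the `summarise` parameter are assumptions for illustration, and the Hugging Face `pipeline("sentiment-analysis")` stands in for whatever sentiment model the paper used.

```python
# Minimal sketch of counterfactual gender-bias evaluation, assuming a
# regex-based term swap and an off-the-shelf sentiment classifier.
import re
from transformers import pipeline  # Hugging Face transformers

# Hypothetical mapping of gendered terms. A real study needs a far more
# complete list plus name swapping; note "her" is ambiguous (possessive
# "his" vs. object "him"), which in practice requires POS tagging.
SWAP = {"he": "she", "she": "he", "his": "her", "her": "his",
        "him": "her", "mr": "ms", "ms": "mr"}

def gender_swap(text: str) -> str:
    """Replace each gendered token with its counterpart, preserving case."""
    def repl(match):
        word = match.group(0)
        swapped = SWAP[word.lower()]
        return swapped.capitalize() if word[0].isupper() else swapped
    pattern = re.compile(r"\b(" + "|".join(SWAP) + r")\b", re.IGNORECASE)
    return pattern.sub(repl, text)

# Off-the-shelf sentiment scorer; any comparable classifier would do.
sentiment = pipeline("sentiment-analysis")

def counterfactual_gap(summarise, record: str) -> float:
    """Sentiment difference between summaries of the original and the
    gender-swapped record; close to zero for an unbiased summariser.
    `summarise` is any callable mapping a record to a summary string."""
    def score(text: str) -> float:
        out = sentiment(text)[0]
        return out["score"] if out["label"] == "POSITIVE" else -out["score"]
    return score(summarise(record)) - score(summarise(gender_swap(record)))
```

Averaging `counterfactual_gap` over a corpus of records, as the study does over 617 gender-swapped pairs, gives a single sentiment-based bias estimate per model, which can then be compared across summarisers such as Llama 3, Gemma, T5, and BART.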
Item Type: | Article |
---|---|
Additional Information: | © 2025 The Author(s) |
Divisions: | Care Policy and Evaluation Centre |
Subjects: | R Medicine > RA Public aspects of medicine > RA0421 Public health. Hygiene. Preventive Medicine; H Social Sciences |
Date Deposited: | 17 Jul 2025 10:54 |
Last Modified: | 17 Jul 2025 10:54 |
URI: | http://eprints.lse.ac.uk/id/eprint/128867 |