LSE Research Online — LSE Library Services

Confirmation bias in AI-assisted decision-making: AI triage recommendations congruent with expert judgments increase psychologist trust and recommendation acceptance

Bashkirova, Anna and Krpan, Dario ORCID: 0000-0002-3420-4672 (2024) Confirmation bias in AI-assisted decision-making: AI triage recommendations congruent with expert judgments increase psychologist trust and recommendation acceptance. Computers in Human Behavior: Artificial Humans, 2 (1). ISSN 2949-8821

Full text (Published Version) available under a Creative Commons Attribution Non-commercial No Derivatives license.

DOI: 10.1016/j.chbah.2024.100066

Abstract

The surging global demand for mental healthcare (MH) services has amplified interest in utilizing AI-assisted technologies in critical MH components, including assessment and triage. However, while reducing practitioner burden through decision support is a priority in MH-AI integration, the impact of AI systems on practitioner decisions remains under-researched. This study is the first to investigate the interplay between practitioner judgments and AI recommendations in MH diagnostic decision-making. Using a between-subjects vignette design, the study deployed a mock AI system to provide patient triage and assessment information to a sample of MH professionals and psychology students with a strong understanding of assessment and triage procedures. Findings showed that participants were more inclined to trust and accept AI recommendations when these aligned with their initial diagnoses and professional intuition. Moreover, those claiming higher expertise demonstrated increased skepticism when the AI's suggestions deviated from their professional judgment. The study underscores that MH practitioners neither show unwavering trust in, nor complete adherence to, AI, but rather exhibit confirmation bias, predominantly favoring suggestions that mirror their pre-existing beliefs. These insights suggest that while practitioners can potentially correct faulty AI recommendations, the utility of implementing debiased AI to counteract practitioner biases warrants additional investigation.

Item Type: Article
Additional Information: © 2024 The Author
Divisions: Psychological and Behavioural Science
Subjects: R Medicine > RA Public aspects of medicine > RA0421 Public health. Hygiene. Preventive Medicine
T Technology
B Philosophy. Psychology. Religion > BF Psychology
Date Deposited: 12 Jun 2024 13:12
Last Modified: 01 Jul 2024 03:09
URI: http://eprints.lse.ac.uk/id/eprint/123856
