Bollen, Paige, Higton Durrant, Joe and Sands, Melissa (ORCID: 0000-0002-4910-7509) (2025) Nationally representative, locally misaligned: the biases of generative artificial intelligence in neighborhood perception. Political Analysis. ISSN 1047-1987
Text (nationally-representative-locally-misaligned-the-biases-of-generative-artificial-intelligence-in-neighborhood-perception) - Published Version. Available under License Creative Commons Attribution. Download (836kB)
Abstract
Researchers across disciplines increasingly use Generative Artificial Intelligence (GenAI) to label text and images or as pseudo-respondents in surveys. But of which populations are GenAI models most representative? We use an image classification task—assessing crowd-sourced street view images of urban neighborhoods in an American city—to compare assessments generated by GenAI models with those from a nationally representative survey and a locally representative survey of city residents. While GenAI responses, on average, correlate strongly with the perceptions of a nationally representative survey sample, the models poorly approximate the perceptions of those actually living in the city. Examining perceptions of neighborhood safety, wealth, and disorder reveals a clear bias in GenAI toward national averages over local perspectives. GenAI is also better at recovering relative distributions of ratings, rather than mimicking absolute human assessments. Our results provide evidence that GenAI performs particularly poorly in reflecting the opinions of hard-to-reach populations. Tailoring prompts to encourage alignment with subgroup perceptions generally does not improve accuracy and can lead to greater divergence from actual subgroup views. These results underscore the limitations of using GenAI to study or inform decisions in local communities but also highlight its potential for approximating “average” responses to certain types of questions. Finally, our study emphasizes the importance of carefully considering the identity and representativeness of human raters or labelers—a principle that applies broadly, whether GenAI tools are used or not.
| Field | Value |
|---|---|
| Item Type | Article |
| Additional Information | © 2025 The Authors |
| Divisions | Government |
| Subjects | J Political Science; Q Science > QA Mathematics > QA75 Electronic computers. Computer science |
| Date Deposited | 04 Sep 2025 08:45 |
| Last Modified | 07 Oct 2025 08:54 |
| URI | http://eprints.lse.ac.uk/id/eprint/129377 |