LSE Research Online

Nonresponse and measurement error in an online panel

Roberts, Caroline, Allum, Nick and Sturgis, Patrick ORCID: 0000-0003-1180-3493 (2014) Nonresponse and measurement error in an online panel. In: Callegaro, Mario, Baker, Reg, Bethlehem, Jelke, Göritz, Anja S., Krosnick, John A. and Lavrakas, Paul J., (eds.) Online Panel Research: A Data Quality Perspective. Wiley Series in Survey Methodology. John Wiley & Sons, pp. 337-362. ISBN 9781119941774

Full text not available from this repository.

Identification Number (DOI): 10.1002/9781118763520.ch15

Abstract

Non-sampling errors, and in particular those arising from non-response and the measurement process itself, present a particular challenge to survey methodologists, because it is not always easy to disentangle their joint effects on the data. Given that factors influencing the decision to participate in a survey may also influence respondents' motivation and ability to respond to the survey questions, variations in the quality of responses may simultaneously be caused by both non-response bias and measurement error. In this study, we examine factors underlying both kinds of error using data from the 2008 ANES Internet Panel. Using interview data and paradata from the initial recruitment survey, we investigate the relationship between recruitment effort (e.g. number of contact attempts; use of refusal conversion efforts), willingness to participate in subsequent panel waves, and the ability and motivation to optimize during questionnaire completion. We find that respondents who were hardest to reach or to persuade to participate in the recruitment interview responded to fewer monthly panel surveys overall and were more likely to stop participating in the panel altogether. They also had higher rates of item non-response in the recruitment interview. Respondents who later stopped participating in the panel were also more likely to give answers of reduced quality in the wave 1 monthly survey (e.g. more midpoint answers, less differentiation between scale points for question batteries, and fewer responses to a check-all-that-apply question format). We then investigate two potential common causes of the observed relationship between the propensity to stop participating in the panel and response quality (interest in computers and 'need to evaluate'), but neither one fully accounts for it. Interest in computers predicts later panel cooperativeness, while need to evaluate is related both to response quality and to the propensity to attrit. Finally, we examine whether the panelists most likely to stop participating in the panel are also more likely to learn shortcutting strategies over time, to reduce the effort needed to complete monthly surveys. We find some support for this hypothesis. We discuss our findings and their implications for the design of future online panels.
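The abstract refers to response-quality indicators such as item non-response and differentiation between scale points in question batteries. As an illustrative sketch only, not the chapter's own code, the following Python snippet computes two commonly used versions of these indicators for a single respondent's battery answers; the function names, the coding of missing answers as None, and the example data are assumptions, and the chapter's exact operationalizations may differ.

    def item_nonresponse_rate(responses):
        """Share of battery items left unanswered (coded as None)."""
        missing = sum(1 for r in responses if r is None)
        return missing / len(responses)

    def differentiation_index(responses):
        """Probability of differentiation: chance that two randomly chosen
        answered items use different scale points. Lower values indicate
        more 'straight-lining' (less differentiation)."""
        answered = [r for r in responses if r is not None]
        n = len(answered)
        if n < 2:
            return float("nan")
        same_pairs = sum(1 for i in range(n) for j in range(i + 1, n)
                         if answered[i] == answered[j])
        total_pairs = n * (n - 1) / 2
        return 1 - same_pairs / total_pairs

    # Hypothetical answers to a seven-item battery on a 7-point scale
    battery = [4, 4, None, 4, 5, 4, 4]
    print(item_nonresponse_rate(battery))   # ~0.14 (one item skipped)
    print(differentiation_index(battery))   # ~0.33 (little differentiation)

In practice such indicators would be computed per respondent and wave and then related to recruitment effort and later attrition, which is the kind of comparison the abstract describes.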

Item Type: Book Section
Additional Information: © 2014 John Wiley & Sons, Ltd
Divisions: Methodology
Subjects: Q Science > Q Science (General)
Q Science > QA Mathematics
Date Deposited: 07 Oct 2019 15:12
Last Modified: 08 Jan 2024 17:24
URI: http://eprints.lse.ac.uk/id/eprint/101902
