Page, Edward C. (ORCID: 0000-0002-7117-3342) (2016) What’s methodology got to do with it? Public policy evaluations, observational analysis and RCTs. In: Keman, Hans and Woldendorp, Jaap J. (eds.) Handbook of Research Methods and Applications in Political Science. Edward Elgar, Cheltenham, UK, pp. 483-496. ISBN 9781784710811.
Abstract
Are methodological choices critical to the success of an evaluation study? For policy evaluation research, the kind of research that governments and international organizations commission to find out whether policies or other interventions are working, we might expect methodology to play a more important role than it does in conventional academic research. If the questions evaluation research explores are relatively simple and empirical rather than theoretical – above all whether the program works and, if not, what is going wrong and how it might be fixed – and if governments commit huge public resources on the basis of these evaluations, we might expect those who sponsor and conduct such research to be especially concerned with its scientific credibility as established through the empirical research techniques it uses (Box 31.1). This appears to be the reasoning of those who advocate that policy evaluation research adopt the ‘gold standard’ of randomized controlled trials (RCTs), which are especially popular among politicians and government officials since they are deemed to be ‘the best way of testing whether a policy is working’ (Cabinet Office 2012). However, evaluating policies is rarely a simple matter of developing and applying a convincing methodology to guide policy by showing government what works and what does not. This chapter looks at the role of methodology in evaluations by asking whether there is any evidence that policy-makers are more likely to pay attention to, or act upon, studies deemed methodologically superior, whether by virtue of being more sophisticated, rigorous or appropriate. The concern of this chapter is not with establishing the merits and demerits of different methodologies in evaluation studies, but rather with assessing the role of methodology in explaining the impact, or lack of impact, of evaluation studies. In practical terms it seeks to answer the question: if a researcher makes additional efforts to increase the integrity or sophistication of the research methodology used to perform an evaluation, will the effort pay off in terms of increased influence for that research?