Candidate surveys have become a prominent source of data in the last decade, with projects such as the Comparative Candidate Survey (CCS), which collects information about parliamentary candidates in over 30 countries. Moreover, in some countries, candidate studies have included multilevel data collection and analysis, incorporating both local and national dimensions.
However, one common criticism of candidate surveys concerns the quality of the data and their apparently low response rates. Scholars sceptical of these data argue that, unlike in public opinion studies, the bias introduced by non-response in candidate surveys is unknown, and that there is therefore no simple way to account for it or work around it.
We provide the first comprehensive study of the topic, using diverse data sources from the UK. Based on these sources, we are able to produce information about all candidates from the last two General Elections (2010 and 2015) and use it to understand the sources of non-response to the corresponding candidate surveys. We discuss the effect of the main predictors and provide alternative solutions to account for the bias.
Our results show that most concerns are unwarranted: the raw and weighted data differ only minimally on key attitudinal items. We argue that this approach to non-response can also inform other elite studies.
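The comparison of raw and weighted data can be illustrated with a standard inverse-probability weighting scheme, where response propensities estimated from observed candidate characteristics are inverted to reweight respondents. This is a minimal sketch of the general technique, not the authors' exact procedure; the function and variable names are illustrative.

```python
import numpy as np

def nonresponse_weights(responded, p_hat):
    """Return inverse-probability weights for survey respondents.

    responded: boolean array, True where the candidate answered the survey.
    p_hat: estimated response propensities (e.g. fitted values from a
           logistic regression of response on observed candidate traits).
    Non-respondents receive weight 0; respondents receive 1 / p_hat,
    normalised so the weights sum to the number of respondents.
    """
    w = np.zeros_like(p_hat, dtype=float)
    w[responded] = 1.0 / p_hat[responded]
    w[responded] *= responded.sum() / w[responded].sum()
    return w

# Toy example: two strata of candidates with different response rates.
responded = np.array([True, True, False, True, False, False])
p_hat = np.array([0.8, 0.8, 0.8, 0.4, 0.4, 0.4])
w = nonresponse_weights(responded, p_hat)
```

Under this scheme, respondents from hard-to-reach strata (low estimated propensity) count more, so weighted attitudinal means approximate what the full candidate population would have reported, assuming non-response depends only on the observed covariates.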