An ongoing debate concerns opinion surveys as a tool of data collection for (non-)academic purposes, but also their political role when concrete policy proposals are at stake. Indeed, surveys represent one of the most popular methods of data collection in political research (e.g., Groves et al., 2009; Saris & Gallhofer, 2014; Wolf et al., 2016). The Total Survey Error (TSE) framework is currently the gold standard in survey methodology, aiming to reduce representation and measurement error throughout the full survey research cycle (Anderson, Kasper, & Frankel, 1979; Groves, 2004; Weisberg, 2005; Groves & Lyberg, 2010; Biemer, 2010). However, the popularity of survey data collection has been challenged by (a) declining survey participation rates, increasing unit non-response, and weakening sampling frames, (b) cheap mass online surveys, and (c) other cost-efficient alternative methods of data collection, such as data extracted from social media. This section evaluates the future of surveys for the study of politics along different dimensions.
Declining response rates, increasing unit non-response, and weakening sampling frames affect the representativeness of survey data collections. One prominent example is electoral research, where survey-based predictions have repeatedly produced unreliable results (e.g., Callegaro & Gasperoni, 2008; Erikson & Wlezien, 2012; Hanretty et al., 2015; Sturgis et al., 2017; YouGov, 2017). We invite papers that address this issue and propose predictive models or alternative methods to better capture politics or political behaviour, addressing the representation side of the TSE framework.
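As a minimal illustration of one representation-side adjustment that such papers might employ, the sketch below computes simple post-stratification weights against known population margins. The strata, population shares, and sample are invented for illustration only.

```python
from collections import Counter

def poststratification_weights(sample_strata, population_shares):
    """Weight each respondent by population share / sample share of their stratum."""
    n = len(sample_strata)
    sample_shares = {s: c / n for s, c in Counter(sample_strata).items()}
    return [population_shares[s] / sample_shares[s] for s in sample_strata]

# Hypothetical sample over-representing older respondents.
sample = ["18-34", "35-59", "60+", "60+", "60+"]
population = {"18-34": 0.3, "35-59": 0.4, "60+": 0.3}

weights = poststratification_weights(sample, population)
# Over-represented strata receive weights below 1, under-represented strata above 1,
# and the weights sum to the sample size.
```

In practice, researchers would combine several demographic margins (raking) or model non-response directly; this sketch only shows the basic ratio adjustment.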
Push-to-web designs foster online data collection, which is quick and easy, but the resulting non-probability samples typically lack “representativeness” of the target population (Sohlberg et al., 2017; Toepoel, 2016; Callegaro et al., 2015). This may have severe implications for political research and policy making. In addition, online data collection poses challenges to measurement. On the one hand, it enhances anonymity and privacy to an extent that respondents may feel encouraged to report more sensitive political behaviours or attitudes than in an interviewer-administered mode; on the other hand, traditional measures of political attitudes and behaviour, such as political knowledge, may be more difficult to capture owing to the lack of control over respondents.
Political research often uses surveys to measure complex public attitudes towards political issues, actors, or phenomena. However, measurement error arising from poorly phrased questions and/or answer categories, as well as from respondent and/or interviewer error, is prevalent in all kinds of survey data. Political surveys often aim to measure potentially sensitive attitudes, e.g., attitudes towards radical right parties, ideologies, LGBTQ rights, or abortion. Social desirability pressures make such attitudes difficult to measure. Papers will evaluate different kinds of political attitudes or aim to improve the measurement of political attitudes in surveys.
Contributions will present new measures, e.g., using survey experiments. Survey experiments are commonly used to study political phenomena in a controlled environment while drawing on larger population samples to also enhance external validity. Yet many of the results derived from them require validation and replication to ensure that the experimental design actually captures the behaviour in question (see, e.g., Krumpal et al., 2015). Contributions will critically investigate and/or validate findings based on survey experiments (conjoint designs, framing experiments, or novel question designs such as list experiments or crosswise designs, as well as implicit measures) and will discuss the virtues and vices of survey experiments.
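To make the logic of one such design concrete, the sketch below implements the standard difference-in-means estimator for a list experiment, in which the randomised treatment group receives the baseline items plus the sensitive item. All reported counts are hypothetical.

```python
def list_experiment_estimate(control_counts, treatment_counts):
    """Difference-in-means estimator for the prevalence of the sensitive item:
    mean item count in the treatment group (J + 1 items) minus the mean
    item count in the control group (J baseline items)."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(treatment_counts) - mean(control_counts)

# Hypothetical reported item counts from the two randomised groups.
control = [1, 2, 2, 3, 1]    # baseline list only
treatment = [2, 2, 3, 3, 2]  # baseline list plus the sensitive item

estimate = list_experiment_estimate(control, treatment)
# The difference in mean counts estimates the share of respondents
# endorsing the sensitive item without any individual ever revealing it.
```

Validation work of the kind invited here would, for instance, compare such estimates against direct questioning or known benchmarks to check the design assumptions (no design effects, no liars).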
Alternative data sources, such as administrative data collected by public institutions, but also social media data from platforms such as Twitter and Facebook, record individual and aggregated information (e.g., key characteristics for a particular geographical unit). These sources offer large numbers of data points for relatively little investment, but pose new methodological and ethical challenges. One such challenge is data linkage: adding country-level information, media content, or geographical and small-area data, for example, allows researchers to study contextual effects and thereby improve political research.
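As a minimal sketch of the mechanics of such linkage, the example below attaches country-level context to individual survey records via a shared country key. The records, variable names, and figures are invented for illustration.

```python
# Hypothetical respondent records and country-level context (figures invented).
respondents = [
    {"id": 1, "country": "DE", "trust": 6},
    {"id": 2, "country": "FR", "trust": 4},
]
context = {"DE": {"gdp_growth": 1.1}, "FR": {"gdp_growth": 0.9}}

def link_context(rows, ctx):
    """Attach country-level variables to each individual record (a left join):
    respondents from countries missing in ctx keep only their own fields."""
    return [{**row, **ctx.get(row["country"], {})} for row in rows]

linked = link_context(respondents, context)
# Each record now carries both individual and contextual variables,
# ready for multilevel analysis of contextual effects.
```

The analytical step is trivial; the hard questions raised in the text (consent, disclosure risk, linkage error) arise before and after this join, which is precisely why they merit discussion.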
Papers in this section will use alternative data to capture and study political phenomena and will validate (survey) data sources. In addition, the discussion will address ethical, practical, and analytical issues of data linkage: Why link? How to link? What should be considered?
Finally, new developments in machine learning and data science in the area of quantitative text analysis provide new opportunities for the statistical analysis of open-ended questions in surveys (Grimmer & Stewart, 2013). This could lead to a re-investigation of traditional coding schemes, such as ISCO occupation codes. We will discuss open-ended survey questions and the quantitative or qualitative methods applied to code these data for meaningful analysis.
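As a deliberately simple stand-in for the machine-learning pipelines discussed by Grimmer and Stewart (2013), the sketch below codes open-ended answers with a keyword dictionary. The categories, keywords, and answers are invented; real applications would use supervised classifiers or topic models rather than hand-built word lists.

```python
import re

# Hypothetical coding dictionary mapping codes to trigger words.
CODES = {
    "economy": {"tax", "jobs", "economy", "wages"},
    "migration": {"immigration", "border", "migrants"},
}

def code_response(text):
    """Assign every matching code to one open-ended answer (list may be empty)."""
    tokens = set(re.findall(r"[a-z]+", text.lower()))
    return sorted(code for code, words in CODES.items() if tokens & words)

answers = [
    "Jobs are scarce and wages are falling.",
    "Worried about immigration at the border.",
]
coded = [code_response(a) for a in answers]
# → [['economy'], ['migration']]
```

Even this toy version exposes the core methodological issues, such as vocabulary coverage and multiply-coded or uncodable answers, that any quantitative or qualitative coding scheme must confront.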