ECPR


Semantic Polling and the 2010 UK General Election

Ben O'Loughlin
Royal Holloway, University of London
Nick Anstead
The London School of Economics & Political Science

Abstract

While journalists speculated about whether the 2010 UK General Election was the country’s “first Internet election”, one important way in which the Internet was incorporated into the election process went under-examined: semantic polling. Semantic polling refers to the use of algorithms and natural language processing to “read” vast datasets of public commentary harvested from the Internet, which can be disaggregated, analysed in close to real time, and presented to various audiences. We present findings from interviews with social media monitoring firms, the parties that used those firms’ services, and the journalists who drew on such firms’ results in their electoral coverage, as well as a content analysis of media election coverage. We examine assumptions about: (i) the utility of such data; (ii) the correspondence of semantic polling to normative models of democracy; (iii) the demand for insight into why citizens would vote as they did, not just who, what, or where (i.e. the demand for “intelligence”); and (iv) how semantic polling could be integrated with traditional methods. Such techniques were at a very early stage, with problems in data gathering, analysis, and the presentation of results to parties and publics; nor were methodological shortcomings necessarily explained when polling results were presented. Nevertheless, we consider how such approaches are likely to develop in the coming years in different countries.