ECPR


An AI-Enhanced Analysis of EP2024 Election Discourses: Insights on Resistance and Autocracisation from the AC/DT Approach to Discussions on Governance, Humans and Large Language Models (LLMs) as a Machine or Pipeline

Democracy
Political Methodology
Populism
Internet
Technology
Emilia Palonen
University of Helsinki


Abstract

How can AI be used to analyse democracy, autocratisation and resistance by exploring the discourses that circulate around elections? This paper presents a comprehensive, experimentally developed method for gathering, transforming, and analysing short-video data on TikTok and Instagram during the 2024 European Parliament elections across ten countries (Bulgaria, Croatia, Finland, France, Germany, Hungary, Poland, Portugal, Spain, and Sweden). The EP2024 campaign was focused on the far-right bid for power. What could an AI-enhanced political communication pipeline look like? How much of it rests on interpretation, and what are the respective roles of AI and humans? To analyse discourse, we operationalised three synthetic profiles, and each researcher also maintained an “organic” profile for backup and background. Unlike work with access to the research APIs of large platforms, where computers can carry out AI-enhanced analysis directly, exploring the feeds suddenly requires many humans; yet no human team could code the circa 20,000 short videos that the synthetic profiles yielded from daily screen-recording on two platforms over four weeks. To investigate political discourses in Europe, we combined multi-level analysis of large-scale screen-recorded feeds, structured digital fieldwork, LLM-based data analysis, and researchers’ interpretive notes, which the data gatherers submitted daily for both platforms. Our pipeline transforms human-gathered multimodal data into machine-readable form through video splitting, keyframe extraction, OCR, Whisper transcription, and translation, followed by multimodal analysis using open-source large language models. Involving humans automatically means involving a layer of interpretation; the data becomes messy.
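The transform stage just described (keyframe extraction, OCR, Whisper transcription and translation) could be sketched roughly as follows. The tool choices (ffmpeg, Tesseract, the openai-whisper CLI) and their flags are assumptions about a plausible toolchain, not the project's actual implementation.

```python
"""Minimal sketch of a short-video-to-text transform stage.

Assumes ffmpeg, tesseract and the openai-whisper CLI are on PATH;
all names and flags here are illustrative, not the authors' pipeline.
"""
import subprocess
from pathlib import Path


def keyframe_indices(duration_sec: float, every_sec: float = 1.0) -> list[float]:
    """Pure helper: timestamps (in seconds) at which to sample keyframes."""
    if every_sec <= 0:
        raise ValueError("every_sec must be positive")
    n = int(duration_sec // every_sec)
    return [i * every_sec for i in range(n + 1)]


def extract_keyframes(video: Path, out_dir: Path, every_sec: float = 1.0) -> None:
    """Write one keyframe per interval using ffmpeg's fps filter."""
    out_dir.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["ffmpeg", "-i", str(video), "-vf", f"fps=1/{every_sec}",
         str(out_dir / "frame_%04d.png")],
        check=True,
    )


def ocr_frame(frame: Path) -> str:
    """Read on-screen text (captions, stickers, overlays) via Tesseract OCR."""
    return subprocess.run(
        ["tesseract", str(frame), "stdout"],
        check=True, capture_output=True, text=True,
    ).stdout


def transcribe_and_translate(video: Path) -> None:
    """Whisper can transcribe speech and translate it to English in one pass."""
    subprocess.run(
        ["whisper", str(video), "--task", "translate", "--model", "base"],
        check=True,
    )
```

The resulting transcripts and OCR text would then feed the multimodal LLM analysis stage.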
But we also realised that involving LLMs is like involving humans: they excel at interpretation, often explaining why they classified things as they did, but they also produce either hallucinations or a large set of potentially overlapping categories. We were fortunate to have an experienced coder and data manager processing the data through an LLM pipeline; he also fine-tuned LaclauGPT with the theoretical bases of the project in mind: emotional mechanisms and grievance politics for PLEDGE, and the social contract and populism for CO3, Horizon projects (2024-27) that address the challenge of illiberalism and radicalisation. Human oversight of AI is in fact an iterative process on which our anarcho-computational discourse-theoretical (AC/DT) approach can thrive, but it raises the question of how, and to what extent, AI can serve autocratic regimes and democratic governance alike. Ultimately, AI solutions also need to address, and be evaluated in light of, the bases of the analysis and the iterative processes through which AI-enhanced tools are generated.
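An LLM classification step of the kind described, with a guard against labels drifting outside the codebook, could be sketched as below. LaclauGPT's actual prompts, categories and model are not given in the abstract, so every name and format in this sketch is hypothetical.

```python
"""Hypothetical sketch: ask an LLM for a label AND a rationale, then
validate the label against a fixed codebook so hallucinated or
overlapping categories can be caught and audited by a human coder."""
import json


def build_prompt(transcript: str, ocr_text: str, categories: list[str]) -> str:
    """Assemble a classification prompt that demands a machine-checkable
    JSON answer with an explicit rationale for human review."""
    return (
        "Classify the short video below into exactly one of these discourse "
        f"categories: {', '.join(categories)}.\n"
        'Respond as JSON: {"label": ..., "rationale": ...}.\n\n'
        f"Speech transcript: {transcript}\n"
        f"On-screen text: {ocr_text}"
    )


def parse_response(raw: str, categories: list[str]) -> dict:
    """Reject any label outside the codebook: a cheap guard against the
    'large set of potentially overlapping categories' problem."""
    out = json.loads(raw)
    if out.get("label") not in categories:
        raise ValueError(f"label outside codebook: {out.get('label')!r}")
    return out
```

Rejected responses would loop back to a human coder, matching the iterative human-oversight process the abstract describes.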