ECPR

Ai Ai Ai … Opportunities, Risks, and Governance of AI in Voting Advice Applications

Citizenship
Democracy
Communication
Decision Making
Christine Liebrecht
Tilburg University
Naomi Kamoen
Tilburg University

Abstract

The development of a Voting Advice Application (VAA) relies heavily on human expertise and judgment. Statements are selected and formulated by human experts, political parties manually submit their policy positions, and these positions are subsequently verified by VAA developers. While there are clear advantages to this human-centered approach, the growing presence of artificial intelligence (AI) in society cannot be ignored. In recent years, several AI-driven initiatives related to VAAs have emerged, including a German ChatGPT-based VAA that allowed users to discuss statements, and a Dutch tool that applied AI techniques to search and analyze party manifestos. Building on these developments, the present contribution reflects on a series of qualitative studies examining whether and how AI can be integrated into the VAA development process. First, we explored the technical use of AI, and ChatGPT in particular, across several phases of VAA development: (1) the extraction and formulation of political topics and statements based on party manifestos; (2) the positioning of political parties with respect to these statements; and (3) the identification of supplementary information related to the political issues addressed. The results highlight both the technical possibilities and the limitations of using AI in each of these phases of the VAA development process. Second, we conducted an ethical deliberation session involving multiple stakeholders relevant to VAA development, including professional VAA developers, chatbot developers, IT professionals from sub-national governments, researchers, and citizens who typically use VAAs during elections. A key takeaway from this reflection session was that concerns about the reliability of information dominated across all stakeholder groups. As a result, participants strongly opposed the unrestricted integration of generative chatbots such as ChatGPT directly into VAAs.
Based on the insights from Studies 1 and 2, we subsequently developed a Conversational Agent Voting Advice Application (CAVAA): a VAA with an integrated chatbot. Previous research has predominantly relied on rule-based CAVAAs, in which a custom-trained model performs intent recognition and returns scripted responses (e.g., Kamoen & Liebrecht, 2022). Although such systems score highly in terms of reliability and controllability, their performance is limited: only around 50% of user queries are successfully addressed. Replacing self-trained models with more advanced AI systems may therefore offer opportunities to improve coverage of user-entered questions. In the context of the Dutch municipal elections, we examined the extent to which a Large Language Model (LLM) could be used to enhance intent recognition in a VAA chatbot, while keeping the content of the responses fully human-written (i.e., no AI-based output generation was used). We present initial findings from an empirical study in which voters interacted with an LLM-based CAVAA during the municipal elections of 18 March 2026, providing insights into user experiences with AI-supported VAAs in a real-world electoral context. The aim of this presentation is to stimulate discussion about the role of AI in VAAs and to lay the groundwork for a potential follow-up in the form of a larger comparative study across multiple European countries.