ECPR

Adaptive Dialogue Management for Conversational Voting Advice Applications

Democracy, Candidate, Communication
Fynn Bachmann
University of Zurich


Abstract

Conversational Agent Voting Advice Applications (CAVAAs) have emerged as tools for political education in multi-party democracies. By extending traditional VAA questionnaires with a chatbot interface that allows structured or unstructured user questions in natural language, they address comprehension problems, retrieve user-specific information, and provide personalised advice. As a result, CAVAAs have been shown to increase users’ political knowledge compared to traditional VAAs. At the same time, however, the additional interaction they require demands greater user attention and increases cognitive load, especially in long surveys with many items. In this work, we address this challenge by combining insights from the literature on adaptive surveys and conversational agents. We develop a dialogue management system that “interviews” users with the goal of identifying their closest political parties as efficiently as possible. We evaluate the usefulness of this approach through a user experiment with 200 participants recruited in Switzerland, employing a 2×2 between-subjects design. In the first experimental dimension, we vary the adaptiveness of the questionnaire: topics are either sampled sequentially based on their expected informativeness (treatment) or defined a priori (baseline). In the second dimension, we vary the dialogue management strategy of the chatbot, comparing a moderator-style (top-down) chatbot that interviews users against a semi-structured chatbot that lets users click on predefined questions or enter their own questions while answering the VAA questionnaire. In addition, we include a benchmark condition in which users freely interact with a standard chatbot (GPT-5) to discuss a political topic of their choice. Across all conditions, we evaluate user experience using the Technology Acceptance Model (TAM) and measure changes in political knowledge and vote intention. We further conduct a qualitative analysis of user prompts to better understand how users interact with CAVAAs. Using these behavioural data alongside demographic information collected in pre- and post-surveys, we address the following research questions: (i) How do users intuitively seek political advice from a standard chatbot? (ii) How can conversational VAAs be designed so that users perceive them as more legitimate and reliable? With this ongoing research, we contribute (1) design insights for dialogue management in conversational VAAs, (2) a methodological contribution on adaptive topic selection in conversational surveys, and (3) empirical evidence on user behaviour in AI-supported democratic recommender systems.
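
To illustrate the kind of informativeness-based topic selection described in the abstract, the following is a minimal sketch, not the authors’ implementation: it greedily asks the topic whose answer is expected to most reduce uncertainty about the user’s closest party (expected posterior entropy). The party positions, answer scale, and answer-noise model are illustrative assumptions.

```python
"""Sketch of adaptive topic selection by expected informativeness (assumed model)."""
import numpy as np

# Hypothetical party positions: rows = parties, columns = topics,
# values in {-1, 0, +1} for disagree / neutral / agree.
PARTY_POSITIONS = np.array([
    [+1, -1, +1,  0, -1],   # Party A
    [-1, +1,  0, +1, -1],   # Party B
    [ 0, +1, -1, -1, +1],   # Party C
])
ANSWERS = np.array([-1, 0, +1])          # answer scale shown to the user
N_PARTIES, N_TOPICS = PARTY_POSITIONS.shape


def answer_likelihood(topic: int) -> np.ndarray:
    """p(answer | party) for one topic: users tend to answer like their party."""
    dist = np.abs(ANSWERS[None, :] - PARTY_POSITIONS[:, topic, None])
    lik = np.exp(-dist)                  # shape: (parties, answers)
    return lik / lik.sum(axis=1, keepdims=True)


def posterior(history: dict[int, int]) -> np.ndarray:
    """Posterior over parties given the topics answered so far."""
    post = np.full(N_PARTIES, 1.0 / N_PARTIES)
    for topic, answer in history.items():
        a_idx = int(np.where(ANSWERS == answer)[0][0])
        post *= answer_likelihood(topic)[:, a_idx]
    return post / post.sum()


def entropy(p: np.ndarray) -> float:
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())


def next_topic(history: dict[int, int]) -> int:
    """Pick the unanswered topic with the lowest expected posterior entropy."""
    post = posterior(history)
    best_topic, best_h = -1, np.inf
    for topic in range(N_TOPICS):
        if topic in history:
            continue
        lik = answer_likelihood(topic)   # (parties, answers)
        p_answer = post @ lik            # predictive distribution over answers
        exp_h = 0.0
        for a_idx, pa in enumerate(p_answer):
            updated = post * lik[:, a_idx]
            exp_h += pa * entropy(updated / updated.sum())
        if exp_h < best_h:
            best_topic, best_h = topic, exp_h
    return best_topic


if __name__ == "__main__":
    history: dict[int, int] = {}
    for _ in range(3):                   # ask three adaptive questions
        topic = next_topic(history)
        history[topic] = +1              # pretend the user agrees each time
        print(f"asked topic {topic}, posterior = {posterior(history).round(2)}")
```

The a priori baseline condition would instead iterate over a fixed topic order; the adaptive treatment replaces that order with a greedy informativeness criterion such as the one sketched above.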