ECPR

Adaptive Testing for Voting Advice Applications

Democracy
Voting
Empirical
Fynn Bachmann
University of Zurich
Abraham Bernstein
University of Zurich

Abstract

Voting advice applications (VAAs) such as Smartvote or Wahl-O-Mat depend on long questionnaires to recommend candidates or parties to users. To minimize incomplete questionnaires, some of these tools, including Smartvote, offer "rapid" versions. These shorter questionnaires aim to capture the most relevant responses. We argue that this set of questions depends on the political orientation of users and can be increasingly tailored with each new response they provide. Therefore, this ongoing research explores the application of active learning (or adaptive testing) strategies to VAAs. The primary objective is to develop algorithms that reduce the number of questions while maintaining the accuracy of the final recommendations. To this end, we propose a combination of (i) an encoder that positions users in a latent space based on their responses, (ii) a decoder that forecasts future responses from their position in this space, and (iii) a selector that identifies the next question based on expected information gain. Utilizing the Smartvote dataset from the Swiss national election of 2019, we initially examine a variety of dimensionality reduction algorithms for their effectiveness in encoding and decoding sparse data (e.g., PCA, IDEAL, W-NOMINATE, Variational Autoencoders, and our novel iterative SVM algorithm). We then assess various decision criteria for the selector (e.g., entropy, posterior variance, decision trees) and compare their efficacy against the rapid version of the Smartvote questionnaire, which serves as a benchmark. Preliminary findings indicate that our algorithm can match the predictive accuracy of the benchmark while requiring fewer questions. In other words, starting with an initial selection of components, our approach surpasses the benchmark in terms of information gathered after an equivalent number of questions. 
Further systematic optimization will determine whether this advantage is large enough to justify the added complexity of the VAA. Finally, a user experiment will establish that dynamically adapted questions do not significantly alter user responses and that the explanations provided by visualizing the latent space are perceived as trustworthy, two key requirements for contributing to the evolution of VAAs in the realm of digital technology and political engagement.
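The encoder–decoder–selector loop described above can be illustrated with a minimal sketch. The following is not the authors' implementation: it assumes a PCA-style encoder, a logistic decoder, and an entropy-based selector, applied to synthetic binary responses; all variable names and the data-generating process are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a VAA dataset: users answer questions with -1/+1,
# driven by a low-dimensional latent political position (illustrative only).
n_users, n_questions, n_dims = 500, 30, 2
positions = rng.normal(size=(n_users, n_dims))      # true latent positions
loadings = rng.normal(size=(n_dims, n_questions))   # question directions
responses = np.sign(positions @ loadings + 0.5 * rng.normal(size=(n_users, n_questions)))

# Offline step: estimate question directions via PCA (SVD of centered responses).
_, _, Vt = np.linalg.svd(responses - responses.mean(axis=0), full_matrices=False)
components = Vt[:n_dims]                            # (n_dims, n_questions)

def encode(answers, asked):
    """Encoder: place the user in the latent space from the answered subset."""
    A = components[:, asked].T                      # (n_answered, n_dims)
    pos, *_ = np.linalg.lstsq(A, answers, rcond=None)
    return pos

def decode(pos):
    """Decoder: predicted probability of a '+1' answer to every question."""
    return 1.0 / (1.0 + np.exp(-pos @ components))

def select_next(pos, asked):
    """Selector: pick the unasked question with maximal predictive entropy,
    a simple proxy for expected information gain."""
    p = decode(pos)
    entropy = -(p * np.log(p + 1e-12) + (1.0 - p) * np.log(1.0 - p + 1e-12))
    entropy[list(asked)] = -np.inf
    return int(np.argmax(entropy))

# Simulate one user answering 10 adaptively chosen questions.
truth = responses[0]
asked, answers = [0], [truth[0]]
for _ in range(9):
    pos = encode(np.array(answers), asked)
    q = select_next(pos, set(asked))
    asked.append(q)
    answers.append(truth[q])

# Forecast the remaining answers from the final latent position.
pos = encode(np.array(answers), asked)
accuracy = float(np.mean(np.sign(decode(pos) - 0.5) == truth))
```

In a real VAA the decoder's predictions for unanswered questions would feed directly into the candidate-matching score, so the loop can stop as soon as the predictions stabilize rather than after a fixed number of questions.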