The Artificial Voter: Democratic Legitimacy and the Conditions for Algorithmic Enfranchisement

Keywords: Democracy, Governance, Voting, Decision Making, Ethics, Technology, Big Data, Policy-Making

Bosco Lebrun
LUISS University

Abstract

Recent scholarship on artificial intelligence and democracy has predominantly examined AI as an object of governance, focusing on how democratic institutions should regulate AI development and deployment (Erman and Furendal 2024; Jungherr and Schroeder 2023; Veale et al. 2023). This regulatory focus, however, risks overlooking AI’s potential role in political decision-making itself. This paper reverses the conventional perspective by examining AI not merely as an object to be governed, but as a potential political decision-maker capable of exercising voting power. While the prospect of granting AI decision-making authority in politics may initially appear problematic, particularly given concerns about democratic legitimacy and self-rule (Jungherr and Schroeder 2023; Coeckelbergh 2025), this paper begins from the premise that such possibilities should not be categorically excluded. We already accept that narrow AI systems can make better decisions than humans in domains as vital as aviation safety and medical diagnosis (Jungherr and Schroeder 2023; LeCun et al. 2015; Mitchell 2019), where the stakes for human welfare are considerable. Moreover, discussions of AI governance increasingly acknowledge that “governance by AI” (existing governance structures adopting AI technologies as part of their mechanisms) differs fundamentally from “governance of AI” (Erman and Furendal 2024). If we accept AI-assisted decision-making in certain public domains, the question becomes not whether but under what conditions AI might participate in political decision-making processes. This inquiry is particularly urgent given current debates about political legitimacy and democratic control (Przeworski 2018; Beckman and Rosenberg 2022; Rosenberg 2025). The challenge is whether AI’s participation in decision-making can be reconciled with these democratic requirements, or whether it necessarily entails a loss of self-determination. The research question this paper addresses is: under which conditions should AI be granted voting power for decision-making in politics? To answer it, I first review the philosophical justifications for allocating voting rights in democratic theory and systematically evaluate whether AI systems could fulfill these justificatory criteria better than human voters in specific contexts, for instance by processing larger volumes of information, avoiding certain cognitive biases, or representing underrepresented interests, including those of future generations. Second, I examine potential moral objections that might override reasons for granting AI voting power. These include concerns about manipulation of AI social choice systems (Baum 2025; Gibbard 1973), threats to individual self-rule and informational autonomy (Jungherr and Schroeder 2023), equality considerations given AI’s differential visibility to various populations (Jungherr and Schroeder 2023; Buolamwini and Gebru 2018), and the requirement that political legitimacy demands accountability through rights to revocation and justification (Erman and Furendal 2024). I also consider the argument that AI participation might undermine the common good through technocratic shortcuts that bypass democratic deliberation (Coeckelbergh 2025; Pettit 2004). The conclusion brings together the conditions under which AI should be granted voting rights, leaving to future work the questions of how far such rights should extend and whether these conditions are currently met.