
Training Data in AI Decision-Making: A Democratic Challenge

Democracy
Political Participation
Political Theory
Decision Making
Normative Theory
Power
Technology
Bosco Lebrun
LUISS University

Abstract

Recent scholarship on artificial intelligence and democracy has predominantly examined AI as an object of governance, focusing on how democratic institutions should regulate AI development and deployment (Erman and Furendal 2024; Jungherr and Schroeder 2023; Veale et al. 2023). This outcome-focused view risks overlooking important aspects of AI's potential role in political decision-making itself. This paper reverses the conventional perspective by examining AI not merely as an object to be governed, but as a potential decision-maker in politics. While the prospect may initially appear problematic, particularly given concerns about democratic legitimacy and self-rule (Jungherr and Schroeder 2023; Rosenberg 2025; Coeckelbergh 2025), this paper begins from the premise that such possibilities should not be categorically excluded: we already accept narrow AI systems that outperform human decision-makers in domains as vital as aviation safety and medical diagnosis (Jungherr and Schroeder 2023; LeCun et al. 2015; Mitchell 2019). AI decision-making can, however, be biased in many ways; this paper focuses on concerns about input information. Any AI system relies on input information, most notably its training data. This input information may reflect a variety of interests, values, and goals. If AI is to be of any use in politics, it is therefore crucial to understand which interests, values, and goals are reflected in these training inputs, and to determine to what extent they should be. This issue confronts us with trade-offs. If we weight training data by affectedness, we may violate formal equality (Goodin and Tanasoca 2014; Beauvais 2018; Fleurbaey 2008; Brighouse and Fleurbaey 2010; Peña-Rangel 2022). Yet if we treat all training inputs identically, we risk substantive inequality, allowing those less affected by a decision equal influence over outcomes that profoundly shape others' lives (Rosenberg 2019; Warren 2024, 38). Building on the nascent literature connecting the all-affected principle with AI (Beckman and Rosenberg 2022), I argue that the use of AI in value-based decision-making would require that the value-laden information provided be weighted according to affectedness. To make this case, I first examine the democratic-theory literature on the distribution of decision-making power and analyze the arguments justifying the candidate distributive principles. I then draw on a consistency argument to show that a commitment to democracy implies a commitment to weighting AI input information according to affectedness.
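
To make the trade-off concrete, the following is a minimal, hypothetical sketch (not drawn from the paper) of one way affectedness weighting could enter model training: each training example carries an assumed affectedness score for its source group, the scores are normalized into per-example weights, and a weighted loss replaces the equal counting of inputs. The function names, scores, and data below are illustrative assumptions, not a proposed implementation.

    import numpy as np

    def affectedness_weights(affectedness_scores):
        """Normalize raw affectedness scores into per-example sample weights."""
        scores = np.asarray(affectedness_scores, dtype=float)
        return scores / scores.sum()

    def weighted_log_loss(y_true, y_pred, weights):
        """Binary cross-entropy in which each example counts in proportion
        to its weight, rather than equally ('one input, one vote')."""
        eps = 1e-12
        y_pred = np.clip(y_pred, eps, 1 - eps)
        per_example = -(y_true * np.log(y_pred)
                        + (1 - y_true) * np.log(1 - y_pred))
        return float(np.sum(weights * per_example))

    # Illustrative data: three sources; the third is assumed to be far more
    # affected by the decision, so its example dominates the weighted loss.
    y_true = np.array([1, 0, 1])
    y_pred = np.array([0.8, 0.3, 0.4])
    w = affectedness_weights([1.0, 1.0, 4.0])  # assumed affectedness scores
    print(weighted_log_loss(y_true, y_pred, w))

Setting all scores equal recovers formal equality (every input counts identically); unequal scores implement the substantive reading, letting inputs from more-affected sources weigh more heavily in what the system learns.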