ECPR

Getting Clear on Accountability in Automated Decision-Making – A Normative and Conceptual Inquiry

Contentious Politics
Cyber Politics
Democratisation
Analytic
Big Data
Anna-Katharina Boos
University of Zurich

Abstract

Institutional decisions increasingly rely on algorithmic methods based on AI-driven machine learning, hence on automated decision-making (ADM). Such decisions may concern whether an individual should receive a loan, is eligible for social assistance, or should be released on parole. The fact that ADM considerably impacts individual citizens’ lives has raised concerns about its political legitimacy. Scholars from various disciplines have claimed that, due to their algorithmic inscrutability, automated decisions cannot be meaningfully controlled, thus undermining the accountability of institutional decision-making. However, current debates on this topic lack conceptual coherence and clarity regarding the notion of accountability. What the concept of accountability means, and what it normatively requires in the context of ADM, still needs to be investigated. Drawing on non-instrumental democratic theory, I propose a re-conceptualization of accountability that addresses these two issues. First, I identify the normative conditions a conception of accountability must satisfy. I argue that a conception of accountability in ADM must meet three jointly necessary conditions: explainability, meaning the decision must be explainable in an interpretable manner; answerability, meaning an agent must be involved who can provide that explanation; and sanctionability, meaning that said agent can be genuinely sanctioned in case of non-provision. Second, I discuss the implications of this conception for ADM. I demonstrate that explainability is a key enabler of both answerability and sanctionability, which underscores the need to address the problem of algorithmic inscrutability. I conclude that only humans are eligible duty holders of accountability, which in turn requires them to have genuine control over the decisional outcome. Given that AI agents (presumably) lack moral responsibility, they cannot be held liable for sanctions (sanctionability); as a corollary, they cannot be asked to provide an explanation either (answerability). This normative and conceptual inquiry thus generates valuable insights into how to legitimately integrate ADM into institutional action.