Would People Reject AI Judgments as Being Procedurally Unfair?

Courts
Decision Making
Public Opinion
Survey Research
Technology
Henrik Litleré Bentsen
NORCE Norwegian Research Centre
Mikal Johannesson
NORCE Norwegian Research Centre

Abstract

Since the outbreak of the COVID-19 pandemic, courts have needed to digitize at great speed, and technologies based on artificial intelligence have been implemented in courts worldwide (Sourdin 2021). Consequently, individuals involved in judicial disputes in various countries are already having (part of) their cases processed through AI systems. While this development may have numerous benefits, including increased efficiency, reduced case backlogs, and enhanced consistency in judicial decisions, the integration and use of AI in judicial processes also raise significant concerns. The question of where the limits lie for using AI to resolve legal disputes has gained importance. Would people reject AI judgments as being procedurally unfair? When it comes to AI and the law, “[f]airness and procedural legitimacy are at the heart of modern debates about AI judging” (Chen, Stremitzer, and Tobia 2022, 130). Legal psychology has a long history of investigating procedural justice in court proceedings (e.g., Lind and Tyler 1988), and evidence suggests that the perceived fairness of legal processes has far-reaching implications for how citizens and parties to a case perceive legal outcomes. In part, people obey the law because they believe it to be fair (Tyler 2021). It is therefore crucial to understand how citizens perceive the use of AI in legal decision making, and especially the extent to which they evaluate its fairness in the legal domain. In this paper, we investigate how the use of AI in the courts affects people's perceptions of fairness. Specifically, we examine how people perceive the fairness of judicial decision making when AI is introduced either as a support system for human judges or as a fully independent, automated decision-making system. We also investigate whether the explainability (or transparency) of the AI system influences fairness perceptions. We do this through a survey-embedded vignette experiment administered via the Norwegian Citizen Panel to a broad and representative sample of the Norwegian population. Our study contributes to the existing literature on AI in courts by exploring how citizens evaluate the fairness of AI in the courts and by enhancing our understanding of where the public believes the limits of AI use in judicial decision making lie.