Automation Bias in Public Policy? Assessing Decision-Makers’ Overreliance on Algorithmic Advice Via a Survey Experiment

Decision Making
Experimental Design
Big Data
Saar Alon-Barkat
University of Haifa
Madalina Busuioc
Vrije Universiteit Amsterdam

Abstract

Artificial intelligence (AI) algorithms are increasingly adopted by government organizations as decisional aides in various policy domains. These developments have been driven by the promise of more efficient and effective policy solutions, and by the prospect of overcoming well-documented biases of human decision-makers. At the same time, however, empirical evidence from psychology raises concerns about “automation bias” – i.e., that the use of algorithmic predictions leads decision-makers to over-rely on these predictions and to follow their advice even in the face of “warning signals” and contradictory information from other sources. Automation bias, manifested in default deference to automated systems, is well documented in psychological experiment-based studies across various domains, but it has not yet been explored in relation to bureaucratic decision-making. Assessing this phenomenon in a public policy context is especially important, since algorithms in this setting are often found to be biased against disadvantaged groups. We assess these concerns using a survey experiment set in the context of educational policy in the Netherlands. We put automation bias to a rigorous test by comparing participants’ adherence to an algorithmic prediction with their adherence to an equivalent prediction provided by a human expert.