ECPR

Can accountability moderate decision-makers’ biased processing of AI algorithmic advice?

Governance
Experimental Design
Big Data
Madalina Busuioc
Vrije Universiteit Amsterdam
Saar Alon-Barkat
University of Haifa

Abstract

Artificial intelligence algorithms are increasingly adopted as decision aids by public organisations, with the promise of overcoming the biases of human decision-makers. At the same time, the use of algorithms may introduce new biases into the human-algorithm interaction. A key concern emerging from psychology studies is human decision-makers’ inclination to selectively adopt algorithmic advice when it matches their pre-existing beliefs and stereotypes, which may result in discrimination against disadvantaged groups. In a previous study (Alon-Barkat & Busuioc, under review), we empirically demonstrated this propensity for selective, biased adherence through a survey experiment among Dutch participants. In the present study, our aim is to explore whether selective, biased adherence can be attenuated by enhancing the pre-decisional accountability of decision-makers. This hypothesis is consistent with a considerable body of social psychology research on accountability, as well as a growing line of research in public administration, suggesting that introducing accountability mechanisms can improve the quality of decisions and reduce cognitive biases. We test this hypothesis experimentally, building on and extending our previous survey experimental study. To that end, we replicate our earlier study, this time introducing random assignment to pre-decisional accountability treatment and control conditions.