ECPR


Strategic Considerations and Self-Interest in Bureaucrats’ Preferences for the use of AI in Public Services

Governance
Public Administration
Survey Experiments
Technology
Sebastian Hemesath
Saarland University
Johanna Hornung
Université de Lausanne
Martino Maggetti
Université de Lausanne
Philipp Trein
Université de Lausanne
Georg Wenzelburger
Saarland University



Abstract

The adoption of artificial intelligence in public administration fundamentally reshapes the relationship between the state, the bureaucrat, and the citizen (Veale & Brass 2019; Busuioc 2021). While prior literature rightly emphasizes the tension between the efficiency imperatives of New Public Management and the normative demands of public value considerations (Schiff et al. 2021; Grimmelikhuijsen & Meijer 2022), existing research often disregards the interests of individual bureaucrats in balancing these external demands, which may create goal conflicts within administrations. In two experimental studies, we empirically test the argument that the integration of AI in public services is filtered through the lens of bureaucratic self-interest. Drawing on rational choice theory, institutionalism, and blame avoidance (Hood 2011), we posit that administrators’ preferences for AI are guided by self-interest: when AI systems threaten job security or expose bureaucrats to liability, the logic of consequences (self-interest; March & Olsen 1996) overrides the logic of appropriateness (public values). We explore these dynamics through two linked experiments embedded in a survey of German and Swiss municipal officials, currently in data collection, and test how bureaucrats’ strategic considerations shape their preferences for the design and use of algorithmic systems. First, using a conjoint experiment, we examine how bureaucrats trade off self-interest against public values in their design preferences for ADM systems.
Specifically, we test whether bureaucrats reject efficiency gains that come at the cost of workforce reductions, examine how the trade-off between fairness and accountability interacts with individual blame avoidance, test whether the demand for “human-in-the-loop” oversight is universal or conditional on the social construction of the target group (Schneider & Ingram 1993), and assess whether high-workload environments drive a preference for lower degrees of human accountability. In the second experiment, we test whether bureaucrats view AI tools as a resource for blame avoidance. While normative theories suggest that bureaucrats should resist AI in tasks characterized by high uncertainty (Bullock et al. 2020), we argue that when facing decisions that carry potentially severe consequences and public backlash, bureaucrats will seek to delegate authority to AI, using its epistemic opacity to externalize the source of the decision and diffuse personal responsibility.