AI and Knowledge Disruption: Safeguarding the Epistemic Integrity of Public Organizations

Governance
Institutions
Knowledge
Decision Making
Technology
Federica Fusi
Vrije Universiteit Amsterdam
Madalina Busuioc
Vrije Universiteit Amsterdam



Abstract

Artificial intelligence is profoundly transforming society, with rapid adoption across domains ranging from science and knowledge production to our political and governmental institutions. In government, AI tools are increasingly relied upon for high-stakes decision-making, affecting citizens' lives at scale and in significant ways (Young et al. 2021; Busuioc 2021; Peeters & Widlak 2023). This process is fundamentally altering the fabric of governance and the knowledge base upon which public institutions operate.

On the one hand, the adoption of AI technologies is advocated on the promise of expanding the boundaries of human knowledge and galvanizing discovery and innovation. Large language models (LLMs) and other generative AI tools can process vast amounts of information, unencumbered by human cognitive limitations. In this regard, the rise of AI in government ostensibly offers a solution to what has long been recognized as one of the biggest challenges to organizational decision processes: human information-processing limitations. AI can digest vast amounts of information, allowing public organizations to leverage large volumes of administrative data to inform decision-making.

Yet, paradoxically, while we have access to more information than ever before, AI adoption simultaneously brings significant challenges that stand to disrupt and erode traditional modes of expertise in unprecedented ways. The challenges are many, ranging from AI models' inherently black-box nature and convoluted decisional pathways, to the stochastic nature of LLMs, which makes them noisy and prone to randomness, to their propensity to fabricate outputs, producing plausible rather than accurate results. Unlike human experts, latest-generation AI algorithms such as deep learning models cannot articulate the reasoning behind their outcomes in ways we can understand (Busuioc 2021); their knowledge is restricted to formalized language patterns and pattern matching, lacking a (causal) model of the world, reasoning, and understanding. This epistemic paradox becomes even more striking when such tools are relied upon in public governance, including consequential domains such as welfare, policing, or health: precisely those areas where the legitimacy of decision-making rests on the exercise of, and claim to, expertise. AI-powered technologies are displacing and sidelining the processes through which organizational and professional expertise is retained and deployed. As a result, decision-making in public organizations is being transformed in unprecedented, fundamental, yet poorly understood ways, calling for urgent investigation.

In this paper, we theorize three pathways through which data and information flows are restructured and bureaucratic expertise challenged or disrupted, and we advance mechanisms to protect organizational knowledge reservoirs and the epistemic integrity of public organizations.