The Algorithmic Stakeholder: Reconceptualizing AI as a Medium in Public Governance
Keywords: Democracy, Governance, Institutions, Public Administration, Social Media, Communication, Technology
Abstract
To date, the public administration literature has largely conceptualized AI as an actor (Bullock & Kim 2020), an instrument (Peeters & Schuilenberg 2018), or a tool (Young et al. 2019) used by governments to enhance public service outcomes (Bullock et al. 2020; Grimmelikhuijsen & Meijer 2022). However, this instrumental framing assumes that the effects of AI lie primarily in the outputs it generates, such as better predictions, faster decisions, or biased and discriminatory outcomes, thereby obscuring the deeper structural, epistemic, and symbolic consequences that accompany the integration of AI into public governance.
This paper proposes an alternative approach by conceptualizing AI as a medium: a sociotechnical environment that structures interaction and imposes characteristic patterns of thought and behavior, independent of any particular (algorithmic) content or output. Drawing on the insights of media theorists Marshall McLuhan (1964) and Neil Postman (1992) and on contemporary digital media scholarship treating algorithmic media as sociotechnical systems (Bucher 2019; Gillespie 2014), I argue that AI-mediated systems such as chatbots, recommendation algorithms, and risk scoring tools reshape how citizens access services, how bureaucrats make sense of information, and how governments define problems and solutions. Crucially, these transformations occur before an algorithm ever renders a specific decision. AI systems carry structuring affordances, including optimization, prediction, classification, automation, personalization, and datafication, that promote and even coerce particular forms of interaction and organization, thereby shaping how governance is performed, interpreted, and experienced.
AI’s foundational logics impose a worldview privileging quantification, probabilistic reasoning, and anticipatory action, and this worldview directly influences how institutions understand problems, define solutions, and enact authority. For example, an algorithmic risk score that routes cases is not merely a decision-making tool; it becomes the organizing logic through which cases are processed, prioritized, and monitored. As such, when public organizations adopt predictive analytics, eligibility decisions are framed as probabilities, risk is reduced to a “score,” and resource allocation becomes an optimization problem.
As a medium, AI systems have intrinsic affordances that extend beyond individual applications to reshape the foundational conditions of governance itself. This occurs not only within institutional workflows but also at the boundary between government and citizens. When government communication is filtered through algorithmic feeds, for instance, visibility and public engagement become conditioned by platform logics rather than democratic values. The algorithm then mediates what citizens see, which in turn shapes their perceptions of institutional authority and legitimacy. In this way, the medium does not simply disseminate and amplify institutional messaging; it restructures the very relationship between public bodies and those they serve, positioning algorithms as de facto stakeholders rather than passive tools. The medium perspective is therefore valuable because it shifts the analytical focus from the performance of individual applications to the structural and interpretive consequences of integrating computational systems such as AI and algorithms into the governance environment.