When Does Influence Become Manipulation? Philosophical Views on Computational Propaganda
Keywords: Social Media, Decision Making, Normative Theory, Technology, Influence
Abstract
We influence one another in a number of ways, some innocuous, others morally problematic. One prevalent type of influence is manipulation. Philosophical accounts disagree about what exactly counts as manipulation: there is no consensus on whether intent, deception, subversion of rational capacities, or the outcome is essential (Coons & Weber, 2014; Jongepier & Klenk, 2022; Noggle, 2025). The rise of computational propaganda amplifies several open questions: how to distinguish manipulation from other forms of influence, such as coercion, persuasion, or nudging; whether manipulation is always morally problematic; and, if so, under what conditions.
Computational propaganda is a specific form of AI-mediated influence that involves manipulation for political, ideological, commercial, or other purposes. It has been defined as the “use of algorithms, automation, and human curation to purposefully distribute misleading information over social media networks” (Woolley & Howard, 2018). AI-enabled propaganda increasingly relies on synthetic media, algorithmic amplification, and microtargeting, which need not involve lying, identifiable agents, or deliberate communicative intent. As a result, it combines features of persuasion, manipulation, deception, and misinformation in ways that challenge traditional distinctions.
In this paper, I argue that AI-driven propaganda puts pressure on two assumptions underlying the main philosophical accounts of manipulation. First, it unsettles intent-based views that treat propagandistic manipulation as tied to hidden motives or agent-directed deception. Second, it challenges process-based accounts, because algorithmic targeting can shape attention and cognition without clearly bypassing or fully engaging rational deliberation. Rather than resolving these tensions, I aim to show how AI-mediated propaganda exposes them and why philosophical analyses of propaganda must be revised to remain action-guiding.
If philosophical accounts of manipulation are to remain action-guiding, I argue that they must move beyond models of interpersonal influence and attend to the distributed, systemic organization of digital influence. This requires integrating normative analysis with considerations of platform architecture, economic incentives, regulatory frameworks, and civic epistemology. The goal is not to eliminate influence, which is intrinsic to social life, but to establish conditions under which influence can be recognized, evaluated, and contested rather than invisibly engineered for engagement. Reconceptualizing manipulation as intentional but careless influence is better suited to address the moral, epistemic, and practical challenges posed by AI-mediated propaganda and to sustain the normative guidance that the concept of manipulation is meant to provide.
In sum, computational propaganda demonstrates that manipulation is not only an interpersonal phenomenon but also a systemic one, distributed across platforms, algorithms, and social infrastructures. By framing manipulation as intentional but careless influence, norm-based philosophical accounts gain better conceptual tools to capture both the interpersonal and structural dimensions of AI-mediated influence. They help clarify why certain forms of algorithmic shaping of attention and behavior are morally troubling, even in the absence of identifiable agents or overt deception.