Authoritarianism Without Authoritarians: Generative AI and Intersectional Harm in India’s Election
Democracy
Gender
India
Media
Political Parties
Campaign
Social Media
Electoral Behaviour
Abstract
What if the most ‘authoritarian’ features of contemporary elections no longer depend on authoritarian actors at all, but on the infrastructures through which digital politics unfolds? Standard approaches to digital authoritarianism locate manipulation, coercive persuasion, censorship, and epistemic distortion within authoritarian regimes or within the strategies of dominant ruling parties (Dragu and Lupu 2021). Yet evidence from India’s recent elections complicates this assumption. AI-generated deepfakes, identity-targeted messaging, and personalised disinformation were deployed competitively by incumbents and opposition parties alike. One of the most widely circulated synthetic videos, a deepfake of Prime Minister Narendra Modi being criticised by his deceased mother, was released by opponents in September 2025 during the campaign for the November Bihar election (NDTV 2025), demonstrating that authoritarian-like effects can now emerge from decentralised actors equipped with generative AI (Dhanuraj, Harilal and Solomon 2024).
This paper argues that the concept of digital authoritarianism requires theoretical refinement. Rather than treating authoritarian digital practices as inherent to regime type, the analysis draws on infrastructural perspectives to show that authoritarian effects increasingly arise from platform architectures, entrenched socio-economic vulnerabilities, optimisation logics, and data-extraction systems, not from state ideology alone. Mantellassi (2023) demonstrates that the tools enabling digital authoritarianism have deeply penetrated democratic contexts because they stem from the political economy of surveillance capitalism rather than from authoritarian intent. Similarly, Ünver (2019) shows how algorithmic “structures of relevance” and persuasive optimisation reshape political communication across regime lines, producing distortions in public discourse irrespective of political actors’ motivations.
To advance this literature, the paper integrates an intersectional theoretical lens into the study of digital authoritarian practices, an approach largely absent from existing frameworks. Intersectionality reveals that AI-mediated infrastructures generate differentiated vulnerabilities and uneven political exposure across caste, gender, religious, linguistic, and socio-economic lines. In India’s elections, these inequalities conditioned who encountered deepfakes, which groups were targeted with emotive messaging, and whose identities were mobilised or silenced by algorithmic segmentation (Dhanuraj, Harilal and Solomon 2024). These patterns align with Mahapatra’s (2025) description of “authoritarianism by diffusion”, where infrastructural weaknesses and competitive electoral incentives enable authoritarian-like effects to manifest in formally democratic systems.
The paper develops the concept of “authoritarian effects without authoritarian actors” to capture how contemporary AI systems enable affective manipulation, asymmetries of visibility, and epistemic distortion across the political spectrum. This reframing builds on Glasius’s (2023) call to analyse authoritarian practices rather than regime labels, and extends that work by offering an intersectional account of the differentiated harms produced by generative AI.
The paper concludes that debates on digital authoritarianism must shift from regime-centric diagnoses to infrastructural and intersectional analyses. The core governance challenge is not merely restricting authoritarian actors, but addressing systemic vulnerabilities, optimisation systems, and social inequalities that allow any political actor to generate authoritarian-like effects at scale in the Global South.