ECPR


Can AI Be Feminist? Interrogating Structural Oppression Through the Lenses of Oppressive AI and Design Justice

Globalisation
International Relations
Political Economy
Social Justice
Critical Theory
Feminism
Technology
Anna Antonakis
Bern University of Applied Science



Abstract

This paper addresses a timely question from the perspective of feminist science and technology studies – "Can AI also be feminist?" – by shifting the debate from an essentialist inquiry into the technology's inherent capacities and affordances to a sociotechnical and power-critical analysis. Moving beyond early feminist critiques that viewed technological developments as inherently patriarchal, and building on cyberfeminist calls for appropriation, this contribution argues for a grounded examination of how structural power is woven into the fabric of AI systems. It posits that the central question is not what AI enables or whether it can be feminist, but how historically entrenched systems of domination, conceptualised here through Patricia Hill Collins' "Matrix of Domination" (2000) and Donna Haraway's "informatics of domination" (1985), permeate the ownership, labour relations, design, and application of AI technology. To operationalise this investigation, the paper employs the "Oppressive AI" framework developed by Varon and Peña (2021). While originally focused on the public sector, this framework proves instrumental for a broader feminist critique, as it productively conjoins material and discursive dimensions of power and delineates concrete fields of algorithmic harm. These fields are then interrogated through the methodological lens of Design Justice (Costanza-Chock, 2020), which centres the voices of marginalised communities traditionally excluded from design processes. The analysis specifically illuminates how the "Matrix of Domination" is reproduced in what Costanza-Chock terms "a thousand daily interactions with AI systems" (ibid.: 5), focusing on three pivotal levels and application fields derived from the Oppressive AI framework:

1. Precarious Labour, which the paper conceptualises as "Jobs for the Many": examining the often-invisible, gendered, and racialised labour force in data labelling and annotation, which forms the foundational, yet undervalued, substrate of machine learning.

2. Patriarchal Design, conceptualised through "Jobs for the Few": analysing the reproduction of bias and exclusion through homogeneous decision-making positions in tech leadership and development, which govern what and for whom AI is built.

3. Opacity and "Missing Transparency": investigating how regulatory gaps and technical and corporate norms around algorithmic opacity (lack of explainability, accountability, and auditability) entrench power asymmetries and shield discriminatory systems from scrutiny.