Speaking for Whom? Class, Power, and the Habitus of Generative AI
Keywords: Social Capital, Internet, Quantitative, Technology, Empirical
Abstract
Contrary to dominant social imaginaries, artificial intelligence does not transmit knowledge in a neutral or objective manner. Generative AI models are trained on heterogeneous datasets that encode existing social hierarchies; as a result, AI not only reflects existing social orders but also actively participates in their stabilisation and reproduction.
Drawing on Bruno Latour’s actor–network theory, AI is approached as a non-human actor, and following Massimo Airoldi’s concept of machine habitus, as a participant in networked society. This perspective allows us to raise questions about the class dimension of narratives generated by AI models. We ask whether — and if so, how — AI-generated narratives reproduce social structures and class divisions. In particular, we examine in whose name AI “speaks”, whose interests it represents, and which forms of habitus and capital are reproduced and legitimised through its algorithmic discourse.
The empirical basis of the study consists of qualitative research comparing in-depth interviews conducted with human participants and with generative AI models, using an identical interview guide. This design enables a systematic comparison of narrative structures, modes of valuation, justifications of choices, and constructions of social reality. The analysis focuses on whether AI-generated narratives correspond to modes of thinking characteristic of particular social classes and how they align with media and cultural representations of social class in Poland.
The paper interprets the role of AI as functionally analogous to that of the school in Pierre Bourdieu’s theory. Rather than democratising knowledge, AI contributes to social reproduction through symbolic violence: enacted primarily in relations of communication, it arbitrarily imposes particular styles of argumentation, hierarchies of values, and dominant forms of social, cultural, and economic capital as “universal”.
From a broader perspective, the paper contributes to debates on the political dimension of technology, demonstrating that AI is not a neutral tool but a non-human actor (Latour) that translates the interests of different actors, redistributes agency, security, and risk, and generates new mechanisms of exclusion. As Latour argues, contemporary political problems cannot be understood if we maintain rigid divisions between the human and the non-human; instead, reality must be analysed as networks composed of heterogeneous human and non-human actors.
Finally, the paper asks whether generative artificial intelligence holds emancipatory potential or whether, in line with existing research on digital competences and the digital divide, it reinforces social stratification by disproportionately benefiting those who already possess economic, social, and cultural capital. In this sense, AI may reproduce the Matthew effect described by Robert K. Merton, solidifying a landscape where “the rich get richer”. In doing so, the study proposes a novel empirical approach to analysing AI as a social actor occupying a specific (and non-neutral) position within the class structure.