Conversational Heteronomy and Moral Autonomy in the Age of Large Language Models

Cyber Politics
Government
Decision Making
Ethics
Technology
Big Data
Allan Gonzalez-Estrada
National University of Costa Rica
Allan.gonzalez.estrada@una.ac.cr



Abstract

One of the central discussions in today's Artificial Intelligence debates should not focus on whether large language models (LLMs) possess greater computational power, provide better answers to end users, or save time in consultations. Rather, the debate should be oriented toward major issues such as climate change, the impact of data centres, energy sources, migration, jobs, and, above all, the autonomy of human beings in their use of these linguistic models (Recital 29 of the EU AI Act). Against this background, I suggest that the former problems may ultimately depend on the last one: preserving human autonomy in moral decision-making, decisions that have direct practical implications in the political sphere, including cooperation among nations, governments, communities, and individuals.

Recent studies on human interaction with chatbots such as ChatGPT (Spitzer et al., 2025) reveal deeply concerning tendencies toward the delegation of cognitive faculties to LLMs. The central question, therefore, is how human autonomy can be analysed under a given moral framework. I approach this question through Kant's idea of moral autonomy, for several reasons. First, according to Kant, the use of reason as a faculty of self-legislation constitutes the cornerstone of moral decision-making. In contexts where the use of AI systems becomes widespread and the delegation of self-legislation appears to be a growing trend, a new form of conversational and cognitive heteronomy, seemingly imposed by these technologies, needs to be analysed: the systematic delegation of judgment and interpretation to a machine that, through statistical processes, merely mimics reasoning without possessing practical reason in the full sense.

The problem is therefore twofold. As Kant observed in the opening pages of What Is Enlightenment?, immaturity (Unmündigkeit) is the incapacity to use one's own understanding without the guidance of another. Following this argument, I suggest that we face a new form of tutelage: a digital tutelage that returns us to this state of immaturity. We hand over our judgment to a machine composed only of mechanical guides, algorithms that produce the appearance of reason or intelligence, and yet, beneath this deterministic mechanism, there is no space for freedom. It can therefore be suggested that we are, in a sense, surrendering our noumenal world, the realm where liberty is possible, to a system of causality. Causality and autonomy thereby become easily conflated, generating an epistemic and moral problem unprecedented in human history.

Hence, the Kantian idea of autonomy explored in this paper is diminished in the context of LLMs, while heteronomy is strengthened. Our thoughts and decisions are increasingly guided by external principles, and we thereby renounce our self-legislation, compromising our duties toward ourselves. The question, then, is not whether machines can imitate human thought, but whether modern societies can remain truly enlightened while entrusting their judgment to systems that only simulate reason, a situation that, in formal terms, may be described as the modal collapse of autonomy: the impossibility of fulfilling what one is morally obliged to do.
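The closing formal claim admits a compact rendering in standard deontic logic. A minimal sketch, assuming the usual notation (O for "it is morally obligatory that", \Diamond for practical possibility); the formalisation is an illustrative reading added here, not the paper's own:

\[
(O\varphi \rightarrow \Diamond\varphi) \;\land\; O\varphi \;\land\; \neg\Diamond\varphi \;\vdash\; \bot
\]

If Kant's "ought implies can" principle (O\varphi \rightarrow \Diamond\varphi) holds, and the obligation to self-legislate (O\varphi) persists while the systematic delegation of judgment to LLMs removes the practical capacity to do so (\neg\Diamond\varphi), a contradiction follows: the obligation becomes unfulfillable, which is the "modal collapse of autonomy" described above.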