ECPR

‘From Now On, I Choose How to Act’: Conversational LLMs and the First-Personal Standpoint in Practical Philosophy

Political Theory
Ethics
Normative Theory
Technology
Chiara Mosti
Universitetet i Oslo

Abstract

In this paper, I approach Large Language Models (LLMs) from the perspective of practical philosophy. I argue that users of conversational LLMs are, in a sense that I will make clear, in a relationship with the chatbot. I ground my claim in a first-personal account of agency and ethics. Drawing on Kant's practical philosophy, I frame human-chatbot relationality as practically relevant without requiring chatbots to be moral agents or to reciprocate. Psychological research suggests that we often experience interactions with chatbots as having a relational quality (Skjuve et al. 2022). Questions then arise about how chatbots are trained to interact with human beings, and how they should be trained (Zimmerman et al. 2024). While psychologists study human-chatbot relationships (HCR) empirically, philosophers consider whether, in what sense, and on what grounds such relationships are normatively possible. Much of the scholarship seems to exclude such relationships because it relies on a favoured second-personal account of AI ethics. The second-personal standpoint grounds relationality in a symmetrical moral standing, where accountability and reciprocity can obtain (Darwall 2006); the human-chatbot relationship falls short of both reciprocity and accountability (van der Rijt et al. 2025). Third-personal accounts of ethics approach the matter by investigating whether agency and a moral standpoint can be attributed to chatbots: if chatbots are not agents, we are not in a relationship with them (Benossi & Bernecker 2022; Schönecker 2022). Second-personal and third-personal accounts therefore treat LLMs as tools or technologies that intervene in human-human relationships and that are, as such, morally permissible or impermissible (Aylsworth & Castro 2024; Fröding & Peterson 2025; Battisti 2025; Hanna 2025). There are exceptions. Relational AI Ethics argues that we are in a relationship with chatbots, grounding this claim in feminist ethics of care and in the primacy of context and practices (Coeckelbergh 2010; Jecker 2024, 2025). The same stance has been defended on first-personal, that is, phenomenological grounds (Puzio 2024). In this paper, I advance the claim that users of LLMs are in a practical or normative relationship with the chatbot by relying on a first-personal account of agency, in particular on first-personal interpretations of Kant's practical philosophy (Flikschuh 2017; Schapiro 2020). As Christine Korsgaard succinctly explains the first-personal standpoint: ‘The capacity for self-conscious reflection about our own actions confers on us a kind of authority over ourselves, and it is this authority which gives normativity to moral claims’ (Korsgaard 1996, 19-20). Rather than grounding relationality in third-personal definitions or second-personal reciprocity, I argue that the self-consciousness of the agent as such (the first-personal standpoint) provides a sufficient ground for claiming that, in a philosophical sense, we are in a practical relationship with chatbots, i.e., that chatbots practically and normatively ‘orient’ our agency. I then outline a normatively preferable form of relationality between humans and chatbots, grounded in the practical self-understanding or self-consciousness of human agents. I conclude by showing that a first-personal framework for AI ethics is promising both for understanding the practical relevance of LLMs and for identifying their potential and dangers.