Thinking for Yourself: Duties to Oneself in the Age of AI

Knowledge
Decision Making
Ethics
Normative Theory
Technology
Thomas Nys
University of Amsterdam
Marjolein Lanzing
University of Amsterdam
Tijn Smits
University of Amsterdam

Abstract

‘Sapere aude! (Dare to be wise!) Have courage to make use of your own understanding!’ (QE, 8:35). With this motto, Kant ushers in the age of Enlightenment. Today, its relevance has not waned, as technological developments increasingly challenge our willingness to think for ourselves. With the rapid introduction of generative AI, people are outsourcing more of their intellectual labor to machines. This raises the question of how we should relate to AI and other technological innovations that are developing towards human levels of intelligence. Kant holds that individuals have a duty to themselves to respect and cultivate their rational capacities (G, 4:422-3; 4:430; MM, 6:386-8; 6:444-6). We believe this to be a valuable guideline for the ethical use of AI. This duties-to-oneself approach is meant to complement existing considerations based on what we owe to others.

Reflection on the ethical use of AI in the intellectual domain is typically articulated along broadly consequentialist lines. Common concerns include harm to people’s privacy and intellectual property, biases and unfair treatment, and the lack of creativity and authenticity (Floridi 2013; Benjamin 2019; Da Pelo 2025). Kantian approaches are typically used to question whether machines can be moral in some respect (Chakrabarty & Bhuyan 2024; Sanwoolu 2025; White 2022). Instead, we ask whether we can be moral while engaging with AI and related technologies. This duties-to-oneself approach reveals ethical concerns related to the outsourcing of human tasks, especially in the intellectual domain. Duties to oneself have recently received more attention in the Kant literature, overcoming technical and interpretive obstacles (Davies 2024; 2025; Eckert-Kuang 2024; Schaab 2021). This paves the way for their application.

In this paper, we take up Kant’s wide duty to develop one’s capacities and take inspiration from several concrete practices that conflict with respect for oneself as a rational being. For instance, Kant holds that one may not degrade oneself (e.g., by selling oneself) or stupefy oneself (e.g., through excessive drinking) (MM, 6:435; 6:427). Moreover, contemporary Kantians have argued that agents have a doxastic responsibility to autonomously base their beliefs on epistemically sound sources (Cohen 2024). Against this background, we are concerned about practices that outsource intellectual labor. People are increasingly using AI to write speeches, perform self-evaluations at work, or complete exams and research proposals. Moreover, AI is used to help us decide whom to vote for, which social causes to support, and what to look for in a romantic partner. These practices may have bad effects, but we are first and foremost concerned about the disrespect they express towards one’s own rationality and the misplaced attribution of epistemic authority to AI.

While these developments raise concerns about the cultivation of intellectual capacities, the opposite might also be argued: sound use of AI may enhance our cognitive capacities (Nyholm 2024), and self-tracking may support our duty of self-knowledge (Leuenberger 2024). Thus, a duties-to-oneself approach can provide a valuable framework for assessing whether AI and related technologies support or undermine our rational functioning.