Can AI Will? Practical Apperception and the Conditions of Artificial Moral Agency
Keywords: Freedom, Identity, Causality, Ethics, Technology
Abstract
The advent of AI systems surpassing average human intelligence has triggered philosophical debate over whether AI should be recognized as an artificial moral agent (AMA). In this paper, I critique some overoptimistic diagnoses fueled by rapid technological development and, by revisiting the Kantian concept of practical apperception, contend that the self-ascription of desire, rather than cognitive capacity, in which artificial intelligence already rivals human beings, constitutes a necessary condition of moral agency.
Even after Kaulbach's in-depth rediscovery of this concept and of its significance for Kantian ethics, practical apperception has not received the attention it merits, in contrast to its cognitive counterpart (Kaulbach 1976). Yet, just as the unification of sense perceptions in an objective representation, together with its ascription to oneself, sustains the transcendental self-consciousness of the 'I think,' which must be able to accompany whatever is thought, so too the unification of conflicting desires through rational self-legislation constitutes the basis of the Kantian concept of personality as developed in the Religion: a human being is a person insofar as she can freely choose the grounds determining her will and is thus accountable both for the resulting outcomes and for the volitional determination itself. In line with the Incorporation Thesis, Allison formulates the practical self-consciousness of the subject, who determines her will by freely taking up, i.e. incorporating, incentives into her maxim, as the 'I take.' However, the original formulation 'I will (Ich will),' suggested by Kaulbach and reiterated by Pieper, Stolzenberg, and Puls, offers a preferable interpretation of the parallel between theoretical and practical reason, given the direct correlation of practical reason with the faculty of desire (5:198) (Stolzenberg 2009; Pieper 2011; Puls 2013). Practical apperception of the 'I will' thus proves to be a necessary condition for moral agency, or personality in the terminology of the Religion, because it lays the functional foundation of practical self-consciousness: despite Kant's trenchant criticism of rational psychology in the first Critique, one can meaningfully refer to the 'willing I' that unites desires through rational self-legislation, mirroring the structure of the Transcendental Deduction, which validates the 'thinking I' as a condition of synthetic a priori judgments arising from the unification of sense perceptions with pure concepts of the understanding.
Setting aside the technological assessment of contemporary AI models, I argue that only once they possess the 'mental' capacity to form independent desires and to ascribe those desires to themselves from a first-personal perspective can they be considered moral agents, regardless of the extent to which they may surpass human beings in intelligence. As Schönecker succinctly puts it, submarines cannot swim as human beings do; they merely operate (Schönecker 2022). Likewise, on Kant's theory of the subject, AI models merely mimic human agency without being genuine agents, unless they themselves desire what they 'judge,' through computation, to be desirable.