Is Perfect AI Angelic?
Ethics
Normative Theory
Technology
Abstract
Self-driving cars are considered “autonomous.” This could simply mean that they can operate independently of human guidance, but some claim that they can even act according to Kantian moral principles. Unlike human beings, they are not distracted by selfish interests, and so they may appear to resemble the angelic Kantian moral agent: an agent who is perfectly rational and necessarily does what is morally right. Since they have no competing inclinations, they are not even susceptible to temptation.
In this paper, I discuss the moral status of AI agents with deep learning neural networks (not AGI). I argue that even if AIs could consistently produce results that correspond to what the categorical imperative requires, they cannot even act “in accordance with duty” in the Kantian sense, since they lack genuine moral consciousness. They are merely shmoral agents: agents with a kind of ersatz morality. As shmoral agents, even perfect AIs lack inner value (dignity) and may be used as mere means. Still, their capacity to reason and solve problems, together with their ability to causally affect the world accordingly, creates direct quasi-obligations for them. However, since they lack moral patiency, they have at best indirect moral rights.
Kant unnecessarily made moral patiency dependent upon moral agency and thus excluded animals from the moral sphere. On a revised Kantian picture, AIs, even perfect ones, are neither Kantian angels, nor humans, nor animals, and they lack genuine moral status.

None of the processes of a perfect AI involves a genuine understanding of the reasons for its “actions.” When we act, we act from reasons, or, as Kant puts it, we have the capacity to act “from the representation of laws” (GMM, 4:412). The moral law accompanies all our practical representations. Even if, in a particular instance, we do not act from the moral law, we are still conscious of violating it. This consciousness has cognitive and emotive content: the awareness of a lack of practical universality and humiliating pain, respectively. Both are unavailable to AIs. Their “actions” are based not on concerns about practical universality but on statistical patterns and correlations. Like Pavlov’s dog, AIs are conditioned to do what we, not they, conceive of as practically universal. They do not have access to actual moral content but only to proxies that translate these considerations into an ersatz language (shmoral language); they therefore lack inner value (dignity).
This paper is divided into three parts. I first introduce the distinction between holy and unholy beings in Kant. I then attempt to make the best case for AI holiness. In the final part, I argue that even the most perfect AI is at best shmorally perfect. Thinking about moral perfection and AI not only helps us to delineate the moral space that we do not share with AI, but also reveals something significant: the moral law is not merely a “descriptive” law for morally perfect beings, as has often been claimed. Rather, it is cognized as normative without being experienced as obligatory.