ECPR


Could an AI follow the moral law?: Developing a Midgleyan case against Kant

Cyber Politics
Ethics
Normative Theory
Tom Whyman
University of Liverpool



Abstract

Speculation about developments in AI is often concerned with the question of whether AIs can be moral subjects. This question is pertinent in part because, if AIs are or can be moral subjects, we might have certain duties towards them. But it is also important because it tells us something about the nature and value of human moral agency. If our moral agency is something ‘embodied’ – i.e. if it depends on our being physically constituted in a certain way – then it is surely very difficult to think that an AI could be a moral subject like us. By contrast, if our moral agency depends solely on our capacity to exercise rationality, it might be more straightforward to see how an AI could be a moral subject: reason here would be considered a single, unified thing shared by any rational being, and a ‘rational’ AI would simply be one which exercised reason in the appropriate way. In this paper, I want to point to Kant as someone whose work opens the door to AI moral agency, and then to Mary Midgley as someone whose work gives us valuable resources for critiquing this view. I will start by discussing the account Kant gives of practical reason in the Introduction to the Metaphysics of Morals. While the Kant we find there might well be far from the ‘empty formalist’ of Hegelian critique, he nonetheless suggests that moral knowledge not only can be had independently of ‘animal’ instinct or desire, but is in fact muddied by it. As Kant claims, practical reason ought ultimately to deal with things towards which we are able to have a “sense-free inclination” (6:213). This opens up the possibility that one might be considered rational, and so moral, without having had any sensory experience whatsoever. On this view, as I will argue, an AI might not only potentially be considered a moral agent; it might even be considered a superior sort of agent to human beings. I then describe the view of rational agency Midgley gives in Beast and Man (1979).
There, Midgley draws on the science of animal behaviour to show that (1) forms of rationality are pervasive across the animal kingdom, and (2) human reason is itself something that has evolved to fit human purposes. On the Midgleyan view, therefore, our ability to exercise reason does not and cannot spin frictionlessly free of our animal being: other animals, then, might be moral subjects, but there is no way we might conceive of an intelligence that is not also an organism. I will then conclude by sketching three reasons why we might prefer the Midgleyan account to the Kantian one. Firstly, it better fits our own experience of moral deliberation. Secondly, it better fits our experience of interacting with both other animals and extant AIs. Thirdly, it guarantees the moral value of human life in a way the Kantian account, in the age of AI, risks failing to.