Algorithmic Decision Making and Morally Permissible Risk Imposition

Social Justice
Ethics
Technology
Big Data
Sune Holm
University of Copenhagen
Michele Loi
University of Zurich

Abstract

Analyses of the outcomes of algorithmic decision-making tools in areas such as criminal justice have initiated an intense debate about what it means for an algorithm to be fair. While many scholars have published insightful papers on the problem of algorithmic fairness, there is still a lack of input to the debate from moral philosophy. In this paper we leverage recent debates about the morality of risk imposition to clarify the basic moral issue at the heart of the algorithmic fairness debate. We argue that the problem of algorithmic fairness, as illustrated in paradigmatic cases such as the COMPAS case, should be interpreted as a problem about when a person can reasonably reject a risk imposition. Thus we argue that an algorithmic decision-maker (ADM) fails to treat an individual X in a morally right way if and only if the ADM’s treatment imposes on X a risk of being misclassified that X could reasonably reject. This idea raises two central questions: 1) Why think that algorithmic fairness has to do with the ethics of risk imposition? 2) How should one understand the notion of a reasonable rejection of a risk imposition?

To see why algorithmic fairness is a problem about risk imposition, consider the fact that algorithmic decisions are decisions based on statistics. On the basis of a statistical model, the probability that, for example, an applicant for a loan will not default is calculated, and the applicant is classified as high or low risk, which in turn determines whether the applicant is granted the loan. In other words, the decision to grant or deny a loan is based on a probabilistic classification of the applicant. In the case of trained algorithms, their accuracy can be tested on historical data. This means that we can estimate the probability that an applicant will default conditional on being classified as low risk, and the probability that an applicant will not default conditional on being classified as high risk. To be subjected to an algorithmic decision is thus to be imposed a probability of being misclassified, and where such misclassification is associated with a harm, it is a case of being imposed a risk whose size can be calculated as the product of the harm and the probability of misclassification.

Sometimes risk imposition is morally wrong. According to contractualism, a risk imposition on X is morally permissible if and only if it is one that X could not reasonably reject. An applicant subjected to an algorithmic decision is imposed a risk of misclassification. In the paper we elaborate on and assess this interpretation of the algorithmic fairness question as a question about the permissible imposition of risks of misclassification. The risk of misclassification imposed on a person may differ depending on the group to which the person belongs, leading to the paradoxical results well known to machine learning theorists.
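
To make the risk-imposition reading concrete, here is a minimal sketch (not part of the paper; the counts, the label names, and the harm magnitude are all hypothetical) of how the relevant quantities would be computed from a trained classifier's confusion matrix on historical loan data: the probability of misclassification conditional on a predicted label, and the expected harm that a classification imposes on an applicant.

    # Hypothetical confusion matrix: keys are (actual outcome, prediction).
    counts = {
        ("repays",   "low_risk"):  800,   # correctly granted
        ("repays",   "high_risk"): 200,   # wrongly classified high risk
        ("defaults", "low_risk"):  100,   # wrongly classified low risk
        ("defaults", "high_risk"): 300,   # correctly classified high risk
    }

    # Which actual outcome makes each prediction correct.
    CORRECT = {"low_risk": "repays", "high_risk": "defaults"}

    def p_misclassified_given(prediction):
        """P(actual outcome contradicts the prediction | prediction)."""
        total = sum(n for (_, p), n in counts.items() if p == prediction)
        wrong = sum(n for (a, p), n in counts.items()
                    if p == prediction and a != CORRECT[prediction])
        return wrong / total

    # Risk imposed on an applicant classified as high risk: the probability of
    # being misclassified times the harm of a wrongful denial. The harm
    # magnitude is a placeholder in arbitrary units.
    HARM_OF_WRONGFUL_DENIAL = 1.0
    p = p_misclassified_given("high_risk")
    print(f"P(misclassified | high_risk) = {p:.2f}")          # 0.40
    print(f"Expected harm imposed = {p * HARM_OF_WRONGFUL_DENIAL:.2f}")

On these toy numbers, an applicant labelled high risk is imposed a 0.4 probability of being misclassified; multiplying by the harm of a wrongful denial gives the size of the imposed risk in the sense used above.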
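
The group-dependent "paradoxical results" mentioned above can likewise be illustrated numerically. The toy counts below are hypothetical; they show one instance of the well-known impossibility results: when two groups have different base rates of default, a classifier whose high-risk label is equally reliable in both groups (equal positive predictive value) must impose unequal false-positive rates on their non-defaulting members.

    def rates(tp, fp, fn, tn):
        """Positive predictive value and false-positive rate from raw counts."""
        return {"PPV": tp / (tp + fp), "FPR": fp / (fp + tn)}

    # Group A: 1000 applicants, 500 of whom would default (base rate 0.5).
    group_a = rates(tp=400, fp=100, fn=100, tn=400)
    # Group B: 1000 applicants, 200 of whom would default (base rate 0.2).
    group_b = rates(tp=160, fp=40, fn=40, tn=760)

    print("Group A:", group_a)  # {'PPV': 0.8, 'FPR': 0.2}
    print("Group B:", group_b)  # {'PPV': 0.8, 'FPR': 0.05}

Here a high-risk label is equally reliable in both groups (PPV 0.8), yet a non-defaulting member of group A is four times as likely as one of group B to be wrongly labelled high risk. It is exactly this kind of unequal risk imposition across groups that the contractualist question of reasonable rejection is meant to adjudicate.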