Justice in artificial intelligence and migration border control: Mapping transparency in “high-risk AI” systems between communitarianism and egalitarianism
Human Rights
Migration
Political Theory
Social Justice
Liberalism
Technology
Refugee
Abstract
The debate about justice in AI in the public sector has been fuelled by several problematic applications. For example, the Austrian Public Employment Service used AI to predict job seekers' employment prospects and intended to allocate resources, such as job training, on that basis. Civil rights actors criticized the system for marginalizing people who were already discriminated against in the job market, for example women, mothers, or migrants (Chris Köver, Netzpolitik.org, 2020). From a software engineering point of view, the solution seems simple, since the algorithm merely has to be rerun with another variable. Nevertheless, the issue is “what is that other variable? How do you work around the bias and injustice that is inherent in that society?” (Heidi Ledford, Nature, 2019).
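To make the difficulty concrete, the following minimal Python sketch uses synthetic data and hypothetical feature names (it does not reproduce the actual Austrian system) to show why “rerunning the algorithm with another variable” is not enough: once a proxy feature correlates with the protected attribute, removing the attribute itself leaves the discriminatory pattern largely intact.

```python
# Illustrative sketch (synthetic data, hypothetical feature names): dropping a
# protected attribute and retraining does not remove historical bias, because
# a correlated "proxy" feature lets the model reconstruct it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

migrant = rng.binomial(1, 0.3, n)                 # protected attribute
region = np.where(rng.random(n) < 0.8, migrant,   # proxy: aligned with the
                  rng.binomial(1, 0.5, n))        # attribute 80% of the time
skills = rng.normal(0.0, 1.0, n)                  # legitimate predictor

# Historical labels encode discrimination: migrants were hired less often
# than equally skilled non-migrants.
p_hire = 1.0 / (1.0 + np.exp(-(skills - 1.5 * migrant)))
hired = rng.binomial(1, p_hire)

def selection_gap(X, y, protected):
    """Difference in predicted hiring rates between the two groups."""
    pred = LogisticRegression().fit(X, y).predict(X)
    return pred[protected == 0].mean() - pred[protected == 1].mean()

X_full = np.column_stack([skills, region, migrant])
X_blind = np.column_stack([skills, region])       # "rerun without the variable"

print("gap with protected attribute:   ", selection_gap(X_full, hired, migrant))
print("gap without protected attribute:", selection_gap(X_blind, hired, migrant))
# The second gap remains substantial: the proxy carries the bias forward.
```

In this toy setting the selection-rate gap between migrants and non-migrants remains substantial when the protected column is dropped, because the proxy carries much of the same information; choosing “that other variable” is therefore a normative question, not merely an engineering one.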
This paper focuses on the case of migration border control, scrutinising the standards of the EU Artificial Intelligence (AI) Act proposed by the European Commission and combining them with normative principles stemming from egalitarianism and communitarianism. Crucial questions about the development of AI standards can be answered by focusing on normative theories of justice, which are able to delve into the high-risk areas of AI technologies in order to shape the different variables applied to AI systems.
Given the inherent dangers of AI technologies, the EU has attempted to establish standards for AI systems through an EU AI Act, aiming to guarantee “trustworthy AI” systems that fulfil seven key requirements, “transparency” among them (European Commission, Report “Ethics guidelines for trustworthy AI”, April 2019). How can one measure “transparency” across different cases of “high-risk” AI systems related to justice, immigration and law? For example, how can the “transparency” standard be measured in the high-risk AI technologies that control migration entry at the borders? How can it be evaluated normatively?
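What “measuring transparency” could even mean is itself contested: the guidelines decompose the requirement into traceability, explainability and communication, but prescribe no metric. The following Python sketch is therefore a hypothetical operationalisation of such an audit for a border-control system; all field names and the pass/fail logic are assumptions for illustration, not the Act's wording.

```python
# Hypothetical transparency audit of a hypothetical border-control AI system.
# The EU guidelines name traceability, explainability and communication as the
# components of transparency but define no numeric metric; encoding them as a
# pass/fail checklist is an illustrative assumption.
from dataclasses import dataclass

@dataclass
class TransparencyAudit:
    data_provenance_documented: bool  # traceability: data sets and decisions logged
    decisions_explainable: bool       # explainability: human-readable reasons given
    ai_use_disclosed: bool            # communication: persons told an AI is involved

    def unmet_components(self) -> list[str]:
        checks = {
            "traceability": self.data_provenance_documented,
            "explainability": self.decisions_explainable,
            "communication": self.ai_use_disclosed,
        }
        return [name for name, ok in checks.items() if not ok]

audit = TransparencyAudit(
    data_provenance_documented=True,
    decisions_explainable=False,      # e.g. an opaque entry risk score
    ai_use_disclosed=True,
)
print("unmet transparency components:", audit.unmet_components())
# -> ['explainability']
```

Even such a crude checklist makes visible where a normative evaluation must begin: a system can log everything (traceability) and still give travellers no intelligible reason for a refusal (explainability).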
Taking border migration control as an example, several conflicting issues arise. In international relations theory, two main ethical approaches can be applied to the entry of refugees into a country in relation to the rights and interests of the members of the community: first, partiality (e.g. communitarianism, conservatism) and second, impartiality (e.g. global liberalism, egalitarianism). Under partiality, the members of a community have a moral advantage over non-members, while under impartiality the rights and interests of the members of a community are as important as those of non-members (e.g. migrants). Under the partiality approach states should act as “cultural communities”, while under the impartiality approach states should act as “cosmopolitan moral actors” (Gibney 2004, pp. 23-24, 59-60). In the same vein, if an AI system were to apply open- or closed-border migration policies, what would be the relevant criteria for programming that system so that it respects the “transparency” standard? Would it act according to partiality or impartiality norms? The approaches of communitarianism (partiality) and egalitarianism (impartiality) will be applied to the transparency of AI systems in border control, as sketched below.
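To see how these norms would surface inside an actual system, consider the following deliberately crude Python sketch; all names, weights and the linear form are illustrative assumptions, not a proposed policy. The partiality/impartiality choice becomes a single auditable parameter of an admission-scoring function, exactly the kind of variable a transparency standard would have to expose.

```python
# Hypothetical sketch: the partiality/impartiality choice is not an abstract
# backdrop but a concrete, auditable parameter of a border-control scoring
# function. All names and weights below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Applicant:
    protection_need: float   # 0..1, e.g. assessed risk of persecution
    community_ties: float    # 0..1, e.g. language or family links to the state

def admission_score(a: Applicant, member_weight: float) -> float:
    """member_weight encodes the normative stance:
    ~0.0 -> impartiality (egalitarian: protection need alone counts),
    ~1.0 -> partiality (communitarian: fit with the community dominates)."""
    return (1 - member_weight) * a.protection_need + member_weight * a.community_ties

applicant = Applicant(protection_need=0.9, community_ties=0.1)
print("egalitarian score:  ", admission_score(applicant, member_weight=0.0))  # 0.90
print("communitarian score:", admission_score(applicant, member_weight=0.8))  # 0.26
```

The same applicant receives very different scores depending on the weight, so under either communitarian or egalitarian programming the transparency question becomes concrete: is that weight disclosed, justified, and contestable?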