Unfairness in AI anti-corruption tools: main drivers and consequences

Corruption
Ethics
Technology
Fernanda Odilla
Università di Bologna

Abstract

This paper discusses the concept of fairness in predictive AI-based anti-corruption tools (AI-ACTs) by identifying possible risks at different levels – individual, institutional and infrastructural – and their respective main sources of unfairness. It does so using empirical evidence from tools developed in Brazil to critically map challenges in three types of AI-ACTs: those estimating the risk of corrupt behaviour in public procurement, among civil servants, and in female straw candidacies in electoral competition. The article draws on 12 interviews with law enforcement officials directly involved in the development of anti-corruption technologies and on academic and grey literature, including official reports and dissertations on the tools used as examples. Findings suggest not only that AI-ACT developers have not been reflecting on potential risks when creating their tools, but also that the existing models are based on findings from past anti-corruption procedures and practices that may reinforce unfairness against historically disadvantaged groups, in the case of risk-scoring tools for straw candidates and for owners of supplier companies in public contracts, and against civil servants working in units with higher punishment records or affiliated with political parties. Although the tools under analysis do not make automated decisions without human supervision, it is worth noting that their algorithms are not open to external auditing.