Automating Anticorruption? Algorithmic Opacity as a Challenge for a Public Ethics of Office Accountability

Institutions
Political Theory
Corruption
Decision Making
Ethics
Normative Theory
Emanuela Ceva
University of Geneva
Maria Carolina Jimenez
University of Geneva

Abstract

This paper explores how the opacity of Machine Learning (ML) algorithms may undermine anticorruption efforts aimed at upholding an institutional ethics of office accountability. ML algorithms have been increasingly employed to address such institutional dysfunctions as corruption and bureaucratic inefficiency. In the context of anticorruption efforts, they have frequently been used to estimate the risk of corrupt behavior among public officers. High scores in corruption risk indicators are generally used as “red flags” triggering formal investigations that may lead to the targeted officers’ prosecution and/or dismissal. This integration of ML algorithms into the fight against corruption is indicative of a punitive, rule-based legalistic approach, which is currently predominant across anticorruption theories and practices. This approach is characterized by an ex post logic, mainly targeted at forms of corrupt behavior that entail some formal rule violation (which ML algorithms are capable of identifying). However, such an approach to anticorruption has two important limits. First, it overlooks ethically relevant instances of corruption that do not entail formal rule-breaking (e.g., favoritism in the allocation of welfare services). Second, it struggles to perform an ex ante function that may give ethical guidance to officers who occupy roles at risk of corruption (e.g., some authority for public procurement). An alternative approach sees anticorruption primarily as a matter of an institutional ethics of office accountability. In this approach, anticorruption also requires mobilizing public officers to engage in practices of answerability by which they can call each other to respond for the uses they have made of their powers of office (e.g., the four eyes principle, whistleblowing). We discuss how ML algorithms may hinder efforts to uphold an institutional ethics of office accountability as an essential anticorruption component. We identify two forms of opacity characteristic of ML algorithms: inherent and intentional. Inherent opacity is caused by the inability of ML algorithms to produce explanations of specific decisions that can be understood by humans. Intentional opacity is caused by proprietary protections, since ML algorithms are often shielded by means of, e.g., patents, copyrights, and trademarks. We then explore two ways in which these forms of opacity constitute a challenge to upholding an institutional ethics of office accountability. First, the realization of such an ethics requires that the reasons on which public decisions are made be accounted for in order to vindicate them as coherent with institutional mandates. Inherent and intentional opacity may undermine this condition by preventing officers from accessing the reasons underlying specific algorithmic outputs used to inform public decisions. Second, a key element of an institutional ethics of office accountability is to promote the direct engagement of officers in checking each other’s uses of their powers of office and taking direct forward-looking responsibility for the working of their institution. Inherent and intentional forms of opacity generate obstacles to the attribution of this kind of responsibility. In fact, they may even trigger the privatization of certain domains of public decision-making through the monopoly of private actors over automated decision-making technologies.