ECPR

Framing bias in AI: politics, policies & power

Governance
Public Policy
Ethics
Technology
Big Data
Inga Ulnicane
De Montfort University
Aini Aden
De Montfort University

Abstract

Will Artificial Intelligence (AI) help to detect and reduce human bias? Or will it amplify bias and exacerbate discrimination and unequal treatment along long-standing structural inequalities? Bias is one of the key topics and concerns in debates about the politics, policies and power shifts related to AI. It raises numerous questions: How is bias in AI understood and defined? What are the historical, technical, demographic and other reasons behind bias in AI? What are the consequences and effects of bias in AI in fields such as governance and justice? And how can bias in AI be tackled and mitigated through regulatory, educational, scientific, participatory and other measures? To address these questions, this paper analyses AI policy documents, examining how they frame bias, its reasons and consequences, as well as ways to address it. It draws on an analysis of some 50 AI policy documents launched since 2016 by national governments, international organizations, think tanks and consultancies in Europe and the United States. To analyse these documents, the paper uses a policy framing approach to unpack the values, interests and theories involved in framing bias in AI policy documents. It also explores which organizations involved in AI policy debates are particularly vocal in raising concerns about bias and which tend to overlook this topic. The paper uses an intersectional approach to examine how interrelated aspects of gender, race and other diversity characteristics are invoked in the policy framing of bias in AI.