Promoting a trustworthy Artificial Intelligence? The role of stakeholder and citizen participation

Governance
Interest Groups
Technology
Islam Bouzguenda
Universiteit Twente
Adrià Albareda
Erasmus University Rotterdam
Abstract

Artificial Intelligence (AI) algorithms have become a prominent tool for governing many aspects of our public life. Despite their efficiency and the promise of enhanced public policies, the use of AI is not neutral. There is currently a flood of conflicting scientific evidence on the use of AI in citizen science and, in particular, citizen participation. The European Commission has recently published the European Union's "Ethics Guidelines for Trustworthy AI" (EGTAI), stressing that AI tools need to meet specific principles in order to be deemed trustworthy. One of the seven principles set out in the EGTAI focuses on diversity, non-discrimination, and fairness, and calls for the involvement of relevant stakeholders throughout the entire life cycle of AI tools. However, research and practice offer very limited knowledge of how stakeholders and citizens are involved in developing AI tools and of the consequences of such involvement for the quality and trustworthiness of AI algorithms. To address this gap, the present paper examines several case studies to identify how public and private organizations are applying the EGTAI guidelines, with a specific interest in stakeholder participation. The core question driving the research is: What are the consequences of stakeholder and citizen involvement for the trustworthiness of AI tools?