Public Trust in Artificial Intelligence: An Interdisciplinary Scoping Review
Keywords: Knowledge, Public Opinion, Technology
Abstract
As artificial intelligence (AI) tools are increasingly integrated into sectors with pre-existing trust concerns—including governance, healthcare, and science communication—there is a growing need to understand the role public trust plays in the realization of AI’s benefits and harms. Public trust in AI is necessarily shaped by the technical characteristics of a given AI tool, the institutional characteristics of the sector in which it is implemented, and the social and political characteristics constituting specific publics. This suggests a need for interdisciplinary research capable of simultaneously accounting for these interdependent factors. Undertaking this work, however, requires first operating from shared conceptualizations of trust, public trust, and AI. To address these challenges, this research provides an interdisciplinary scoping review designed to map variations in how public trust in AI is defined, measured, explained, and addressed. The review systematically accounts for disciplinary variation, geographic context, sector of implementation, and AI application type, among other factors, in order to clarify the landscape of assumptions and approaches.
The research methodology is guided by the PRISMA extension for scoping reviews (PRISMA-ScR). A search of online databases yielded 87 included publications drawn from a range of disciplines, including social science, computer science, medicine, and business. These publications span a variety of sectors, such as public administration, healthcare, transportation, policing, and urban planning. Data charting is currently underway, and preliminary results are expected for presentation at the panel. Data will be analyzed using a mixed-methods approach to content analysis in order to synthesize definitions of trust and public trust, identified influences on trust formation, and proposed interventions for creating public trust in AI technologies and their implementing institutions.