ECPR

The Bidirectional Relationship between Political Trust and AI Regulation: A Conceptual Framework

Democracy
Governance
Government
Political Participation
Regulation
Political Engagement
Technology
Theoretical
Melyssa Ortiz Quintairos Jorge
University of Southampton

Abstract

The regulation of artificial intelligence (AI) is a complex topic marked by divergent perspectives and a lack of clarity, particularly regarding the balance between fostering innovation and ensuring safety and societal well-being. As global authorities consider different approaches and trade-offs, the integration of AI into everyday life advances rapidly within a regulatory vacuum, raising concerns ranging from job displacement to the use of lethal autonomous weapons. Consequently, public trust has emerged as a recurring theme in international guidelines and national AI strategies. However, recent studies indicate that, in these discourses, public trust is frequently framed as a means to achieve strategic goals such as attracting investment and maintaining global AI leadership, thereby assuming an instrumental role (Krüger & Wilson, 2023). The central concern is that efforts to foster public trust may obscure the actual conditions that make AI systems trustworthy, such as transparency and accountability (Kerasidou et al., 2021). This is particularly critical because governments face the challenge of establishing policies for opaque, high-impact technologies at a time when contemporary democracies experience declining trust in their representative institutions (Valgarðsson et al., 2024), which may undermine regulatory legitimacy and public acceptance. Thus, political trust emerges as a crucial dimension of AI governance, fundamental to social cooperation and acting as a heuristic for citizens' decision-making (Devine, 2024). Yet despite growing attention to public trust in AI, debates on AI governance rarely address the bidirectional relationship between political trust and AI regulation. Drawing on a literature review, this theoretical-conceptual study employs a typological approach to examine how political trust influences regulatory acceptance and effectiveness, while investigating how AI regulation may, over time, affect political trust. 
To this end, it proposes an analytical framework combining two critical dimensions, political trust and institutional regulatory capacity, whose interaction produces four institutional configurations. The analysis yields three key insights: (1) high political trust combined with weak institutional capacity can generate regulatory complacency, where symbolic measures substitute for substantive action; (2) low political trust does not hinder effective regulation when strong institutional capacity is present, though it requires rapid and visible demonstrations of competence; and (3) while political trust may facilitate regulatory acceptance in the short term, the reverse movement, that is, regulation strengthening political trust, requires sustained effectiveness over time and depends on conditions that support trust transfer across domains. The paper argues that the instrumentalisation of public trust in AI can undermine both regulatory effectiveness and political trust in institutions. Crucially, the study indicates that AI can transform the architecture of democratic governance and reconfigure its core values and logics not through the technology itself, but fundamentally through how governments position public trust within their strategies. Finally, the paper identifies public engagement mechanisms, such as open consultations and investments in technological literacy, as critical socio-technical considerations that can partially compensate for deficits in either political trust or institutional capacity, functioning as bridges between regulatory performance and social acceptance across different institutional configurations.