Can AI-Based Corrections Restore Trust in Electoral Integrity?
Democracy
Elections
Communication
Survey Experiments
Technology
Abstract
Electoral disinformation poses a growing challenge to democratic legitimacy, yet evidence on how best to correct false beliefs and rebuild trust in electoral institutions remains mixed. This paper evaluates whether artificial intelligence (AI)–mediated interventions can reduce disinformation endorsement and enhance trust in electoral processes, compared with more conventional informational approaches. In a preregistered survey experiment conducted in Australia, respondents first articulated their views on electoral integrity in open-ended form, focusing on issues such as electoral fraud, ballot security, and the regulation of election-related disinformation. These responses were summarised by an AI system and presented back to respondents as a personalised statement of their beliefs, which then served as the basis for the experimental intervention.
Participants were randomly assigned in equal numbers to one of six conditions: a pure control condition; a placebo condition involving unrelated content; a factsheet correction presented without attribution; a factsheet explicitly attributed to the Australian Electoral Commission (AEC); an interactive AI-based conversational engagement without attribution; and an interactive AI-based conversational engagement explicitly attributed to the AEC. This design allows us to disentangle the effects of information format (static factsheet versus interactive AI), institutional endorsement (attributed versus unattributed), and engagement intensity, while benchmarking these effects against both placebo and no-treatment baselines.
Outcome measures include changes in respondents’ confidence in their own beliefs about electoral integrity, endorsement of common disinformation claims, and evaluations of the trustworthiness of electoral commissions, electoral processes, and the political system more broadly. We also examine perceived qualities of the AI interaction, such as respectfulness, transparency, and the absence of hidden motives, as potential mechanisms linking treatment exposure to attitudinal change.
The findings show that AI-mediated conversational engagement can reduce disinformation endorsement and increase trust in electoral processes relative to control and placebo conditions, particularly when interactions are perceived as procedurally fair and responsive. However, explicit attribution to the electoral authority moderates these effects, suggesting that institutional endorsement can both enhance credibility and trigger scepticism among some respondents. By systematically comparing AI-based and traditional correction strategies under varying attribution conditions, this paper contributes to debates on political communication, digital governance, and democratic resilience, highlighting both the promise and the limits of AI-assisted interventions in contested informational environments.