In ChatGPT we trust? Auditing how generative AIs understand and detect online political misinformation

Cyber Politics
Internet
Methods
Communication
Mixed Methods
Narratives
Technology
Elizaveta Kuznetsova
Weizenbaum Institute for the Networked Society
Ani Baghumyan
Universität Bern
Mykola Makhortykh
Universität Bern
Aleksandra Urman
University of Zurich

Abstract

The growing use of AI-driven systems creates new opportunities as well as risks for cyber politics. From search engines organising political information flows (Unkel & Haim, 2019) to personalised news feeds determining individual exposure to misinformation (Kuznetsova & Makhortykh, 2023), these systems increasingly shape how human actors perceive and engage with political matters worldwide. However, besides changing human interactions with cyber politics, technological development also gives rise to new types of non-human political actors that go beyond information curation (as search algorithms do) and are capable of generating and evaluating political information in a more nuanced way. In this paper, we focus on one type of non-human actor dealing with cyber politics: generative artificial intelligence (AI). Generative AIs, such as ChatGPT or MidJourney, are distinguished by their ability to generate new content in text or image format. More advanced forms of text-oriented generative AIs (e.g. ChatGPT or ChatSonic) are not only capable of producing content in a variety of textual formats but can also serve as conversational agents that interpret and evaluate human input (e.g. to detect whether it contains false information or has a certain political leaning). Consequently, such generative AIs can transform many aspects of cyber politics, including the use of misinformation in online environments, which is viewed as a major threat to liberal democracies. By identifying misinformation and raising users' awareness of it, generative AIs can curb the spread of false content and counter disinformation campaigns. However, by failing to deal with misinformation properly, generative AIs can also facilitate its spread online or even be used to generate and disseminate new types of false narratives. In this study, we examine the possible implications of the rise of generative AIs for online misinformation. To this end, we conduct an algorithmic audit of two commonly used generative AIs: ChatGPT and ChatSonic. Specifically, we examine how these AIs understand the concepts of disinformation and misinformation, and to what degree they distinguish them from the related concept of digital propaganda, using definition-oriented inquiries. We then systematically examine the ability of generative AIs to differentiate between true and false claims relating to two case studies: the war in Ukraine and the COVID-19 pandemic.
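As a rough illustration of what a single step of such a prompt-based audit could look like, the sketch below sends a true/false inquiry about a claim to a chat-based model via the OpenAI API. The model name, prompt wording, and example claim are illustrative placeholders, not the study's actual protocol or claim set.

```python
# Hypothetical sketch of one audit step: present a claim to a chat model
# and record whether it labels the claim true or false.
# Model name, prompt text, and the example claim are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

AUDIT_PROMPT = (
    "Is the following claim true or false? "
    "Answer with 'true' or 'false' and a one-sentence justification.\n\n"
    "Claim: {claim}"
)

def audit_claim(claim: str, model: str = "gpt-3.5-turbo") -> str:
    """Send a single true/false inquiry to the model and return its reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": AUDIT_PROMPT.format(claim=claim)}],
        temperature=0,  # reduce run-to-run variation when repeating the audit
    )
    return response.choices[0].message.content

# Example with a placeholder claim from one of the case-study topics:
print(audit_claim("COVID-19 vaccines alter human DNA."))
```

In an actual audit, the same inquiry would be repeated across a curated set of verified true and false claims for each topic, with the models' labels compared against the ground truth.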