Exploring the impact of ChatGPT on political science research and teaching

How does ChatGPT affect political science research and teaching? Experts examined pressing questions relating to the topic of Generative AI at our recent House Series roundtable held in association with our Methods School. Inessa Sakhno, Legal Researcher at Utrecht University, breaks down key takeaways from the discussion.

Inessa attended our courses Introduction to NVivo and Advanced Qualitative Data Analysis, both taught by Marie-Hélène Paré, as part of our 2023 Methods School summer programme.

What is the political bias of ChatGPT? Do we need mechanisms to detect AI – or should we rely only on trust? Moreover, should we use ChatGPT in the teaching process? Have we reached the point where it is necessary to start training students on ChatGPT? Can ChatGPT produce an acceptable paper, or even something more profound?

ChatGPT in the classroom

Despite initial scepticism, speaker Kilian Seng has found, somewhat unexpectedly, that ChatGPT can be a good starting point for student research design. We should, however, remember that ChatGPT's answers are probabilistic, shaped by how often a given question has been asked. We also need to know what it is capable of, bearing in mind that its capabilities are variable rather than constant because the tool evolves quickly. For instance, citation within texts is currently unavailable but might exist in future versions.

At the same time, ChatGPT creates an additional burden in the teaching process. Educators need to check whether a paper was written using AI, and even if they believe it was, they may be unable to prove it. This is particularly the case if the paper has no research question.

Speaker Thomas Robinson stated that ChatGPT is great for generating models. It is impressive that these models capture the underlying shape of the world as it already exists. But it remains unclear whether we can learn something genuinely new from such models, and whether they in fact reflect the real process by which we theorise.

A double-edged sword

Speaker Julia Schulte-Cloos then provided an overview of the benefits and challenges of using ChatGPT. The benefits include rethinking research design as the barrier to entry for writing code falls, bringing together researchers from different fields, and promise in the generation of potential stimuli.

Moreover, the variety of tasks that can be delegated to AI (including summarising, literature review, and so on) can speed up access to research.

Despite these inspiring advantages, there are also a number of problems. There is a lack of infrastructure for training users to write prompts and interact with AI. Societal bias is often repeated and reinforced, since ChatGPT works with openly available information. Results may not be replicable, and sharing information with AI raises data protection concerns.

Towards transparency

While the experts agree on ChatGPT's potential for research, such use should undoubtedly be made explicit to prevent abuse. At present we rely on trust that researchers will declare their use of AI. But different institutions and users are likely to take different approaches to this and related matters, a disparity that could put some users and researchers at a disadvantage.

Additionally, there is the problem of defining abuse. For example, does using AI to improve one's language require disclosure? The only answer to such questions is transparency, which lets us correctly identify any given author's personal contribution.

In conclusion, the roundtable participants consider that ChatGPT has the potential to be useful for research, even if we do not yet know the full range of those possibilities. There remain questions we need to answer, however, and this will most likely have to be done collectively and with greater attention to the possible threats of misuse.


Author

Inessa Sakhno

Inessa is a Legal Researcher at Utrecht University and a human rights defender with 10 years’ experience in Eastern Europe and Central Asia.

She received her BA and MA in Law. Inessa is the author of human rights reports on the rights of vulnerable groups including women, LGBTI+ and ethnic minorities in the region of Eastern Europe and Central Asia. Inessa’s research focuses on systemic discrimination, vulnerability, and intersectionality and includes empirical legal methodology.

Keywords: Methods, Education, Higher Education, Technology

29 September 2023