In November 2022, the public release of GPT-3.5 (ChatGPT) marked a step change for generative large language models (LLMs). Suddenly, humans could 'talk' to computers in plain language, turning them into powerful assistants. Since then, academic research has been transformed by new opportunities to automate tasks, simulate scenarios, and analyse text at unprecedented scale. As open-weight and open-source models begin to rival proprietary ones (Le Mens and Gallego 2025; Leo et al. 2025), questions of replicability, transparency, and accessibility have moved to the centre of political science.
Computational political scientists have used machine learning for decades. But LLMs bring qualitatively new capabilities, such as vast context windows and a sophisticated understanding of natural language. These allow LLMs to assist with a wide variety of tasks, including information extraction, labelling, sentiment analysis, concept formation, and synthetic data creation. Anecdotal evidence suggests that LLMs have even become sophisticated enough to overrule annotations initially made by human experts (Ornstein et al. 2025: 272). While not without drawbacks, such as biases ingrained in models through training data (Motoki et al. 2024), LLMs open exciting novel research avenues (e.g., Halterman 2025; Meher and Brandt 2025; Steinert and Kazenwadel 2025). One theme of this Workshop is to contribute to the development of best practices for the application of LLMs in political science.
In addition to questions of how best to employ LLMs, the debate must move beyond whether LLMs can help us. The 'utility' phase of adoption is already well underway across the social sciences. What now comes into view are the deeper implications of what we might call the AI turn in political science. How do LLMs reshape the very questions we ask, the kinds of knowledge we produce, and the power relations embedded in global politics and our own academic practices? By focusing on these dimensions, we move past the initial fascination with LLMs as helpful tools to examine their broader impact on knowledge production, political practice, and planetary sustainability.
Halterman, A. (2025) ‘Synthetically generated text for supervised text analysis’, Political Analysis 33(3): 181–194.
Le Mens, G. and Gallego, A. (2025) ‘Positioning Political Texts with Large Language Models by Asking and Averaging’, Political Analysis 33(3): 274–282.
Leo, R. D., Zeng, C., Dinas, E. and Tamtam, R. (2025) ‘Mapping (A)Ideology: A Taxonomy of European Parties Using Generative LLMs as Zero-Shot Learners’, Political Analysis 33(4): 456–463.
Meher, S. and Brandt, P. T. (2025) ‘ConflLlama: Domain-specific adaptation of large language models for conflict event classification’, Research & Politics 12(3): 20531680251356282.
Motoki, F., Pinho Neto, V. and Rodrigues, V. (2024) ‘More human than human: measuring ChatGPT political bias’, Public Choice 198(1): 3–23.
Ornstein, J. T., Blasingame, E. N. and Truscott, J. S. (2025) ‘How to train your stochastic parrot: large language models for political texts’, Political Science Research and Methods 13(2): 264–281.
Steinert, C. V. and Kazenwadel, D. (2025) ‘How user language affects conflict fatality estimates in ChatGPT’, Journal of Peace Research 62(4): 1128–1143.
1: How can LLMs be used without sacrificing key tenets such as replicability, transparency, and accessibility?
2: How does the reliance on LLMs reshape research agendas in political science?
3: How do AI infrastructures shape global power, sovereignty, and governance?
4: How should we confront the environmental and material footprint of LLMs?
5: What impact do LLMs have on political science, and which ethical rules should govern their use?
Papers
- Political behavior in LLM agents: experimental insights into governance, global power asymmetries, security risks, and the dynamics of knowledge construction
- Rating Fragility: Project Success and Contextual Bias in Fragile States
- Magic Words or Methodical Work? Challenging Conventional Wisdom in LLM-Based Political Text Annotation
- The Rational Design of International Institutions in the Age of AI: Turning Continents into Islands?
- 'Does this really work?' Introducing an LLM-based workflow as a conceptual response to validity concerns in political institutionalism
- Strategic Signaling and U.S. Public Statements in Secessionist Conflicts
- Measuring the Depth of Trade Agreements with Large Language Models
- A Framework for Labelling Complex Political Concepts Using Open-Weight LLMs: Evidence from Illiberalism in Party Communication
- Large Language Models and the Political Science Research Process: Insights on Potentials and Risks from Reflexive Self-Ethnographic Case Studies
- Language Models in Sustainability Governance: An AI-driven Policy Monitoring Framework for Hydrogen
- Making Semi-Structured Interviews Scalable: An Experimental Evaluation of AI Conversational Interviewing
- Seeing the Forest: Corporate Sustainability Reporting and the EU Circular Economy Agenda
- Ranking Business Trade Preferences Using GPT
- The power of conversation: A survey experiment on AI-generated information and political behaviour
- Measuring Contestation and Support of the Liberal International Order: A Cross-National Computational Approach
- Embedded geopoliticization: The EU's global trade strategy in an age of challenged multilateralism
- Webs of Power: Unveiling Autocratic Elite Networks and Their Influence on Leader Constraints Using Large Language Models (LLMs)
- Is AI Geopolitical? Mapping Political Bias of LLMs around International Events