Monday 5 to Friday 9 March 2018
09:00-12:30
15 hours over 5 days
The course will introduce participants to the family of quantitative text analysis methods in the ‘content analysis’ tradition, using a variety of examples from political science and related disciplines. It will cover the basic aspects of content analysis, starting with manual content analysis and continuing with an introduction to some of the most popular approaches to computer-assisted text analysis. The course will cover practical aspects of text analysis, such as creating coding schemes, selecting documents, assessing inter-coder reliability, scaling, and validating the text analysis output. The course will be taught in a mix of lectures and seminars, and participants will have the opportunity to practice with hands-on exercises. The majority of the exercises will be completed by following, step by step, code provided in the R statistical software, so previous knowledge of R is not necessary. In addition, participants will be able to present their own project in class and receive feedback.
Tasks for ECTS Credits
Kostas Gemenis is Senior Researcher in Quantitative Methods at the Max Planck Institute for the Study of Societies.
His research interests include measurement in the social sciences, and content analysis with applications to estimating the policy positions of political actors.
He is currently involved in Preference Matcher, a consortium of researchers who collaborate in developing e-literacy tools designed to enhance voter education.
Most social science concepts are not directly observable. Text analysis provides a useful method by which we can measure quantities of interest that are otherwise difficult to estimate. For instance, by analysing the speeches of legislators, we can classify them as charismatic, populist, authoritarian, liberal, and so on. Similarly, by analysing the content of newspaper editorials, we can infer whether the media in question were biased in favour of a particular candidate during an election campaign.
Text analysis is a specific case of content analysis, typically defined as a method whose goal is to summarize a body of information in the form of text, in order to make inferences about the actor behind this body of information. This implies that text analysis can be seen as a data reduction method, since its goal is to reduce the text material into more manageable bits of information. Text analysis can also be seen as a method for descriptive inference. Weber (1990, p. 9), for instance, defines content analysis as ‘a method that uses a set of procedures to make valid inferences from text’. The idea is that, by analysing the textual output of an actor, we can infer something about this actor. This conceptualization of content and text analysis implies that we can use it as a tool for measurement in the social sciences. In this view of content analysis we are concerned with replicability and objectivity (Neuendorf 2002, pp. 10-15), and we should therefore distinguish text analysis from other approaches such as discourse analysis, rhetorical analysis, constructivism, ethnography, and so on.
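The idea of text analysis as data reduction can be illustrated in miniature: a raw document is collapsed into a term-frequency table that is far easier to analyse than the original prose. The sketch below uses Python (the course itself works in R) and an invented example sentence.

```python
# Minimal sketch of text analysis as data reduction: collapse raw text
# into a term-frequency table. The example document is invented.
from collections import Counter
import re

def term_frequencies(text):
    """Lowercase, tokenize on letters/apostrophes, and count terms."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(tokens)

speech = "We will cut taxes. Lower taxes mean growth, and growth means jobs."
freqs = term_frequencies(speech)
print(freqs.most_common(2))  # 'taxes' and 'growth' each appear twice
```

The whole speech is reduced to a handful of counts, which can then feed any of the downstream methods discussed in the course.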
The course intends to familiarize participants with both manual and computer-assisted text analysis. Following Krippendorff (2004) and Neuendorf (2002), the course will introduce participants to the basic concepts and building blocks in content analysis designs. For instance, the following questions will be addressed and discussed during the course:
For manual text analysis, the course will also look at the often overlooked distinction between the analysis of manifest content and judgemental coding. For computer-assisted text analysis, the course will offer an introduction to a variety of popular methods, such as the use of content analysis dictionaries (including sentiment analysis), scaling methods (Wordscores, Wordfish), and supervised and unsupervised learning approaches (including topic models). The course will also discuss the relationship between reliability and validity, illustrate methods for estimating inter-coder reliability, and explore the links between manual and computer-assisted text analysis in terms of validation and training of supervised classification methods.
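To give a flavour of the dictionary methods mentioned above, here is a toy sentiment-scoring sketch in Python (the course uses R and established dictionaries such as Lexicoder; the word lists below are hand-made assumptions for illustration only).

```python
# Toy dictionary-based sentiment analysis: score a text by counting matches
# against hand-made positive and negative word lists (NOT a real dictionary).
import re

POSITIVE = {"growth", "prosperity", "hope", "success"}
NEGATIVE = {"crisis", "decline", "failure", "fear"}

def sentiment_score(text):
    """Return (positive - negative) / total tokens: a crude polarity measure."""
    tokens = re.findall(r"[a-z']+", text.lower())
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return (pos - neg) / len(tokens) if tokens else 0.0

editorial = "The crisis deepens, but there is hope for growth."
print(round(sentiment_score(editorial), 3))  # 2 positive, 1 negative hit
```

Real dictionaries contain thousands of entries and handle negation and weighting, but the core logic, counting matches against a predefined lexicon, is exactly this simple, which is why validation against manual coding is so important.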
The course will be taught in a mix of lectures and seminars, and participants will have the opportunity to practice with hands-on exercises. The examples used to illustrate the promises as well as the pitfalls of content analysis will draw on various applications across the social sciences (e.g. sentiment analysis of the press, frame analysis of social movements, estimating the positions of political actors, agenda-setting in the EU). The majority of the exercises will be completed by following, step by step, code provided in the R statistical software, so that previous knowledge of R is not necessary. In most of the seminars we will use RStudio; see https://www.rstudio.com/products/rstudio/download/ for download instructions. In addition, participants will be able to present their own project in class and receive feedback.
Participants are expected to be familiar with basic statistical concepts such as measures of central tendency (mean, median), dispersion (standard deviation), tests of association (Pearson’s r) and inference (χ2, t-test). This material is covered in the first few chapters of introductory statistics or data analysis textbooks. A useful example is Pollock P.H. III, The Essentials of Political Analysis, fourth edition (Washington, DC: CQ Press, 2012), Chapters 2, 3, 6, and 7. Some familiarity with the R statistical software is also desirable but not necessary. In most of the seminars we will use RStudio.
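The prerequisite statistics can be checked in a few lines. This Python sketch (the course itself uses R) computes the mean, standard deviation, and Pearson’s r on invented data using only the standard library.

```python
# The prerequisite statistics in miniature: mean, standard deviation,
# and Pearson's r, computed on invented data with the standard library.
from statistics import mean, stdev

x = [2.0, 4.0, 6.0, 8.0]
y = [1.0, 3.0, 5.0, 7.0]

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient."""
    mx, my = mean(xs), mean(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

print(mean(x))                    # 5.0
print(round(stdev(x), 3))         # sample standard deviation
print(round(pearson_r(x, y), 3))  # 1.0: x and y are perfectly linear
```

If each of these quantities is familiar, the statistical prerequisites are met.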
Each course includes pre-course assignments, including readings and pre-recorded videos, as well as daily live lectures totalling at least three hours. The instructor will conduct live Q&A sessions and offer designated office hours for one-to-one consultations.
Please check your course format before registering.
Live classes will be held daily for three hours on a video meeting platform, allowing you to interact with both the instructor and other participants in real-time. To avoid online fatigue, the course employs a pedagogy that includes small-group work, short and focused tasks, as well as troubleshooting exercises that utilise a variety of online applications to facilitate collaboration and engagement with the course content.
In-person courses will consist of daily three-hour classroom sessions, featuring a range of interactive in-class activities including short lectures, peer feedback, group exercises, and presentations.
This course description may be subject to subsequent adaptations (e.g. taking into account new developments in the field, participant demands, group size, etc.). Registered participants will be informed of any changes.
By registering for this course, you confirm that you possess the knowledge required to follow it. The instructor will not teach these prerequisite items. If in doubt, please contact us before registering.
Day | Topic | Details
---|---|---
1 | Introduction and manual coding of text; inter-coder reliability | Lecture (90 mins.), Seminar (90 mins.)
2 | Document pre-processing and dictionary methods; sentiment analysis in R | Lecture (90 mins.), Seminar (90 mins.)
3 | Scaling methods in text analysis; Wordscores and Wordfish in R | Lecture (90 mins.), Seminar (90 mins.)
4 | Supervised classification methods; machine/statistical learning in R | Lecture (90 mins.), Seminar (90 mins.)
5 | Unsupervised classification methods; topic models in R | Lecture (90 mins.), Seminar (90 mins.)
For the precise literature references, see the reference list below.

Day | Readings
---|---
1 | Hayes and Krippendorff (2007), Krippendorff (2004), Neuendorf (2002); optional: Benoit et al. (2015), Gemenis (2015)
2 | Grimmer and Stewart (2013), Laver and Garry (2000), Young and Soroka (2012)
3 | Grimmer and Stewart (2013), Laver et al. (2003), Slapin and Proksch (2008), Bruinsma and Gemenis (2017)
4 | Grimmer and Stewart (2013)
5 | Grimmer and Stewart (2013), Hopkins and King (2010), Van der Zwaan et al. (2016)
- R and R Studio
- Yoshikoder, Lexicoder, and Jfreq free software downloads
None
Benoit, Kenneth, Drew Conway, Benjamin E. Lauderdale, Michael Laver, and Slava Mikhaylov (2015) Crowd-sourced text analysis: reproducible and agile production of political data. American Political Science Review 110: 278-295.
Bruinsma, Bastiaan and Kostas Gemenis (2017) Validating Wordscores, https://arxiv.org/abs/1707.04737
Gemenis, Kostas (2015) An iterative expert survey approach for estimating parties’ policy positions. Quality & Quantity 49: 2291-2306.
Grimmer, Justin, and Brandon M. Stewart (2013) Text as data: The promise and pitfalls of automatic content analysis methods for political texts. Political Analysis 21: 267–297.
Hayes, Andrew F., and Klaus Krippendorff (2007) Answering the call for a standard reliability measure for coding data. Communication Methods and Measures 1: 77–89.
Hopkins, Daniel J., and Gary King (2010) A method of automated nonparametric content analysis for social science. American Journal of Political Science 54: 229-247.
Krippendorff, Klaus (2004) Content analysis: An introduction to its methodology, second edition. Thousand Oaks, CA: Sage, Chapters 5 (unitizing) and 7 (coding)
Laver, Michael, Kenneth Benoit, and John Garry (2003) Extracting policy positions from political texts using words as data. American Political Science Review 97: 311–331.
Laver, Michael, and John Garry (2000) Estimating policy positions from political texts. American Journal of Political Science 44: 619–634.
Neuendorf, Kimberly A. (2002) The content analysis guidebook. Thousand Oaks, CA: Sage, Chapter 1 (defining content analysis)
Slapin, Jonathan B., and Sven-Oliver Proksch (2008) A scaling model for estimating time-series party positions from texts. American Journal of Political Science 52: 705–722.
Van der Zwaan, J.M., M. Marx, and J. Kamps (2016) Validating cross-perspective topic modeling for extracting political parties’ positions from parliamentary proceedings. In Proceedings of ECAI 2016, pp. 28-36.
Young, Lori, and Stuart Soroka (2012) Affective news: The automated coding of sentiment in political texts. Political Communication 29: 205–231.
Summer School
R Basics
Introduction to Inferential Statistics: What you need to know before you take regression
Winter School
Automated Web Data Collection with R
Introduction to R (entry level)
Summer School
Python Programming for Social Scientists: Big Data, Web Scraping and Other Useful Programming Tricks
Automated Collection of Web and Social Data
Big Data Analysis in the Social Sciences
Winter School
Python Programming for Social Sciences: Collecting, Analyzing and Presenting Social Media Data