ECPR

Measurement: Approaches Bridging Concepts and Data

Course Dates and Times

Monday 17 – Friday 21 February 2019, 14:00 – 17:30 (finishing slightly earlier on Friday)
15 hours over five days

Kim Sass Mikkelsen

ksass@ruc.dk

University of Roskilde

Some people think of measurement as simply assigning numbers to cases. Sometimes it is. However, often measurement gets much more complicated.

Many of the concepts we deal with in the social sciences are not immediately visible to us, and the phenomena that are do not, typically, correspond exactly to the concept we are after.

Other concepts – most obviously attitudes, meaning, and beliefs – we have to measure using statements from people who do not know our concepts and are biased in numerous conscious and unconscious ways.

This course introduces case-based (qualitative), variance-based (quantitative), and interpretivist approaches to thinking about and handling these issues. Themes include the logic of latent variable modelling, the psychology of responding to questions, and mediating between everyday language and scholarly terminology. As the course makes clear, each approach handles these themes somewhat differently, and sometimes fundamentally differently.

Tasks for ECTS Credits

2 credits (pass/fail grade) Attend at least 90% of course hours, participate fully in in-class activities, and carry out the necessary reading and/or other work prior to, and after, class.

3 credits (to be graded) As above, plus complete one task (tbc).

4 credits (to be graded) As above, plus complete two tasks (tbc).


Instructor Bio

Kim Sass Mikkelsen is associate professor of public administration and politics at Roskilde University in Denmark, where he teaches public administration and management.

He is a methods pluralist, having contributed to qualitative methods development, and he teaches case study and statistical methods.

Substantively, Kim studies public sector human resource management in a number of countries, from Uganda to Estonia to Nepal to Brazil, using surveys of public servants and managers.

Broadly conceived as the link between concepts and data, measurement is indispensable for all empirical social science. Sometimes measurement is easy, but most of the time it is not.

A study of local public health may count the number of health centers or the number of staff working in such centers in a city. But what is this a measure of? Is it a good measure?

A study of employee engagement among firefighters may roll out a standard nine-item engagement battery in a survey. But why so many items? Why those nine? And to what extent can we believe what the firefighters reply?

A study of constructive deviance among nurses may examine how they construct meaning from their interactions with clients and how they work around or counteract rules. But how do we elicit such meaning from their everyday language? After all, few nurses will ever use the term constructive deviance.

This course introduces measurement issues from variance-based, case-based, and interpretivist perspectives as well as common measurement problems for social science research.

The social sciences tend to deal with concepts – like public health capacity, employee engagement, and constructive deviance – that are not immediately visible. Moreover, the sources of information we have to measure these concepts are rarely direct; they are often statements or stories from people who may not be responding to what they are asked, who may hide what they believe from researchers or even from themselves, and who do not speak the language of the social science academy.

We will examine in detail topics surrounding the often problematic business of social science measurement, including:

1.    Can everything that matters be measured?
2.    Content validity
3.    Measurement validity
4.    Social desirability bias
5.    Context effects on responses in surveys and interviews
6.    Latent variable modelling (confirmatory factor analysis and structural equation modelling)
7.    Measurement issues in process tracing and case studies
8.    Judgement and theory in historical measurement
9.    Interpretative mediation between everyday language and scientific concepts
10.    The transferability of qualitative and quantitative measurement standards to interpretative research.

The course is structured around three approaches: variance-based, case-based, and interpretative.


Day 1

We introduce the main differences between the three approaches in terms of concepts and measurement. We talk about the extent to which each approach leans on – or can lean on – the others, and the extent to which they conflict with each other either fundamentally or in common practice. 

Day 2

We tackle human beings as sources of information. We draw on a large literature, mainly from survey methodology, on sources of bias when asking questions of people. These biases include social desirability bias, where people report distorted views of their attitudes or behaviours, whether strategically or to present themselves in a better light to interviewers or to themselves, and context effects, where prior questions influence answers to subsequent ones. We discuss how these and other biases apply to research relying on a variety of tools, including interviews and participant observation.

Day 3

We dive deep into variance-based approaches to measurement. In particular, we examine in depth the latent variables perspective on measurement developed in psychology. In this approach, the notion that we cannot directly observe many of our concepts is tackled head on and modelled statistically using confirmatory factor analysis (CFA) and structural equation modelling (SEM). While the session itself will not teach you how to apply these tools, an additional, voluntary workshop during the week will provide an introduction to CFA and SEM in the lavaan package for the R statistical environment.
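As a taste of what that optional workshop covers, here is a minimal, hypothetical one-factor CFA sketch in lavaan, using the HolzingerSwineford1939 example data bundled with the package (the factor and indicator names come from that example, not from the course):

```r
# Load lavaan (install.packages("lavaan") if it is not yet installed).
library(lavaan)

# One latent factor ('visual' ability) measured by three observed
# indicators; '=~' reads "is measured by" in lavaan model syntax.
model <- 'visual =~ x1 + x2 + x3'

# Fit the CFA to lavaan's built-in example data and inspect factor
# loadings and overall fit statistics.
fit <- cfa(model, data = HolzingerSwineford1939)
summary(fit, fit.measures = TRUE, standardized = TRUE)
```

Swapping in your own survey data and indicator names turns the same few lines into a first measurement model for your own concept.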

Day 4

We dive equally deep into case-based approaches to measurement. In particular, we look at measurement in case study research, whether that research is based on historical sources or interviews. We talk about common measurement pitfalls in process tracing research, and apply process tracing tests to attempt to get around them. Moreover, we talk about the roles that theory, historiography, and expert judgement play in historical case study measurement.

Day 5

We dive into interpretative approaches to measurement. We look at the extent to which the advice from Days 3 and 4 applies to this approach, and talk about what standards do apply where the interpretative approach differs. We talk about taking meaning building seriously and what this means for concepts and for measures of those concepts. Finally, we debate the feasibility and problems of quantification in interpretative work.


We strongly encourage you to bring your own research questions, concepts, hypotheses, and proposed measures – even if you are only in the early stages of research – to the course on Day 1.

This course will help you measure core concepts in your research. There will be plenty of opportunity to reflect on measurement issues in your own research and to develop ways of overcoming them.

Basic understanding of statistical methods.

Some knowledge of concept formation would be useful.

Day Topic Details

Day 1: Approaches to measurement

We map the parameters of measurement debates across approaches. You will learn important similarities and differences in how measurement is discussed in variance-based, case-based, and interpretative social science – and you will learn how these differences matter.

Day 2: Measuring people

We look at problems related to using people as measurement devices, as we do in interviews, surveys, and sometimes document analysis. We discuss a range of important problems related to recall, social desirability, self-deception, and more.

Day 3: Latent variables

We consider a very common way of approaching measurement: as a set of links between latent constructs we cannot observe and indicators we can. We talk about confirmatory factor analysis and structural equation modelling as variance-based answers to this problem.

Day 4: Measurement in case studies

We consider measurement issues in causally oriented case study methods, such as comparative historical analysis and process tracing. We talk about test types and how far the latent variables framework can take us in case study work.

Day 5: Measuring meaning

We consider measurement in interpretivist social science. We talk about the extent to which ‘measurement’ even applies here, and – more importantly – how interpretivists mediate between concepts and conversations with real people.

Day Readings
1

Goertz, G., & Mahoney, J. (2012)
A Tale of Two Cultures
Princeton University Press

Yanow, D. (2003)
Interpretive Empirical Political Science: What Makes This Not a Subfield of Qualitative Methods
Qualitative Methods, 1(2), 9–13

Mosley, L. (Ed.) (2013)
Interview research in political science
Cornell University Press, Chapter 9

Ahram, A. I. (2013)
Concepts and measurement in multimethod research
Political Research Quarterly, 66(2), 280–291

2

Tourangeau, R., Rips, L. J., & Rasinski, K. (2000)
The Psychology of Survey Response
Cambridge: Cambridge University Press, pp. 1–22

Zaller, J., & Feldman, S. (1992)
A simple theory of the survey response: Answering questions versus revealing preferences
American Journal of Political Science, 36(3), 579–616

Nederhof, A. J. (1985)
Methods of coping with social desirability bias: A review
European Journal of Social Psychology, 15(3), 263–280

Hjortskov, M. (2017)
Priming and context effects in citizen satisfaction surveys
Public Administration, 95(4), 912–926

3

Adcock, R., & Collier, D. (2001)
Measurement validity: A shared standard for qualitative and quantitative research
American Political Science Review, 95(3), 529–546

Brown, T. A., & Moore, M. T. (2008)
Confirmatory Factor Analysis
In R. H. Hoyle (Ed.), Handbook of Structural Equation Modeling
Guilford Press, pp. 361–379

Raykov, T., & Marcoulides, G. A. (2012)
A first course in structural equation modeling
Routledge, chapter 1

Bozeman, B., & Su, X. (2015)
Public service motivation concepts and theory: A critique
Public Administration Review, 75(5), 700–710

4

Collier, D. (2011)
Understanding process tracing
PS: Political Science & Politics, 44(4), 823–830

Lustick, I. S. (1996)
History, historiography, and political science: Multiple historical records and the problem of selection bias
American Political Science Review, 90(3), 605–618

Schedler, A. (2012)
Judgment and measurement in political science
Perspectives on Politics, 10(1), 21–36

Mosley, L. (Ed.) (2013)
Interview research in political science
Cornell University Press

Fairfield, T. (2013)
Going where the money is: Strategies for taxing economic elites in unequal democracies
World Development, 47, 42–57

5

Yanow, D., & Schwartz-Shea, P. (2015)
Interpretation and method: Empirical research methods and the interpretive turn
Routledge, Chapters 6 and 8 

Bevir, M., & Kedar, A. (2008)
Concept formation in political science: An anti-naturalist critique of qualitative methodology
Perspectives on Politics, 6(3), 503–517

Barkin, J. S., & Sjoberg, L. (Eds.). (2017)
Interpretive quantification: methodological explorations for critical and constructivist IR
University of Michigan Press, Chapter 5
 

Software Requirements

For those interested in the optional introduction to lavaan, basic prior knowledge of the R environment is necessary.

A good introduction can be found here, and the software itself, plus the more user-friendly extension RStudio, can be downloaded here and here (R is required to run RStudio).

AWAITING LINKS FROM INSTRUCTOR