SB108 - Case Study Research: Method and Practice

Instructor Details


Ingo Rohlfing

Institution:
University of Cologne

Instructor Bio

Ingo Rohlfing is Professor of Political Science, Qualitative Methods, at the Bremen International Graduate School of Social Sciences (BIGSSS). He holds a PhD in Political Science.

Substantively, he does research on party competition and party organisations. Methodologically, he works on the case study method, process tracing, QCA and multi-method research.

He has published in journals such as Comparative Political Studies, Sociological Methods & Research and West European Politics.

Ingo is the author of Case Studies and Causal Inference, part of the ECPR Research Methods Series, published in association with Palgrave Macmillan.

  @ingorohlfing


Course Dates and Times

Monday 31 July - Friday 4 August and Monday 7 - Friday 11 August

14:00-17:30

Please see Timetable for full details.

Location
Building: N13 Room: 312
Prerequisite Knowledge

Yin, Robert K. (2008): Case Study Research: Design and Methods. Thousand Oaks, Calif.: Sage Publications.

Lange, Matthew (2012): Comparative-Historical Methods. SAGE: Los Angeles.

Short Outline

This course approaches qualitative case studies from the perspective of method and practice. The goal is to understand the advantages and challenges of the case study method and to detail the tasks involved in all stages of the research process. The course has three interrelated components (see the day-to-day schedule). First, the lecture/seminar segments introduce a specific topic at a basic and an advanced level. Second, “lab sessions” give participants the opportunity to apply the new insights to their own projects; this is achieved through discussions of the participants’ projects in small groups and among the entire class. Third, the assignment portion involves in-class discussions of short assignments (simple methodological questions) related to the participants’ studies and to published case studies from different fields within political science. This helps develop an idea of how case studies are presented and done in empirical research. At the end of the course, participants will be able to implement sound case studies and to critically evaluate published research.

 

NOTE: The course is about case studies as a tool for generating causal inferences. The course is not about case studies in the hermeneutic, interpretive, etc. traditions that eschew causal terminology, and it does not discuss actual data collection in much detail, e.g. issues related to the preparation of interviews or archival research.

Long Course Outline

Case studies have a long tradition in the social sciences. They have been subject to a great deal of both methodological appraisal and criticism. This course addresses the case study method from a comprehensive perspective and focuses on its methodological and practical dimensions. The course is useful for participants at every stage of their research. Participants at the beginning are provided with the skills to carefully plan their study. Those who are in the midst of their analysis can evaluate their case study in the light of what they learn and plan the next steps so as to meet the standards of good case study research.

 

In the opening session on day 1, we introduce several dimensions that are central to all case study analyses and important to understand. First, the research goal of a case study can be the development of new hypotheses, the testing of hypotheses, or the modification of existing hypotheses in order to solve a puzzle. Second, the level of analysis can be the cross-case level (often understood as the macro level), the within-case level (i.e. the micro level), or both at the same time. Third, we introduce the importance of difference-making and counterfactuals for causal inference and elaborate on the difference between frequentist and Bayesian causal reasoning.

 

On day 2, we proceed with a basic discussion of causation and causal inference and of when and how we can claim that an observed empirical association reflects a causal relationship. We introduce the criterion of difference-making as the benchmark for inferring causal relationships. We further elaborate on the distinction between causal effects and causal mechanisms and their role in causal inference and case studies. In addition, a thorough treatment of causal inference requires a consideration of different notions of causal effects. A distinction is made between correlations (e.g., the more X, the more Y) and set-relations (e.g., if X, then Y), as these currently represent the two major perspectives on causal effects in the social sciences. Finally, we introduce the basics of Bayesianism, as it has become increasingly important in the recent literature on case studies and process tracing. As will be seen in the subsequent sessions, the means of implementing a case study may significantly depend on the causal effect that one deems to be in place. One major goal of this session is to encourage the participants to reflect on the specific type of causal relationship they believe to be in place and to formulate their theoretical expectations accordingly.

 

On day 3, we discuss the importance of delineating the population underlying your case study (provided you aim to generalize). You will see that the specification of the population requires three elements: the definition of scope conditions guiding your analysis, and the positive and negative conceptualizations of the phenomenon that you want to explain in your case study. We start with the distinction between different types of scope conditions and their implications for empirical research. Then, we clarify the distinction between the positive and the negative outcome and its role in drawing the boundaries of the population.

 

Day 4 and day 5 are concerned with two related issues. First, we discuss different types of cases and their suitability for answering different research questions. The types of cases that are defined and illustrated include the typical case, the deviant case, and the most-likely case. Second, we consider case selection strategies for each type of case. A discussion of both issues helps the participants determine what type of study and case selection strategy they need in order to answer their research question. Since there is a menu of types and case selection strategies, and both are linked to different modes of causal inference (see day 2), we spend two days on these topics.

 

The second week starts on day 6 with a session on the basics of comparative case studies, including a consideration of John Stuart Mill’s (in)famous method of difference and method of agreement. The session focuses on the benefits and limitations of cross-case comparisons in the light of the existing arguments for and against such designs. These criticisms can be subsumed under the rubric of the ‘small-n problem’: the claim that the number of cases in case studies is too small to generate valid causal arguments. This session enables the participants to understand the construction of proper comparisons and to avoid common mistakes in comparative case studies.

 

On day 7, we continue with a discussion of advanced issues in comparisons. We detail multiple strategies of mitigating the problems that we learned about on day 6. These strategies include the realization of within-unit and longitudinal comparisons, an increase in the number of cases, binary measurement of causes (as opposed to multi-categorical measurement), and the transformation of causes into scope conditions.

 

Day 8 proceeds with an elaboration of the basics of within-case analysis and of process tracing in particular. We relate the idea of a causal mechanism to process tracing, discuss different variants of process tracing and within-case analysis, and consider how process tracing might help diminish some of the problems that one confronts in comparative case studies.

 

On day 9, we continue with process tracing from a more practical perspective. Different types of sources are introduced (primary sources, interviews, etc.) with a focus on their respective advantages and disadvantages. The goal is to heighten awareness of the problems of fact-gathering and the need to handle the collected information with caution. Based on the discussion of sources and evidence and of types of cases (see day 4 and day 5), we elaborate on how to systematically use evidence for causal inference. Specific attention is paid to the logic of Bayesian causal inference (see day 2), and we introduce a step-by-step procedure for making sound causal inferences based on observations.

 

Finally, day 10 covers the generalization of causal inferences. The opportunities and limits of generalization are discussed in combination with techniques for extending insights beyond the cases under scrutiny. Since case study researchers are often criticized for how they generalize, it is important to address these problems head-on. In addition, we hold a Q&A during which you will have the opportunity to ask any questions that came up during the course and remain open at the end.

 

SUMMARY: WHAT THE COURSE IS AND IS NOT ABOUT

The course is useful for participants who:

- Want to generate causal inferences for one or multiple cases. Whether or not you intend to generalize your inferences is of secondary importance because this is only one element of the research process (and this course).

  • If you eschew causal terminology because you are doing interpretivist, post-structuralist, etc. case studies, you may be better served by one of the many ECPR courses covering these philosophies of science. If you are unsure, please get in touch with me.

- Want to learn the case study method and the careful construction of case study designs. The course has a strong practical element because you will learn tips and tricks for avoiding common mistakes in empirical research, i.e., the practice of case studies. But:

  • We do not go into the details of collecting evidence. We discuss the handling of evidence from a methodological perspective, which has some practical implications. But if you only want to learn how to organize and conduct interviews, transcribe recordings, or undertake archival research, you are better advised to take the ECPR interview course (in the case of interviews).

 

All methodological discussions will be supplemented with examples from case studies in different subfields of political science. Participants are expected to read the obligatory readings carefully and to contribute to the debate in class. Depending on whether the provisions on ECTS points are the same as in 2015 (I cannot guarantee this), you have to take a written exam on the last Saturday to get 2 credit points. The written exam works as follows: in advance of the Summer School, I will give you three published case studies (journal articles). On the day of the exam, you have to choose one of the texts and answer five method-related questions (including several subquestions) on it.

In order to get 3 points, you must complete the small assignments given to the participants. This is less work than it might sound because many assignments build on discussions and work that we do in class anyway and refer to your own ongoing research project. In addition, you must submit a research proposal of 10-15 pages after the course in which, based on the lab parts of the course, you apply the lessons of the case study course to your research project (detail your concept, justify your case selection, your cross-case comparison, etc.). Participants without a research project who want to get ECTS points will be given an equivalent task. The paper has to be sent to me within approximately four weeks of the end of the course (the exact deadline is to be determined by the ECPR). To receive 5 points, you have to meet the combined requirements for 2 and 3 ECTS points.

 

Before the course starts, participants are invited to fill in a short questionnaire on their research project, their prior methodological expertise, and their expectations regarding the course.

Day-to-Day Schedule

Day 
Topic 
Details 
Monday: Introduction to essentials of the case study method

Lecture (~90 min):

Course goals

Essentials and concepts central to the case study method (as it is discussed in this course)

Dimensions of case study research

Levels of analysis, causal effects and causal mechanism

 

Lab (~60 min):

Dimensions in participants’ research

Tuesday: Causation and modes of causal inference

Lecture (~100 min):

Associations and causal inference

Causal effects: correlation vs. set relations

Basics of Bayesianism

 

Lab (~60 min):

What are causal effects and causal mechanisms in participants’ research?

How can participants infer causation in their study?

Wednesday: Concepts and specification of population

Discussion of assignment from day 2 (~30 min)

 

Lecture (~100 min):

Scope conditions

Positive concepts, negative concepts, and continua

 

Lab (~60 min):

Identification of scope conditions and concepts in participants’ projects

Thursday: Types of cases and case selection

Discussion of assignment from day 3 (~45 min)

 

Lecture (~90 min):

Characteristics of types of cases (typical, deviant, most-likely etc.)

Selection rules for different types of cases

 

Lab (~45 min):

Reflection on general case selection strategies

Friday: Types of cases and case selection

Discussion of assignment from day 4 (~45 min)

 

Lecture (~90 min):

Characteristics of types of cases (typical, deviant, most-likely etc.)

Selection rules for different types of cases

 

Lab (~45 min):

Identification of the appropriate type of case and selection rule in participants’ projects

Monday: Cross-case comparisons: basics

Lecture (~90 min):

Mill’s method of agreement and method of difference

Benefits and limitations of cross-case comparisons (the ‘small-n problem’)

Construction of proper comparisons

 

Lab (~60 min):

Comparative designs in participants’ projects

Tuesday: Cross-case comparisons: advanced issues

Discussion of assignment from day 6 (~45 min)

Lecture (~120 min):

Units of analysis and time in comparisons

Multi-case comparisons

Binary and categorical measurement in comparisons

Role of scope conditions

Wednesday: Within-case analysis: method & practice

Assignment/lab (~90 min, done in class):

Identifying observations in empirical research

 

Lecture (~90 min):

Process tracing & collecting observations

Tying evidence to concepts and inferences

Pros and cons of different types of sources

Thursday: From observations to inferences

Discussion of assignment from day 8 (~45 min)

Lecture (~140 min):

Unique and contradictory inferences

Friday: Generalization, summary and Q&A

Lecture (~90 min):

Strategies of generalizing causal inferences

Summary of course

Q&A (~90 min)

Day-to-Day Reading List

Day 
Readings 
Monday 1

Rohlfing, Ingo (2012): Case Studies and Causal Inference: An Integrative Framework. Basingstoke: Palgrave Macmillan: chap. 1.

 

Voluntary

Gerring, John (2004): What Is a Case Study and What Is It Good For? American Political Science Review 98 (2): 341-354.

Tuesday 2

Brady, Henry E. (2008): Causation and Explanation in Social Science. Box-Steffensmeier, Janet M., Henry Brady and David Collier (eds.): The Oxford Handbook of Political Methodology. Oxford: Oxford University Press: 217-226.

Rohlfing, Ingo (2012): Case Studies and Causal Inference: An Integrative Framework. Basingstoke: Palgrave Macmillan: sections 2.3 and 2.4.

Rohlfing, Ingo (2012): Case Studies and Causal Inference: An Integrative Framework. Basingstoke: Palgrave Macmillan: chap. 8. (will be online)

 

Voluntary (causation and causal inference)

Machamer, Peter, Lindley Darden, and Carl F. Craver (2000): Thinking about mechanisms. Philosophy of Science 67 (1): 1-25.

Machamer, Peter (2004): Activities and Causation: The Metaphysics and Epistemology of Mechanisms. International Studies in the Philosophy of Science 18 (1): 27-39.

Gerring, John (2005): Causation: A Unified Framework for the Social Sciences. Journal of Theoretical Politics 17 (2): 163-198.

Hedström, Peter and Petri Ylikoski (2010): Causal Mechanisms in the Social Sciences. Annual Review of Sociology 36 (1): 49-67.

Lebow, Richard Ned (2010): Forbidden Fruit: Counterfactuals and International Relations. Princeton: Princeton University Press: chap. 2.

Goertz, Gary and James Mahoney (2012): A Tale of Two Cultures: Contrasting Qualitative and Quantitative Paradigms. Princeton: Princeton University Press: chap. 2.

 

Voluntary (Bayesianism)

Bennett, Andrew (2010): Process Tracing and Causal Inference. Brady, Henry E. and David Collier (eds.): Rethinking Social Inquiry: Diverse Tools, Shared Standards. Lanham: Rowman & Littlefield: 207-219.

Beach, Derek and Rasmus Brun Pedersen (2012): Process Tracing Methods. Ann Arbor: University of Michigan Press: chaps. 5, 6.

Rohlfing, Ingo (forthcoming): Comparative hypothesis testing via process tracing. Sociological Methods & Research (http://dx.doi.org/10.1177/0049124113503142)

Rohlfing, Ingo (2013): Bayesian causal inference in process tracing. SSRN working paper (http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2301453)

Wednesday 3

Goertz, Gary (2006): Social Science Concepts: A User's Guide. Princeton: Princeton University Press: 27-53.

Ragin, Charles C. (2000): Fuzzy-Set Social Science. Chicago: University of Chicago Press: chap. 2.

 

Voluntary

Sartori, Giovanni (1970): Concept Misformation in Comparative Politics. American Political Science Review 64 (4): 1033-1053.

Walker, Henry A. and Bernard P. Cohen (1985): Scope Statements - Imperatives for Evaluating Theory. American Sociological Review 50 (3): 288-301.

Adcock, Robert and David Collier (2001): Measurement Validity: A Shared Standard for Qualitative and Quantitative Research. American Political Science Review 95 (3): 529-546.

Thursday 4

Rohlfing, Ingo (2012): Case Studies and Causal Inference: An Integrative Framework. Basingstoke: Palgrave Macmillan: chap. 3. (will be online)

 

Voluntary (types of cases)

Lijphart, Arend (1971): Comparative Politics and the Comparative Method. American Political Science Review 65 (3): 682-693.

Eckstein, Harry (1975): Case Study and Theory in Political Science. In Greenstein, F.I. and N. W. Polsby (eds): Strategies of Inquiry. Handbook of Political Science, vol. 7. Reading: Addison-Wesley: 92-123.

Skocpol, Theda (2003): Doubly Engaged Social Science. In Mahoney, James and Dietrich Rueschemeyer (eds): Comparative Historical Analysis in the Social Sciences. Cambridge: Cambridge University Press: 407-428.

 

Voluntary (case selection)

King, Gary, Robert O. Keohane and Sidney Verba (1994): Designing Social Inquiry: Scientific Inference in Qualitative Research. Princeton: Princeton University Press: 128-148.

Collier, David and James Mahoney (1996): Insights and Pitfalls: Selection Bias in Qualitative Research. World Politics 49 (1): 56-91.

Seawright, Jason and John Gerring (2008): Case-Selection Techniques in Case Study Research: A Menu of Qualitative and Quantitative Options. Political Research Quarterly 61 (2): 294-308.

Friday 5

Same as day 4

Monday 8

Rohlfing, Ingo (2012): Case Studies and Causal Inference: An Integrative Framework. Basingstoke: Palgrave Macmillan: chap. 4.

 

Voluntary

Lijphart, Arend (1971): Comparative Politics and the Comparative Method. American Political Science Review 65 (3): 682-693.

Lieberson, Stanley (1991): Small Ns and Big Conclusions: An Examination of the Reasoning in Comparative Studies Based on a Small Number of Cases. Social Forces 70 (2): 307-320.

Savolainen, Jukka (1994): The Rationality of Drawing Big Conclusions Based on Small Samples: In Defense of Mill's Methods. Social Forces 72 (4): 1217-1224.

Lieberson, Stanley (1994): More on the Uneasy Case for Using Mill-Type Methods in Small-N Comparative Studies. Social Forces 72 (4): 1225-1237.

George, Alexander L. and Andrew Bennett (2005): Case Studies and Theory Development in the Social Sciences. Cambridge: MIT Press: Chap. 8.

Mahoney, James (1999): Nominal, Ordinal, and Narrative Appraisal in Macrocausal Analysis. American Journal of Sociology 104 (4): 1154-1196.

Tarrow, Sidney (2010): The Strategy of Paired Comparison: Toward a Theory of Practice. Comparative Political Studies 43 (2): 230-259.

Slater, Dan and Daniel Ziblatt (2013): The Enduring Indispensability of the Controlled Comparison. Comparative Political Studies 46 (10): 1301-1327.

Tuesday 9

Rohlfing, Ingo (2012): Case Studies and Causal Inference: An Integrative Framework. Basingstoke: Palgrave Macmillan: chap. 5.

 

Voluntary

George, Alexander L. (1979): Case Studies and Theory Development: The Method of Structured, Focused Comparison. In Lauren, Paul Gordon (ed): Diplomacy. New Approaches in History, Theory, and Policy. New York: Free Press: 43-68.

Mahoney, James (2000): Strategies of Causal Inference in Small-N Analysis. Sociological Methods & Research 28 (4): 387-424.

George, Alexander L. and Andrew Bennett (2005): Case Studies and Theory Development in the Social Sciences. Cambridge: MIT Press: chap. 9.

Gerring, John, and Rose McDermott (2007). An Experimental Template for Case Study Research. American Journal of Political Science 51 (3): 688-701.

Anckar, Carsten (2008): On the Applicability of the Most Similar Systems Design and the Most Different Systems Design in Comparative Research. International Journal of Social Research Methodology 11 (5): 380-401.

Wednesday 10

Bennett, Andrew and Jeffrey Checkel (2014): Process Tracing: From Methodological Roots to Best Practices. Bennett, Andrew and Jeffrey Checkel (eds.): Process Tracing in the Social Sciences: From Metaphor to Analytic Tool. Cambridge: Cambridge University Press: 1-37.

 

Voluntary

Collier, David, Henry Brady and Jason Seawright (2004): Sources of Leverage in Causal Inference: Toward an Alternative View of Methodology. In: Brady, Henry  and David Collier (eds): Rethinking Social Inquiry. Diverse Tools, Shared Standards. Rowman & Littlefield: Lanham: 250-264.

Checkel, Jeffrey T. (2008): Process tracing. A. Klotz and D. Prakash (eds.): Qualitative methods in international relations: a pluralist guide. Basingstoke: Palgrave Macmillan: 114-127.

Gerring, John (2008): The Mechanismic Worldview: Thinking inside the Box. British Journal of Political Science 38: 161-179.

Hall, Peter A. (2008): Systematic Process Analysis: When and How to Use It. European Political Science 7 (3): 304-317.

Beach, Derek and Rasmus Brun Pedersen (2013): Process-Tracing Methods. Ann Arbor: University of Michigan Press: chap. 1.

Thursday 11

Thies, Cameron G. (2002): A Pragmatic Guide to Qualitative Historical Analysis in the Study of International Relations. International Studies Perspectives 3(4): 351-372.

Lieshout, Robert H., Mathieu L. L. Segers and Anna M. van der Vleuten (2004): De Gaulle, Moravcsik, and the Choice for Europe: Soft Sources, Weak Evidence. Journal of Cold War Studies 6 (4): 89-139.

 

Voluntary

Collier, David (2011): Understanding Process Tracing. Political Science & Politics 44 (4): 823-830.

Collier, David (2010): Process Tracing: Introduction and Exercises. Online document (http://polisci.berkeley.edu/people/faculty/CollierD/Proc%20Trac%20-%20Text%20and%20Story%20-%20Sept%2024.pdf), accessed 11/01/06.

Mahoney, James (2012): The Logic of Process Tracing Tests in the Social Sciences. Sociological Methods & Research 41 (1): 570-597.

Friday 12

Rohlfing, Ingo (2012): Case Studies and Causal Inference: An Integrative Framework. Basingstoke: Palgrave Macmillan: chap. 9.

 

Voluntary

Rueschemeyer, Dietrich (2003): Can One or a Few Cases Yield Theoretical Gains? In Mahoney, James and Dietrich Rueschemeyer (eds) Comparative Historical Analysis in the Social Sciences. Cambridge: Cambridge University Press: 305-332.

Ruzzene, Attilia (2012): Drawing Lessons from Case Studies by Enhancing Comparability. Philosophy of the Social Sciences 42 (1): 99-120.

Schatz, Edward and Elena Maltseva (2012): Assumed to be Universal: The Leap from Data to Knowledge in the American Political Science Review. Polity 44 (3): 446-472

Software Requirements

None.

Hardware Requirements

None.

Literature

Literature you might have a look at in preparation for the course (in addition to the literature in the reading list above):

Lange, Matthew (2012): Comparative-Historical Methods. SAGE: Los Angeles.

Levy, Jack S. (2008): Case Studies: Types, Designs, and Logics of Inference. Conflict Management and Peace Science 25: 1-18.

Van Evera, Stephen (1997): Guide to methods for students of political science. Ithaca: Cornell University Press.

Yin, Robert K. (2008): Case Study Research: Design and Methods. Thousand Oaks, Calif.: Sage Publications.

The following other ECPR Methods School courses could be useful in combination with this one in a ‘training track’.
Recommended Courses Before

Research Designs

Introduction to QCA

Expert Interviews

Introduction to Qualitative Data Analysis with Atlas.ti

Introduction to NVivo for Qualitative Data Analysis

Recommended Courses After

Advanced Multi-Method Research

Advanced Process-Tracing

Expert Interviews

Introduction to Qualitative Data Analysis with Atlas.ti

Introduction to NVivo for Qualitative Data Analysis

Advanced QCA

Additional Information

Disclaimer

The information contained in this course description form may be subject to subsequent adaptations (e.g. taking into account new developments in the field, specific participant demands, group size etc.). Registered participants will be informed in due time in case of adaptations.

Note from the Academic Convenors

By registering for this course, you certify that you possess the prerequisite knowledge required to follow it. The instructor will not teach these prerequisite items. If you are unsure whether you possess this knowledge to a sufficient level, we suggest you contact the instructor before proceeding with your registration.

