ECPR
Causal Inference in the Social Sciences I: Randomised Controlled Trials and Matched Designs

Course Dates and Times

Monday 29 July – Friday 2 August

09:00–10:30 / 11:00–12:30

Dániel Horn

horndnl@gmail.com

Eötvös Loránd University

This one-week course provides a thorough introduction to the most widely used methods of causal inference (also known as program evaluation). This is an applied methods course, with equal emphasis on methods and their applications.

The target audience is practitioners and would-be practitioners of policy evaluation who want to measure the effects of policy interventions and policy changes. We especially recommend the course to researchers in social sciences (PhD students and young scholars) interested in measuring the effect of interventions and policy changes.

Methods covered

  1. Randomised Controlled Trials (intent to treat effects, imperfect compliance, local average treatment effects, encouragement design, power calculations)
  2. Matching (exact matching, matching on the propensity score and more)
  3. Methods based on Linear Regression (ensuring common support, functional form choices; regression versus matching).

ECTS Credits for this course, and the tasks required for additional credits

1 additional credit – pass a quiz in advance each day.

2 additional credits – as above, plus complete a take-home paper.


Instructor Bio

Dániel Horn is a research fellow at the Centre for Economic and Regional Studies of the Hungarian Academy of Sciences, and an associate professor at the Department of Economics, Eötvös Loránd University, Budapest.

In addition to economics courses, he has taught statistics, introductory Stata, and various public policy design and evaluation courses for over five years at the PhD, MA and Bachelor's levels.

He has been conducting educational impact assessments for over a decade. His research areas include the economics of education, social stratification and educational measurement.


The knowledge and skills developed in this course are essential for understanding and effectively participating in policy evaluations. They are also necessary for anyone who works with the results of such evaluations: those who commission evaluations and must judge the feasibility and methodological soundness of competing proposals, and those who use evaluation results and need to know their validity and reliability.

The ideal student has some statistical background and is not afraid of working with data but does not have a firm methodological background in program evaluation or econometrics.

The course provides up-to-date methodological knowledge, together with the corresponding intuition and related skills in handling software, interpreting results and presenting those results.

By the end of this course, you should be able to:

  • understand the methodology of an evaluation study, including its fine details, and form a well-grounded judgment about its value
  • judge whether the evaluation was carried out in a methodologically sound way given the circumstances, and form an educated opinion on how much credibility one should give to its conclusions
  • design – or effectively participate in a team that designs – an evaluation study using the methods covered: randomised experiments, matching, regression analysis
  • perform an impact evaluation analysis – or effectively participate in a team performing such an evaluation – using the above methods
  • interpret the results of analyses of experimental, matching-based or regression-based policy evaluations
  • use the methods covered in this course in other contexts.

During the course, I will:

  • provide a thorough technical introduction to the methods but emphasise also intuition and practical aspects
  • show the methods in action by going over published evaluations
  • replicate some of the published evaluation results using original data, check their robustness and dig deeper if possible.


Day 1 – Introduction to impact evaluation
Why do we need to evaluate? Identifying the counterfactual. Getting the question right and formulating hypotheses. Causal inference versus association. The potential outcomes framework. An example: the Job Corps program in the US. Heterogeneous treatment effects. Average treatment effects.
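The Day 1 estimand can be made concrete with a tiny numeric sketch (hypothetical data, not from the course materials; shown in Python for illustration, although the course itself uses Stata or R): each unit has two potential outcomes, only one of which is ever observed, and the average treatment effect is the mean of the individual effects.

```python
# Potential outcomes for four hypothetical units: (Y(0), Y(1)).
# In real data only one of the two is ever observed; here we write
# both down to make the estimand concrete.
potential = [(5, 8), (3, 3), (6, 10), (2, 5)]

# Individual treatment effects Y(1) - Y(0); note they are heterogeneous.
effects = [y1 - y0 for y0, y1 in potential]

# The average treatment effect (ATE) is the mean of the individual effects.
ate = sum(effects) / len(effects)
print(effects)  # [3, 0, 4, 3]
print(ate)      # 2.5
```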

Day 2 – Randomised Control Trials (RCT)
Intro to RCTs. Why does randomisation help? Randomisation in practice. Examples (the NSW program in the US; PROGRESA in Mexico). Intent-to-treat effects. Imperfect compliance. Local average treatment effects.
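The Day 2 estimands can be sketched numerically (hypothetical data, Python used only for illustration): with imperfect compliance, the intent-to-treat effect compares outcomes by random assignment, and the Wald estimator scales it by the first-stage effect of assignment on take-up to recover the local average treatment effect.

```python
# Hypothetical trial with imperfect compliance:
# 'z' is random assignment, 'd' actual take-up, 'y' the outcome.
data = [
    # (z, d, y)
    (1, 1, 10), (1, 1, 12), (1, 0, 5), (1, 1, 11),
    (0, 0, 5),  (0, 0, 6),  (0, 1, 9), (0, 0, 4),
]

def mean(xs):
    return sum(xs) / len(xs)

treated_arm = [row for row in data if row[0] == 1]
control_arm = [row for row in data if row[0] == 0]

# Intent-to-treat effect: compare outcomes by *assignment*, not take-up.
itt = mean([y for _, _, y in treated_arm]) - mean([y for _, _, y in control_arm])

# First stage: effect of assignment on actual take-up.
first_stage = mean([d for _, d, _ in treated_arm]) - mean([d for _, d, _ in control_arm])

# Wald estimator of the local average treatment effect (LATE).
late = itt / first_stage
print(itt, first_stage, late)  # 3.5 0.5 7.0
```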

Day 3 – RCT & Matching
RCT regressions. Intro to matching: unconfoundedness. Common support. Exact matching.
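Exact matching, as introduced on Day 3, pairs each treated unit with control units that share identical covariate values; treated units whose covariate cell contains no controls lie off the common support. A minimal sketch on hypothetical data (Python, for illustration only):

```python
from collections import defaultdict

# Hypothetical units: (treated?, covariate cell, outcome).
units = [
    (1, "low", 7), (1, "high", 9), (1, "mid", 8),
    (0, "low", 4), (0, "low", 6), (0, "high", 5),
]

# Group control outcomes by their exact covariate value.
controls = defaultdict(list)
for t, x, y in units:
    if t == 0:
        controls[x].append(y)

# Compare each treated unit against controls in the same cell;
# treated cells with no controls are off the common support.
effects, off_support = [], []
for t, x, y in units:
    if t == 1:
        if controls[x]:
            effects.append(y - sum(controls[x]) / len(controls[x]))
        else:
            off_support.append(x)

att = sum(effects) / len(effects)  # ATT over matched treated units
print(effects, off_support, att)   # [2.0, 4.0] ['mid'] 3.0
```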

Day 4 – Linear Regressions & Matching
Simple linear regression. Ensuring common support. Functional form choices. Regression versus matching. Matching on the propensity score. Examples in matching: teacher training in Israel.
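Matching on the propensity score, covered on Day 4, collapses many covariates into a single number. A minimal nearest-neighbour sketch on hypothetical, pre-estimated scores (in practice the score would first be estimated, e.g. by logistic regression in Stata or R; Python is used here for illustration only):

```python
# Hypothetical units with pre-estimated propensity scores: (score, outcome).
treated = [(0.8, 10), (0.4, 7)]
controls = [(0.75, 6), (0.5, 5), (0.2, 3)]

# Nearest-neighbour matching (with replacement) on the score:
# each treated unit is paired with the control whose score is closest.
effects = []
for ps, y in treated:
    ps_c, y_c = min(controls, key=lambda c: abs(c[0] - ps))
    effects.append(y - y_c)

att = sum(effects) / len(effects)  # ATT over the matched pairs
print(effects, att)  # [4, 2] 3.0
```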

Day 5 – Matching
Examples in matching (the New Deal for Lone Parents programme in the UK; a debate based on the NSW program in the US). Student presentations.

Prerequisite Knowledge

Basic knowledge of statistics and regression analysis at an undergraduate level, and some knowledge of Stata or R.

Day  Topic                               Details
1    Introduction to impact evaluation   Lectures
2    Randomised Control Trials (RCT)     Lectures
3    RCT & Matching                      Lectures; take-home for Day 5: replicating LaLonde (1986)
4    Linear Regressions & Matching       Lectures; take-home for Day 5: using LaLonde (1986) to analyse the effect of the NSW program
5    Matching                            Lectures; student presentations

Day  Readings
1    Impact Evaluation in Practice (World Bank, 2011), Chapters 1–4
2    Imbens and Wooldridge (2009)
3    LaLonde (1986); Impact Evaluation in Practice (World Bank, 2011), Chapter 7
4    Impact Evaluation in Practice (World Bank, 2011), Chapter 7
5    Dehejia and Wahba (1999); Smith and Todd (2005)

Software Requirements

Stata or R.

Hardware Requirements

Please bring your own laptop with software installed.

Literature

Compulsory texts

Impact Evaluation in Practice (World Bank, 2011)

Imbens, Guido W. and Jeffrey M. Wooldridge. 2009. "Recent Developments in the Econometrics of Program Evaluation." Journal of Economic Literature 47(1): 5–86.

LaLonde, Robert. 1986. “Evaluating the Econometric Evaluations of Training Programs with Experimental Data.” American Economic Review 76 (September): 604–20.

Dehejia, Rajeev H. and Sadek Wahba. 1999. "Causal Effects in Nonexperimental Studies: Reevaluating the Evaluation of Training Programs." Journal of the American Statistical Association 94(448): 1053–1062.

Smith, Jeffrey and Petra Todd. 2005. "Does Matching Overcome LaLonde's Critique of Nonexperimental Estimators?" Journal of Econometrics 125(1–2).

Optional texts

Deaton, Angus. 2009. "Instruments of Development: Randomization in the Tropics, and the Search for the Elusive Keys to Economic Development." Princeton mimeo, January 2009. We will discuss Section 4.

Diamond, Alexis and Jasjeet S. Sekhon. 2012. "Genetic Matching for Estimating Causal Effects: A General Multivariate Matching Method for Achieving Balance in Observational Studies." Review of Economics and Statistics.

Duflo, Esther, Rachel Glennerster and Michael Kremer. 2007. "Using Randomization in Development Economics Research: A Toolkit." CEPR Working Paper 6059. Sections 2.1 and 2.2.

Imbens, Guido W. 2010. "Better LATE Than Nothing: Some Comments on Deaton (2009) and Heckman and Urzua (2009)." Journal of Economic Literature 48(2): 399–423.

Iacus, Stefano M., Gary King and Giuseppe Porro. 2011. "Causal Inference without Balance Checking: Coarsened Exact Matching." Political Analysis.

Imbens, Guido W. 2003. "Sensitivity to Exogeneity Assumptions in Program Evaluation." American Economic Review 96: 126–132.

Angrist, Joshua and Victor Lavy. 2001. "Does Teacher Training Affect Pupil Learning? Evidence from Matched Comparisons in Jerusalem Public Schools." Journal of Labor Economics 19(2).

Morgan, Stephen L. and Christopher Winship. Counterfactuals and Causal Inference: Methods and Principles for Social Research.

Dolton, Peter and Jeffrey A. Smith. "The Impact of the UK New Deal for Lone Parents on Benefit Receipt." IZA Discussion Paper No. 5491.

Recommended Courses to Cover Before this One

Summer School

Introduction to Stata

Multiple Regression Analysis: Estimation, Diagnostics, and Modelling

Recommended Courses to Cover After this One

Summer School

Causal Inference in the Social Sciences II

Introduction to Experimental Research in the Social Sciences