
Program Evaluation and Impact Assessment

Course Dates and Times

Monday 1 to Friday 5 August 2016
Classes generally run either 09:00-12:30 or 14:00-17:30
15 hours over 5 days

Dániel Horn

horndnl@gmail.com

Eötvös Loránd University

This one-week course provides a thorough introduction to the most widely used methods of program evaluation, also known as impact assessment. This is an applied methods course, with equal emphasis on methods and their applications.

The target audience is practitioners and would-be practitioners of policy evaluation who want to measure the effects of policy interventions and policy changes. We also recommend the course to a broader audience of researchers in social sciences who are interested in measuring the effect of interventions and policy changes.

The methods covered in this course are (1) randomized controlled trials (intent-to-treat effects, imperfect compliance, local average treatment effects, encouragement design, power calculations); (2) matching (exact matching, matching on the propensity score, and more); and (3) methods based on linear regression (ensuring common support, functional form choices, regression versus matching).


Instructor Bio

Dániel Horn is a research fellow at the Centre for Economic and Regional Studies of the Hungarian Academy of Sciences, and an associate professor at the Department of Economics, Eötvös Loránd University, Budapest.

Besides economics courses, he has taught statistics, introductory Stata, and various public policy design and evaluation courses at the PhD, MA and Bachelor's levels for over five years.

He has been conducting educational impact assessment for over a decade. His research areas include education economics, social stratification and educational measurement issues.

The knowledge and skills developed in this course are essential for understanding and effectively participating in policy evaluations. They are necessary for anyone who works with the results of policy evaluations: those who commission evaluations and want to judge the feasibility and methodological soundness of competing proposals, and those who use the results of evaluations and want to know the validity and reliability of those results.

The ideal student has some statistical background and is not afraid of working with data, but does not yet have a firm methodological background in program evaluation or econometrics.

The course provides up-to-date methodological knowledge, together with the corresponding intuition and the related skills in handling software, interpreting results and presenting them. Students who successfully complete this course should be able to understand the methodology of an evaluation study, including its fine details, and to form a well-grounded judgment about its value. They should be able to judge whether the evaluation was carried out in a methodologically sound way given the circumstances, and to form an educated opinion on how much credibility to give its conclusions.

Students who complete this course should be able to design, or effectively participate in a team that designs, an evaluation study using the methods covered in the course (randomized experiments, matching, regression analysis). They should also be able to perform an impact evaluation analysis, or effectively participate in a team performing one, using these methods. They should be able to interpret the results of experimental, matching-based or regression-based policy evaluations. Finally, students should also be able to use the methods covered in this course in other contexts.

During the course we shall:

(a) give a thorough technical introduction to the methods, emphasizing intuition and practical aspects as well;

(b) show the methods in action by going over published evaluations;

(c) replicate some of the published evaluation results using the original data, check their robustness and dig deeper where possible.

The methods covered in this course are

(1) Randomized Controlled Trials

intent-to-treat effects, imperfect compliance, local average treatment effects, encouragement design, power calculations

(2) Matching

exact matching, matching on the propensity score (and, more briefly, coarsened exact matching, genetic matching and weighting on the propensity score)

(3) Linear Regression

ensuring common support, functional form choices, regression versus matching

Topics to be covered:

Day 1. Intro to impact evaluation: Why do we need to evaluate? Identifying the counterfactual. Getting the question right and formulating hypotheses. Causal inference vs. association. The potential outcomes framework. An example: the Job Corps program in the US. Heterogeneous treatment effects. Average treatment effects.
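
The potential outcomes framework and average treatment effects lend themselves to a short simulation. The sketch below is illustrative only (written in Python for brevity, although the course itself uses Stata or R): each unit has two potential outcomes, only one is ever observed, and under random assignment a simple difference in means recovers the average treatment effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Each unit has two potential outcomes; the true ATE is 2.0 on average,
# with heterogeneous unit-level effects.
y0 = rng.normal(10, 3, n)                  # outcome if untreated
tau = rng.normal(2.0, 1.0, n)              # heterogeneous treatment effect
y1 = y0 + tau                              # outcome if treated

d = rng.integers(0, 2, n)                  # random assignment
y = np.where(d == 1, y1, y0)               # only one outcome is observed

ate_hat = y[d == 1].mean() - y[d == 0].mean()   # close to tau.mean()
```

The point of the sketch: randomization makes the treated and control groups comparable, so the unobserved counterfactual can be estimated from the control group.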

Day 2. Randomized Control Trials (RCT): Intro to RCTs. Why does randomization help? Randomization in practice. Examples (the NSW program in the US; PROGRESA in Mexico). Intent-to-treat effects. Imperfect compliance. Local average treatment effects.
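
Intent-to-treat effects and the local average treatment effect under imperfect compliance can also be sketched in a few lines. In the hypothetical simulation below, only 60% of those offered the treatment take it; the ITT estimate is diluted by non-compliance, and dividing it by the take-up rate (the Wald estimator) recovers the effect on compliers.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

z = rng.integers(0, 2, n)                  # randomized offer (instrument)
complier = rng.random(n) < 0.6             # 60% comply when offered
d = (z == 1) & complier                    # treatment received; nobody is
                                           # treated without an offer
y = 5 + 3.0 * d + rng.normal(0, 2, n)      # effect of 3.0 for the treated

itt = y[z == 1].mean() - y[z == 0].mean()       # effect of the *offer*, ~1.8
take_up = d[z == 1].mean() - d[z == 0].mean()   # first stage, ~0.6
late = itt / take_up                            # Wald estimator, ~3.0
```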

Day 3. RCT & Matching: Power calculations (Optimal Design Plus). RCT regressions. Intro to matching: unconfoundedness. The common support. Exact matching.
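
The power calculation topic rests on the textbook two-sample formula. The stdlib-Python sketch below implements that normal-approximation formula (it is not the Optimal Design Plus software used in class, and `n_per_arm` is a name chosen here for illustration).

```python
import math
from statistics import NormalDist

def n_per_arm(delta, sigma, alpha=0.05, power=0.80):
    """Sample size per arm for a two-sided two-sample z-test to detect a
    mean difference `delta` when the outcome SD is `sigma`."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value, ~1.96
    z_b = NormalDist().inv_cdf(power)           # power term, ~0.84
    return math.ceil(2 * (z_a + z_b) ** 2 * (sigma / delta) ** 2)

# Detecting an effect of 0.2 standard deviations needs roughly 393
# participants per arm at 80% power and a 5% significance level.
n_small_effect = n_per_arm(delta=0.2, sigma=1.0)
```

Halving the detectable effect size quadruples the required sample, which is why underpowered evaluations are so common.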

Day 4. Linear Regressions & Matching: Simple linear regression. Ensuring common support. Functional form choices. Regression versus matching. Matching on the propensity score. Examples in matching (teacher training in Israel).
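
Propensity score matching can be illustrated with a compact simulation. The sketch below uses hypothetical data and, for simplicity, matches on the true propensity score; in practice the score must be estimated, e.g. by logistic regression. It shows the upward selection bias in a naive comparison and its removal by one-to-one nearest-neighbour matching.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5_000

x = rng.normal(0, 1, n)                       # confounder
p = 1 / (1 + np.exp(-x))                      # true propensity score
d = rng.random(n) < p                         # selection into treatment
y = 2.0 * d + 1.5 * x + rng.normal(0, 1, n)   # true treatment effect = 2.0

naive = y[d].mean() - y[~d].mean()            # biased upward by selection

# One-to-one nearest-neighbour matching of each treated unit to the
# control unit with the closest propensity score (controls reused).
order = np.argsort(p[~d])
pc, yc = p[~d][order], y[~d][order]
idx = np.clip(np.searchsorted(pc, p[d]), 1, len(pc) - 1)
nearer_left = np.abs(pc[idx - 1] - p[d]) < np.abs(pc[idx] - p[d])
match = np.where(nearer_left, idx - 1, idx)
att = (y[d] - yc[match]).mean()               # close to the true 2.0
```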

Day 5. Matching: Examples in matching (the New Deal for Lone Parents program in the UK; a debate based on the NSW program in the US). New methods in matching: coarsened exact matching, genetic matching, weighting on the propensity score.
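
The idea behind coarsened exact matching can likewise be sketched: coarsen the confounder into bins, keep only bins that contain both treated and control units, and aggregate the within-bin comparisons with treated-unit weights. The simulation below is illustrative only, not course code.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20_000

x = rng.normal(0, 1, n)                       # confounder
d = rng.random(n) < 1 / (1 + np.exp(-x))      # selection into treatment
y = 2.0 * d + 1.5 * x + rng.normal(0, 1, n)   # true effect = 2.0

# Coarsen x into bins, compare means within each bin that contains both
# treated and control units, and weight by the number of treated units.
bins = np.digitize(x, np.linspace(-3, 3, 25))
est, wsum = 0.0, 0.0
for b in np.unique(bins):
    treated = d & (bins == b)
    control = ~d & (bins == b)
    if treated.any() and control.any():       # prune unmatched strata
        w = treated.sum()
        est += w * (y[treated].mean() - y[control].mean())
        wsum += w
att = est / wsum                              # close to the true 2.0
```

Bins with treated units but no controls are dropped, which is exactly the pruning that restricts the estimate to the region of common support.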

Prerequisites

Basic knowledge of statistics and regression analysis, at an undergraduate level, and some knowledge of Stata or R.

Day        Topic                             Details
Monday     Intro to impact evaluation        Lectures
Tuesday    Randomized Control Trials (RCT)   Lectures
Wednesday  RCT & Matching                    Lectures (take-home for Day 5: replicating LaLonde 1986)
Thursday   Linear Regressions & Matching     Lectures
Friday     Matching                          Lectures

Day        Readings
Monday     Impact Evaluation in Practice (World Bank, 2011), Ch. 1-4
Tuesday    Imbens and Wooldridge 2009
Wednesday  LaLonde 1986; Impact Evaluation in Practice (World Bank, 2011), Ch. 7
Thursday   Impact Evaluation in Practice (World Bank, 2011), Ch. 7
Friday     Dehejia and Wahba 1999; Smith and Todd 2005

Software Requirements

Stata (or R) will be required for the course.

Hardware Requirements

Participants need their own laptop with the software installed.

Literature

Texts (*compulsory)

*Impact Evaluation in Practice (World Bank, 2011)

*Imbens and Wooldridge, “Recent Developments in the Econometrics of Program Evaluation.” Journal of Economic Literature, 47(1), 2009, pp. 5-86.

*LaLonde, Robert. 1986. “Evaluating the Econometric Evaluations of Training Programs with Experimental Data.” American Economic Review 76 (September): 604–20.

*Dehejia, R. and Wahba, S. 1999. "Causal Effects in Nonexperimental Studies: Reevaluating the Evaluation of Training Programs." Journal of the American Statistical Association 94(448), 1053-1062.

*Smith, Jeffrey and Petra Todd. 2005. "Does Matching Overcome LaLonde's Critique of Nonexperimental Estimators?" Journal of Econometrics 125(1-2).

Deaton, Angus. 2009. "Instruments of Development: Randomization in the Tropics, and the Search for the Elusive Keys to Economic Development." Princeton mimeo, January 2009. We will discuss Section 4.

Diamond, Alexis and Jasjeet S. Sekhon. 2012. "Genetic Matching for Estimating Causal Effects: A General Multivariate Matching Method for Achieving Balance in Observational Studies." Review of Economics and Statistics.

Duflo, Esther, Rachel Glennerster and Michael Kremer. 2007. "Using Randomization in Development Economics Research: A Toolkit." CEPR Working Paper 6059. http://econ-www.mit.edu/files/806 Sections 2.1 and 2.2.

Imbens, Guido W. 2010. "Better LATE Than Nothing: Some Comments on Deaton (2009) and Heckman and Urzua (2009)." Journal of Economic Literature 48(2), 399-423.

Iacus, Stefano M., Gary King and Giuseppe Porro. 2011. "Causal Inference without Balance Checking: Coarsened Exact Matching." Political Analysis.

Imbens, Guido W. 2003. "Sensitivity to Exogeneity Assumptions in Program Evaluation." American Economic Review 93(2), 126-132.

Angrist, Joshua and Victor Lavy. 2001. "Does Teacher Training Affect Pupil Learning? Evidence from Matched Comparisons in Jerusalem Public Schools." Journal of Labor Economics 19(2).

Morgan, Stephen L. and Christopher Winship. Counterfactuals and Causal Inference: Methods and Principles for Social Research.

Dolton, Peter and Jeffrey A. Smith. "The Impact of the UK New Deal for Lone Parents on Benefit Receipt." IZA Discussion Paper No. 5491.

Recommended Courses to Cover Before this One

Introduction to Regression Analysis

Intro to GLM: Binary, Ordered and Multinomial Logistic and Count Regression Models

Recommended Courses to Cover After this One

Causal Inference in the Social Sciences

Experimental Research: Methodology, Design and Applications in the Lab and the Field