
Survey Design

Course Dates and Times

Monday 29 February to Friday 4 March 2016
Generally classes are either 09:00-12:30 or 14:00-17:30
15 hours over 5 days

Peter Lugtig

p.lugtig@uu.nl

University of Utrecht

This course gives an overview of the design and implementation of surveys, from the initial planning phase to data preparation as a final step. Topics include survey mode assessment and selection, sampling frames and designs, nonresponse, interviewer effects, questionnaire design, cognitive pretesting and data editing. The course is taught from a Total Survey Error perspective, weighing data quality at each step of the process against the associated costs. It combines formal lectures, in which the theoretical foundations in the literature are discussed, with less formal discussions of the design of existing surveys. First, we discuss the choice of survey mode and how different ways of sampling respondents follow from that choice. The second day treats survey nonresponse: how to prevent, analyse and correct for it. The third and fourth days turn to the actual survey content: how to write survey questions, make sure they measure what they are intended to measure, test them, and finally assess whether the resulting data are of good quality. On the final day we focus on data editing and on maximizing data quality across the whole process. The course is applicable to surveys of individuals, households and organisations.




Why take a course on survey design?

Surveys are everywhere. Within the social sciences, the majority of empirical studies rely on surveys to collect data on demographics, attitudes and behaviour. Setting up a survey may seem to be a relatively simple process. Everyone can ask questions! In practice, however, conducting a survey often turns out to be hugely complicated, for various reasons. First of all, the number of choices for the basic design of your survey can seem overwhelming. You have to think about the choice of survey mode, obtaining a good sample, limiting non-response, asking good questions and analysing data, all within time and cost constraints.

To make things even more complicated, each individual design choice affects other aspects of the survey design. For example, choosing to do a survey online is generally cheap and quick, but it will be hard to obtain a representative sample, and some questions are hard to ask online. Moreover, the right choice of a survey design depends on your study population and your research question.

This course introduces you to survey design. We will discuss the stages you encounter in doing a survey and evaluate the trade-offs between the design choices you may face. We do this from the perspective of Total Survey Error: the goal is to limit the total error of your survey, so that you can give the best possible answer to your research question.

Focus of the course

The course aims to give an overview of survey design and the survey process from a Total Survey Error perspective. It prepares students to take more specialized courses in one of the later weeks of the GESIS Summer School (see learning objectives).

What will not be covered

In this course we will not cover the analysis of survey data, that is, how to analyse your data once you have collected them. For this, you will need to take (or have taken) a general statistics course for social scientists.

We will also not cover how to work with software for implementing Internet surveys or surveys in other modes. If you would like to work with Internet survey software, we refer you to the course on web surveys.

Finally, the course will not cover qualitative interviews. Our course focuses on surveys with structured, closed-ended questions. If you would like to learn more about qualitative research, or about combining qualitative and quantitative research, we refer you to the course on mixed-methods research. The topic of mixing survey modes (Internet, face-to-face, telephone and mail) is covered, but only at a basic level.

How will the course work?

The instructor will give interactive lectures introducing the topics of the day. There will be ample room for discussion, and I encourage students to contribute their own experiences and questions; there will also be room for discussing specific topics you may wish to know more about. The lectures include some practical group exercises and demonstrations. In the afternoon, students are expected to work on a problem linked to the material discussed in the morning, and optionally to read some more in-depth literature. On day 1, you will design your own sample and choose a survey mode. On day 2, you will analyse how nonresponse in surveys may bias your results and explore methods to correct for it. On days 3 and 4, you will work on designing and evaluating your own questionnaire. I encourage students to work on their own projects during the afternoon sessions, but will provide example questionnaires and datasets for students who do not yet have their own survey project to work on.

  • No previous experience in survey research is needed; however, some practical experience in conducting surveys and analysing data will be beneficial.
  • A basic understanding of statistics is assumed, at the level of analysis of variance (ANOVA) and multiple regression.
  • No skills in statistical software are required: the course focuses on the design of surveys, not on the analysis of survey data.
Day-to-Day Schedule
Day 1: Survey processes in interviewer-assisted and self-completion modes; sampling strategies and coverage

The first day sets the scene for the course. First, we discuss the strengths and weaknesses of surveys as a research design in comparison with other research methods. We introduce the Total Survey Error (TSE) framework and discuss how the survey mode affects the potential for different survey errors, contrasting the dimensions of survey mode: computer- vs paper-based, interviewer-assisted vs self-completion, and aural vs visual. In the second part of the day we turn to the availability of sampling frames and their coverage of the population, as well as the effect of different sampling strategies (clustering, stratification and unequal selection probabilities) on the variance of survey estimates (see the short illustration below). We briefly contrast probability and non-probability samples. Participants then have the opportunity to develop a survey design (mode and sampling strategy) for their own research question, with guidance.

3 hours of lectures/discussions; 2 hours of practical exercises; 2 hours of reading (optional).
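As a rough, back-of-the-envelope illustration of how clustering inflates the variance of survey estimates (a standard textbook approximation, not taken from the course materials): for clusters of roughly equal size, the design effect is approximately

deff = 1 + (m - 1) * rho

where m is the average number of respondents per cluster and rho is the intraclass correlation of the survey variable. With, say, m = 20 and rho = 0.05, deff = 1 + 19 * 0.05 = 1.95, so the clustered sample delivers roughly the precision of a simple random sample about half its size.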
Day 2: Nonresponse processes: prevention, analysis and correction

On day two we cover the various types of nonresponse in survey data and how optimizing the data collection process may minimize them. Specifically, we look at the effects of incentives on nonresponse rates and nonresponse bias, and we spend some time on the role of interviewers and on fieldwork procedures for monitoring data collection. Surveys always contain some degree of nonresponse, so the course shows participants how to correct for unit nonresponse by means of weighting, and demonstrates the effect of these adjustments on subsequent analyses. On this second day, participants either practice working with simple survey weights and analysing weighted data (for this, knowledge of either SPSS or Stata is required; a short sketch follows below), or design a fieldwork strategy for their survey that aims to minimize coverage and nonresponse errors.

3 hours of lectures/discussions; 2 hours of practical exercises; 2 hours of reading (optional).
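To give a concrete flavour of the weighting exercise, here is a minimal sketch of what analysing weighted survey data can look like in Stata. It is not part of the course materials, and the variable names (psu, stratum, nrweight, income) are purely illustrative.

  * Declare the complex survey design, including a nonresponse adjustment weight.
  svyset psu [pweight=nrweight], strata(stratum)
  * Weighted estimate of a mean, with design-based standard errors.
  svy: mean income
  * Compare with the unweighted estimate to see what the weighting changes.
  mean income

A comparable workflow is available in SPSS through its Complex Samples module.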
Day 3: Questionnaire design and data accuracy

The third day looks into the psychology of survey response. We examine why operationalizing our research questions into survey questions can be intricate, which features of the survey may affect responses, and how different respondents may understand our questions differently. Finally, some time is spent discussing effective layouts for questionnaires. On day three, participants develop their own short questionnaire, taking into account the design principles covered in class.

3 hours of lectures/discussions; 2 hours of practical exercises; 2 hours of reading (optional).
Day 4: Questionnaire testing, mode, interviewers and cross-national comparisons

Day four continues with questionnaire development, but now we look into methods for pretesting survey questions, including the qualitative technique of cognitive interviewing, in which the role of the interviewer is crucial. Interviewer effects on measurement are discussed, as are the effects of other (self-administered) survey modes on measurement error. In addition, ways of assessing measurement error in survey data are presented. Finally, we discuss how to conduct surveys in different cultures, with the goal of comparing countries, within the Total Survey Error framework. Participants practice the pretesting techniques learned in class on the questionnaires they developed on day three.

3 hours of lectures/discussions; 3 hours of practical exercises; 1 hour of reading (optional).
Day 5: Data preparation: assessing measurement quality; survey quality vs costs

On the final day we look at how to assess measurement quality after the data have been collected. Many research questions involve the use or development of item scales, and the course demonstrates how to develop such scales and assess their accuracy (a brief illustration follows below). To conclude, we return to the Total Survey Error framework and evaluate how the costs associated with different design decisions might affect data quality. We discuss the methods that can be used to assess the different components of the Total Survey Error framework, and how survey quality may be traded off against survey costs.

3 hours of lectures/discussions; 2 hours of reading (optional).
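As a minimal sketch of the kind of scale assessment meant here, again in Stata and again with purely illustrative item names (q1 to q5) that are not part of the course materials:

  * Cronbach's alpha for a five-item scale, with item-level statistics
  * (including how alpha changes when each item is dropped).
  alpha q1 q2 q3 q4 q5, item
  * Save the resulting summative scale as a new variable for later analysis.
  alpha q1 q2 q3 q4 q5, generate(scale_score)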
Suggested Readings by Day

Day 1
Groves, R.M. et al. (2009) Survey Methodology (2nd edition). New York: Wiley. Chapters 2, 3, 5 and 9.
Fowler, F.J. (2009) Survey Research Methods (4th edition). London: Sage. Chapter 3 (Sampling).

Day 2
Groves, R.M. et al. (2009). Chapters 6, 10.5 and 10.6.
De Leeuw, E.D., J.J. Hox and D. Dillman (2008) International Handbook of Survey Methodology. New York. Chapters 17 and 19.
Lynn, P. (1996) 'Weighting for non-response', in Totman et al. (eds) Survey and Statistical Computing. Available at http://iserwww.essex.ac.uk/home/plynn/downloads/Lynn%201996%20Weighting.pdf

Day 3
Groves, R.M. et al. (2009). Chapter 7.
Dillman, D.A., J.D. Smyth and L.M. Christian (2009) Internet, Mail and Mixed-Mode Surveys: The Tailored Design Method (3rd edition). Wiley. Chapters 4 and 5.
Fowler, F.J. (1996) Improving Survey Questions: Design and Evaluation. London: Sage. Chapters 1-4.
Tourangeau, R. (2003) 'Cognitive aspects of survey measurement and mismeasurement', International Journal of Public Opinion Research, 15: 3-7.

Day 4
Groves, R.M. et al. (2009). Chapter 8.
Presser, S., M.P. Couper, J.T. Lessler, E. Martin, J. Martin, J.M. Rothgeb and E. Singer (2004) 'Methods for testing and evaluating survey questions', Public Opinion Quarterly, 68(1): 109-130.
Fowler, F.J. (1996) Improving Survey Questions: Design and Evaluation. London: Sage. Chapters 5 and 6.
De Leeuw, E.D., J.J. Hox and D. Dillman (2008) International Handbook of Survey Methodology. New York. Chapter 20.

Day 5
Groves, R.M. et al. (2009). Chapters 2 (again) and 10.
Fowler, F.J. (2009) Survey Research Methods (4th edition). London: Sage. Chapter 9.
Campbell, D.T. and D.W. Fiske (1959) 'Convergent and discriminant validation by the multitrait-multimethod matrix', Psychological Bulletin, 56(2): 81-105.

Software Requirements

Either SPSS (version 18 or higher) or Stata (version 11 or higher).

Hardware Requirements

None

Literature

None; see the suggested readings above.

Recommended Courses to Cover Before this One

Winter School:

  • WA105 – Introduction to SPSS
  • WA106 – Introduction to Stata

Recommended Courses to Cover After this One

Winter School:

  • WD204 – Advanced Multi-Method