In the modern social sciences, statistical analysis has long been an established tool of inference.
Although most published work relies on maximum likelihood estimation (MLE) for its statistical analysis, many researchers know little about MLE itself and rely instead on statistical software packages. Some interpret MLE results as if they were produced by the least squares technique; others cannot distinguish likelihood from probability.
This course therefore aims to deepen your understanding of how statistical packages use MLE. This will in turn sharpen your understanding of inference, because MLE is grounded in the likelihood model of inference (King 1998).
We begin by discussing some important concepts:
Most importantly, we clarify likelihood and probability and their distinct roles in inference. We also examine the foundations of probability theory, which many students learn only implicitly in conventional social science statistics lectures. Both topics will help you understand the likelihood-based model of inference.
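To preview the distinction, consider a standard binomial illustration (this example is mine, not taken from the course materials). With n trials and y successes, the same mathematical expression is read two different ways:

```latex
% Probability: the parameter \pi is fixed; we ask how probable the data y are.
P(y \mid \pi) = \binom{n}{y}\,\pi^{y}\,(1-\pi)^{n-y}

% Likelihood: the observed data y are fixed; \pi varies.
L(\pi \mid y) \propto \pi^{y}\,(1-\pi)^{n-y}
```

The probability function sums to one over all possible data y; the likelihood function, viewed as a function of the parameter, need not integrate to one over pi, which is one reason the two concepts must not be conflated.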
Using a linear regression example, you will also learn how MLEs are computed (maximization algorithms) and about their inferential properties, in particular their asymptotic ones.
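The asymptotic properties in question are the standard results of MLE theory, stated here for reference: under regularity conditions the estimator is consistent and asymptotically normal,

```latex
\hat{\theta}_{\mathrm{ML}} \xrightarrow{\;p\;} \theta_0,
\qquad
\sqrt{n}\,\bigl(\hat{\theta}_{\mathrm{ML}} - \theta_0\bigr)
\xrightarrow{\;d\;} N\!\bigl(0,\; I(\theta_0)^{-1}\bigr),
```

where I(theta_0) is the Fisher information. These results are what justify the standard errors and confidence intervals that statistical packages print alongside MLE output.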
After gaining a conceptual overview of MLE, you will learn how to obtain MLEs in R. Because R packages are available, you will not have to program the maximization algorithms yourself, but you will learn how to define the likelihood function.
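A minimal sketch of the kind of exercise described above: define the (negative) log-likelihood of a normal linear regression by hand and maximize it with base R's optim(). The data are simulated and the function names are illustrative, not taken from the course materials.

```r
# Simulated data for a simple linear regression (illustrative only)
set.seed(42)
n <- 200
x <- rnorm(n)
y <- 1 + 2 * x + rnorm(n, sd = 1.5)

# Negative log-likelihood of y = b0 + b1*x + e, e ~ N(0, sigma^2).
# Parameterizing sigma on the log scale keeps it positive during search.
neg_loglik <- function(par) {
  b0 <- par[1]
  b1 <- par[2]
  sigma <- exp(par[3])
  -sum(dnorm(y, mean = b0 + b1 * x, sd = sigma, log = TRUE))
}

# optim() minimizes, so we pass the negative log-likelihood
fit <- optim(c(0, 0, 0), neg_loglik, hessian = TRUE)
est <- c(b0 = fit$par[1], b1 = fit$par[2], sigma = exp(fit$par[3]))

# For the normal linear model the coefficient MLEs coincide with
# least squares, so lm() provides a check on the hand-coded answer.
print(est)
print(coef(lm(y ~ x)))
```

The comparison with lm() is the point of the exercise: the same estimates emerge, but the likelihood route makes explicit what the package is doing.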
The course starts with the linear regression model and then extends the class of statistical models to discrete regression models, such as binary logit/probit and Poisson models.
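As a preview of those later models, discrete regressions of this kind can be fit in R with glm(), which obtains the MLE via iteratively reweighted least squares. The data below are simulated purely for illustration.

```r
set.seed(1)
n <- 500
x <- rnorm(n)

# Binary logit: P(y = 1) = plogis(-0.5 + 1.2 * x)
y_bin <- rbinom(n, 1, plogis(-0.5 + 1.2 * x))
logit_fit <- glm(y_bin ~ x, family = binomial(link = "logit"))

# Poisson regression: E[y] = exp(0.3 + 0.8 * x)
y_cnt <- rpois(n, exp(0.3 + 0.8 * x))
pois_fit <- glm(y_cnt ~ x, family = poisson)

# Both fits are maximum likelihood estimates
print(coef(logit_fit))
print(coef(pois_fit))
```

Both models share the likelihood machinery introduced with linear regression; only the assumed distribution and link function change.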
We cannot cover a wide range of statistical models in this short course, but after completing it, you should be able to apply the basic concepts of MLE and the associated programming skills to various statistical models in different practical situations.