Day 1: What is Bayesian Statistics?
The first day of the course is dedicated to the basics of Bayesian statistics, its differences from classical (frequentist) statistics, the basics of Bayesian computation using Markov chain Monte Carlo, and the basics of the JAGS modeling language. Before the Bayesian topics are introduced, several topics from frequentist statistics are reviewed in order to ease the transition to the Bayesian perspective. These include common probability distributions, null hypothesis significance testing and the t-test, confidence intervals, the likelihood function, Normal linear regression (a.k.a. OLS), and Binomial logistic regression (a.k.a. logit). The review is not a substitute for a course on these topics. After an exposition of the basic features of Bayesian statistics, a user-friendly tool for Bayesian computation, the JAGS language, will be introduced.
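To give a flavor of the JAGS syntax introduced on the first day, a complete model can be only a few lines long. The sketch below (with illustrative data names y and n, and a uniform Beta(1, 1) prior chosen for the example) estimates a Binomial success probability:

```jags
# Minimal JAGS model: estimating a Binomial proportion.
# Assumed data: y successes out of n trials (names are illustrative).
model {
  y ~ dbin(theta, n)    # likelihood: Binomial
  theta ~ dbeta(1, 1)   # uniform prior on the success probability
}
```

JAGS separates the model specification from the data; the same model text is combined with data and sampled via an interface such as the rjags package in R.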
Day 2: Linear Regression and GLM
Normal linear regression and its relatives from the GLM family, such as Binomial logistic regression, are the workhorses of applied social science research. The second day of the course is dedicated to their Bayesian versions. The frequentist and Bayesian versions of these models often produce numerically close coefficient estimates. However, their interpretations differ, and in many research contexts the Bayesian interpretation may be more appealing. In addition, Bayesian statistics allows the use of extra-data information, e.g., from case studies or expert knowledge, which can lead to quantities that are also numerically different from those under the frequentist equivalents of the models. Finally, in Bayesian statistics modeling of hierarchies is straightforward and simple to express in JAGS or similar modeling languages, which will be demonstrated with the hierarchical (or multilevel) linear model.
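As an illustration of how little extra code a hierarchy requires, the following sketch shows a hierarchical linear model with varying intercepts (the data names y, x, and group, the group count J, and the vague priors are all illustrative assumptions, not the course's exact specification; note that JAGS parameterizes dnorm by precision, not variance):

```jags
# Hierarchical (varying-intercept) linear model sketch.
# Assumed data: y[1:N], x[1:N], group[1:N] in 1..J.
model {
  for (i in 1:N) {
    y[i] ~ dnorm(mu[i], tau)              # dnorm uses precision, not variance
    mu[i] <- alpha[group[i]] + beta * x[i]
  }
  for (j in 1:J) {
    alpha[j] ~ dnorm(mu.alpha, tau.alpha) # group intercepts drawn from a population distribution
  }
  beta ~ dnorm(0, 0.001)                  # vague priors for illustration
  mu.alpha ~ dnorm(0, 0.001)
  tau.alpha ~ dgamma(0.001, 0.001)
  tau ~ dgamma(0.001, 0.001)
}
```

The only change relative to an ordinary regression is that the intercepts alpha[j] themselves receive a distribution with parameters estimated from the data.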
Day 3: Bayesian Model Checking and Evaluation
The final day of the course focuses on model checking and evaluation from the Bayesian perspective. The topics covered are posterior predictive checks, inspection of residuals, and goodness-of-fit measures. In Bayesian statistics, all unobserved quantities are random variables. Consequently, it is possible to compute and inspect posterior distributions not only for regression coefficients and similar quantities, but also for individual fitted values and fit statistics such as R².
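In JAGS, quantities for such checks can be generated inside the model itself. The sketch below (a simple linear regression with illustrative data names and vague priors, not the course's exact specification) adds replicated data y.rep for posterior predictive checks and tracks residuals, both of which then have full posterior distributions:

```jags
# Linear regression with quantities for model checking.
# Assumed data: y[1:N], x[1:N] (names are illustrative).
model {
  for (i in 1:N) {
    y[i] ~ dnorm(mu[i], tau)
    mu[i] <- alpha + beta * x[i]
    y.rep[i] ~ dnorm(mu[i], tau)   # replicated data for posterior predictive checks
    res[i] <- y[i] - mu[i]         # realized residual, with a posterior distribution
  }
  alpha ~ dnorm(0, 0.001)          # vague priors for illustration
  beta ~ dnorm(0, 0.001)
  tau ~ dgamma(0.001, 0.001)
}
```

Monitoring y.rep allows discrepancy statistics computed on the replicated data to be compared against the same statistics computed on the observed data.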