732A46 Bayesian Learning

Course information


Aims


The course aims to give a solid introduction to the Bayesian approach to statistical inference, with a view towards applications in data mining and machine learning. After an introduction to the subjective probability concept that underlies Bayesian inference, the course moves on to the mathematics of prior-to-posterior updating in basic statistical models, such as the Bernoulli, normal and multinomial models. Linear regression and spline regression are also analyzed using a Bayesian approach. The course subsequently shows how complex models can be analyzed with simulation methods such as Markov chain Monte Carlo (MCMC). Bayesian prediction and the marginalization of nuisance parameters are explained, and introductions to Bayesian model selection and Bayesian decision theory are also given.

Contents


  • Introduction to subjective probability and the basic ideas behind Bayesian inference
  • Prior-to-posterior updating in basic statistical models, such as the Bernoulli, normal and multinomial models
  • Bayesian analysis of linear and nonlinear regression models
  • Shrinkage, variable selection and other regularization priors
  • Bayesian analysis of more complex models with simulation methods, e.g. Markov chain Monte Carlo (MCMC)
  • Bayesian prediction and marginalization of nuisance parameters
  • Introduction to Bayesian model selection
  • Introduction to Bayesian decision theory

Intended audience and admission requirements


This course is given primarily for students on the Master's programme in Statistics and Data Mining. It is also offered to Master's students in other subjects and to interested Ph.D. students (with a more advanced examination).

Students admitted to the Master's programme in Statistics and Data Mining fulfill the admission requirements for the course.
Students not admitted to the Master's programme in Statistics and Data Mining should have passed:
  • an intermediate course in probability and statistical inference
  • a basic course in mathematical analysis
  • a basic course in linear algebra
  • a basic course in programming
It is also required to have basic knowledge of linear regression, either as part of a statistics course or as a separate course.

Course plan


The TimeEdit schedule for the course is available here.

Module 1 - The Bayesics

Lecture 1: Basic concepts. Likelihood. The Bernoulli model. The Gaussian model.
Read: BDA Ch. 1, 2.1-2.5 | Slides
Code: Beta density | Bernoulli model | One-parameter Gaussian model
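
As a taste of what this lecture covers, the sketch below shows prior-to-posterior updating in the Bernoulli model in a few lines of R. It is not part of the linked course code; the prior hyperparameters and data are made up for illustration. A Beta(a, b) prior combined with s successes in n Bernoulli trials gives a Beta(a + s, b + n - s) posterior.

    # Beta-Bernoulli updating: Beta(a, b) prior, s successes in n trials
    a <- 2; b <- 2        # prior hyperparameters (illustrative values)
    n <- 20; s <- 14      # data: s successes in n trials (made up)
    thetaGrid <- seq(0, 1, length.out = 1000)
    prior     <- dbeta(thetaGrid, a, b)
    posterior <- dbeta(thetaGrid, a + s, b + n - s)
    plot(thetaGrid, posterior, type = "l", xlab = expression(theta),
         ylab = "Density", main = "Prior-to-posterior updating")
    lines(thetaGrid, prior, lty = 2)
    legend("topleft", legend = c("posterior", "prior"), lty = c(1, 2))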

Lecture 2: Conjugate priors. The Poisson model. Prior elicitation. Noninformative priors.
Read: BDA Ch. 2.6-2.9 | Slides
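
To make the conjugacy idea concrete, here is a minimal R sketch (not part of the course material; the prior and data are hypothetical) of the Gamma-Poisson model: a Gamma(alpha, beta) prior on the Poisson rate combined with observed counts y1, ..., yn gives a Gamma(alpha + sum(y), beta + n) posterior.

    # Gamma-Poisson conjugate updating (hypothetical prior and data)
    alpha <- 1; beta <- 1          # Gamma(alpha, beta) prior on the rate
    y <- c(3, 5, 2, 4, 6, 3)       # observed Poisson counts
    alphaPost <- alpha + sum(y)    # posterior shape
    betaPost  <- beta + length(y)  # posterior rate

    alphaPost / betaPost           # posterior mean of the rate
    qgamma(c(0.025, 0.975), shape = alphaPost, rate = betaPost)  # 95% interval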

Lecture 3: Multi-parameter models. Marginalization. Multinomial model. Multivariate normal model.
Read: BDA Ch. 3 | Slides
Code: Two-parameter Gaussian model | Prediction with two-parameter Gaussian model | Multinomial model
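
As a preview of marginalization by simulation, the sketch below (illustrative only, with made-up counts) draws from the posterior of the multinomial model: a Dirichlet(alpha) prior combined with category counts y gives a Dirichlet(alpha + y) posterior, which can be simulated via normalized Gamma draws. Each column of the draws then represents the marginal posterior of one category probability.

    # Dirichlet-multinomial updating (illustrative counts, not course data)
    alpha <- c(1, 1, 1)           # uniform Dirichlet prior over 3 categories
    y     <- c(38, 27, 35)        # observed category counts
    nDraws <- 10000
    g <- matrix(rgamma(nDraws * 3, shape = alpha + y, rate = 1),
                nrow = nDraws, ncol = 3, byrow = TRUE)
    thetaDraws <- g / rowSums(g)  # each row is one posterior draw
    colMeans(thetaDraws)          # posterior means of category probabilities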

Lab 1: Exploring posterior distributions in one-parameter models by simulation and direct numerical evaluation.
Lab 1
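
The two approaches in the lab can be previewed in a few lines of R. In the sketch below a Beta(15, 8) density stands in for a posterior (the numbers are made up); it overlays direct numerical evaluation on a grid with a histogram of simulated draws, and the two should agree closely.

    # Exploring a one-parameter posterior two ways: grid evaluation vs simulation
    thetaGrid <- seq(0.001, 0.999, length.out = 1000)
    dens  <- dbeta(thetaGrid, 15, 8)   # exact density evaluated on a grid
    draws <- rbeta(10000, 15, 8)       # simulated draws from the same density
    hist(draws, breaks = 50, freq = FALSE,
         main = "Grid evaluation vs simulation", xlab = expression(theta))
    lines(thetaGrid, dens, lwd = 2)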


Module 2 - Bayesian Regression and Classification

Lecture 4:
Lecture 5:
Lecture 6:
Lab 2:


Module 3 - More Advanced Models and MCMC Simulation

Lecture 7:
Lecture 8:
Lecture 9:
Lab 3:


Module 4 - Flexible Models and Model Inference

Lecture 10:
Lecture 11:
Lecture 12:
Lab 4:

Literature


  • Bayesian Data Analysis by Gelman, Carlin, Stern, and Rubin, Chapman & Hall, third edition. The book's web site can be found here.
  • My slides.

Examination


The examination for the course Bayesian Learning, 6 hp, consists of:
  • written reports on the four computer labs (2 hp)
  • an individual written report on a project that applies Bayesian methods to data analysis (4 hp)
The instructions for the project report are available here.

Some extra R code


  • OptimExample1.R A simple optimization example illustrating the use of R's general-purpose optimization routine optim.
  • OptimizeSpam.zip Finds the posterior mode and an approximate posterior covariance matrix by numerical optimization. This code fits a logistic or probit regression model to the spam data from the book Elements of Statistical Learning. It is a good example because the optimization is very stable for the logistic model, but not for the probit model. A minimal sketch of this approach is given after this list.
  • NormalMixtureGibbs.R Simulates from the posterior distribution of the parameters in a mixture-of-normals model.
  • SimulateDiscreteMarkovChain.R Simulates from a Markov chain with three states.
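
The sketch below illustrates the posterior-mode idea behind OptimizeSpam.zip on a hypothetical logistic regression with simulated data rather than the spam data: optimize the log posterior numerically with optim, and use the negative inverse Hessian at the mode as an approximate posterior covariance matrix.

    # Posterior mode and normal approximation by numerical optimization
    # (simulated data; the course code uses the spam data instead)
    set.seed(1)
    n <- 200
    X <- cbind(1, rnorm(n))                  # intercept + one covariate
    betaTrue <- c(-0.5, 1.2)
    y <- rbinom(n, 1, plogis(X %*% betaTrue))

    # Log posterior: logistic likelihood + N(0, tau^2 I) prior on beta
    logPost <- function(beta, y, X, tau = 10) {
      linPred <- X %*% beta
      sum(y * linPred - log1p(exp(linPred))) +
        sum(dnorm(beta, 0, tau, log = TRUE))
    }

    # Maximize with optim; hessian = TRUE returns the curvature at the mode
    fit <- optim(c(0, 0), logPost, y = y, X = X, method = "BFGS",
                 control = list(fnscale = -1), hessian = TRUE)
    postMode <- fit$par              # posterior mode
    postCov  <- solve(-fit$hessian)  # approximate posterior covariance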

RStan code


BUGS code


Other material



Page responsible: Mattias Villani
Last updated: 2015-03-30