PoL workshop on statistical inference, part 2

Lesson 3: Markov chain Monte Carlo and Stan

  • Why MCMC?
  • Random number generation
  • The basic idea behind MCMC
  • Warm-up
  • Generating a transition kernel: The Metropolis-Hastings algorithm
  • Further reading on MCMC
  • “Hello, world” —Stan
  • Parameter estimation with Markov chain Monte Carlo
  • Variate-covariate models with MCMC

Last updated on Aug 12, 2024.

© 2021–2024 Justin Bois. With the exception of pasted graphics, where the source is noted, this work is licensed under a Creative Commons Attribution License CC-BY 4.0. All code contained herein is licensed under an MIT license.

