Hi all,
I wanted to publicize a short course I’ll be teaching on topics related to Bayesian causal inference. See below for course title, abstract, outline, and other information.
The course will be held in-person at the American Causal Inference Conference (ACIC) 2026 meeting in Salt Lake City, Utah, USA from 1-5pm local time on Monday, May 11.
Conference registration is required; more information here: 2026 Meeting – SOCIETY FOR CAUSAL INFERENCE
More about me here: https://stablemarkets.netlify.app/
Course Title:
Stress-Testing Assumptions: Bayesian Methods for Sensitivity Analysis in Causal Inference
Course Abstract:
Observational studies are often conducted to estimate causal effects of biomedical treatments. Such analyses invariably rely on statistical and causal identification assumptions. The former are required in settings with imperfectly observed data (e.g. measurement error or missing data) to connect the complete data distribution to the observed data distribution. The latter are required to connect the observed data distribution to the distribution of potential outcomes. In general, both sets of assumptions are untestable.
When these assumptions do not hold, Bayesian sensitivity analyses allow us to formally encode subjective beliefs about the violation structure via prior distributions. Causal inferences are made using an updated posterior that reflects uncertainty about these violations. Moreover, nonparametric approaches allow the data to drive posterior beliefs about identifiable aspects of the model, while letting priors drive posterior beliefs for the non-identifiable aspects.
This course teaches the methodological and computational concepts behind Bayesian sensitivity analyses. We cover several examples in point-treatment settings, including treatment misclassification, unmeasured confounding, and missing not-at-random outcomes. We walk through implementations using synthetic data, with computing done in Stan, a widely used, publicly available platform for fitting Bayesian models. Both parametric and nonparametric models are covered.
Course Outline:
The course comprises three parts:
Part 1 – Bayesian causal estimation in an ideal point-treatment setting where we have completely observed data and the usual causal assumptions hold. Key concepts covered include:
• Basics of Bayesian inference: priors, likelihoods, and posteriors.
• Bayesian implementation of the g-formula.
• Basics of the Stan programming language.
• Implementation example of the g-formula in Stan.
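For readers curious what Bayesian g-formula standardization looks like computationally: the course implements this in Stan, but as a rough illustration here is a self-contained Python sketch (my own, not course material) using a conjugate Bayesian linear regression with a simulated confounder. For each posterior draw of the outcome-model coefficients, we average predictions over the observed covariates with treatment set to 1 versus 0.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a point-treatment dataset: confounder X, treatment A, outcome Y.
n = 2000
X = rng.normal(size=n)
p = 1 / (1 + np.exp(-0.5 * X))            # treatment probability depends on X
A = rng.binomial(1, p)
Y = 1.0 + 2.0 * A + 1.5 * X + rng.normal(size=n)   # true ATE = 2.0

# Conjugate Bayesian linear regression: N(0, tau^2 I) prior, known sigma^2.
D = np.column_stack([np.ones(n), A, X])   # design: intercept, treatment, confounder
sigma2, tau2 = 1.0, 100.0
Sigma_n = np.linalg.inv(D.T @ D / sigma2 + np.eye(3) / tau2)
mu_n = Sigma_n @ (D.T @ Y / sigma2)

# g-formula standardization per posterior draw: average predictions
# over the empirical covariate distribution with A set to 1 vs 0.
betas = rng.multivariate_normal(mu_n, Sigma_n, size=4000)
D1 = np.column_stack([np.ones(n), np.ones(n), X])
D0 = np.column_stack([np.ones(n), np.zeros(n), X])
ate_draws = (D1 @ betas.T).mean(axis=0) - (D0 @ betas.T).mean(axis=0)

print(ate_draws.mean())  # posterior mean ATE, near the true value 2.0
```

In a linear model this standardization reduces to the treatment coefficient, but the same posterior-draw-then-average recipe carries over directly to nonlinear outcome models.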
Part 2 – We build on Part 1 to allow for assumption violations. Implementation in Stan is discussed throughout. Specifically, we discuss sensitivity analyses for:
• Unmeasured confounding.
• Exposure/treatment misclassification.
• Incomplete outcome information – specifically values that are missing not-at-random in both treatment arms.
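To give a flavor of the mechanics (the course develops these in Stan; the toy numbers below are my own illustrative assumptions, not calibrated to any real problem), a Bayesian sensitivity analysis for unmeasured confounding can place a prior on a non-identified bias parameter and propagate it through the posterior:

```python
import numpy as np

rng = np.random.default_rng(1)

# Data with an UNMEASURED confounder U: the analyst only observes (A, Y).
n = 2000
U = rng.normal(size=n)
A = rng.binomial(1, 1 / (1 + np.exp(-U)))
Y = 2.0 * A + 1.0 * U + rng.normal(size=n)   # naive contrast is biased

# Approximate posterior draws for the naive mean difference.
diff = Y[A == 1].mean() - Y[A == 0].mean()
se = np.sqrt(Y[A == 1].var() / (A == 1).sum() + Y[A == 0].var() / (A == 0).sum())
naive_draws = rng.normal(diff, se, size=10000)

# Prior on the non-identified bias delta = (naive contrast) - (causal effect).
# Beliefs about the violation enter only through this prior; the data
# cannot update it, so its spread flows straight into the causal posterior.
delta_draws = rng.normal(loc=0.8, scale=0.3, size=10000)
causal_draws = naive_draws - delta_draws

lo, hi = np.percentile(causal_draws, [2.5, 97.5])
print(lo, hi)  # wider than the naive interval, reflecting bias uncertainty
```

The key pattern, which recurs for misclassification and missing-not-at-random outcomes as well, is that the posterior interval widens exactly in proportion to how uncertain the prior on the violation is.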
Part 3 – While previous parts use parametric Bayesian models, Part 3 will teach participants the basics of sensitivity analysis with nonparametric Bayesian models. Key concepts covered include:
• Infinite and truncated Dirichlet process mixtures.
• Data augmentation concepts.
• A Stan example of conditional average treatment effect (CATE) estimation using truncated Dirichlet processes with missing not-at-random outcomes.
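For a sense of the truncated Dirichlet process construction referenced above, here is a short stick-breaking sketch in Python (an illustration of the standard construction, not course code): mixture weights are built by repeatedly breaking off Beta-distributed fractions of a unit-length stick, with the final weight absorbing whatever stick remains at the truncation level.

```python
import numpy as np

rng = np.random.default_rng(2)

def truncated_stick_breaking(alpha, K, rng):
    """Mixture weights from a Dirichlet process truncated at K components.

    Draw v_k ~ Beta(1, alpha) and set w_k = v_k * prod_{j<k} (1 - v_j);
    the last break is forced to 1 so the weights sum to one.
    """
    v = rng.beta(1.0, alpha, size=K)
    v[-1] = 1.0                        # truncation: use up the rest of the stick
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - v[:-1])])
    return v * remaining

w = truncated_stick_breaking(alpha=2.0, K=25, rng=rng)
print(w.sum())  # weights sum to one
```

Larger concentration parameters spread mass over more components; in a mixture model each weight multiplies a component kernel, which is how the nonparametric flexibility discussed in Part 3 arises.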
Course Learning Objectives:
Participants can expect to leave the course with the following:
• Understanding of Bayesian inference for causal effects.
• Understanding of the general Bayesian framework for sensitivity analysis.
• Understanding of concrete implementation in Stan.
• References to important foundational papers and textbooks in this area.
The ideal participant has: 1) familiarity with causal inference in point-treatment settings using outcome and treatment-modeling approaches within the frequentist paradigm; 2) understanding of probability at the level of an introductory graduate course; 3) facility with the R programming language.
Prior exposure to the following is helpful but not necessary: 1) facility programming in strongly, statically typed languages with an object-oriented paradigm (e.g. C++, Java); 2) familiarity with Bayesian inference at the introductory graduate or advanced undergraduate level.

