Concept of Bayesian Updating in Modelling

Hello everyone,

I would like some clarification on the concept of Bayesian updating in the context of modelling.

Suppose I fit a Poisson regression model to report relative risk ratios for a set of covariates from an initial dataset. Later on I receive follow-up data (let’s assume at an interval of once every 5-8 months) and use the previously fitted Poisson model, informed by its estimates, to again infer relative risk ratios for those same variables from the new follow-up dataset.

My questions are: 1.) Does the new result from the updating render the previous result useless? Or can the results from both the initial fit and the update be reported?

2.) Can results from repeatedly updating the model on newer and newer data be compiled and reported in a temporal fashion, or is this a critical error?

Apologies for asking such dodgy questions. Any clarifications are much appreciated.


You can do this exactly only if you have a conjugate model where the posterior is available in closed form. The closed-form posterior can then be used directly as the prior for future models.
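As a sketch of the conjugate case for a single Poisson rate (not the regression setting; the prior and counts below are made up for illustration): with a Gamma(a, b) prior on the rate \theta and observed counts, the posterior is Gamma(a + sum(y), b + n) in closed form, so it can be passed along as the prior for the next batch.

```python
def gamma_poisson_update(a, b, counts):
    """Gamma-Poisson conjugate update: return the posterior Gamma(shape, rate)
    parameters after observing iid Poisson counts under a Gamma(a, b) prior."""
    return a + sum(counts), b + len(counts)

# Toy numbers: an initial batch, then a follow-up batch arriving later.
a0, b0 = 2.0, 1.0                                   # prior Gamma(2, 1) (assumed)
a1, b1 = gamma_poisson_update(a0, b0, [3, 5, 4])    # posterior after batch 1
a2, b2 = gamma_poisson_update(a1, b1, [6, 2])       # batch-1 posterior as prior for batch 2
print(a2, b2)  # prints "22.0 6.0" -- same as updating on all five counts at once
```

Note that updating twice lands in the same place as one update on the pooled counts, which is the point made algebraically below.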

If you don’t have that, the easiest thing to do is just refit with all of the data, which gives the same answer. If you have data y_1 and y_2 with parameter \theta, you might want to use the posterior

p(\theta | y_1) \propto p(y_1 | \theta) \cdot p(\theta)

as the prior for data y_2, which looks like this:

p(\theta | y_2, y_1) \propto p(\theta | y_1) \cdot p(y_2 | \theta).

If you unfold the “prior” here, you get this:

p(\theta | y_1) \cdot p(y_2 | \theta) \propto p(y_1 | \theta) \cdot p(\theta) \cdot p(y_2 | \theta) = p(y_1, y_2 | \theta) \cdot p(\theta).

In other words, you achieve the same effect by just fitting both y_1 and y_2 at the same time.
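You can also check this identity numerically without conjugacy. Here’s a minimal sketch on a grid for a single Poisson rate \theta (again not the full regression), with a made-up Exponential(1) prior and toy counts: normalizing the batch-1 posterior and using it as the prior for batch 2 matches fitting both batches jointly.

```python
import math

def poisson_loglik(theta, data):
    # Log-likelihood of iid Poisson counts at rate theta.
    return sum(y * math.log(theta) - theta - math.lgamma(y + 1) for y in data)

def normalize(log_weights):
    # Exponentiate (stably) and normalize to a discrete distribution on the grid.
    m = max(log_weights)
    w = [math.exp(l - m) for l in log_weights]
    s = sum(w)
    return [x / s for x in w]

grid = [0.1 * k for k in range(1, 201)]      # grid of theta values (assumed range)
y1, y2 = [3, 5, 4], [6, 2]                   # two batches of toy counts
log_prior = [-t for t in grid]               # Exponential(1) prior, up to a constant

# Sequential: posterior after y1, then used as the prior for y2.
post1 = normalize([lp + poisson_loglik(t, y1) for t, lp in zip(grid, log_prior)])
post2 = normalize([math.log(p) + poisson_loglik(t, y2) for t, p in zip(grid, post1)])

# Joint: fit both batches at the same time.
joint = normalize([lp + poisson_loglik(t, y1 + y2) for t, lp in zip(grid, log_prior)])

assert all(abs(a - b) < 1e-9 for a, b in zip(post2, joint))
```

The assertion passing is just the unfolding above: the normalizing constant of the intermediate posterior cancels when you normalize again.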

To answer your questions (1) and (2), which feel like the same question to me: yes, you can report in temporal fashion. I’m not sure what kind of error you are worried about. This isn’t hypothesis testing, so you’re not going to mess up calibration. I’m also not sure what you mean by useless. What you’ll see is how the estimate of \theta evolves in the face of increasing data, which is what the ML folks call a “learning curve.”
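A sketch of that learning-curve view, reusing the conjugate Gamma-Poisson setup (the prior and the batches are made up; in practice each batch would be a follow-up dataset): report the estimate after each batch and watch it evolve.

```python
# Posterior mean of a Poisson rate under a Gamma(a, b) prior is (a + sum y) / (b + n),
# so tracking it across batches gives the sequence of estimates to report.
a, b = 2.0, 1.0                               # prior Gamma(2, 1) (assumed)
batches = [[3, 5, 4], [6, 2], [4, 4, 5, 3]]   # toy follow-up datasets

means = [a / b]                               # start from the prior mean
for batch in batches:
    a += sum(batch)
    b += len(batch)
    means.append(a / b)                       # estimate after this batch

print(means)  # one entry per reporting interval; nothing here invalidates earlier entries
```

Each entry in `means` is a legitimate estimate given the data available at that time, which is why reporting the whole sequence is fine rather than an error.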


Thank you very much for the response and clarification.