Model uncertainty using Stan


#1

Hello all,

I am working on a project that is modeling life expectancies for several countries and years.

We are trying to assess, using a Bayesian approach, the model uncertainty of an estimate (e.g., the effect of GDP on life expectancy), not only with respect to the specification within a particular model class (say, OLS) but also across different model classes (e.g., OLS versus Poisson when predicting mortality rates).

For instance, I fit an OLS model and a Poisson model, predict values, and estimate the first difference for a change in the key independent variable. I get distributions for those differences.

  1. I could combine those distributions weighting them by a performance measure. Does that make sense? It would be an ensemble process, but not focused on prediction per se but on the first differences for the variable of interest.
  2. Does anyone have experience trying to do something like this?

Any ideas would be very appreciated!

Thank you so much,
Sebastian


#2

By “OLS” do you mean linear regression? Usually optimization with least squares doesn’t give you uncertainty.

I’m not sure what you mean here.

You can, in general, define derived quantities in terms of other random variables in a Stan model and get posterior uncertainty.


#3

Thanks for your answer, Bob.

OLS = linear regression.

Sorry, my question was not clear. What we are trying to do is some kind of model averaging across different classes of models (linear, non-linear, Box-Cox). I haven’t seen applications of that kind of model averaging or stacking using a Bayesian approach.

We don’t know how to combine these models because some of them transform the dependent variable (e.g., Box-Cox), so using WAIC/LOO directly wouldn’t be appropriate. We can always recover the original scale of the data to assess model fit and predictive error using MSE, but we are not sure how to weight the models.

In machine learning, for instance, one strategy is to use the predictions of the individual models as predictors for the outcome and get weights via linear regression or random forests. But we would like some sense of model uncertainty, and our dataset is not big (~200 observations, countries by year).

Any ideas or suggestions?

Thank you so much!
Sebastian


#4

How about Bayesian stacking https://projecteuclid.org/euclid.ba/1516093227 ?

LOO (and WAIC, but there is no need to use WAIC if you can do PSIS-LOO) is appropriate if you take the Jacobian of that transformation into account when computing the log predictive density. Once you have the LOO log score, you can do Bayesian stacking as described in the article above.
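Concretely (a sketch of the adjustment, not quoted from the article): if a model is fit to $z = \log(y)$ with density $p_z$, the predictive density on the original scale of $y$ picks up the Jacobian $|dz/dy| = 1/y$, so the pointwise log predictive density is

$$
\log p(y_n \mid \theta) = \log p_z(\log y_n \mid \theta) - \log y_n,
$$

i.e., each entry of the log-scale model’s log_lik matrix needs a $-\log y_n$ correction before computing LOO.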


#5

Thanks for your reply!

That means that if I do something like:

m1: y ~ x1+ x2
m2: log(y) ~ x1 + x2 # dependent variable transformation

can I get model weights using:

log_lik_list <- list()
log_lik_list[[1]] <- rstan::extract(m1)[["log_lik"]]
log_lik_list[[2]] <- rstan::extract(m2)[["log_lik"]]

loo::loo_model_weights(log_lik_list, method = "stacking")

Thank you!


#6

No. You need to take the Jacobian of the log(y) transformation into account when computing the log predictive density. See my comment in the thread “Model comparison via PSIS-LOO for 3-level hierarchical growth model”.
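A minimal numeric sketch of that adjustment (hypothetical values, written in Python just for illustration): for the model fit to log(y), subtract log(y_n) from each pointwise log density before passing the matrix to the stacking computation.

```python
import math

# Hypothetical values: S = 2 posterior draws, N = 3 observations.
# log_lik_log_scale[s][n] is the log density of log(y[n]) under draw s
# from the model fit to log(y).
y = [1.5, 2.0, 3.5]
log_lik_log_scale = [[-0.9, -1.1, -1.3],
                     [-1.0, -1.2, -1.4]]

# Jacobian of z = log(y): |dz/dy| = 1/y, so subtract log(y[n]) to get
# the pointwise log density on the original scale of y.
log_lik_y_scale = [[ll - math.log(yn) for ll, yn in zip(row, y)]
                   for row in log_lik_log_scale]
```

The resulting matrix is then comparable to the log_lik matrix of a model fit directly to y.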


#7

Great, thanks!


#8

You can just make a mixture model.


#9

Section 4.3 of the stacking paper demonstrates that a mixture model can perform badly with small data (at least in the M-open case).


#10

Thanks for the pointer. I’ll take a look.