Is it possible to get positive and negative values from the "loo_compare" function?

Hi!

I would like to know whether it's possible to get both positive and negative values when comparing different models with the \texttt{loo_compare()} function, specifically in the \texttt{elpd_diff} column.
I read the thread Positive values for elpd_loo? but I'm still a little confused by the answer given there.

The context is the following: I'm comparing 4 spatial models, 2 of them assuming a Gamma response and the other 2 assuming a Skew normal response. One of the metrics for deciding which model is better (based on predictive performance) is the estimated difference in their expected predictive accuracy.
To make the comparison valid, I applied the Jacobian correction to the Skew normal models, in this example \texttt{loo_2} and \texttt{loo_4}:

loo_1 = loo(log_lik_1, cores = no_cores)
loo_2 = loo(log_lik_2, cores = no_cores)
loo_3 = loo(log_lik_3, cores = no_cores)
loo_4 = loo(log_lik_4, cores = no_cores)

loo_2_with_jacobian <- loo_2
loo_2_with_jacobian$pointwise[,1] <- loo_2_with_jacobian$pointwise[,1] - log(2*sqrt(data$y))

loo_4_with_jacobian <- loo_4
loo_4_with_jacobian$pointwise[,1] <- loo_4_with_jacobian$pointwise[,1] - log(2*sqrt(data$y))

comp <- loo_compare(loo_1, loo_2_with_jacobian, loo_3, loo_4_with_jacobian)
print(comp, digits = 2)

Here log(2*sqrt(data$y)) is the Jacobian correction for the transformed response variable (\sqrt{y}) in the Skew normal models. The comparison results are the following:
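Just to spell out where that term comes from (standard change of variables): if the model gives a density p_Z(z) for the transformed response z = \sqrt{y}, the implied density on the original scale is

p_Y(y) = p_Z(\sqrt{y}) \left| \frac{dz}{dy} \right| = p_Z(\sqrt{y}) \, \frac{1}{2\sqrt{y}},

so the pointwise log-likelihood on the y scale is \log p_Z(\sqrt{y}) - \log(2\sqrt{y}), which is exactly the term subtracted from the pointwise \texttt{elpd_loo} values above.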

       elpd_diff se_diff   elpd_loo  se_elpd_loo p_loo     se_p_loo  looic     se_looic 
model4      0.00      0.00  -7160.01     56.88       37.82      2.34  14320.03    113.76
model2    -42.08     12.28  -7202.09     57.02       27.86      1.52  14404.18    114.04
model1     51.04     25.25 -16846.08     60.94       45.16      3.55  33692.16    121.89
model3     44.29     23.84 -16852.83     60.93       41.36      3.45  33705.67    121.86

Looking at these results, "model4" is the best model in terms of predictive performance. However, the next "best" model is "model2", which has a negative \texttt{elpd_diff}, while the rest have positive values. The other quantities of interest, such as \texttt{elpd_loo} or \texttt{se_elpd_loo}, look more reasonable in the comparison, but I'm still intrigued by that sign pattern in \texttt{elpd_diff}.

Could someone help me understand why this is happening? Maybe there is some theoretical aspect that I'm not considering?

Note: Unfortunately, the code is written in \texttt{TMB} and I'm using \texttt{tmbstan} for the inference, so the code itself may not be much help here. However, if you need to understand the models themselves, I'm happy to explain them in more detail.

Thank you in advance!

It is likely that something goes wrong when the Jacobian correction is made manually: only the pointwise results are changed, not the summary slots of the loo object, so the loo_compare output is not correct. I'll investigate this.

Hi Aki

Thanks for your help!
Just to give you a little more information: here is the output when I compare the models without the Jacobian correction:

       elpd_diff se_diff   elpd_loo  se_elpd_loo p_loo     se_p_loo  looic     se_looic 
model4      0.00      0.00  -7160.01     56.88       37.82      2.34  14320.03    113.76
model2    -42.08     12.28  -7202.09     57.02       27.86      1.52  14404.18    114.04
model1  -9686.07     29.51 -16846.08     60.94       45.16      3.55  33692.16    121.89
model3  -9692.82     28.32 -16852.83     60.93       41.36      3.45  33705.67    121.86

As you can see, the ordering is the same ("model4" has the best predictive performance), but the difference between that model and, for example, "model1" is now huge in numerical terms.

Finally, all the models converge: their \hat{R} values are < 1.01, etc.
Thanks!

I figured it out. I had tested the Jacobian adjustment only in the two-model case, where it works. With more than two models, the list is ordered based on the summary stored in a separate slot of the loo object, and that slot is not updated when you modify the pointwise results.

Can you try with

loo_2_with_jacobian <- loo_2
loo_2_with_jacobian$pointwise[,1] <- loo_2_with_jacobian$pointwise[,1] - log(2*sqrt(data$y))
loo_2_with_jacobian$estimates["elpd_loo",] <- loo:::table_of_estimates(loo_2_with_jacobian$pointwise[,"elpd_loo", drop=FALSE])

loo_4_with_jacobian <- loo_4
loo_4_with_jacobian$pointwise[,1] <- loo_4_with_jacobian$pointwise[,1] - log(2*sqrt(data$y))
loo_4_with_jacobian$estimates["elpd_loo",] <- loo:::table_of_estimates(loo_4_with_jacobian$pointwise[,"elpd_loo", drop=FALSE])

Note that this does not yet touch looic (which I don't recommend displaying anyway).

If this works, and there is often a need for the Jacobian adjustment, I can also add a function that would make this easier and update all the slots in a loo object.
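As a rough sketch of what such a helper might look like (the function name is hypothetical, and like the snippet above it only refreshes the \texttt{elpd_loo} row of the \texttt{estimates} slot, not \texttt{looic}):

loo_jacobian_adjust <- function(loo_obj, log_jacobian) {
  # Add the log Jacobian of the response transformation to the
  # pointwise elpd_loo values, then recompute the summary row
  # that loo_compare() uses for ordering the models.
  adjusted <- loo_obj
  adjusted$pointwise[, "elpd_loo"] <- adjusted$pointwise[, "elpd_loo"] + log_jacobian
  adjusted$estimates["elpd_loo", ] <-
    loo:::table_of_estimates(adjusted$pointwise[, "elpd_loo", drop = FALSE])
  adjusted
}

# usage, matching the sqrt(y) transformation in this thread:
# loo_2_with_jacobian <- loo_jacobian_adjust(loo_2, -log(2 * sqrt(data$y)))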

Hi Aki

Thanks! It works perfectly now. The values in the output of the \texttt{loo_compare} function were updated and they look much better.

It could be a good idea to incorporate such a function for updating the values used in model comparison when the Jacobian correction is done manually. We are commonly interested in comparing models with different structures, including transformations of the response variable, so I think it would help users of the \texttt{loo} package for Stan models. If I can help in any way, please tell me.

Best,


I made an issue (Add support for Jacobian adjustment · Issue #265 · stan-dev/loo · GitHub) so that this will not get forgotten.
