loo package


Hello everyone! I am new to statistics and have an issue with using the loo package.
After implementing my model in Stan, I wanted to use the loo package to check
the predictive performance of my model.
The problem is that the estimated elpd and lpd are very large in magnitude; for example, elpd is -19708 with
a standard error of 248.5.
Does this mean the model is inaccurate, or can I still use it?
All Pareto k values are good, less than 0.5.

Thanks in advance


The elpd is primarily intended for comparing models fit to the same outcome variable. If you have a lot of data, the elpd can be very large in absolute magnitude, yet still larger or smaller than the elpd of another model.
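To see why the absolute magnitude grows with the amount of data, here is a minimal sketch (the numbers are hypothetical, not from the model in question) that computes the lpd from a draws-by-observations matrix of pointwise log-likelihoods. With 1000 observations each contributing roughly -5, the total lands near -5000; the useful signal is the difference between models, not the magnitude itself.

```python
import math

def lpd(log_lik):
    """Log pointwise predictive density: for each observation i,
    take the log of the average (over S posterior draws) of
    exp(log_lik[s][i]), then sum over observations.
    log_lik is an S x N list of lists (draws x observations)."""
    S = len(log_lik)
    N = len(log_lik[0])
    total = 0.0
    for i in range(N):
        col = [log_lik[s][i] for s in range(S)]
        m = max(col)  # log-sum-exp trick for numerical stability
        total += m + math.log(sum(math.exp(v - m) for v in col) / S)
    return total

# Toy example: two hypothetical "models" scored on the same 1000
# observations, with per-draw pointwise log densities near -5.
log_lik_a = [[-5.0 + 0.01 * s for _ in range(1000)] for s in range(4)]
log_lik_b = [[-4.8 + 0.01 * s for _ in range(1000)] for s in range(4)]
print(lpd(log_lik_a) < lpd(log_lik_b))  # True: model b fits slightly better
```

Both totals are around -5000 simply because there are 1000 observations; only the comparison between them is informative.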


Thank you!
I was worried about that because the estimated effective number of parameters p_loo was more than 500, which seemed impossible for my dataset, and the SE was large.
Thanks again


The estimator of p_loo can be flaky (it is supposed to be improved in loo 2.0), but that doesn't mean there is necessarily anything wrong with the elpd.




Yes, these are reasons to be worried. Impossible p_loo values and large SEs are usually associated with high Pareto k values or with an error in the log_lik computation. Since all your k values were below 0.5, I would suspect an error in your code. Could you tell us more about your data and model? For example, how many observations n, and what model and likelihood? If you can show your model code, it would be easiest to check that there is nothing suspicious there.


Sure, there are probably mistakes in my code; I believe I got the log_lik definition wrong.
I have 4000 observations and 6 predictors: 5 categorical and one continuous.
The model is a logistic regression with a binomial distribution, and I used the binomial as the likelihood.
The model code is as follows:

data {
  int N; // number of observations
  int n[N]; // trials
  int y[N]; // successes
  int M; // number of predictors
  matrix[N, M] x; // predictor matrix
}
parameters {
  vector[M] Beta;
  real<lower=0> sigma; // between-observation scale
  real gamma[N]; // random effect
}
transformed parameters {
  vector[N] mu; // probability
  for (i in 1:N) {
    mu[i] = inv_logit(dot_product(x[i], Beta) + gamma[i]);
  }
}
model {
  Beta ~ normal(0, 1000); // priors are vague
  sigma ~ inv_gamma(0.001, 0.001);
  gamma ~ normal(0, sigma);
  y ~ binomial(n, mu); // likelihood
}
generated quantities {
  real log_lik[N];
  for (i in 1:N) {
    log_lik[i] = binomial_lpmf(y[i] | n[i], mu[i]);
  }
}

Code seems valid (there are ways to make it faster, and priors could be better).

The total number of parameters in your model is M+N+1=4007, so observing p_loo around 500 is not impossible; gamma is also a parameter. Without a prior on gamma, p_loo would be around 4000, but the prior is regularizing it. The prior is also regularizing enough that PSIS-LOO works, that is, the posterior of gamma[i] is not too different with or without the observation (y[i], n[i]).

To summarize: Based on the code and values you gave, p_loo>500 is not impossible.


Thanks for your reply.
Does this mean that by changing the priors I could get more proper p_loo values?


The current p_loo>500 is proper. I think you have enough data to learn the posterior of the parameters of the prior for gamma well, and you simply have such overdispersion compared to the binomial that the distribution of gamma is wide, which is reflected in p_loo>500. Since the Pareto k diagnostic was ok, this is a completely proper result.
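As a rough illustration of that overdispersion (hypothetical numbers, not your data), a normal random effect on the logit scale inflates the count variance well past the pure binomial n*p*(1-p), which is exactly the situation where a wide gamma distribution is needed:

```python
import math
import random

random.seed(1)

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

def rbinom(n, p):
    # Simple binomial draw as a sum of n Bernoulli trials.
    return sum(random.random() < p for _ in range(n))

n, eta, sigma = 20, 0.0, 1.5  # hypothetical trial count, intercept, RE scale

# Counts from a plain binomial vs. a binomial with a per-observation
# random effect gamma ~ normal(0, sigma) added on the logit scale.
draws_plain = [rbinom(n, inv_logit(eta)) for _ in range(5000)]
draws_over = [rbinom(n, inv_logit(eta + random.gauss(0, sigma)))
              for _ in range(5000)]

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Pure binomial variance is n*p*(1-p) = 5; the random effect inflates it.
print(var(draws_plain), var(draws_over))
```

If your data look like the second sample but the model had no gamma, the binomial likelihood could not absorb the extra spread; with gamma in the model, each gamma[i] soaks up part of it, which is why p_loo is large.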


Ok, got it. Thank you


There was a mistake in the code above…
I mean my mistake.
The problem is that I declared the likelihood as binomial and the log likelihood as binomial_lpmf.
That seems right; however, I realized that I got good k values when I wrote the log likelihood as binomial_logit_lpmf.
Can I use it this way, or am I doing something incorrect?


That was correct.
Since you had mu[i] = inv_logit(dot_product(x[i], Beta) + gamma[i]);
it's correct to use y ~ binomial and log_lik[i] = binomial_lpmf.
If you use binomial_logit_lpmf, then you don't need the inv_logit separately.
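The two parameterizations give the same log pmf when the logit link is applied consistently. Here is a small numerical check (function names are my own, written to mirror Stan's binomial_lpmf and binomial_logit_lpmf): with p = inv_logit(eta), the probability form and the logit form agree.

```python
import math

def binomial_logpmf(y, n, p):
    # log C(n, y) + y*log(p) + (n - y)*log(1 - p), like Stan's binomial_lpmf
    return (math.lgamma(n + 1) - math.lgamma(y + 1) - math.lgamma(n - y + 1)
            + y * math.log(p) + (n - y) * math.log1p(-p))

def binomial_logit_logpmf(y, n, eta):
    # Same pmf parameterized directly by the logit eta = log(p / (1 - p)):
    # log C(n, y) + y*eta - n*log(1 + exp(eta)), like binomial_logit_lpmf
    return (math.lgamma(n + 1) - math.lgamma(y + 1) - math.lgamma(n - y + 1)
            + y * eta - n * math.log1p(math.exp(eta)))

eta = 0.7                     # a linear predictor value
p = 1 / (1 + math.exp(-eta))  # inv_logit(eta)
a = binomial_logpmf(3, 10, p)
b = binomial_logit_logpmf(3, 10, eta)
print(abs(a - b) < 1e-12)  # True: the two forms agree
```

So writing binomial_logit_lpmf(y[i] | n[i], mu[i]) with mu already on the probability scale applies the link twice, which silently evaluates the wrong likelihood; the logit form should be given the linear predictor, not mu.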


I knew that, but with binomial_lpmf I did not get satisfying Pareto k values.
Would it be better to use k-fold cross-validation in this case?
Thank you very much for your reply.