Is it possible to "thin" an already fitted model?

Having successfully fitted a large model, sometimes when I run add_loo I get

Error: vector memory exhausted (limit reached?)

To get add_loo to work, I have to refit the model with a smaller number of iterations – though it’s guesswork as to how many iterations I need to reduce it to.

In brms, is it possible to “thin” an already fitted model – i.e., discard some subset of the samples so that add_loo does not exceed memory?

I know that there is a pointwise = TRUE option in add_loo for when memory is exceeded, but using that option takes a very long time (often longer than refitting the original model with fewer iterations).
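For context on why memory runs out: the log-likelihood matrix that loo builds holds one double per posterior draw per observation, so its size can be estimated up front. A rough sketch (the draw and observation counts below are made-up examples):

```r
# Rough memory estimate for the log-likelihood matrix that loo materialises:
# one double (8 bytes) per posterior draw per observation.
n_draws <- 4000   # e.g. 4 chains x 1000 post-warmup iterations (hypothetical)
n_obs   <- 50000  # number of observations in the data (hypothetical)
bytes   <- n_draws * n_obs * 8
cat(sprintf("log-lik matrix: ~%.2f GB\n", bytes / 1024^3))
```

With numbers like these the matrix alone is on the order of 1.5 GB before loo does any work on it, which is why halving the number of draws used can be enough to get under the limit.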


I had to compute stacking weights for models fitted with different numbers of iterations. It did not work, because the log-likelihood matrices had different dimensions.

To avoid refitting the models, I used the “extract_log_lik” function, in which you can specify the number of iterations. I then computed relative efficiencies and stacking weights directly with functions from the “loo” package.
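A sketch of that workflow with the loo package (the stanfit objects here are hypothetical placeholders; note that extract_log_lik operates on stanfit objects, whereas for a brmsfit you would get the matrix from log_lik(), as discussed below):

```r
library(loo)

# Hypothetical stanfit objects for two candidate models.
# merge_chains = FALSE keeps the iterations x chains x observations array
# that relative_eff() needs.
ll1 <- extract_log_lik(stanfit1, merge_chains = FALSE)
ll2 <- extract_log_lik(stanfit2, merge_chains = FALSE)

# Relative efficiencies of exp(log-lik), required for MCMC draws.
r_eff1 <- relative_eff(exp(ll1))
r_eff2 <- relative_eff(exp(ll2))

loo1 <- loo(ll1, r_eff = r_eff1)
loo2 <- loo(ll2, r_eff = r_eff2)

# Stacking weights computed directly from the loo objects.
loo_model_weights(list(loo1, loo2), method = "stacking")
```

The same loo objects also carry the LOOIC estimates, so this covers both the stacking use case and plain model comparison.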

It seems this approach, adapted to compute LOOIC instead of stacking weights, might suit your needs!

All the best,

Hi Lucas

I don’t see an “iterations”-type argument in extract_log_lik, but I am probably missing something obvious or misunderstanding. Could you walk me through the process?

My mistake! I was talking about the “log_lik.brmsfit” function. It has an “nsamples” argument.

If needed, I will post code tomorrow!

Aah thanks – so having run something like LogLiks1 <- log_lik(MyModel1, nsamples = 500) and LogLiks2 <- log_lik(MyModel2, nsamples = 500), can I then just run loo(LogLiks1, LogLiks2)?

Arguments of loo may also be passed to add_loo, so you can change the number of samples used by setting the nsamples argument. You will have to install the GitHub version of brms, though, as previously changing nsamples was not allowed in some of the wrappers of loo.
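Putting that together, something like the following should work (a sketch only; it assumes the GitHub development version of brms mentioned above, and MyModel1/MyModel2 are the previously fitted brmsfit objects):

```r
# Assumes the GitHub development version of brms, as noted above:
# remotes::install_github("paul-buerkner/brms")
library(brms)

# Compute and store LOO using only a subset of the posterior draws,
# keeping the log-likelihood matrix small enough to fit in memory.
MyModel1 <- add_loo(MyModel1, nsamples = 500)
MyModel2 <- add_loo(MyModel2, nsamples = 500)

# Or compare the models directly without storing the criterion:
loo(MyModel1, MyModel2, nsamples = 500)
```

Since the draws are subsampled rather than the models refit, this avoids both the guesswork about iteration counts and the cost of pointwise = TRUE.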

Awesome – I have just tried this and it works! This will be a huge time saver for me – thank you.