Hi,

I am trying to compare two models using the loo package.

Number of data points = 600,000; post-warmup iterations per chain = 2,000; number of chains = 10.

To compute the log likelihood for all draws, I need a matrix of size 600K x 20K (600,000 observations by 2,000 iterations x 10 chains = 20,000 draws). At double precision that is roughly 96 GB, so it would take a very long time and far too much memory.

Any recommendations to make this more efficient?

Can I use only a small subset of the iterations instead of all 2,000? Any other suggestions?
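To make the question concrete, here is a minimal sketch (in Python with NumPy rather than the loo package itself; the Gaussian toy model, the sizes, and all names are hypothetical stand-ins for my actual model) of the two workarounds I am considering: thinning the draws, and computing the log-likelihood matrix in float32 chunks over observations so the full block is never materialized at double precision:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for 20,000 posterior draws of a location parameter
# (hypothetical model: y ~ Normal(mu, 1)); in practice these come
# from the fitted model. n_obs is scaled down for the demo.
n_draws, n_obs = 20_000, 5_000
mu_draws = rng.normal(0.0, 0.1, size=n_draws)
y = rng.normal(0.0, 1.0, size=n_obs)

# 1) Thin the draws: keep every 20th draw -> 1,000 draws.
#    With well-mixed chains this mainly adds some Monte Carlo error.
thin = 20
mu_thin = mu_draws[::thin]

# 2) Build the (draws x observations) log-likelihood matrix in
#    float32, one chunk of observations at a time, which halves
#    memory versus float64 and bounds the temporary workspace.
def loglik_matrix(mu, y, chunk=1_000):
    out = np.empty((mu.size, y.size), dtype=np.float32)
    for start in range(0, y.size, chunk):
        yc = y[start:start + chunk]
        # Normal(mu, 1) log density, broadcast to (draws, chunk)
        out[:, start:start + chunk] = (
            -0.5 * np.log(2 * np.pi)
            - 0.5 * (yc[None, :] - mu[:, None]) ** 2
        )
    return out

ll = loglik_matrix(mu_thin, y)
print(ll.shape)  # (1000, 5000)
```

With the real sizes, thinning by 20 would shrink the matrix from 20,000 x 600,000 to 1,000 x 600,000, i.e. about 2.4 GB in float32 instead of ~96 GB in float64. My question is whether this kind of thinning is statistically acceptable for loo comparisons.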

Thanks!