Out of memory when calculating loo with a large log_lik (8G)

Yes, sure. I described the model in this post: How to calculate log_lik in generated quantities of a multivariate regression model - #6 by Michelle.

The model is not very complex. I have ~350 individuals, each with ~2000 observations. The 2000 observations of one individual are not independent but show some autocorrelation, so I model them with a linear model with a known design matrix plus AR(1) and Gaussian residuals. The design-matrix parameters of all individuals follow a hierarchical structure, and I expect that structure to regularize the individual parameter estimates, since 2000 observations per individual are still fairly noisy. So the data matrix is modeled as a multivariate regression in which the design matrix is known from our domain knowledge and the coefficients are hierarchical. That is why I have more than 4000 parameters: the design-matrix and AR(1) parameters for each individual plus the higher group-level parameters.

Since we can make different assumptions about the hierarchical structure, I have several candidate models, and I hope model comparison can tell me which one generalizes better or has better predictive ability. I also have one model that does massive univariate regression without any hierarchical structure, to check whether each individual's data alone is enough to estimate the parameters of interest well, in which case we would not need a multilevel model at all.

You previously suggested considering K-fold cross-validation, but given the hierarchical structure I am not sure I can hold out a subset of individuals when fitting, since it is hard to decide how to partition the data. Alternatively, I could hold out a subset of observations while keeping all individuals in every fold, but that means refitting the model many times, and I am not sure whether the autocorrelation would cause problems.
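Just to make the two partitioning options concrete, here is a minimal sketch of how I think the fold assignments could be built with the helpers in the loo package. This is only an illustration of what I mean, not something I have run; `person` is a placeholder for the vector of individual IDs (one entry per observation) and `K = 10` is arbitrary.

```r
library(loo)

K <- 10
# person: integer/factor vector of length N (total observations),
# giving the individual ID for each row of the long-format data

# Option 1: hold out whole individuals per fold
# (all observations of an individual land in the same fold)
folds_by_individual <- kfold_split_grouped(K = K, x = person)

# Option 2: keep every individual represented in every fold and
# hold out a stratified subset of observations instead
folds_by_observation <- kfold_split_stratified(K = K, x = person)

# sanity check: each fold should contain roughly N / K observations
table(folds_by_individual)
table(folds_by_observation)
```

My worry about Option 2 is exactly the autocorrelation: randomly removing observations from the middle of an AR(1) series might not be a clean held-out set.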

May I ask why we don't need loo at all in this case? What should I consider if I want to do model comparison?

Thank you in advance @jonah. From what I checked, memory usage was only a few MB beforehand; during the loo computation it can go up to ~90 GB, and 12-20 GB remain in use after the calculation, even though the final loo result itself is small (a few tens of MB). I cannot free those 12-20 GB in R afterwards: gc() did not help, and ls() shows that no object in the session is larger than 1 GB. So I guess it is either related to loo or an inherent issue with R? This makes it impossible to run loo for several models in one script, because the leftover memory accumulates and the session eventually aborts. For instance, with 4 models the first 3 accumulate around 60 GB, and loo itself needs ~90 GB to process an 8 GB log_lik (using more cores is basically not possible in this case). Note that I already did rm() followed by gc() to free LLmat and rel_n_eff.
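In case it clarifies the workaround I am considering: a minimal sketch that runs each model's loo computation in its own R process via callr, so the 90 GB peak and the leftover 12-20 GB are returned to the OS when the child process exits, and only the small loo objects ever live in the main session. The file names are placeholders, and the extraction step assumes rstan fits saved with saveRDS; it would need to be adapted for my actual setup.

```r
library(callr)

# placeholder paths to the saved fits
fit_files <- c("model1.rds", "model2.rds", "model3.rds", "model4.rds")

loo_results <- lapply(fit_files, function(path) {
  # run the extraction + loo computation in a fresh R session;
  # all of its memory is released when the process exits
  callr::r(function(path) {
    fit   <- readRDS(path)
    LLmat <- loo::extract_log_lik(fit, merge_chains = FALSE)
    reff  <- loo::relative_eff(exp(LLmat))
    loo::loo(LLmat, r_eff = reff, cores = 1)
  }, args = list(path = path))
})

# compare using only the small loo objects kept in the main session
loo::loo_compare(loo_results)
```

Would something like this be a reasonable way around the accumulation, or is there a better option within loo itself?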