Hi everyone, I have compared fifteen candidate models using both R-hat and LOOIC in RStudio. The winning model had R-hat values below 1.1 for all parameters and the lowest LOOIC score of the fifteen models. Additionally, I performed a posterior predictive check (PPC) with 4K posterior samples to assess whether the winning model fit the actual behavioral data well. During model fitting, each candidate model was fitted with four independent MCMC chains, each run for 1K sampling iterations after 2K warmup iterations, yielding 4K valid posterior samples in total.
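For concreteness, here is a rough sketch of that workflow. It assumes the candidate models are Stan models fit through rstan, with pointwise log-likelihoods saved in a generated quantities variable named log_lik; `model_files` and `stan_data` are placeholders for my actual files and data, not working code for any specific model.

```r
library(rstan)
library(loo)

# Fit each candidate model: 4 chains x 1K post-warmup iterations = 4K draws
fits <- lapply(model_files, function(f) {
  stan(file = f, data = stan_data,
       chains = 4, warmup = 2000, iter = 3000)
})

# Convergence check: the worst R-hat across all parameters should be below 1.1
max_rhat <- sapply(fits, function(fit)
  max(summary(fit)$summary[, "Rhat"], na.rm = TRUE))

# LOOIC for each model from its pointwise log-likelihood array
looics <- sapply(fits, function(fit) {
  ll <- extract_log_lik(fit, parameter_name = "log_lik", merge_chains = FALSE)
  loo(ll, r_eff = relative_eff(exp(ll)))$estimates["looic", "Estimate"]
})
```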
I am wondering whether there are other methods that could provide stronger evidence for model comparison or identification. Some references suggest using model recovery analysis to mitigate the risk of model misidentification. In this approach, for each model, a randomly selected draw from the 4K valid posterior samples is used to generate a synthetic dataset for all valid participants. Each synthetic dataset, produced by a specific generating model, is then fitted by every candidate model, and the best-fitting model is identified by its LOOIC score. This process is repeated 100 times to compute the percentage of cases in which each model is identified as the best model for synthetic data from a particular generating model. A higher percentage assigned to the generating model itself indicates better model identifiability.
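To make sure I have the procedure right, here is the recovery loop I tried to implement, in sketch form. It assumes two hypothetical helper functions I would have to write for my own models: `simulate_data(model, draw)`, which generates a synthetic dataset for all participants from one posterior draw, and `fit_and_looic(model, data)`, which fits one model to one dataset and returns its LOOIC.

```r
n_reps <- 100
models <- paste0("M", 1:6)  # the six representative models
recovery <- matrix(0, nrow = length(models), ncol = length(models),
                   dimnames = list(generating = models, recovered = models))

for (gen in models) {
  for (rep in seq_len(n_reps)) {
    draw  <- sample(4000, 1)              # one of the 4K valid posterior draws
    synth <- simulate_data(gen, draw)     # synthetic data for all participants
    looics <- sapply(models, fit_and_looic, data = synth)
    best  <- models[which.min(looics)]    # lowest LOOIC wins
    recovery[gen, best] <- recovery[gen, best] + 1
  }
}

# Confusion matrix: % of datasets from each generator won by each model;
# high values on the diagonal indicate good identifiability
recovery_pct <- 100 * recovery / n_reps
```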
However, when I attempted to apply this method to six representative models, the R session in RStudio ran out of memory before the process could complete. Given these constraints, I would greatly appreciate any suggestions for more computationally efficient methods for model comparison or identification in Bayesian computational modeling, similar to LOOIC.
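For what it's worth, one memory-saving option I've come across but haven't gotten working yet is loo::loo_subsample(), which approximates PSIS-LOO from a subsample of observations so the full draws-by-observations log-likelihood matrix never has to be held in memory. The sketch below is illustrative only: `llfun`, `behavior_data`, and `posterior_draws` are hypothetical stand-ins, and the simple linear likelihood is not my actual model.

```r
library(loo)

# loo_subsample() wants a pointwise log-likelihood function instead of a
# stored log_lik matrix: given one observation and all posterior draws,
# return a length-S vector of log-likelihood values
llfun <- function(data_i, draws) {
  # data_i: one row of the data; draws: S x 2 matrix (intercept, slope)
  mu <- draws[, 1] + draws[, 2] * data_i$x
  dnorm(data_i$y, mean = mu, sd = 1, log = TRUE)
}

loo_ss <- loo_subsample(llfun,
                        data  = behavior_data,    # data frame, one row per observation
                        draws = posterior_draws,  # S x P matrix of posterior draws
                        observations = 400)       # evaluate LOO on 400 subsampled points
```

I'd be glad to hear whether this, or something else entirely, is the right direction.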
Thank you in advance for your insights!