It seems Stan doesn’t have anything built in for AIC, WAIC, or LOO-CV, particularly for joint models. I was trying to save the log likelihood for my own model, but I had to put the survival log likelihood in the model block, so the observed-data log likelihood cannot be calculated in the generated quantities block.
The lp__ quantity doesn’t work because it is the log posterior density, not the log likelihood needed for AIC and the other criteria.
Does anyone have a good idea how to calculate those values? Also, why do we need these model comparison criteria for Bayesian analysis at all, given that the priors or MCMC algorithms can be quite different?
Since exact LOO-CV requires multiple fits, checking the left-out sample against the prediction each time, it cannot really be built in; instead it requires an external function to put the multiple estimates together.
AIC is not a Bayesian criterion (and arguably neither is BIC); DIC seems to be tolerated around these corners, but WAIC is probably preferred. Whatever the (Bayesian) criterion, you can compute the log likelihood directly using a function like normal_lpdf: either in the transformed parameters block, reusing it in the model block, or afterwards in the generated quantities block if you like the ~ distribution notation in model. That will give you the values necessary to compute the criteria.
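For example, a minimal sketch of the transformed-parameters approach (a toy normal model, not your joint model; all names here are illustrative):

```stan
data {
  int<lower=1> N;
  vector[N] y;
}
parameters {
  real mu;
  real<lower=0> sigma;
}
transformed parameters {
  // pointwise log likelihood, visible to both model and generated quantities
  vector[N] log_lik;
  for (n in 1:N)
    log_lik[n] = normal_lpdf(y[n] | mu, sigma);
}
model {
  mu ~ normal(0, 5);
  sigma ~ normal(0, 2);
  // same likelihood contribution as y ~ normal(mu, sigma),
  // but with the normalizing constants included
  target += sum(log_lik);
}
```

Since transformed parameters are saved in the output, log_lik shows up in the posterior draws automatically; if you’d rather not store it as a transformed parameter, compute it in generated quantities instead.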
Most of the packages in the Stan ecosystem (e.g. the rstanarm::stan_jm implementation of the survival-longitudinal joint model) are integrated with the loo package, which provides both PSIS-LOO (recommended) and WAIC (discouraged) computations.
To have loo work with a custom model, you need to compute a separate log likelihood for each unit you are planning to leave out (this would usually be subjects, but doesn’t have to be). An example of how to do this is at Writing Stan programs for use with the loo package • loo
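A hedged sketch of per-subject log_lik (a toy longitudinal model; in a joint model you would add each subject’s survival term to the same entry):

```stan
data {
  int<lower=1> N;                       // longitudinal observations
  int<lower=1> J;                       // subjects, the unit to leave out
  array[N] int<lower=1, upper=J> subj;  // subject index for each observation
  vector[N] y;
}
parameters {
  vector[J] mu;
  real<lower=0> sigma;
}
model {
  mu ~ normal(0, 5);
  sigma ~ normal(0, 2);
  y ~ normal(mu[subj], sigma);
}
generated quantities {
  // one log_lik entry per subject: sum of that subject's observation-level terms
  vector[J] log_lik = rep_vector(0, J);
  for (n in 1:N)
    log_lik[subj[n]] += normal_lpdf(y[n] | mu[subj[n]], sigma);
}
```

The resulting matrix of log_lik draws is what loo::extract_log_lik() and loo::loo() expect on the R side.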
I am not sure I understand what you mean here. Note that if some value needs to be reused in both the model and generated quantities blocks, you can compute it in transformed parameters; the transformed parameters are available in both subsequent blocks.
You don’t need cross validation for Bayesian analysis; it is just a step that many people have found helpful. There are other criteria you can use to compare models and/or select hypotheses (my current thinking on the topic is at Hypothesis testing, model selection, model comparison - some thoughts). Or you may not need to compare/select at all!
Regarding the “different priors” part: cross validation only tries to estimate which of the models (including their priors) will do best in a given prediction task, so the priors matter only to the extent that they influence the posterior predictions of the models. The Cross Validation FAQ has a bit more detail on when and how you might find CV useful.
Thank you for pointing out the workaround. I will try to calculate the survival log likelihood in the transformed parameters block rather than in the model block, because I need to define the survival likelihood manually and then retrieve it in the generated quantities block for WAIC or other criteria afterwards. I personally agree with you that Bayesian methods don’t need those model comparison methods: particularly when the prior is meaningful and very important, a model should still be considered even if WAIC or another criterion doesn’t favor it. These criteria cannot identify the impact of the prior at all, while the posterior analysis definitely combines both the prior and the data. The criteria may make sense when the prior is non-informative or has little influence on the posterior, i.e. when the data dominate the posterior.
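Something like this minimal sketch is what I have in mind (just a Weibull survival part with right censoring for illustration; my actual joint model also has the longitudinal submodel):

```stan
data {
  int<lower=1> J;
  vector<lower=0>[J] t;                  // event or right-censoring times
  array[J] int<lower=0, upper=1> event;  // 1 = event observed, 0 = censored
}
parameters {
  real<lower=0> shape;
  real<lower=0> scale;
}
transformed parameters {
  // per-subject survival log likelihood: density for observed events,
  // complementary CDF (survival function) for censored subjects
  vector[J] surv_ll;
  for (j in 1:J)
    surv_ll[j] = event[j] == 1
      ? weibull_lpdf(t[j] | shape, scale)
      : weibull_lccdf(t[j] | shape, scale);
}
model {
  shape ~ gamma(2, 2);
  scale ~ gamma(2, 0.1);
  target += sum(surv_ll);
}
generated quantities {
  vector[J] log_lik = surv_ll;  // pointwise values for WAIC / PSIS-LOO
}
```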
Thank you for taking the time to reply in detail. Your answers are very useful.