Is there any way to fix the logLik computation without changing the priors or refitting the models? We fitted 100 models for a simulation procedure, which took about a week on a server. Changing the priors would require refitting all models and resubmitting an application for server usage.
In addition, it seems that changing the priors might solve the problem, but it is not guaranteed to, especially if the new priors are still weakly informative (i.e. still have high variance).
I also understood this post to mean that the problem is caused by a perfect fit for the majority of the observations. This could also be the source of our problem, since:
(1.) Obviously, this seems more likely with binary or categorical data.
(2.) Our simulation study aimed to show that our model converges to the data-generating process on average. Therefore, a perfect fit for the majority of the observations might well occur.