With this model I estimate the univariate (but partially pooled) effects \beta_j and generate the log-likelihood array \text{log\_lik}^{P \times N \times J}, where P = \text{chains} \cdot \text{iterations}, N is the number of observations, and J is the number of effects (columns of X).
My main question is how to estimate LOOIC (and how to interpret p_loo) in such univariate models.
Currently, I reshape the original \text{log\_lik} into \tilde{\text{log\_lik}}^{O \times N}, where O = \text{chains} \cdot \text{iterations} \cdot J, and use this as input to loo. Is there any systematic error in this approach?
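For concreteness, here is a minimal sketch of the reshape I mean. The actual analysis is presumably done in R with the loo package; I use Python/NumPy here purely for illustration, and the array names and sizes are hypothetical:

```python
import numpy as np

# Illustrative sizes: P posterior draws, N observations, J effects.
P, N, J = 4000, 100, 5
rng = np.random.default_rng(0)
log_lik = rng.normal(size=(P, N, J))  # stand-in for the P x N x J log-lik array

# Stack the J effect-specific slices along the draw dimension, so each
# (draw, effect) pair becomes one "draw" of the O x N matrix, O = P * J.
# With this row-major reshape, row p*J + j holds log_lik[p, :, j].
log_lik_tilde = np.transpose(log_lik, (0, 2, 1)).reshape(P * J, N)
```

The resulting O x N matrix has the draws-by-observations shape that loo expects, but every row now mixes in which of the J effects generated it.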
Posterior predictive checks show that the model has high in-sample predictive accuracy (based on the observed data). I can include the model and the loo estimates if needed.
Thank you and I agree. To make log_lik depend on Y, I reshape the original \text{log\_lik}^{P \times N \times J} array into \tilde{\text{log\_lik}}^{(P \cdot J) \times N} and pass it as input to loo.
I am just unsure about the idea of giving loo, for each observation, the combined log-likelihood values estimated under J different parameters. This would effectively treat the draws as coming from a mixture over the J effects and yield an average (across the J parameters) out-of-sample predictive accuracy, right? Alternatively, I could run loo separately for each effect and sum the resulting elpd values (a total, additive across parameters). I have no idea which of the two is more suitable given the structure of my model.
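To make the two aggregations concrete, here is a small sketch, again in Python/NumPy with hypothetical names, using the in-sample log pointwise predictive density rather than full PSIS-LOO (which is what loo would actually compute):

```python
import numpy as np

def logmeanexp(a, axis=0):
    # Numerically stable log of the mean of exp(a) along an axis.
    m = a.max(axis=axis, keepdims=True)
    return (m + np.log(np.mean(np.exp(a - m), axis=axis, keepdims=True))).squeeze(axis)

P, N, J = 2000, 50, 4  # illustrative sizes: draws, observations, effects
rng = np.random.default_rng(1)
log_lik = rng.normal(size=(P, N, J))  # stand-in for the P x N x J array

# Option 1 ("combined"): stack effects along the draw axis and summarise once.
# This averages the likelihood over the J parameters, i.e. a mixture-like lpd.
stacked = np.transpose(log_lik, (0, 2, 1)).reshape(P * J, N)
lpd_combined = logmeanexp(stacked, axis=0).sum()

# Option 2 ("additive"): summarise each effect separately, then add them up.
lpd_per_effect = np.array(
    [logmeanexp(log_lik[:, :, j], axis=0).sum() for j in range(J)]
)
lpd_additive = lpd_per_effect.sum()
```

The two numbers answer different questions: the combined version scores one averaged predictive distribution per observation, while the additive version scores J separate predictive distributions and totals them, so they are generally not comparable on the same scale.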
When I apply the first approach, I have difficulty interpreting p_loo, which dramatically underestimates the actual number of parameters (J); this is not a consequence of the partial pooling.