Dear all,

I have a question concerning LOO-CV for non-factorizable normal models, as developed in the paper “Leave-one-out cross-validation for non-factorizable normal models” by Bürkner, Gabry, and Vehtari. The idea of the paper is that for models that can be written in the form y \sim N(0, C), there is a straightforward way to compute the pointwise log-predictive density using two quantities: \bar{c}_{ii} = [C^{-1}]_{ii} (the diagonal of C^{-1}) and g = C^{-1}y. The pointwise log-predictive density is then

\log p(y_i \mid y_{-i}, \theta) = -\frac{1}{2}\log(2\pi) + \frac{1}{2}\log \bar{c}_{ii} - \frac{1}{2}\frac{g_i^2}{\bar{c}_{ii}}.

Note that in all the expressions above, y denotes an actual data vector.
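To make the computation concrete, here is a small self-contained sketch in Python/NumPy (the matrix C and the data vector y are simulated purely for illustration, not taken from the paper), with a brute-force cross-check against the exact conditional normal p(y_i \mid y_{-i}):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 5

# Simulated stand-ins: a symmetric positive-definite C and data y ~ N(0, C).
A = rng.normal(size=(n, n))
C = A @ A.T + n * np.eye(n)
y = rng.multivariate_normal(np.zeros(n), C)

Cinv = np.linalg.inv(C)
c_bar = np.diag(Cinv)          # \bar{c}_{ii} = [C^{-1}]_{ii}
g = Cinv @ y                   # g = C^{-1} y

# Closed-form pointwise LOO log-predictive density.
lpd = -0.5 * np.log(2 * np.pi) + 0.5 * np.log(c_bar) - 0.5 * g**2 / c_bar

# Cross-check: condition the multivariate normal on y_{-i} directly.
for i in range(n):
    idx = np.delete(np.arange(n), i)
    C11 = C[np.ix_(idx, idx)]
    c12 = C[idx, i]
    mu_i = c12 @ np.linalg.solve(C11, y[idx])
    var_i = C[i, i] - c12 @ np.linalg.solve(C11, c12)
    assert np.isclose(lpd[i], norm.logpdf(y[i], mu_i, np.sqrt(var_i)))
```

The cross-check uses the standard conditional-normal identities \operatorname{Var}(y_i \mid y_{-i}) = 1/\bar{c}_{ii} and E(y_i \mid y_{-i}) = y_i - g_i/\bar{c}_{ii}, which is where the closed form comes from.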

Later in the paper, the authors give the example of a SAR model for spatial data and note that it can be written as y - W^{-1}\eta \sim N(0, C). Here, as before, y is the actual data, while W^{-1}\eta is an estimate obtained by fitting the model. Fair enough: we still evaluate the estimates, whether they appear on the left- or the right-hand side of “~”, against the real data y.
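In this form the mechanics are unchanged: one applies the same formula to the residual r = y - W^{-1}\eta, since W^{-1}\eta is fixed given the parameters and therefore \log p(y_i \mid y_{-i}, \theta) = \log p(r_i \mid r_{-i}, \theta). A hedged sketch (W, \eta, C, and y below are all simulated stand-ins, not draws from a real fit):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6

# Stand-ins for quantities that would come from a fitted SAR-type model:
M = rng.random((n, n))
np.fill_diagonal(M, 0.0)
W = np.eye(n) - 0.1 * M          # strictly diagonally dominant, hence invertible
eta = rng.normal(size=n)
A = rng.normal(size=(n, n))
C = A @ A.T + n * np.eye(n)      # covariance of the residual term
y = rng.normal(size=n)           # "observed" data

# The model says r = y - W^{-1} eta ~ N(0, C), so the LOO formula
# is applied to r instead of y.
r = y - np.linalg.solve(W, eta)
Cinv = np.linalg.inv(C)
c_bar = np.diag(Cinv)
g = Cinv @ r
lpd = -0.5 * np.log(2 * np.pi) + 0.5 * np.log(c_bar) - 0.5 * g**2 / c_bar
print(lpd)
```

In practice this would be evaluated per posterior draw of (W, \eta, C) and the results combined, but the per-draw computation is just the above.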

Now to my question. I work extensively with point pattern data and the log-Gaussian Cox process (LGCP) model. It can be understood as a spatial Poisson process with a random, location-dependent intensity \lambda(s). On the log scale, \lambda with covariates can be written as a non-zero-mean GP: \log(\lambda) = X'\beta + N(0, C). I would like to be able to compare such models. The problem is that, unless the model formulation is compromised, there is no explicit data vector y: both sides of \log(\lambda) - X'\beta \sim N(0, C) consist of estimated quantities. (The model is fitted via the explicit LGCP likelihood, which treats the entries of \lambda as parameters.)

My suspicion is that, because there is no real data vector y, applying the above method would only indirectly measure convergence of the fit rather than the predictive ability of the model. Are there any supporting or alternative opinions?