The much simplified GP model is:

```
data {
  int<lower=1> n;
  vector[n] y;
  matrix[n, n] C;
}
parameters {
  real<lower=0> sigma2_e;  // noise variance
}
model {
  matrix[n, n] K;
  vector[n] postmean;
  K = C + diag_matrix(rep_vector(sigma2_e, n));
  postmean = C * inverse(K) * y;
  y ~ multi_normal(rep_vector(0, n), K);
}
```

My question is whether the way I calculate `postmean` is correct. In models where I compute `postmean` as above and use it in later modeling (not shown), it does not seem to work (I get log probability errors). I am familiar with obtaining the posterior mean as given in the Stan book under the heading “Latent variable GP”, where the model is:
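For reference, the algebra itself can be checked outside Stan: for a GP with prior covariance C and i.i.d. noise variance σ², the posterior mean at the training inputs is C (C + σ² I)⁻¹ y. A minimal NumPy sketch of that computation, where the squared-exponential kernel, inputs, and hyperparameter values are all made up for illustration:

```python
import numpy as np

# Hypothetical 1-D inputs and observations.
x = np.array([0.0, 0.5, 1.3, 2.1])
y = np.array([0.2, -0.1, 0.9, 0.4])
sigma2_e = 0.1  # noise variance

# Squared-exponential prior covariance C (illustrative hyperparameters).
d = x[:, None] - x[None, :]
C = np.exp(-0.5 * d**2)

# K = C + sigma2_e * I, as in the Stan model block above.
K = C + sigma2_e * np.eye(len(x))

# Exact posterior mean at the training inputs: C K^{-1} y.
# Solving the linear system is the numerically stable analogue of
# Stan's C * inverse(K) * y; both give the same vector here.
postmean_solve = C @ np.linalg.solve(K, y)
postmean_inverse = C @ np.linalg.inv(K) @ y

assert np.allclose(postmean_solve, postmean_inverse)
```

Since the eigenvalues of C (C + σ² I)⁻¹ all lie in (0, 1), the posterior mean shrinks the observations toward the zero prior mean, which is a quick sanity check on any implementation.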

```
model {
  vector[N] f;
  {
    matrix[N, N] L_K;
    matrix[N, N] K = cov_exp_quad(x, alpha, rho);
    // diagonal elements
    for (n in 1:N)
      K[n, n] = K[n, n] + delta;
    L_K = cholesky_decompose(K);
    f = L_K * eta;
  }
  rho ~ inv_gamma(5, 5);
  alpha ~ normal(0, 1);
  sigma ~ normal(0, 1);
  eta ~ normal(0, 1);
  y ~ normal(f, sigma);
}
```

with `f` being the posterior mean. I dislike this way of obtaining the posterior mean: I see no reason to sample for it when it can be computed exactly. The latent approach does not produce the log probability errors I get with the exact approach, which is why I believe my way of calculating the exact posterior mean in Stan is incorrect.
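For completeness, the latent construction relies on the identity behind `cholesky_decompose`: if `eta ~ normal(0, 1)` elementwise, then `f = L_K * eta` has covariance `L_K * L_K' = K`, so sampling `eta` is just a reparameterized way of sampling `f` from the GP prior. A small NumPy check of that identity, using a made-up positive-definite kernel matrix:

```python
import numpy as np

# Made-up 3x3 positive-definite kernel matrix; the tiny jitter on the
# diagonal plays the same role as delta in the Stan model.
K = np.array([[1.0, 0.6, 0.2],
              [0.6, 1.0, 0.6],
              [0.2, 0.6, 1.0]]) + 1e-9 * np.eye(3)

L_K = np.linalg.cholesky(K)  # Stan's cholesky_decompose(K)

# If eta ~ N(0, I), then f = L_K @ eta has Cov(f) = L_K @ L_K.T = K.
assert np.allclose(L_K @ L_K.T, K)
```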