where \Sigma_\phi was a diagonal covariance matrix, with i-th diagonal entry \sigma_{ii}^2 = \phi^2. Now, as proposed by @avehtari, we want to generalize the problem a bit and fit a model with a dense covariance matrix. So how do we generate this covariance?
My proposition is to use, as diagonal elements, \sigma_{ii}^2 = \phi^2 + \epsilon, where \epsilon is a random perturbation, and for the off-diagonal elements to use \sigma_{ij} = \rho \, \sigma_{ii} \sigma_{jj}, where \rho is a correlation coefficient, which will vary during the experiment. Ok, should I add another perturbation to \sigma_{ij}? And is it ok for \Sigma_\phi to be a stochastic function of \phi?
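A minimal Stan sketch of what I have in mind (the perturbation scale eps_scale and the half-normal draw for \epsilon are just placeholders I made up):

data {
  int<lower=1> D;
  real<lower=0> phi;
  real<lower=0, upper=1> rho;
  real<lower=0> eps_scale;
}
transformed data {
  vector[D] sigma;         // standard deviations
  matrix[D, D] Sigma_phi;  // dense covariance driven by phi
  for (i in 1:D)
    sigma[i] = sqrt(square(phi) + fabs(normal_rng(0, eps_scale)));  // phi^2 + epsilon
  for (i in 1:D) {
    Sigma_phi[i, i] = square(sigma[i]);
    for (j in (i + 1):D) {
      Sigma_phi[i, j] = rho * sigma[i] * sigma[j];
      Sigma_phi[j, i] = Sigma_phi[i, j];
    }
  }
}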
For the effective sample size experiments (N_eff BDA3 vs. Stan), I used a simple version:
transformed data {
  vector[D] zero = rep_vector(0.0, D);  // mean of the latent multivariate normal
  cov_matrix[D] S;
  cholesky_factor_cov[D] L;
  for (i in 1:D) {
    for (j in 1:(i - 1)) {
      S[i, j] = s;   // common off-diagonal value s (declared in the data block)
      S[j, i] = s;
    }
    S[i, i] = 1.0;   // unit variances, so s is directly the correlation
  }
  L = cholesky_decompose(S);
}
but what you propose would work, too. I guess you would use the LKJ construction if you want to generate random correlation matrices, or you could use cov_exp_quad with random x and a varying lengthscale (increasing the lengthscale increases the correlation).
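A minimal sketch of the cov_exp_quad route (the data names D and lengthscale and the jitter value are my choices here, not anything fixed):

data {
  int<lower=1> D;
  real<lower=0> lengthscale;
}
transformed data {
  vector[D] zero = rep_vector(0.0, D);  // mean of the latent multivariate normal
  real x[D];
  matrix[D, D] S;
  cholesky_factor_cov[D] L;
  // random input locations; a larger lengthscale relative to their spread
  // gives a more strongly correlated latent vector
  for (i in 1:D)
    x[i] = normal_rng(0, 1);
  S = cov_exp_quad(x, 1.0, lengthscale);  // marginal sd 1, varying lengthscale
  for (i in 1:D)
    S[i, i] += 1e-8;                      // jitter so the Cholesky factorization succeeds
  L = cholesky_decompose(S);
}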
I think the most important thing is to figure out how the global parameter \phi drives the covariance matrix of the local parameters, \theta. In my original proposition, \phi contains the average variance and the average correlation.
Would it make sense to use a scientific example for the case study? I’ll try to read up on specific applications of the Laplace approximation, but let me know if you have a preferred example.
Just to refine my comment: in applications of latent Gaussian models, do we usually specify the covariance matrix, or the Cholesky decomposition thereof, in terms of the global parameters?
The covariance matrix can be a deterministic function of \phi, and most often is.
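For example, a minimal sketch of the deterministic route, with the latent vector written non-centered through the Cholesky factor (the kernel, the parameter names alpha and lengthscale, and the priors are just placeholders):

data {
  int<lower=1> D;
  real x[D];
}
parameters {
  // global parameters phi = (alpha, lengthscale)
  real<lower=0> alpha;
  real<lower=0> lengthscale;
  vector[D] z;
}
transformed parameters {
  // Sigma(phi) is a deterministic function of the global parameters
  matrix[D, D] L_Sigma
    = cholesky_decompose(cov_exp_quad(x, alpha, lengthscale)
                         + diag_matrix(rep_vector(1e-8, D)));
  vector[D] theta = L_Sigma * z;   // implies theta ~ multi_normal(0, Sigma(phi))
}
model {
  alpha ~ normal(0, 1);
  lengthscale ~ inv_gamma(5, 5);
  z ~ normal(0, 1);
  // observation model p(y | theta, phi) goes here
}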
It's better to first have a working simulation example with a speedup, where you can show the performance with respect to increasing correlation. You could make a plot like the one in N_eff BDA3 vs. Stan - #21 by avehtari; just use that same multivariate normal as the latent function, e.g., for the log-intensity of a Poisson.
You can generate Poisson-distributed data from your model, and you can check how the speed changes when you vary the parameters of the data-generating model.
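For instance, a minimal data-generating sketch along those lines (run with algorithm=fixed_param; the constant-correlation matrix and the name s are carried over from the code above, the rest of the names are arbitrary):

data {
  int<lower=1> D;
  real<lower=0, upper=1> s;   // common correlation of the latent multivariate normal
}
transformed data {
  vector[D] zero = rep_vector(0.0, D);
  cov_matrix[D] S;
  for (i in 1:D) {
    for (j in 1:D)
      S[i, j] = (i == j) ? 1.0 : s;
  }
}
generated quantities {
  // latent log-intensity and Poisson counts
  vector[D] log_lambda = multi_normal_rng(zero, S);
  int y[D];
  for (i in 1:D)
    y[i] = poisson_log_rng(log_lambda[i]);
}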
I have a few models that use latent Gaussians that I’d like to see the Laplace approximation applied to, if the C++ code is working well enough. They include, but are not limited to: classification (but this is in the manual), time series, and survival. I can send over the code if you’re interested. The Poisson model has been used in Betancourt’s case study and in Rob’s case study; I’d like to see some diversity. For more information on GPs in survival analysis, Alan Saul’s dissertation is excellent, and a great read.
Please take Aki’s suggestion, but I’m extremely interested in scientific examples too, and a survival model might make a good example (we can use it on a problem that’s already been done in GPstuff, to verify the results are accurate. I have most of this work done, and I can send over the code. I left off with the latent AFT survival component almost identical to the GPstuff version, but my latent function was jagged, and I was trying to impose constraints and combine kernels to see if I could extract the smooth component).
And I think the Laplace approximation is a general, optimization-based approximation method that can be applied to a variety of models, and it works particularly well for GPs.
We can have a dense covariance matrix with the covariance function describing the properties of the latent function (e.g. Matern 5/2 is less smooth than the squared exponential). You generate the matrix with the covariance function; I haven’t put time into your proposed kernel. But when we use a well-known covariance function, we can see how well it performs on other well-known models.
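In case it helps, here’s a hand-rolled Matern 5/2 covariance function as a Stan sketch (the function name and implementation are mine, just to show what swapping the kernel looks like next to cov_exp_quad):

functions {
  // Matern 5/2 kernel: k(d) = alpha^2 * (1 + z + z^2 / 3) * exp(-z), with z = sqrt(5) * d / rho
  matrix matern52_cov(real[] x, real alpha, real rho) {
    int N = num_elements(x);
    matrix[N, N] K;
    for (i in 1:N) {
      K[i, i] = square(alpha);
      for (j in (i + 1):N) {
        real z = sqrt(5.0) * fabs(x[i] - x[j]) / rho;
        K[i, j] = square(alpha) * (1 + z + square(z) / 3.0) * exp(-z);
        K[j, i] = K[i, j];
      }
    }
    return K;
  }
}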
Although, admittedly, I’m blatantly ignoring the prior specifications that can reduce sampling pathologies, as described in Betancourt’s case studies (i.e. the modeling in my case studies is trash), the following document might help you conceptualize the problem better. I’m aware of the problems and eager to fix them; it’s just not a priority right now:
@anon79882417 Thanks for all the great input. I’ll continue focusing on simulated examples, which seems to be the general consensus about what should be done first, and I’ll then dive into working on a few scientific examples, possibly some of the models you proposed.
Small point of confusion. When approximating p(\theta | y, \phi), we wish to match the mode and the curvature. According to @anon75146577, we can show that the precision of the approximation, i.e. the negative Hessian of \log p(\theta | y, \phi) at \theta = \theta^*, is:
\mathcal H = \Sigma^{-1} + H
where H_{ij} = -\frac{\partial^2}{\partial \theta_i \, \partial \theta_j} \log p(y | \theta, \phi), evaluated at \theta^*. We then have:
p(\theta | y, \phi) \approx \mathrm{Normal}(\theta^*, \mathcal H^{-1})
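(For reference, this is how I understand where the two terms come from, assuming a zero-mean Gaussian prior \theta | \phi \sim \mathrm{Normal}(0, \Sigma):
\log p(\theta | y, \phi) = \log p(y | \theta, \phi) - \tfrac{1}{2} \theta^\top \Sigma^{-1} \theta + \mathrm{const}
so
\nabla^2_\theta \log p(\theta | y, \phi) = \nabla^2_\theta \log p(y | \theta, \phi) - \Sigma^{-1},
and negating and evaluating at \theta = \theta^* gives \mathcal H = \Sigma^{-1} + H.)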
From this, we can do a few calculations, as done in the attached file: MathAppendix.pdf (139.3 KB)
However, in the simple case where \Sigma is diagonal with each diagonal element being \sigma_\phi, the resulting log posterior is:
which is inconsistent with the example Stan code, attached laplace_dan.stan (2.0 KB).
We match the code if we use \mathcal H = H, i.e. without adding \Sigma^{-1}. In the case where we have a dense covariance matrix, using \mathcal H = H also works better (otherwise the model gets stuck in a never-ending sequence of rejected Metropolis proposals). But \mathcal H = H does not seem quite correct. I’ve tried working out the result, but right now it’s more algebra than I think should be needed. I thought I’d check whether I’m missing something basic.
For f the density of a normal distribution, N(\mu, \Sigma), we indeed have that the Hessian of \log f satisfies H_f = -\Sigma^{-1}.
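Spelling this out (standard Gaussian algebra):
\log f(\theta) = -\tfrac{1}{2} (\theta - \mu)^\top \Sigma^{-1} (\theta - \mu) + \mathrm{const}
\nabla_\theta \log f(\theta) = -\Sigma^{-1} (\theta - \mu)
\nabla^2_\theta \log f(\theta) = -\Sigma^{-1}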
Now let g be the density we want to approximate. Because we approximate it with a Gaussian, we also make the approximation \Sigma^{-1} \approx -H_g, where \Sigma here is the covariance of the Gaussian approximation and H_g is the Hessian of \log g at the mode. \Sigma^{-1} gives us the negative curvature of the approximation, and setting it equal to -H_g therefore amounts to matching the curvature of the approximation to that of the real distribution.
Note further that the curvature of the real function is evaluated at its mode. For a Gaussian, the curvature is constant, but this is in general not the case. The point here is that, when doing a Laplace approximation, we expand around the mode of the distribution we approximate, which is why we match the mode and the curvature at the mode.
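Concretely, a second-order Taylor expansion of \log g around the mode \theta^* (where the gradient vanishes) gives
\log g(\theta) \approx \log g(\theta^*) + \tfrac{1}{2} (\theta - \theta^*)^\top H_g(\theta^*) (\theta - \theta^*),
which, exponentiated and normalized, is exactly the \mathrm{Normal}(\theta^*, -H_g(\theta^*)^{-1}) approximation.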