Hey there,
I was hoping some of you might give me some reassurance about the way I am modelling a set of parameters. In particular, I have a model that contains a set of N parameters \sigma_i that are strictly positive and for which I have some prior information that I would like to account for. This information comes in the form of a distance matrix D_{ij} that characterises the differences between indices i and j. At the moment, I am implementing this as a Gaussian process, such that the prior for \sigma_i is modelled as follows:
\log(\sigma_i) \sim \text{MVNormal}\left(\bar{\sigma}\mathbf{1}, K\right),
\bar{\sigma}\sim \text{Normal}\left(0,1\right),
K_{ij} = \eta\,\exp(- \rho D_{ij}^2) +\delta_{ij}\,s,
s, \eta \sim \text{Exponential}\left(1\right)
\rho\sim \text{Exponential}\left(0.5\right)
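For concreteness, here is a minimal NumPy sketch of drawing once from this prior (the 4×4 distance matrix is made up for illustration; the hyperparameter draws follow the priors above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: 4 indices with a symmetric distance matrix D_ij.
D = np.array([[0.0, 1.0, 2.0, 3.0],
              [1.0, 0.0, 1.0, 2.0],
              [2.0, 1.0, 0.0, 1.0],
              [3.0, 2.0, 1.0, 0.0]])

# Draw the hyperparameters from their priors.
eta = rng.exponential(1.0)        # amplitude, eta ~ Exponential(1)
s = rng.exponential(1.0)          # diagonal term, s ~ Exponential(1)
rho = rng.exponential(1.0 / 0.5)  # inverse length-scale, rho ~ Exponential(0.5)
                                  # (NumPy parameterises by scale = 1/rate)
sigma_bar = rng.normal(0.0, 1.0)  # shared mean of log(sigma)

# Squared-exponential kernel on D plus the diagonal term s.
K = eta * np.exp(-rho * D**2) + s * np.eye(len(D))

# log(sigma) ~ MVNormal(sigma_bar * 1, K), so sigma is strictly positive.
log_sigma = rng.multivariate_normal(np.full(len(D), sigma_bar), K)
sigma = np.exp(log_sigma)
```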
This assumes \sigma_i to be lognormally distributed, and it seems to work pretty well with simulated data. That said, I am not sure whether I am making any fundamentally flawed assumptions regarding the covariance of \sigma_i. I realise that, by construction, the covariance of \sigma_i and \sigma_j depends not only on D_{ij} but also on their own values. However, is that really bad if it seems to work well with simulated data? I haven't found anything about this on the forum, though I'm not sure I am using the right search terms. Other distributions, such as a truncated MVNormal, would behave similarly, but I don't necessarily see the benefit of that over a lognormal distribution. Any ideas or thoughts?
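For what it's worth, the dependence I mean is just the standard lognormal moment identity: if \log\sigma \sim \text{MVNormal}(\mu, K), then \text{Cov}(\sigma_i, \sigma_j) = \exp\!\big(\mu_i + \mu_j + \tfrac{1}{2}(K_{ii} + K_{jj})\big)\big(e^{K_{ij}} - 1\big), so the covariance of \sigma scales with its means. A quick NumPy check of that closed form against a Monte Carlo estimate (all numbers here are made up for illustration):

```python
import numpy as np

def lognormal_cov(mu, K, i, j):
    """Closed-form Cov(sigma_i, sigma_j) for sigma = exp(x), x ~ MVNormal(mu, K)."""
    return np.exp(mu[i] + mu[j] + 0.5 * (K[i, i] + K[j, j])) * np.expm1(K[i, j])

# Arbitrary example values.
mu = np.array([0.0, 0.5])
K = np.array([[0.3, 0.1],
              [0.1, 0.4]])

analytic = lognormal_cov(mu, K, 0, 1)

# Monte Carlo estimate of the same covariance.
rng = np.random.default_rng(1)
x = rng.multivariate_normal(mu, K, size=2_000_000)
sigma = np.exp(x)
empirical = np.cov(sigma[:, 0], sigma[:, 1])[0, 1]
```

The factor \exp(\mu_i + \mu_j + \ldots) is what couples the covariance of \sigma to its location, even though K itself depends only on D_{ij}.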
Thanks in advance for your help