Again, a really informative post, thank you. So I had a go at implementing a log-normal. I generated some data as follows: gen_logmvn_y2_x3.R (888 Bytes)
I modified my code above to look like this:
```stan
model {
  // Priors
  L_Omega ~ lkj_corr_cholesky(1);
  L_sigma ~ normal(0, 5);
  alpha ~ normal(0, 3);
  to_vector(beta) ~ normal(0, 2);
  // Likelihood
  log(y) ~ multi_normal_cholesky(mu, L_Sigma);
  for (dv in 1:NvarsY)
    target += -log(y[dv]);
}
```
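As a sanity check on the change of variables (this is just a quick numpy/scipy sketch, not part of the model): if `log(y)` is normal, the density of `y` itself picks up a `1/y` Jacobian factor, which is what the `target += -log(y)` statement is supposed to add on the log scale.

```python
import numpy as np
from scipy.stats import lognorm, norm

# If z = log(y) ~ Normal(mu, sigma), then y is log-normal and
#   p(y) = Normal(log y | mu, sigma) * |d log(y) / dy|
#        = Normal(log y | mu, sigma) / y
mu, sigma = 1.5, 0.7
y = 2.3

lhs = lognorm.pdf(y, s=sigma, scale=np.exp(mu))     # scipy's log-normal parameterisation
rhs = norm.pdf(np.log(y), loc=mu, scale=sigma) / y  # normal on the log scale + Jacobian
print(np.isclose(lhs, rhs))  # True
```

On the log scale that `1/y` factor becomes an additive `-log(y)` term, which is exactly the shape of the `target +=` adjustment.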
Ran it and extracted as follows:

```r
fit1 <- stan(file = "Stan models/linear_mvlognorm.stan",
             data = datalist, chains = 4,
             seed = 40, refresh = 25, iter = 2000)
params <- extract(fit1)
```
```r
# Extract intercepts
> sapply(1:datalist$NvarsY, function(y) mean(params$alpha[, y]))
[1] 19.98743 20.00082

# Extract betas
> sapply(1:datalist$NvarsY, function(x) sapply(1:datalist$NvarsX, function(y) mean(params$beta[, x, y])))
          [,1]      [,2]
[1,] 0.2957225 0.5843032
[2,] 0.3633446 0.6990684
[3,] 0.1943753 0.2902356
```
The true coefficients used to generate these were:

```r
> sigma[3:5, 1:2]
    y1  y2
x1 0.3 0.6
x2 0.4 0.7
x3 0.2 0.3
```
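Putting the posterior means next to the true values (just numpy arithmetic on the numbers above, nothing from the fit itself):

```python
import numpy as np

# Posterior-mean betas reported above (rows x1..x3, columns y1..y2)
beta_hat = np.array([[0.2957225, 0.5843032],
                     [0.3633446, 0.6990684],
                     [0.1943753, 0.2902356]])
# True coefficients from the generating script
beta_true = np.array([[0.3, 0.6],
                      [0.4, 0.7],
                      [0.2, 0.3]])
# Absolute recovery error; the largest is ~0.037 (x2 -> y1)
print(np.round(np.abs(beta_hat - beta_true), 3))
```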
So I guess that seems pretty good, although the betas are a little off. Funnily enough, I tested it with the `target +=` statement commented out and it didn't seem to make much difference.
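If I understand this correctly, the reason commenting it out changes so little is that `-log(y)` only involves the data, not the parameters, so it shifts the log density by a constant and the sampler explores the same posterior over the parameters (it would still matter for anything that uses the normalising constant, like bridge sampling). A quick numpy sketch of that idea (mine, not from the model):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
y = rng.lognormal(mean=1.0, sigma=0.5, size=100)

def log_density(mu, sigma, include_jacobian):
    # Log-normal log density written the same way as the Stan model:
    # normal on log(y) plus, optionally, the -log(y) Jacobian term
    lp = norm.logpdf(np.log(y), loc=mu, scale=sigma).sum()
    if include_jacobian:
        lp -= np.log(y).sum()
    return lp

# The Jacobian term depends only on the data, so at any two parameter
# values the with/without difference is the same constant:
d1 = log_density(0.5, 0.3, True) - log_density(0.5, 0.3, False)
d2 = log_density(2.0, 1.0, True) - log_density(2.0, 1.0, False)
print(np.isclose(d1, d2))  # True
```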
Edit: One other thing I'm wondering about. I don't understand the `target +=` mechanism very well, but `log(y[dv])` should be a vector of length N. How does `target +=` know to add `log(y[dv])` to the correct vector in the array of outcome vectors? My y variable is declared as: `vector[NvarsY] y[N];`