I want to normalize my parameters after sampling them from the prior distribution and before using them for posterior sampling. I do not need a second, separate set of normalized parameters; rather, I need to ensure that my parameters are always normalized, since otherwise they do not make sense. My code looks something like this:
data {
  real zero;
}
transformed data {
  matrix[5, 3] n = [[ 0.7660444, zero, 0.7660444],
                    [-0.7660444, zero, 0.6427876],
                    [zero, 0.7660444, -0.6427876],
                    [zero, -0.6427876, -0.7660444],
                    [zero, 0.6427876, 0.6427876]];
  for (i in 1:5) {
    n[i] = n[i] / norm2(n[i]);
  }
}
parameters {
  matrix[5, 3] m;
}
model {
  for (i in 1:3) {
    for (j in 1:5) {
      m[j, i] ~ uniform(m[j, i] - 0.01, m[j, i] + 0.01);
    }
  }
  for (i in 1:5) {
    m[i] = m[i] / norm2(m[i]);
  }
  target += sum(m + n)^2;
}
The issue is that
for (i in 1:5) {
  m[i] = m[i] / norm2(m[i]);
}
doesn't work, i.e. I cannot assign to parameters like this in the model block. How can I solve this? I was considering using a transformed parameters block, but I don't think I understand it well enough to implement it here. Specifically, I am not sure whether my normalized parameters would be the ones used for posterior sampling.
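For reference, this is roughly what I had in mind — just a sketch, and I am assuming (perhaps wrongly) that quantities defined in transformed parameters are what I should then use in the model block:

```stan
parameters {
  matrix[5, 3] m;
}
transformed parameters {
  // m_unit is a hypothetical name: each row of m rescaled to unit length
  matrix[5, 3] m_unit;
  for (i in 1:5) {
    m_unit[i] = m[i] / norm2(m[i]);
  }
}
```

But I don't know whether the sampler would then effectively explore m or m_unit, which is exactly my confusion.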
A side issue is that I am using nested for-loops to specify the prior for a matrix parameter. Is it possible to do this more efficiently, e.g. with some kind of vectorized prior sampling statement (for a uniform or normal distribution)?
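For this side issue, something like the following is what I was hoping exists — again only a sketch, and I am not sure whether to_vector is the right way to vectorize over a matrix here (the normal(0, 0.01) prior is just a placeholder, not my actual prior):

```stan
model {
  // one vectorized statement instead of the nested loops over rows and columns
  to_vector(m) ~ normal(0, 0.01);
}
```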