E.g., I have a transformed parameters block that looks like this:
transformed parameters {
  real a;
  real b;
  for (i in 1:N) {
    a = some_expression_that_depends_on_i;
    b = some_expression_that_depends_on_i;
    // compute things using a and b.
  }
}
This is okay, right? I don’t have to re-declare a and b in every iteration of the for loop.
Yes, but you probably should declare a and b inside the for loop. Otherwise, you will get back posterior margins for them that pertain only to the i = N case.
Ah, perfect, thanks so much for the quick reply. I don’t need the posteriors of a and b (i.e., I never access them in the fitted Stan model); I just wanted to make sure that computations that subsequently use them would be correct. E.g.,
transformed parameters {
  real a;
  real b;
  vector[N] phi;
  for (i in 1:N) {
    a = i;
    b = i;
    phi[i] = a + b;
  }
}
will fill in the correct values for phi (phi[1] = 2, phi[2] = 4, etc.).
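As a quick sanity check of that arithmetic outside Stan, here is a Python analogue of the same loop (N = 5 is an assumed value chosen just for illustration):

```python
# Python analogue of the Stan loop above: a = b = i, phi[i] = a + b.
N = 5  # assumed value for illustration

phi = []
for i in range(1, N + 1):  # Stan's 1:N is inclusive on both ends
    a = i
    b = i
    phi.append(a + b)

print(phi)  # [2, 4, 6, 8, 10]
```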
Yes, but don’t do it that way. Do it like this:
transformed parameters {
  vector[N] phi;
  for (i in 1:N) {
    real a = i;
    real b = i;
    phi[i] = a + b;
  }
}
Thanks! I understand that declaring a and b inside the loop is the preferred way (and I’ll do it that way in future). But either
transformed parameters {
  vector[N] phi;
  for (i in 1:N) {
    real a = i;
    real b = i;
    phi[i] = a + b;
  }
}
or
transformed parameters {
  real a;
  real b;
  vector[N] phi;
  for (i in 1:N) {
    a = i;
    b = i;
    phi[i] = a + b;
  }
}
should yield the same inferences for phi, correct?
They both yield the same distribution for phi, but the first version does not even let a and b exist outside that loop. So not only do you avoid wasting space storing them, you also cannot accidentally reference them later by mistake.
Yep, makes sense! Thanks so much for taking the time; I understand how things work better now.