Assigning priors to parameters in a matrix

Hello,

Sorry, I've trapped myself in a silly question about syntax. For a matrix beta (M x N), I'd like to assign a std_normal() prior to each element of it. Should I write to_vector(beta) ~ std_normal()? Plain beta ~ std_normal() seems to be invalid. Or should I declare array[M] vector[N] beta and write for (m in 1:M) { beta[m] ~ std_normal(); }? Is one of these more efficient, or are they just the same? Do you think the last one is more efficient if I am using beta row-wise, for instance with one likelihood statement per row of the data Y (M x N): for (m in 1:M) { Y[m] ~ normal(beta[m] * X, sigma); }?
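
Here is a sketch of what I mean (the extra dimension K and the shared predictor matrix X are just placeholders I made up for illustration, not a real model):

```stan
data {
  int<lower=1> M;
  int<lower=1> N;
  int<lower=1> K;        // placeholder number of predictors
  matrix[K, N] X;        // placeholder predictor matrix shared across rows
  matrix[M, N] Y;        // row Y[m] is modeled with coefficients beta[m]
}
parameters {
  matrix[M, K] beta;
  real<lower=0> sigma;
}
model {
  // Option 1: flatten the matrix and use a single vectorized prior statement.
  to_vector(beta) ~ std_normal();

  // Option 2 would declare "array[M] row_vector[K] beta;" instead and loop:
  // for (m in 1:M) beta[m] ~ std_normal();

  // Row-wise likelihood: beta[m] is row_vector[K], so beta[m] * X is row_vector[N].
  for (m in 1:M) {
    Y[m] ~ normal(beta[m] * X, sigma);
  }
}
```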

Then I started to wonder what we actually mean when we assign a distribution to a vector of parameters. For instance, we see this syntax in all kinds of case studies: vector[n] theta; theta ~ normal(0, 5). Do we mean that each element of theta follows its own normal(0, 5) distribution, or that all elements of theta are sampled from one normal(0, 5) distribution at once? Thank you very much.
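
To make that question concrete, here is a tiny sketch (the length 10 is just a placeholder):

```stan
parameters {
  vector[10] theta;   // placeholder length
}
model {
  // One vectorized statement: independent normal(0, 5) priors, element by element.
  theta ~ normal(0, 5);
  // The explicit loop would define exactly the same model:
  // for (i in 1:10) theta[i] ~ normal(0, 5);
}
```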

I think the sampling is independent either way: saying that each element is sampled from normal(0, 5) and saying that all elements are sampled from the same normal(0, 5) are the same statement.
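
Written out, both readings give the same joint prior for a length-$n$ theta:

$$p(\theta) = \prod_{i=1}^{n} \operatorname{Normal}(\theta_i \mid 0, 5)$$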

Regarding the matrix: yes, the for loops will be helpful.

@aakhmetz Thanks a lot for your reply. I understand the sampling is independent, so mathematically they are the same; what I wonder about is the difference in terms of computational efficiency. As you mentioned, the for loops would be helpful compared to to_vector(), but I don't know exactly why they are better. My vague understanding is that to_vector() will sample the prior M x N times (though sometimes I feel the opposite, that to_vector() samples the prior only once for all elements), while the for loop only does it M times. I guess it would be helpful to dig a bit deeper so I can write more efficient Stan code.
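
To make the comparison concrete, here is a minimal sketch with placeholder dimensions M and N; as far as I understand, a sampling statement only adds a term to the log density rather than drawing samples, so the forms below should be equivalent up to a constant:

```stan
data {
  int<lower=1> M;
  int<lower=1> N;
}
parameters {
  matrix[M, N] beta;
}
model {
  // One vectorized statement over all M * N elements.
  to_vector(beta) ~ std_normal();
  // Equivalent up to a constant: a single explicit increment of the target
  // by the sum of the M * N element-wise log densities.
  // target += std_normal_lpdf(to_vector(beta));
  // Also equivalent, but split into M increments (one per row of beta):
  // for (m in 1:M) target += std_normal_lpdf(beta[m]);
}
```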

I don't have a ready-to-go answer, but people on this forum have discussed a lot whether it is faster to vectorize functions or not. As I remember, the answer was that it depends, and you need to know more about the technical side of the Stan implementation. You may want to check those discussions.
