Problems fitting model with group-level variance. Problem with vectorization?

I hope this is a simple question. I want to explore multilevel modeling in a finance context. I have T dates, and at each date there are G different "groups" of returns. We assume that each of these groups has a linear relationship to some market return, with no intercept. The design matrix in R is obtained via:

X = model.matrix(r ~ mkt:factor(group_var) - 1, data = dat)

I would like to fit the no-pooling model in Stan, where each group has its own beta and its own variance. Below is my failed attempt:

data {
  int<lower=1> T;      // number of time periods
  int<lower=1> G;      // number of groups
  vector[T * G] r;     // returns
  matrix[T * G, G] X;  // design matrix
}

parameters {
  vector[G] beta;
  vector<lower=0>[G] sigma_g;
}

model {
  beta ~ normal(1, 1);

  for (i in 1:T) {
    r[(G*(i-1) + 1):(G*i)] ~ normal(X * beta, sigma_g);
  }
}

This yields the error Exception: normal_lpdf: Random variable has dimension = 17, expecting dimension = 3349, where in my data G = 17 and T = 3349/G = 197. Somehow the implication is that r[(G*(i-1) + 1):(G*i)] should have dimension 3349, but this would be strange because the slice has length G. What's the best practice here? I have been reading the documentation but have not found an example similar to the above.

No, X * beta has dimension 3349: X has T*G rows, so the product is the full fitted vector, while your slice of r has length G. Slice X the same way inside the loop, i.e. r[(G*(i-1) + 1):(G*i)] ~ normal(X[(G*(i-1) + 1):(G*i)] * beta, sigma_g);. Or drop the loop and vectorize over all T*G observations, in which case sigma_g needs to be expanded to length T*G to match.
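For reference, a fully vectorized version along those lines might look like this. This is a sketch, assuming the rows of r and X are ordered with all G groups consecutive within each time period, as the original slicing implies:

```stan
data {
  int<lower=1> T;      // number of time periods
  int<lower=1> G;      // number of groups
  vector[T * G] r;     // returns, groups 1..G stacked within each period
  matrix[T * G, G] X;  // design matrix
}
parameters {
  vector[G] beta;
  vector<lower=0>[G] sigma_g;
}
model {
  beta ~ normal(1, 1);
  // Expand sigma_g to length T*G so it lines up with r element by element:
  // rep_matrix(sigma_g, T) is a G x T matrix with sigma_g in every column,
  // and to_vector stacks its columns, repeating sigma_g once per period.
  r ~ normal(X * beta, to_vector(rep_matrix(sigma_g, T)));
}
```

The vectorized statement avoids recomputing X * beta on every loop iteration, which is the usual efficiency argument for dropping the loop.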


Thank you, silly mistake.