Hi

I’m trying to fit a (relatively) simple hierarchical model, and it runs very slowly, if at all (using RStan). The setup: I have a number of pairs of portfolios, and each pair has a series of returns (say r_1,t and r_2,t). The returns within a pair are correlated. These are stacked into a matrix, and I have a second variable indexing which pair each row belongs to.

For each pair of portfolios I want to model the difference in risk-adjusted returns, i.e. mean(r_1)/sd(r_1) - mean(r_2)/sd(r_2), and this difference is in turn drawn from a parent distribution. This is my model code:

```stan
data {
  int<lower=1> N;                 // Number of periods
  int<lower=1> M;                 // Number of pairs of portfolios
  int index[N * M];
  vector[2] rets[N * M];
}
parameters {
  real pop_mu;                    // Population mean
  real<lower=0> pop_sigma;        // Population sd
  vector[M] mu1;
  vector<lower=0>[2] sigma[M];    // 2 groups
  vector<lower=0, upper=1>[M] r;
  real diff[M];
}
transformed parameters {
  cov_matrix[2] T[M];
  vector[2] mu[M];
  for (m in 1:M) {
    real mu2;  // Temporary variable
    mu2 = (diff[m] + mu1[m] / sigma[m, 1]) * sigma[m, 2];
    mu[m, 1] = mu1[m];
    mu[m, 2] = mu2;
    T[m, 1, 1] = square(sigma[m, 1]);
    T[m, 2, 2] = square(sigma[m, 2]);
    T[m, 1, 2] = r[m] * sigma[m, 1] * sigma[m, 2];
    T[m, 2, 1] = T[m, 1, 2];
  }
}
model {
  diff ~ normal(pop_mu, pop_sigma);
  pop_mu ~ normal(0.0, 1.0 / 12.0);
  pop_sigma ~ gamma(6.0, 150.0);
  r ~ uniform(0, 1);
  mu1 ~ normal(0, 0.04);
  for (m in 1:M)
    sigma[m] ~ normal(0, 0.04);
  for (m in 1:M)
    for (n in 1:N)
      rets[(m - 1) * N + n] ~ multi_normal(mu[index[(m - 1) * N + n]],
                                           T[index[(m - 1) * N + n]]);
}
generated quantities {
  real diff_ppc[10];  // How many of these do I need?
  for (n in 1:10)
    diff_ppc[n] = normal_rng(pop_mu, pop_sigma);
}
```
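For what it’s worth, the `mu2` line in `transformed parameters` is meant to invert the definition above: `mu2 = (diff + mu1/sigma1) * sigma2` implies `diff = mu2/sigma2 - mu1/sigma1`, the difference in risk-adjusted means. A minimal Python sketch checking that algebra (all parameter values here are made up):

```python
# Check that mu2 = (diff + mu1/sigma1) * sigma2 recovers
# diff = mu2/sigma2 - mu1/sigma1 (difference in risk-adjusted means).
mu1, sigma1, sigma2 = 0.01, 0.04, 0.05   # hypothetical per-pair values
diff = 0.2                               # target risk-adjusted difference

mu2 = (diff + mu1 / sigma1) * sigma2     # the transformed-parameters line
recovered = mu2 / sigma2 - mu1 / sigma1  # invert the transform

print(abs(recovered - diff) < 1e-12)     # prints True
```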

This runs on a simulated test set where (say) M is 10, but when I try to run it with M = 48 (and N = 240) nothing really happens. So have I set the code up wrong? Is there a better way to do this?
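In case it helps, this is the sort of simulated test set I mean: for each pair, draw a mean vector, standard deviations, and a correlation, then sample N correlated bivariate-normal returns. A rough sketch in Python/NumPy (my actual generator is in R; all distributions and values here are just placeholders matching the priors above):

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 10, 240                      # pairs of portfolios, periods per pair

rets = np.empty((M * N, 2))         # stacked returns, one row per period
index = np.empty(M * N, dtype=int)  # which pair each row belongs to
for m in range(M):
    mu = rng.normal(0.0, 0.04, size=2)               # per-pair mean returns
    sigma = np.abs(rng.normal(0.0, 0.04, size=2)) + 1e-3  # per-pair sds
    r = rng.uniform(0.0, 1.0)                        # within-pair correlation
    cov = np.array([[sigma[0] ** 2, r * sigma[0] * sigma[1]],
                    [r * sigma[0] * sigma[1], sigma[1] ** 2]])
    rows = slice(m * N, (m + 1) * N)
    rets[rows] = rng.multivariate_normal(mu, cov, size=N)
    index[rows] = m + 1                              # Stan is 1-indexed

print(rets.shape, index.min(), index.max())          # prints (2400, 2) 1 10
```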

Thanks

David