Hi everyone!

I would like to optimize my code, but I haven't succeeded so far.

In my current code, I have a parameter expressed as a matrix, and I specify the prior on each component of the matrix using a loop, but it's pretty slow. It looks like this:

```
parameters {
  matrix[N, L] mu;
  ...
}
model {
  for (i in 1:N) {
    for (j in 1:L) {
      mu[i, j] ~ normal(0, 1);
      ...
    }
  }
}
```

I read in Specifying priors for a matrix that I could use the function `to_vector` to express the prior on a matrix more efficiently. It would then be:

```
parameters {
  matrix[N, L] mu;
  ...
}
model {
  for (i in 1:N) {
    for (j in 1:L) {
      ...
    }
  }
  to_vector(mu) ~ normal(0, 1);
}
```
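Incidentally (a side note, not part of my original question), Stan also has a specialized `std_normal()` density that is equivalent to `normal(0, 1)` but skips some of the generic density's bookkeeping, so the vectorized statement can be written as:

```
model {
  // equivalent to to_vector(mu) ~ normal(0, 1), but slightly cheaper
  to_vector(mu) ~ std_normal();
}
```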

The code does compile and is way faster; however, the n_eff values are much smaller. So I was wondering whether the code is correct, and whether there is another way to optimize the runtime without damaging n_eff.

Thank you in advance for the help!

I don’t think it should have an effect. How much smaller? Have you tried running it multiple times? If the model has a problematic posterior, some runs might be much worse than others, even for the same model, depending on the initial values.
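For what it's worth, the looped and vectorized statements define exactly the same log density: `to_vector(mu) ~ normal(0, 1)` just sums the same elementwise terms in one pass. A quick check outside Stan (a Python sketch with made-up numbers, purely for illustration) shows the two sums agree:

```python
import math

def normal_lpdf(x, mu=0.0, sigma=1.0):
    """Log density of normal(mu, sigma), matching Stan's normal_lpdf."""
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma) - 0.5 * math.log(2 * math.pi)

# Hypothetical 2x3 parameter matrix, standing in for mu
mu = [[0.3, -1.2, 0.7],
      [2.1, 0.0, -0.5]]

# Log prior from the double loop: one normal(0, 1) term per element
loop_lp = sum(normal_lpdf(mu[i][j]) for i in range(2) for j in range(3))

# Log prior from the vectorized statement: same terms over the flattened matrix
flat = [x for row in mu for x in row]
vec_lp = sum(normal_lpdf(x) for x in flat)

print(abs(loop_lp - vec_lp) < 1e-12)  # True: identical target density
```

Since the target density is unchanged, any n_eff difference between the two versions should come from sampling variability (seeds, initial values, adaptation), not from the vectorization itself.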

For both versions of the code, I ran 500 iterations with 250 warmup and 4 chains.

For the first code with the loop, I obtained these n_eff values:

For the second code with the to_vector prior, I obtained these n_eff values:

(The code I posted is a simplification of the actual model; the prior block is the only part that differs between the “loop code” and the “to_vector code”. The initial values and the data are the same.)

I’m going to try running the code with other initial values to see whether the issue remains.

That’s very likely not enough warmup iterations for the adaptation to complete, which would explain the unstable results. Can you try running it with the default 1000+1000 iterations?

Running the same code with 2000 iterations and 1000 warmup with 4 chains also led to quite different n_eff values:

For the first code with the loop, I obtained these n_eff values:

For the second code with the to_vector prior, I obtained these n_eff values: