Hi guys,

I'm having issues with a stochastic volatility model I wrote. I run the model on different subsets of the data (i.e. different markets) and at different frequencies (yearly vs. quarterly). I have the following observations:

- Yearly gives me no issues.
- Quarterly converges nicely for some markets, but roughly a third of them do not.

The stochastic volatility sits in the state equation of a repeat sales model. In a repeat sales model I subtract the (log) price at which a good is bought from the price at which it is sold, so each observation is a log return. The change in price is modeled in a state equation as a random walk with a time-varying signal. The signal is itself modeled as a random walk. Moreover, I **also** model the noise in the measurement equation as a random walk. I take the exponent of the signal/noise to ensure the variance parameters stay positive. Here is the code:
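To be explicit, this is my reading of the code below in equations (my own notation: $\eta_t$ is the log signal scale, $h_t$ the log noise scale, $s_i$/$b_i$ the sell/buy periods of pair $i$):

$$
\begin{aligned}
y_i &\sim N\!\left(\mu_{s_i} - \mu_{b_i},\; e^{h_{s_i}} + e^{h_{b_i}}\right) \\
\mu_t &= \mu_{t-1} + e^{\eta_{t-1}}\, u_t, \qquad u_t \sim N(0,1) \\
\eta_t &= \eta_{t-1} + \sigma_\eta\, v_t, \qquad v_t \sim N(0,1) \\
h_t &= h_{t-1} + \sigma_h\, w_t, \qquad w_t \sim N(0,1)
\end{aligned}
$$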

```stan
model_string = "
data {
  int<lower=0> N;          // number of observations
  int<lower=0> Nt;         // number of time periods
  vector[N] yVar;          // log returns of object i
  int sell[N];             // index of when it is sold (1,...,T)
  int buy[N];              // index of when it is bought (1,...,T), with buy < sell
}
parameters {
  real<lower=0> sigEpsH;   // sigma of time-varying noise
  real<lower=0> sigEtaH;   // sigma of time-varying signal
  vector[Nt-1] innovMu;    // innovations of index
  vector[Nt-2] innovEta;   // innovations of signal
  vector[Nt-1] innovEps;   // innovations of noise
  real pEta;               // signal[t=1]
  real pEps;               // noise[t=1]
}
transformed parameters {
  vector[Nt] mu;
  vector[Nt-1] sigEta;
  vector[Nt] sigEps;
  sigEta[1] = pEta;
  sigEps[1] = pEps;
  for (t in 2:(Nt-1))
    sigEta[t] = sigEta[t-1] + innovEta[t-1]*sigEtaH;
  for (t in 2:Nt)
    sigEps[t] = sigEps[t-1] + innovEps[t-1]*sigEpsH;
  mu[1] = 0;
  for (t in 2:Nt)
    mu[t] = mu[t-1] + innovMu[t-1]*exp(sigEta[t-1]);
}
model {
  sigEtaH ~ normal(0,1);
  sigEpsH ~ normal(0,1);
  innovMu ~ normal(0,1);
  innovEta ~ normal(0,1);
  innovEps ~ normal(0,1);
  pEta ~ cauchy(0,5);
  pEps ~ cauchy(0,5);
  yVar ~ normal(mu[sell] - mu[buy],
                exp(sigEps[sell]) + exp(sigEps[buy]));
}
generated quantities {
  vector[Nt-1] signal;
  vector[Nt] noise;
  vector[Nt] index;
  vector[N] log_lik;
  signal = exp(sigEta);
  noise = exp(sigEps);
  index = exp(mu)*100;
  for (i in 1:N)
    log_lik[i] = normal_lpdf(yVar[i] | mu[sell[i]] - mu[buy[i]],
                             exp(sigEps[sell[i]]) + exp(sigEps[buy[i]]));
}"
```
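For reference, here is a small NumPy sketch of the generative process the Stan code implies (all sizes and scale values are made up for illustration; indices are 0-based here, whereas Stan is 1-based):

```python
import numpy as np

rng = np.random.default_rng(0)
Nt, N = 40, 200              # time periods and repeat-sale pairs (hypothetical sizes)
sigEtaH, sigEpsH = 0.1, 0.1  # hypothetical random-walk scales

# log-sd of the signal (length Nt-1) and of the noise (length Nt), each a random walk
sigEta = np.cumsum(np.concatenate(([np.log(0.05)], sigEtaH * rng.standard_normal(Nt - 2))))
sigEps = np.cumsum(np.concatenate(([np.log(0.05)], sigEpsH * rng.standard_normal(Nt - 1))))

# log price index: random walk whose innovation sd at step t is exp(sigEta[t-1])
mu = np.concatenate(([0.0], np.cumsum(np.exp(sigEta) * rng.standard_normal(Nt - 1))))

# repeat-sale pairs with buy < sell, and the observed log returns
buy = rng.integers(0, Nt - 1, N)
sell = buy + 1 + rng.integers(0, Nt - 1 - buy)
y = rng.normal(mu[sell] - mu[buy], np.exp(sigEps[sell]) + np.exp(sigEps[buy]))
```

Note that exponentiating the log-sd walks keeps both scale sequences strictly positive, which is the point of the `exp()` in the transformed parameters block.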

The number of iterations/chains and adapt_delta/max_treedepth are all fine and probably have nothing to do with it, as the estimates (and Rhat) really **explode** (I sometimes get estimates in the millions where they should be around 0.05!). My guess is that there are too many parameters moving over the same index, all modeled as random walks, so in essence a collinearity issue. The problem is that I do not really want to change the model itself. (And it does work at the yearly frequency.) Is there any transformation trick you know of that could help me out? I also noticed that if I put more stringent priors (or truncation) in the model block (parameter block), it can also converge, but again, I don't want to do this…

As usual, many thanks,

Alex