I have implemented the following double exponential smoothing (Holt's linear trend) model in Stan:

```stan
data {
  int<lower=3> n;                  // number of observations
  vector[n] y;                     // observed series
  int<lower=0> h;                  // forecast horizon (not used in this block)
}
parameters {
  real<lower=0, upper=1> alpha;    // level smoothing parameter
  real<lower=0, upper=1> beta;     // trend smoothing parameter
  real<lower=0> sigma;             // observation noise scale
}
transformed parameters {
  real l;                          // current level
  real lp;                         // previous level
  real b;                          // current trend
  vector[n] mu;                    // one-step-ahead fitted values
  lp = y[1];
  l = lp;
  b = y[2] - y[1];                 // initialize trend from the first two points
  mu[1] = y[1];
  mu[2] = y[2];
  for (t in 2:(n - 1)) {
    l = alpha * y[t] + (1 - alpha) * (lp + b);
    b = beta * (l - lp) + (1 - beta) * b;
    mu[t + 1] = l + b;             // forecast for the next observation
    lp = l;
  }
}
model {
  for (t in 3:n)
    y[t] ~ normal(mu[t], sigma);
}
```
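For reference, here is a minimal pure-Python sketch of the same recursion from the `transformed parameters` block, showing which fitted values actually enter the likelihood. The function name `holt_fitted` and the parameter values are illustrative, not part of the model above:

```python
def holt_fitted(y, alpha, beta):
    """Replicates the transformed-parameters recursion (0-based indexing)."""
    n = len(y)
    mu = [0.0] * n
    mu[0], mu[1] = y[0], y[1]     # mu[1], mu[2] in Stan's 1-based indexing
    lp = y[0]                     # previous level
    b = y[1] - y[0]               # initial trend
    for t in range(1, n - 1):     # Stan's for (t in 2:(n-1))
        l = alpha * y[t] + (1 - alpha) * (lp + b)
        b = beta * (l - lp) + (1 - beta) * b
        mu[t + 1] = l + b         # one-step-ahead forecast
        lp = l
    return mu

y = [10.0, 12.0, 13.0, 15.0, 16.0]
mu = holt_fitted(y, alpha=0.5, beta=0.3)
# Only mu[2:] (Stan's mu[3:n]) appears in the likelihood;
# mu[0] and mu[1] are just copies of y[0] and y[1].
```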

Is it valid to define the likelihood over only part of the data (here, `y[3:n]`, since `mu[1]` and `mu[2]` are fixed to the observed values)? Sampling seems to work smoothly, but I'd like to know whether this could cause any problems on Stan's side.