Hi

Given the adage of starting small, I have a very simple model which I’m failing to fit correctly.

I think I’m clearly misunderstanding something fundamental, and I’m not really sure what I should be searching for to better understand it.

```
// step_test
data {
  int<lower=0> N;
  real y[N];
  real predicted[N];
}
parameters {
  real<lower=0> sigma;
  real delta[N];
}
model {
  sigma ~ normal(0, 1);
  delta ~ normal(0, 10);
  for (n in 1:N) {
    y[n] ~ normal(predicted[n] + delta[n], sigma);
  }
}
```

Why is sigma not basically zero?

Why is there so much variation in the posterior for delta?

Is it that the sampler doesn't have enough freedom to explore the parameter space, given that the only probable value for sigma is 0? I've tried setting init = 0.
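
To spell out my reasoning (please correct me if this is where I'm going wrong): the likelihood is

$$y_n \sim \mathcal{N}(\mathrm{predicted}_n + \delta_n, \sigma),$$

and since there is one $\delta_n$ per observation, setting $\delta_n = y_n - \mathrm{predicted}_n$ fits every data point exactly, so I'd expect the posterior mass for $\sigma$ to pile up near 0 and each $\delta_n$ to concentrate on that residual.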

Is there any way of fitting something like this?

I later want to add parameters that generate the prediction, along with some uncertainty in y, but I'm still interested in the difference between y and the prediction.

Thanks

I asked something similar a while back and marked it solved; that was a mistake.

R code to generate test data:

```
N = 10
y = 1:N                          # rep(1:N) is equivalent to just 1:N
predicted = y + rnorm(N, 0, 10)  # a deliberately noisy "prediction"
```