For a current research project, I am trying to approximate a model with the Ornstein-Uhlenbeck process.

To do this I use Bayesian inference with the help of Stan.

However, I am relatively new to Stan and Bayesian inference, and I am currently struggling with the behavior of a prior.

I won’t include the full Stan model for the Ornstein-Uhlenbeck process and will instead concentrate on a minimal model that still shows the behavior that puzzles me.

I have data for x, which describes the movement of a sample. One characteristic parameter of this process is the characteristic time \tau, which enters the log-likelihood for x implicitly through the mean of the likelihood function.

However, the question I now have is more fundamental and I wonder what the following Stan code actually does.

I am feeding it a simplified dataset x, which is just a list of the values 1 to 100:

```
data {
  int<lower=0> t;
  vector[t] x;
}
parameters {
  real<lower=0> tau; // dummy parameter
}
model {
  print("tau = ", tau);
  tau ~ normal(100, 1);
  x ~ normal(6, 3);
}
```

In this example, I have the parameter \tau, which is completely independent of the data x.
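To make that independence concrete: as I understand it, the two sampling statements just add two terms to the log target, and the x term is a constant with respect to \tau. Here is my own NumPy transcription of the target as I read it (a sketch, not Stan's internals):

```python
import numpy as np

def norm_logpdf(v, mu, sigma):
    # log density of a normal(mu, sigma), evaluated elementwise
    return -0.5 * np.log(2 * np.pi * sigma**2) - (v - mu) ** 2 / (2 * sigma**2)

x = np.arange(1, 101)  # the dataset: values 1 to 100

def log_target(tau):
    # The model block adds: normal_lpdf(tau | 100, 1) + normal_lpdf(x | 6, 3)
    return norm_logpdf(tau, 100.0, 1.0) + norm_logpdf(x, 6.0, 3.0).sum()

# The x term is a constant offset in tau, so differences in the log target
# between two tau values come from the prior term alone:
diff_target = log_target(98.0) - log_target(100.0)
diff_prior = norm_logpdf(98.0, 100.0, 1.0) - norm_logpdf(100.0, 100.0, 1.0)
assert np.isclose(diff_target, diff_prior)
```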

Now I feel like I am misunderstanding the line `tau ~ normal(100, 1)`.

How I understand it is that after the model has run, \tau should be a normally distributed parameter with a mean of 100 and a sigma of 1.

What I get instead is that \tau has a mean of 55.69 and a sigma of 29.09.
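For comparison, this is the summary I expected: since \tau is (in my understanding) simply drawn from its prior, direct draws from a normal(100, 1) should reproduce it (the `<lower=0>` bound truncates at 0, which should be negligible that far from the mean). A quick NumPy check of that expectation:

```python
import numpy as np

rng = np.random.default_rng(42)
# What I expected the posterior of tau to look like: normal(100, 1)
draws = rng.normal(loc=100.0, scale=1.0, size=100_000)
print(f"mean = {draws.mean():.2f}, sigma = {draws.std():.2f}")
```

This prints a mean near 100 and a sigma near 1, nothing close to the 55.69 and 29.09 that Stan reports.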

Also, when running 50 iterations with the data x, the print statement in the Stan model gives me values like

```
Chain 1: tau = 4.11202
Chain 1: tau = 4.16449
Chain 1: tau = 4.16449
Chain 1: tau = 4.25171
Chain 1: tau = 4.25171
Chain 1: tau = 4.38693
```

I have spent the day trying to understand how those mean and sigma values come about and what the print statement actually shows, but after digging through the documentation I seem to lack the knowledge of what to look for.

I am thankful for any help.