Hi all,

I’m working on a simulation of a decision-theory problem where the variance is, at first, completely unconstrained. The speed is estimated with:

```
// speed_tau is used as the standard deviation of the normal, despite the name
speed_tau ~ gamma(4, 1);
measured_speed[pos:(pos + num - 1)] ~ normal(Individual_Throw_Speed[p], speed_tau[p]);
```

At first, there is only one throw per person, and the decision maker is deciding whether or not to throw again. But with a single observation the estimated variance tends towards infinity, so when I generate new data based on the fit, the model has essentially no information about the variance and the simulated values can be huge.

I’d like to know if there is something I can do to keep the posterior from exploding. The only option I can think of is to manually draw `speed_tau` from its prior when a person has only one throw, but that seems kludgy, so I wanted to check whether there are other options.

The obvious answer is to just add a prior for `speed_tau`

…

Generating new group-level parameters (here, a group is an individual with zero or more throws) from the priors is the typical way to do predictive inference for new groups. You want to do it in `generated quantities` in Stan to make sure you include the uncertainty from the model fit, plus the uncertainty from sampling the group-level parameters from their priors, plus the uncertainty of generating the throw speeds from the normal distribution. Then you can simulate entirely new players with zero observations, and of course you can generate predictions for players with only a single observation.
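
A minimal sketch of that `generated quantities` step. The hyperparameter names `mu_speed` and `sigma_speed` are illustrative, since the original model's prior for `Individual_Throw_Speed` isn't shown; substitute whatever your model actually uses:

```
generated quantities {
  // Draw group-level parameters for a brand-new player from their priors.
  real new_player_speed = normal_rng(mu_speed, sigma_speed);  // hypothetical hyperparameters
  real new_player_tau = gamma_rng(4, 1);                      // same prior as speed_tau

  // Predictive draw for a single throw by that new player; running this in
  // generated quantities means it is repeated per posterior draw, so the fit
  // uncertainty propagates into the prediction automatically.
  real new_throw = normal_rng(new_player_speed, new_player_tau);
}
```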

Usually there is a hierarchical prior for the mean speed and the standard deviation that you fit along with the other data, rather than just using a fixed `gamma(4, 1)`. Maybe that's what @sakrejda meant, given that you already did have a prior for `speed_tau`. The same applies to the individual throw speed, which you also don't know.
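
As a sketch of what such a hierarchy might look like (the hyperparameter names, hyperpriors, and the `N_players` size declared in `data` are all illustrative, not from the original model):

```
parameters {
  real mu_speed;                    // population mean throw speed
  real<lower=0> sigma_speed;        // between-player spread of mean speeds
  real<lower=0> tau_rate;           // shared rate for the per-player scales
  vector[N_players] Individual_Throw_Speed;
  vector<lower=0>[N_players] speed_tau;
}
model {
  mu_speed ~ normal(0, 10);         // weakly informative hyperpriors (illustrative)
  sigma_speed ~ normal(0, 5);
  tau_rate ~ gamma(2, 2);
  Individual_Throw_Speed ~ normal(mu_speed, sigma_speed);
  speed_tau ~ gamma(4, tau_rate);   // partial pooling replaces the fixed gamma(4, 1)
}
```

With this setup, a player with a single throw borrows strength from the other players through the shared hyperparameters instead of relying only on a fixed prior.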

On a second look I wonder why this variance would explode… with a single observation the likelihood carries essentially no information about the scale, but a gamma(4, 1) prior should be enough to keep the posterior proper, no?