I am working on a model where each data point is an integer (either a sample from the unknown distribution or a censored observation). Because the integers can be negative and may have much less variance than a Poisson or negative binomial allows, I decided to write my own distribution. I thought I had found a nice solution using a continuous distribution and "rounding", but my implementation returns NaN in the tails and I'm having trouble correcting this while maintaining differentiability.

For example:

```
functions {
  real roundNormal_lpmf(int x, real mu, real sigma) {
    return log_diff_exp(normal_lcdf(x + 0.5 | mu, sigma),
                        normal_lcdf(x - 0.5 | mu, sigma));
  }
}
```

This returns NaN for x = 6, mu = 0.1, sigma = 0.5, but is fine for x = 4 (it returns -25.9764).
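Here's what I think is happening, sketched outside Stan in Python with SciPy (this is my approximation of the computation, not Stan's exact arithmetic): in the upper tail both `normal_lcdf` values saturate at log(1) = 0 in double precision, so their log-difference loses everything, while the complementary (survival) log-CDF values stay large, negative, and well separated.

```python
import numpy as np
from scipy.stats import norm

def log_diff_exp(a, b):
    # log(exp(a) - exp(b)), computed as a + log1p(-exp(b - a)), assuming a > b
    return a + np.log1p(-np.exp(b - a))

mu, sigma = 0.1, 0.5

# Naive version (mirrors my Stan function): both log-CDFs are ~log(1) = 0
# far above the mean, so the subtraction underflows and the result is not finite.
naive = log_diff_exp(norm.logcdf(6.5, mu, sigma),
                     norm.logcdf(5.5, mu, sigma))
print(naive)  # not finite

# Upper-tail-stable version: use the log survival function (Stan: normal_lccdf),
# whose values in this tail are well separated, e.g. roughly -61.6 vs -85.4.
stable = log_diff_exp(norm.logsf(5.5, mu, sigma),
                      norm.logsf(6.5, mu, sigma))
print(stable)  # roughly -61.6, a finite log-probability

# Cross-check against the case that still works in my Stan version (x = 4,
# where Stan returns -25.9764):
check = log_diff_exp(norm.logsf(3.5, mu, sigma),
                     norm.logsf(4.5, mu, sigma))
print(check)  # about -25.98, agreeing with the naive version where it works
```

So one fix might be to branch on `x <= mu` inside `roundNormal_lpmf`, using `normal_lcdf` below the mean and `normal_lccdf` above it, though I haven't verified how that branch interacts with the gradient.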

I’ve tried using is_nan() inside the function and substituting a number close to negative infinity, but then I get:

Rejecting initial value:

Gradient evaluated at the initial value is not finite.

Stan can’t start sampling from this initial value.

Any ideas?