I think the issue only arises with the current rstan 2.16.2.
I was also getting this with rstan 2.15.2.
If you run it multiple times, say 100, what fraction will fail?
How is your memory usage?
About 30% fail with r=.1. None fail when r=1 and all fail when r=0.
I don’t know how to determine memory usage or CPU usage. Certainly there is nothing particularly noticeable if I look at Activity Monitor.
It seems that when it fails, the model is not initializing within 100 attempts. But rather than reporting that, it just says
"c++ exception (unknown reason)".
I discovered two things that might be of interest with respect to this problem:
- It runs correctly if I replace
  tau[i] ~ exponential(betas);
  with
  target += -betas * tau[i] + log(betas);
My understanding is that these should do exactly the same thing, so the fact that one crashes and the other doesn’t is interesting. Not only does the second version not crash, it produces what look like correct results. I should mention that I get the same problem with gamma distributions, and those problems are likewise fixed when I replace the sampling notation with an explicit log density.
- Even though no one else seems to get these errors, I also get them when I run the model on another Mac, so it’s not something weird about this one laptop. It might be something weird about how I’m setting things up, but I’m not really competent to do anything particularly idiosyncratic. I recently had to install clang4 because Rcpp had started giving fatal errors after some update.
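For reference, the replacement described above can be sketched as a minimal self-contained model. The data and parameters blocks here are hypothetical (N, the constraints, and the priors are my guesses, not the original model); only the model block mirrors the lines I changed:

```stan
data {
  int<lower=1> N;
}
parameters {
  real<lower=0> betas;
  vector<lower=0>[N] tau;
}
model {
  for (i in 1:N) {
    // Version that triggers the crash for me:
    // tau[i] ~ exponential(betas);

    // Hand-written exponential log density that runs cleanly:
    target += -betas * tau[i] + log(betas);
  }
}
```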
There’s clearly something different about your install if you get different answers from everyone else.
The main difference between our probability functions and rolling your own is that our probability functions will reject and throw exceptions, whereas handwritten ones built from functions like
log() will just propagate NaN values until the proposal gets rejected at the end. We did it that way to provide warning messages, which should now be turned back on in RStan 2.16 (they are for me, but again, people are reporting different behavior on different installs).
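To make the distinction concrete, here is a sketch of the two behaviors with hypothetical scalar parameters (not the original model); betas is deliberately left unconstrained so that invalid values can arise:

```stan
parameters {
  real betas;        // unconstrained, so betas <= 0 is possible
  real<lower=0> tau;
}
model {
  // Built-in density: exponential validates its arguments and throws
  // an exception, with a warning message, whenever betas <= 0.
  // tau ~ exponential(betas);

  // Hand-rolled density: log(betas) silently evaluates to NaN when
  // betas < 0; the NaN propagates through target and the iteration
  // is only rejected at the end, with no informative message.
  target += -betas * tau + log(betas);
}
```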