I am working with a very simple system and fail to get the adaptation (of the stepsize) working.
For the record, my system consists of multiple univariate Gaussian variables with mean mu=0, and the provided data are the variances (model block:
y ~ multi_normal(mu, matCov);).
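To make the setup concrete, here is a minimal sketch of what I believe the full model looks like. This is only my reading of the description above: the names N and sigma2, the diagonal construction of matCov, and placing y in the parameters block (so that it is what HMC actually samples) are all assumptions, not code from my actual model.

```stan
// Minimal sketch (assumed names: N, sigma2; assumed structure:
// zero mean, diagonal covariance from the supplied variances,
// y sampled as a parameter).
data {
  int<lower=1> N;
  vector<lower=0>[N] sigma2;  // the provided variances
}
transformed data {
  vector[N] mu = rep_vector(0, N);
  matrix[N, N] matCov = diag_matrix(sigma2);
}
parameters {
  vector[N] y;  // the Gaussian variables being sampled
}
model {
  y ~ multi_normal(mu, matCov);
}
```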
Now, when I use a stepsize that I know is good for the system, I get a high acceptance rate and not very accurate sampling, but that's okay for now. However, when I don't provide a stepsize and instead want the adaptive algorithm to figure one out, it finds a stepsize that is two orders of magnitude bigger than the one I know works (roughly 0.01 versus 1). This obviously leads to an acceptance rate of 0.
This happened both with a) 1K burn-in iterations and int_time=200, and with b) 20K burn-in iterations and int_time=1.
- Are there any input settings for the adaptation I should be playing with? I know there are a couple, but I haven't had enough insight to use them wisely yet.
- Is the stepsize that the adaptation finds on the same scale/in the same units as the one I provide explicitly? Is there something I should take care of there?