Model with a large number of parameters faces too many rejections

I have a large model which currently has 30 parameters, with a few more to come. I set pretty tight constraints around all of them to help convergence. However, the model still gives up after too many attempts:

Initialization between (-2, 2) failed after 100 attempts.
Try specifying initial values, reducing ranges of constrained values, or reparameterizing the model.

I am not sure there is really anything wrong with the model or the specifications. It’s just a large model, and I expect it will take some trial and error to get it to converge. Is there a way I can get Stan to make more attempts? Or does someone think this truly means something is wrong with the model?

I am back on this problem. I have a 12-parameter model. For testing purposes, I first reduced it to 6 parameters and fed the remaining 6 in as input data. The model runs very well in that case, yielding accurate results for the 6 parameters. Then I changed the last 6 from input data to parameters, and now I get this initialization error again. I don’t suspect any fundamental problem with the model, since it worked when the last 6 parameters were fixed by input data. I have applied fairly tight priors around them. So is this more a matter of telling Stan to try more draws? How would I do that?

Without seeing the model it’s hard to say, but Stan generally has no problem fitting models with thousands of parameters if the posterior is reasonably well-behaved. I expect (based on the initialization error and your other post) that you’re not constraining your parameters appropriately when you declare them. A parameter strictly between 0 and 1, for example, should be declared as

real<lower=0,upper=1> alpha;

These bounds should be declared for any parameter whose support has hard boundaries, such as a parameter with a uniform prior. Note that Stan can have difficulties with hard boundaries, and uniform priors are generally a bad idea (though obviously I’m not sure what you’re calling “fairly tight priors”).
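As a minimal sketch (parameter names here are just placeholders), the constraint in the declaration should carry any hard bounds the prior implies:

parameters {
  real<lower=0, upper=1> theta;   // bounds match the uniform(0, 1) prior below
  real beta;                      // no hard boundaries, so no constraint needed
}
model {
  theta ~ uniform(0, 1);   // hard boundaries: the declaration above must carry them
  beta ~ normal(0, 1);     // soft prior: no bounds required
}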

“Fairly tight priors” in this case means normal(0, var), where var is mostly a decimal fraction and occasionally 10.0. Every parameter in the model has such a prior applied to it.
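In the Stan program that looks something like this (parameter names are just placeholders for my actual parameters):

beta1 ~ normal(0, 0.5);   // most parameters: scale well below 1
beta2 ~ normal(0, 10.0);  // a few parameters with a wider scale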

Sometimes inits are unstable in regressions with varying scales: you can quickly overflow or underflow things like inverse logit (they involve exponentials) if an initial coefficient of 2 gets multiplied by a predictor with value 500. It often helps to standardize (rescale) your predictors or, if you can’t do that, to find inits that match your expected posterior scales. One thing to try is init=0 or init=0.1 or something smaller than 2.
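A minimal sketch of the rescaling idea, assuming a simple logistic regression with an N x K predictor matrix x (all names here are illustrative, not from your model). The init argument itself (e.g. init=0.1 as above) is passed from whichever interface you run, not set inside the Stan program.

data {
  int<lower=0> N;
  int<lower=0> K;
  matrix[N, K] x;
  int<lower=0, upper=1> y[N];
}
transformed data {
  matrix[N, K] x_std;               // standardized copy of the predictors
  for (k in 1:K) {
    real mu_k = mean(col(x, k));
    real sd_k = sd(col(x, k));
    for (n in 1:N)
      x_std[n, k] = (x[n, k] - mu_k) / sd_k;
  }
}
parameters {
  real alpha;
  vector[K] beta;
}
model {
  alpha ~ normal(0, 2);
  beta ~ normal(0, 2);
  y ~ bernoulli_logit(alpha + x_std * beta);  // coefficients stay on a sane scale
}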

You should be getting error messages from Stan 2.17.2 in all the interfaces saying what went wrong.

I’d like to qualify this a bit. Stan can have problems if there’s significant probability mass both very close to a boundary and also far away from it, because on the unconstrained scale the sampler can’t find a step size that works well everywhere.

Uniform priors aren’t so bad in and of themselves. The problem is when they’re combined with artificial hard boundaries that truncate what would otherwise be probability mass beyond those boundaries; that pushes Stan very hard to try to fit all the way out to the boundary.
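A small sketch of the contrast, with purely illustrative parameter names and ranges:

parameters {
  real<lower=-2, upper=2> theta_hard;  // artificial hard cutoffs at +/- 2
  real theta_soft;                     // unconstrained; regularized only by its prior
}
model {
  theta_hard ~ uniform(-2, 2);  // flat inside hard boundaries; if the data want mass beyond +/- 2, it piles up against the cutoff
  theta_soft ~ normal(0, 1);    // soft prior covering a similar range, with no hard cutoff
}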

Thanks for the clarification!

I should’ve also said that in some situations this is largely mitigated by the Jacobian of the transform. So you always need to take that into account and figure out how things move on the unconstrained scale, which is where sampling actually happens.
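For a (0, 1)-bounded parameter, for example, Stan samples an unconstrained y and sets theta = inv_logit(y); writing that reparameterization out by hand shows the Jacobian term the built-in declaration adds automatically (a sketch, with an arbitrary beta prior just for illustration):

parameters {
  real y;  // unconstrained; this is the scale the sampler actually works on
}
transformed parameters {
  real<lower=0, upper=1> theta = inv_logit(y);
}
model {
  // Jacobian adjustment for theta = inv_logit(y):
  // log |d theta / d y| = log(theta) + log(1 - theta)
  target += log(theta) + log1m(theta);
  theta ~ beta(2, 2);  // example prior stated on the constrained scale
}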