I would say it is not worthwhile. The drawback that a lot of people overlook is that for models that are so complicated that the default initial values do not work, the mode can be very far from the mean and median. Indeed, the region around the mode can have essentially zero posterior probability.
If you are having problems initializing, then post the model and maybe someone can help you overcome them directly.
What Ben said, but for some models it works in the Craigslist-used-car sense, and it might give you a hint about why your initial values aren't working if you're stuck. So there's no reason not to try it if you can't find alternatives.
I think you are both misunderstanding my question. It's more fundamental, not about a specific model. The default values do work. What I'm missing is why they would work any better than the mode. Why isn't the mode the default?
The particular case that made me think about this: I have a model that is too large, and I can only do about 100 HMC iterations, so I want to squeeze as much as possible out of those 100.
He was referring to something called concentration of measure, and you should heed his warning: if your model is "big" (I understood this as high dimensional; correct me if I'm wrong), chances are that starting off from the mode is not going to do much in the way of speeding up convergence.
Because the added effort of finding it doesn't pay off in high dimensions, and in low dimensions it usually doesn't matter.
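To see the concentration-of-measure point concretely, here's a minimal numpy sketch (not Stan code, just an illustration): in a d-dimensional standard normal, the mode is at the origin, but draws concentrate on a thin shell at radius roughly sqrt(d). So for large d, initializing at the mode puts you far from where the posterior mass actually lives.

```python
import numpy as np

rng = np.random.default_rng(0)

for d in [1, 10, 100, 1000]:
    # 10,000 draws from a d-dimensional standard normal
    draws = rng.standard_normal((10000, d))
    # Distance of each draw from the mode (the origin)
    radii = np.linalg.norm(draws, axis=1)
    print(f"d={d:4d}  mean distance from mode = {radii.mean():7.2f}"
          f"  (~sqrt(d) = {np.sqrt(d):7.2f})")
```

As d grows, essentially no draws land near the mode; the sampler has to travel out to the typical set anyway, which is part of why mode-initialization buys you little in high dimensions.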
In terms of advantages to initialising the sampler close to the mode, it seems to work well for clustering models.
In the Stan User Guide, under the Clustering Models -> Multimodality section, it mentions: “the advice often given in fitting clustering models is to try many different initializations and select the sample with the highest overall probability. It is also popular to use optimization-based point estimators such as expectation maximization or variational Bayes, which can be much more efficient than sampling-based approaches.”