Cmdstanr trouble providing initial parameter values

On one Linux machine, I can provide initial values with m1$sample(..., init = function(chain_id) list(B = 0), ...). On another Linux machine with similar versions of everything, this doesn’t work: it seems like the parameter label “B” is no longer matching the “B” parameter in the model. I can still get the model to run with init = 0, but that’s not ideal.

Sorry, I have no idea how to reproduce this problem. How can I even confirm that the parameter name matching isn’t working? The locale is set to en_US.UTF-8 on both machines.

Operating System: RedHat with kernel 2.6.32-642.13.1.el6.x86_64
Interface Version: cmdstanr 0.1.3 (not the install_github version)
Compiler/Toolkit: gcc 7.2

Do you mean you get an error (if so what does it say?) or it just doesn’t seem to use the init that you supplied?

Yeah this is tricky to debug because cmdstanr is just writing JSON files that get passed to cmdstan and cmdstan isn’t great about warning you about things like this.
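For reference (values here are just from the example above), the per-chain init file cmdstanr writes is plain JSON mapping parameter names to values, so for list(B = 0) the file on disk should look roughly like:

```json
{
  "B": 0
}
```

One low-tech check is to find that file in the session temp directory and confirm the key really is the parameter name you expect.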

There is no error. The model behaves as if the inits are random; the likelihood evaluates to NaN until it gives up. It seems like it isn’t using the init that I supplied. I’m guessing that’s because the name doesn’t match, for some reason. I’m thinking that maybe the encodings are different and the comparison is not locale aware, but that’s a wild speculation.
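To illustrate the encoding theory (this is just an illustration, not the actual comparison code cmdstan runs): two strings can render identically on screen and still compare unequal byte-for-byte, e.g. ASCII “B” versus the Greek capital beta.

```python
# Illustration only: lookalike parameter names that fail a byte-wise comparison.
ascii_b = "B"       # U+0042 LATIN CAPITAL LETTER B
greek_b = "\u0392"  # U+0392 GREEK CAPITAL LETTER BETA, renders as "B"

print(ascii_b == greek_b)       # False: same glyph, different code point
print(ascii_b.encode("utf-8"))  # b'B'
print(greek_b.encode("utf-8"))  # b'\xce\x92'
```

If something like this were happening, hex-dumping the generated JSON init file would show it immediately.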

So the next step is to open a cmdstan issue requesting better diagnostics?

Yeah possibly. Since, like you say, it may not be possible for us to reproduce this, if it’s not too much of a pain could you check whether this happens when using cmdstan directly or only via cmdstanr? That would help determine where the issue should go and also who is the right person to work on it.

I have no clue but that’s an interesting guess. @rok_cesnovar @mitzimorris do you think that’s possibly what’s going on here? Or any other ideas why the same init specification would work on one linux machine but not another?

this came up before, and there’s a way to diagnose what the init values are -

To verify that the specified values will be used by the sampler, you can run the sampler with option algorithm=fixed_param, so that the initial values are used to generate the sample. Since this generates a set of identical draws, setting num_warmup=0 and num_samples=1 will save unnecessary iterations. As the output values are also on the constrained scale, the set of reported values will match the set of specified initial values.
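As a sketch of that check from the command line (the model binary and file names here are hypothetical; adjust to your setup):

```shell
# Hypothetical paths: ./my_model is the compiled cmdstan executable,
# init.json is the same init file cmdstanr generated.
./my_model sample \
  algorithm=fixed_param \
  num_warmup=0 num_samples=1 \
  init=init.json \
  output file=check.csv

# The single reported draw in check.csv should equal the values in init.json.
```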

Sorry folks. False alarm. It looks like the model was failing due to crap data, not crap starting values. I can’t reproduce the problem that I saw yesterday.