I couldn’t initialize the metric with a list of np.ndarrays or with a single one; I didn’t open an issue for it.
The last two points aren’t really advertised functionalities of the sample method, but this sure would be convenient (at least for me).
Anyhow, what I did when subclassing CmdStanModel was just hacking together the appropriate JSON serialization.
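To illustrate the kind of serialization hack I mean (this is a rough sketch, not the actual subclass code): CmdStan expects plain JSON on disk, so NumPy arrays and scalars have to be converted to native Python types first.

```python
# Sketch: teach json.dumps to handle NumPy types by converting
# arrays to nested lists and NumPy scalars to Python scalars.
# Illustrative only -- not the actual CmdStanModel subclass.
import json
import numpy as np

class NumpyEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, np.ndarray):
            return obj.tolist()
        if isinstance(obj, np.integer):
            return int(obj)
        if isinstance(obj, np.floating):
            return float(obj)
        return super().default(obj)

print(json.dumps({"v": np.arange(3)}, cls=NumpyEncoder))
# -> {"v": [0, 1, 2]}
```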
Hm, I do not believe it is possible to infer the size of an array in Stan, or is it? I mean, for example, having a data block that declares something like `int no_elements;` together with `vector[no_elements] v;`, and only specifying `v`. Because I’m lazy, I didn’t want to always type (in Python) `data=dict(v=v, no_elements=len(v))` but just `data=dict(v=v)` and let `no_elements` be inferred from the shape of `v`. It’s of course not only about the shape of arrays, but it’s still probably not mainstream functionality.
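On the Python side, the inference I have in mind can be sketched as a small helper that fills in missing integer size variables from array shapes before the data dict is handed to the sampler. The function name and the mapping format here are made up for illustration; this is not CmdStanPy functionality.

```python
# Sketch: fill in integer size variables from array shapes.
# `size_map` maps a size variable to (array_name, axis), e.g.
# {"no_elements": ("v", 0)} means no_elements = v.shape[0].
# Hypothetical helper, not part of CmdStanPy.
import numpy as np

def infer_sizes(data, size_map):
    """Return a copy of `data` with missing size entries filled in."""
    full = dict(data)
    for size_var, (array_name, axis) in size_map.items():
        if size_var not in full:
            full[size_var] = np.asarray(full[array_name]).shape[axis]
    return full

v = np.array([1.0, 2.0, 3.0])
data = infer_sizes({"v": v}, {"no_elements": ("v", 0)})
# fills in data["no_elements"] = 3; an explicitly passed value wins
```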
Yes, the use case is not to just discard the aberrant chains and act as if everything were fine. Instead, I have the following situation:
I have a reference solution/sample/method and want to compare it to other methods. For some of the methods, issues arise during sampling, leading to chains getting stuck and taking extremely long. Eventually they may find the other chains, or they may not. In either case, I want to be able to terminate the chain and work with the data that I have, to compare the different methods. Currently I only use the timings, but I could also work with either all of the data from the finished chains or with all the complete and partial data.
Things would of course be easier if I just submitted a huge job on a cluster, but, apart from not having access right now, doing things locally in this way shortens the development iteration times.
This won’t fly, because the fact that no_elements is an array dimension is not something that can easily be inferred; from the point of view of the generated C++ program, it’s just another int variable.
When foo and bar have different lengths, where is the problem?
Yes, I was just expressing my confusion at @ahartikainen’s question, because I did not believe it possible (in practice or in principle): Stan does not and cannot know that this int is actually just the size of that vector. I guess such functionality could be added, but I don’t believe anyone would want to do that; it looks like very little benefit for a lot of headache.
One of the reasons I wanted this functionality is that I have different models, each characterized by some number of main states, which then induces a model-specific number of auxiliary states and parameters. This could of course be partially done within Stan, but I did not do so for two reasons.
First, I wanted to keep the Stan interface as similar as possible across models, changing only the functions block (containing the ODE) and the model block.
Second, I believe there were some issues with using inferred data (quantities from the transformed data block) to specify the size of data arrays/vectors. I don’t know whether there is a way to do this in Stan, and since I had a solution ready, I didn’t bother investigating further.
There would be no problem, just an exception thrown ;)