If we think most users won’t even use this then I think a named method that they can just ignore or not know about is safer than overloading the indexing method for the fitted model object.
Yeah, but if we do too many things like that then there will be too many options when you tab-complete after the $ sign.
I’m hoping we can do two pieces. The number two here is purely logistical: I’m hoping we can get all the tech leads together in at most two meetings with substantial attendance overlap. Plus, we were pretty tired of talking about the roadmap by the end of the second day last time, so splitting it up makes the whole thing more manageable. I think we can make the final doc, or the concatenation of the two docs, fairly coherent, though it’s always going to be a list of separate projects.
Yes, majority vote. It can live until it’s replaced, and the plan is to replace it every year (that’s in the SGB’s definition of the TWG Director).
I’ll add a TL;DR to the top of the document, I think you’re right that a high-level summary would be really useful.
Okay, so, I think just this one last thing to resolve - fit[i] vs fit$attribute.
I will just say that if you’re a new user and you want to know how to get a single iteration, tab completion on a fit object with overloaded indexing isn’t going to help you. I’ve been programming in OCaml, where the documentation often consists of editor-supported tab completion, and I have to say I have no problem with a long list of methods with informative names (vs. symbols like >>| in OCaml). How do you all want to decide these things? Sounds like @andrewgelman, @jonah, and @ahartikainen might prefer the method, while @bgoodri doesn’t care too much but has a slight preference. @ariddell, do you have a preference on overloading fit[i] vs adding a separate method?
Hi all. As the “user, not developer” in this discussion, it’s hard for me to say for sure what I will like before I use it. What I can say is that right now, I’m very often doing the following steps:
1. Fitting a Stan model.
2. Extracting a posterior summary (most typically posterior medians, but I could want means, quantiles, or single draws).
3. Using the summary to do later calculations, as with MRP or graphing fitted models. When it’s single draws, I’ll loop thru the draws.
In step 3, I want to be able to have objects that are the same size and shape as the parameters in the Stan model. So, in my running example, scalar alpha, vector beta, 2x3x4 array theta.
Right now, my code is a mess because, after running Stan, I first have to extract the objects and then do awkward steps to compute posterior medians or extract draws. For example:
fit <- stan(…)
alpha_hat <- median(extract(fit)$alpha)
beta_hat <- apply(extract(fit)$beta, 2, median) # or something like that; I can never remember how to do “apply”
theta_hat <- apply(extract(fit)$theta, c(2, 3, 4), median) # I had to look this up; I don’t actually know how to do this without looping thru all the dimensions of theta.
And then I can use alpha_hat, beta_hat, theta_hat in calculations.
So if we don’t have these extractor functions that let me pull out the posterior median, mean, random draws, or other things, then I need tons of helper code, which makes this material difficult to teach, difficult to explain, and contrary to the spirit of probabilistic programming and Bayesian workflow.
I guess this is my personal analogy to people wanting things to be “Pythonic.” I want things to be “Bayesian workflow”-thonic. And I think this will be increasingly important.
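For concreteness, here’s a minimal NumPy sketch of the extractor behavior being asked for. The draws dict and its shapes are hypothetical, standing in for what a fitted model object stores; the point is that reducing over the draws axis yields objects with the same shape as the parameters in the Stan program.

```python
import numpy as np

# Hypothetical posterior draws for the running example: 4000 iterations
# of a scalar alpha, a length-5 vector beta, and a 2x3x4 array theta.
rng = np.random.default_rng(0)
draws = {
    "alpha": rng.normal(size=(4000,)),
    "beta": rng.normal(size=(4000, 5)),
    "theta": rng.normal(size=(4000, 2, 3, 4)),
}

# Reducing over axis 0 (the iterations) gives summaries with the same
# shape as each parameter, whatever its rank.
alpha_hat = np.median(draws["alpha"], axis=0)   # scalar
beta_hat = np.median(draws["beta"], axis=0)     # shape (5,)
theta_hat = np.median(draws["theta"], axis=0)   # shape (2, 3, 4)
```

The same one-liner works for any summary (np.mean, quantiles via np.quantile) without per-rank looping, which is the helper-code burden described above.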
It’s a pretty minor distinction between fit[i] and fit$iteration(i). Does anyone have strong preferences?
How hard would it be to create a prototype for the interface (wrapping RStan2 + PyStan2/PyStan3) and see what the use cases are?
We have some prototype code, and I think @ariddell does as well. But in this case, we have one person in the last eight years who wants this functionality.
Sure, and still I think the current extract is hard, and so probably is fit['theta'] with non-user-defined functions (functions that are designed for one draw).
I think the one thing to consider is: do we want to pass fit results to external functions, or run functions against the fit object? That is, fit --> parameters for function, or function --> fit. Or do we assume users should use broadcasting / apply?
I don’t think there are many functions that expect one draw as input but can’t take all the draws. But mean(fit$theta) should definitely work and average over all iterations to produce something with the same dimensions as in the Stan program. I guess mean(fit) could apply the mean function to all parameters in fit and return a list of summaries, one per parameter.
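As a sketch of what mean(fit) could do, assuming draws are stored per parameter with iterations on the first axis (the summarize name and the draws dict are hypothetical, not proposed API):

```python
import numpy as np

# Apply a reducing function over the iterations axis of every
# parameter and return the results keyed by parameter name.
def summarize(draws, fun):
    return {name: fun(d, axis=0) for name, d in draws.items()}

rng = np.random.default_rng(1)
draws = {"alpha": rng.normal(size=(100,)),
         "theta": rng.normal(size=(100, 2, 3, 4))}

means = summarize(draws, np.mean)
# means["theta"] has the same shape as theta in the Stan program
```

The same mechanism generalizes to median or quantile functions, so one generic mapping covers the whole family of summaries discussed above.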
If someone is counting, I would like to have that functionality too. I don’t have an opinion on how it should be implemented.
Sure, I mean functions outside MCMC packages, like physics simulations etc.
Do those functions basically work like the generated quantities block, taking in one realization of the parameters from the posterior distribution and simulating the path of a particle over time?
Basically yes. Some of them can probably use broadcasting. It’s just that there are other uses for MCMC draws than common statistics.
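A toy sketch of that pattern: a hypothetical simulate_path stands in for an external simulator that, like the generated quantities block, takes one realization of the parameters per call.

```python
import numpy as np

# Hypothetical external simulator: takes one draw's parameter values
# and returns a simulated path. Toy "physics" for illustration only.
def simulate_path(theta_draw, n_steps=10):
    return np.cumsum(np.full(n_steps, theta_draw.mean()))

rng = np.random.default_rng(2)
theta = rng.normal(size=(50, 2, 3, 4))  # 50 posterior draws of theta

# Loop over draws, feeding the simulator one realization at a time.
paths = [simulate_path(theta[i]) for i in range(theta.shape[0])]
```

This is the access pattern at stake in the fit[i] vs fit.iteration(i) question: the simulator cannot broadcast over the draws axis, so some single-draw accessor is needed.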
I get it. It is certainly possible to use fit$iteration(i) to loop over all iterations, but that isn’t what Andrew is talking about. Is there a reason to prefer fit[i] over fit$iteration(i) in these physics examples?
fit.iteration(i) with possible kwargs (order='random' / 'default', n=-1 / 100 / 20, etc.) will be much more flexible in Python than fit[i] (and then we would need some sensible idea for slicing --> fit[i-100:i] that still follows logic similar to fit.iteration(i)).
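A sketch of what that signature could look like; every name and default here is an assumption for discussion, not settled API.

```python
import numpy as np

# Hypothetical Fit class: iteration(i) returns specific draws, with
# kwargs controlling ordering and how many draws come back (n=-1: all).
class Fit:
    def __init__(self, draws):
        self.draws = draws  # name -> array with iterations on axis 0
        self.n = next(iter(draws.values())).shape[0]

    def iteration(self, i=None, order="default", n=-1, seed=None):
        idx = np.arange(self.n)
        if order == "random":
            idx = np.random.default_rng(seed).permutation(idx)
        if i is not None:
            idx = idx[i:i + 1]   # one specific iteration
        elif n != -1:
            idx = idx[:n]        # first n, in the chosen order
        # one dict of parameter values per selected iteration
        return [{name: d[j] for name, d in self.draws.items()} for j in idx]

fit = Fit({"theta": np.arange(20.0).reshape(10, 2)})
one = fit.iteration(3)                              # a single draw
five = fit.iteration(order="random", n=5, seed=0)   # five random draws
```

Keyword arguments like these are hard to express through fit[i] indexing, which is the flexibility argument made above.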
Not sure. If we go with fit[i], should slicing raise an exception?
In cases like this, where a feature is desired by a small number of (important) people, I think we should at least entertain the possibility of maintaining a fork.
I would say that attempting to slice fit should raise an exception until we can think of a legitimate use for it.
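A minimal sketch of that rule, using a hypothetical Fit class: integer indexing returns a single draw, while slicing raises until a legitimate use emerges.

```python
# Hypothetical Fit class illustrating the proposed behavior.
class Fit:
    def __getitem__(self, key):
        if isinstance(key, slice):
            raise TypeError("slicing a fit object is not supported; "
                            "use single-draw access instead")
        return self._draw(key)

    def _draw(self, i):
        # placeholder: would return draw i's parameter values
        return {"iteration": i}

fit = Fit()
single = fit[3]   # fine: one draw
# fit[0:10]       # would raise TypeError
```

Raising early keeps the door open: slicing semantics can be added later without breaking any code that relied on the exception.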
I wouldn’t go that far in this case. Implementing fit$iteration(i) is not that big a deal.