Mixture models with different numbers of parameters. List-like data structure?

I am fairly new to Stan and learning by playing around with a few different models, but I’ve now run into a difficulty with the programming constructs available.

I’m looking at an AR(p) model (you can see all the code here, suggestions welcome) and wanted to experiment with model order selection by setting up a finite mixture. Say I have a likelihood ar_lpdf(y | b), where b holds the coefficients b(1), ..., b(p) of the autoregression y(t) = \sum_{\tau = 1}^{p} b(\tau) y(t - \tau) + v_t. Since the order p is itself unknown, I’d like to create a mixture model, roughly as follows:

{
    simplex[p_max] theta;  // mixture weights
    vector[1] b_1 ~ prior();
    vector[2] b_2 ~ prior();
    ...
    vector[p_max] b_p_max ~ prior();

    vector[p_max] lpdf;
    for (p in 1:p_max) {
        lpdf[p] = log(theta[p]) + ar_lpdf(y | b_p);
    }
    target += log_sum_exp(lpdf);
}

so that the data y is AR(p) with probability \theta_p. The difficulty here is with the “...” in the pseudo-code. How do I properly keep track of the coefficients for each model? It’s not entirely clear to me how this should be done since each component of the mixture has a different number of parameters.

My current understanding is that the only options are to use matrix[p_max, p_max] B; where B[p, tau] = 0 whenever tau > p, or to store everything in a packed vector[(p_max * p_max + p_max) / 2] b; and do the index calculations by hand.
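Spelled out, the zero-padded matrix option would look roughly like the sketch below (assuming ar_lpdf from above is defined in a functions block; the normal prior is just a placeholder, and the unused entries B[p, tau] for tau > p are only constrained by that prior):

data {
    int<lower=0> N;
    vector[N] y;
    int<lower=1> p_max;       // largest model order considered
}
parameters {
    simplex[p_max] theta;     // mixture weights
    matrix[p_max, p_max] B;   // row p holds the AR(p) coefficients; columns tau > p are unused
}
model {
    vector[p_max] lp;
    to_vector(B) ~ normal(0, 1);  // placeholder prior on all entries, including the unused ones
    for (p in 1:p_max) {
        // pass only the first p coefficients of row p to the likelihood
        lp[p] = log(theta[p]) + ar_lpdf(y | B[p, 1:p]');
    }
    target += log_sum_exp(lp);
}

The unused entries never touch the likelihood, but they still get sampled, which feels wasteful.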

However, it would be much more natural to store variable-length vectors in a list, something like

{
    list[p_max] b_list;
    for(p in 1:p_max){
        vector[p] b ~ prior();
        b_list[p] = b;
    }
}

Another option would be a text macro, but I don’t think such constructs are available either. Is there a better way to do this that I’m missing? Is some kind of metaprogramming common? I could generate the Stan code as a Python string (looping over all the variable declarations that I need) and then write it to a file, but this seems like it shouldn’t be necessary. (EDIT: it occurred to me that the code-generation route is also undesirable because it would require recompiling the model.)

Thanks!

Welcome to the Stan community.

Tuples (a.k.a. lists) are a planned feature for the language, but for now you’ll need to store everything in a single vector and use some indexing tricks. There’s an example of a good way to do this in the Stan User’s Guide, and you can search these forums for “ragged array” for more information.
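For example, a rough sketch of that pattern for your AR mixture (assuming your ar_lpdf is defined in a functions block; the normal prior is just a placeholder) could be:

data {
    int<lower=0> N;
    vector[N] y;
    int<lower=1> p_max;
}
parameters {
    simplex[p_max] theta;               // mixture weights
    vector[p_max * (p_max + 1) / 2] b;  // coefficients of all p_max models, packed end to end
}
model {
    vector[p_max] lp;
    b ~ normal(0, 1);                   // placeholder prior
    for (p in 1:p_max) {
        // the order-p coefficients start at position p * (p - 1) / 2 + 1 in the packed vector
        lp[p] = log(theta[p]) + ar_lpdf(y | segment(b, p * (p - 1) / 2 + 1, p));
    }
    target += log_sum_exp(lp);
}

This is the same model as the zero-padded matrix version, just without carrying around the unused entries.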

Ah thank you, I didn’t know the term “ragged” and didn’t find that page. Is it ever convenient to generate Stan code from a higher-level language? Or do most people find the constructs available in Stan itself adequate?

That’s basically what brms does. I’ve played around with something similar in this R package, though it’s far from feature complete.

In my experience, generating code from a high-level language (I use R primarily) is slightly annoying but not terribly so. I’ve used it on a few occasions when working in pure Stan was just way more annoying. For a regular ragged array, I would personally just do the indexing in Stan. For something more complex, like a ragged array of simplices, I’ve successfully moved to generating Stan code, since pure Stan would require you to reimplement the simplex transform, which becomes quite challenging.

I’ve also successfully used the extension mechanisms in brms to let brms generate most of the Stan code for me and only plug in the few parts that brms can’t handle.

Thanks for this feedback – for my particular problem I’ve found it’s actually not that painful to do the indexing in Stan itself, but I’m definitely looking forward to tuples in the language. I’m using Python, so I haven’t looked into any R packages. Is there a compelling argument to switch to R?

The Stan package ecosystem in R is richer. If you want to do anything resembling linear models, brms is just a great piece of software and makes your life so much easier: it supports a ton of stuff out of the box (splines, Gaussian processes, ordinal predictors, autoregressive models, …) and has built-in support for posterior predictive checks and other diagnostics.

There are good packages for visualisations and diagnostics (bayesplot, shinystan), and a bunch of packages for specific complex models (e.g. ctsem).

If you want to keep building models yourself in pure Stan, I don’t think there is a big difference (although my experience with Python is quite limited).

Best of luck!
