bmm: R package for easy and flexible Bayesian Measurement Modeling (v1.0.0)

I’m really happy to share the first official CRAN release of the Bayesian Measurement Modeling (bmm) R package, developed together with @g.frischkorn 🥳 🥳 🥳

tldr;

🥇 Hierarchical Bayesian estimation of complex measurement models in nearly any design

🥈 Uses brms as a translation engine for Stan and integrates into the existing R infrastructure

🥉 Provides a general Bayesian development framework for domain-specific models via a highly modular code base, well-documented developer tools and useful templates

Is this post for me if I am not a behavioral scientist?
✔️ If interested in implementing complex domain models, you might find the 2nd half relevant

👀 We are looking for collaborators, feedback, ideas…
🙏 Please get in touch if interested or share with colleagues/students who might be

The what

What does bmm do? I briefly mentioned it in a comment to @jsocolar’s thread a few months ago: bmm translates complex psychological measurement models into brms syntax, allowing psychologists to estimate such models in a hierarchical Bayesian framework, with minimal coding effort, for nearly any experimental design.

With the exception of drift diffusion models[1], this was not previously feasible without project-specific custom Stan or JAGS scripting, usually accompanied by a lot of cursing, debugging and general despair (… just me?). bmm aims to solve this problem by doing for cognitive modeling what brms did for generalized multilevel regression. Because it uses brms under the hood, bmm returns an object that inherits from brmsfit, so bmm integrates directly into the existing Bayesian R infrastructure. All the excellent tools built by this community for postprocessing, plotting, inference, model comparison, etc., can be used out of the box with bmm outputs.
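
To make this concrete, here is a rough sketch of what an end-user call looks like. The function, argument and parameter names follow the package as I understand it, but may differ slightly from the released API - treat this as illustrative, not canonical:

library(bmm)

# dat: one row per trial, e.g. a response error in radians plus design variables
# dat <- data.frame(error = ..., set_size = ..., id = ...)

ff <- bmf(
  c ~ 0 + set_size + (0 + set_size | id),      # memory strength varies with set size
  kappa ~ 0 + set_size + (0 + set_size | id)   # memory precision varies with set size
)

fit <- bmm(
  formula = ff,
  data = dat,
  model = sdm(resp_error = "error")            # the Signal Discrimination Model discussed further below
)

# the result inherits from brmsfit, so the existing post-processing tools work as usual
summary(fit)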

In this blog post we give an overview of the package: the problems it solves, how to use it, and what its design principles are. There we focused more on the user side. But for the community here I thought it would be more valuable to highlight the potential role bmm could play as a general model development framework (still very much a work in progress - we welcome all feedback, ideas & collaboration).

The how

Our key insight was that despite their relatively higher complexity, many substantive scientific models[2] can be expressed as distributional regression problems. Although 95% of brms users only care about how the conditional mean of the response changes with explanatory variables[3], brms truly begins to shine when used for distributional regression. Unlike its frequentist uncle lme4, brms lets you specify a predictor formula for any free parameter of supported distributions. In other words, there is nothing theoretically special about the mean/location parameter[4] - you can just as easily estimate how the variance, scale, shift or any other parameter changes with any combination of explanatory variables.
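
For instance, a distributional model in brms is just a set of formulas, one per parameter (variable names here are made up for illustration):

library(brms)

# predict both the location and the scale of a lognormal response
fit <- brm(
  bf(rt ~ condition + (1 | subject),       # formula for the location parameter mu
     sigma ~ condition + (1 | subject)),   # formula for the scale parameter sigma
  data = dat,
  family = lognormal()
)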

In the end, if you can write down a theoretical model’s likelihood, you can treat it like any other distribution and estimate it via brms, taking advantage of its powerful and flexible formula syntax. Sometimes that’s easy to do with custom families; other times you might have to “abuse” some mathematical tricks and the non-linear formula syntax beyond their original design… but hey, if it works, it works.

In essence, just as Stan is a higher-level interface for C++, and brms is in turn a higher-level interface for Stan, bmm serves as yet another level of abstraction over brms (“It’s turtles all the way down!”[5]). We are not the first to think of this, and I recently made a short list/review of all the packages I could find that do something similar. These packages all appeared in the last year or so, and they share the idea of “hacking” brms to estimate various domain models. This is truly a testament to what an incredibly useful tool @paul.buerkner has built[6].

The why

So if brms already offers custom families and can be “hacked” to do this, why a separate package? We never planned to make a package - when we discovered how to implement some of our cognitive models 2 years ago, we initially wrote a tutorial about how to do precisely that. But at some point it occurred to us that building a general framework that abstracts these details away has many benefits: drastically increased code efficiency, no need to look up the “formula tricks” for each new project, standardized and well-documented models with known properties available to the community, and a lower technical barrier for colleagues not trained in math and modeling. Also, since the models we implement are grounded in particular subject-matter theories, bmm can provide more opinionated, empirically or theoretically grounded defaults for priors, comprehensive data checks, plausible initial values to improve sampling, etc.

The future

Even though our current focus is on cognitive/psychological models, we have been consciously developing the package to be very modular, so that eventually it could be used as a general-purpose framework for developing and testing domain-specific models from any discipline. Even now, if a model’s likelihood function can be defined in Stan, and its parameters can be expressed as a generalized (non-)linear function of predictors (i.e., anything implementable in a brmsformula), bmm can probably implement it, regardless of the domain.

The core of the code base is built such that the model translation, data processing, prior configuration, etc, are all done via generic S3 methods. Thus, all it takes to implement a new model is to define it as a new S3 class, and provide corresponding S3 methods for each step in the pipeline.
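
As a minimal sketch of the idea (the generic and class names here are illustrative rather than the package’s actual internals):

# a new model is just an object carrying a class attribute
my_model <- function(resp_error, ...) {
  out <- list(resp_error = resp_error, ...)
  class(out) <- c("my_model", "bmmodel")
  out
}

# each pipeline step is an S3 generic that dispatches on the model class...
configure_model <- function(model, data, formula, ...) UseMethod("configure_model")

# ...so implementing a model means supplying the corresponding methods
configure_model.my_model <- function(model, data, formula, ...) {
  # return the model-specific pieces: brms family, stanvars, transformed formula, etc.
}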

To make this process easier, we have written a detailed developer’s guide and created a use_model_template() function, which generates all the necessary boilerplate code to define a new model class and the corresponding methods. If you want to add a new model, you only need to call this function and fill in the parts that are unique to the new model.
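
For example, something along these lines generates the scaffolding (the exact arguments may differ; the developer’s guide has the details):

# run from a fork of the bmm repo; creates the boilerplate files for a new model class
bmm::use_model_template("my_new_model")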

This makes it really easy to add new models if you fork the bmm GitHub repo and work with the development version. But that workflow is a bit tedious unless you definitely plan to contribute the model to the package. I am now imagining a much better API for a future release, in which any user who loads the package via the good old library(bmm) call can use constructor functions to define new models that work immediately with bmm.

The same API could be used to develop new packages for models specific to particular fields. That is, rather than making bmm a one-stop shop with hundreds of models added over time, I imagine a core package that provides the interface and generics, which anyone can use with models defined in other packages. @jsocolar suggested something along those lines a few months ago, and I like this idea more and more.

This, at least for me, would be incredibly useful when developing novel computational models. That usually requires many iterations over possible functional forms and parametrizations to find a version of the explanatory model that can account for many different target phenomena. So rather than my previous workflow of keeping dozens of variations of Stan files, which I need to adjust the moment I want to test the models on a different set of experiments, we could just define all the different model versions as bmmodel objects, take advantage of the modularized configuration functions that calibrate the models with experimental variables, and just do model estimation with bmm, getting a bmmfit result that can then be systematically compared across the family of candidate models.

These final thoughts are very much at the ideation stage, but anything that lets me spend less time organizing and rewriting code and more time thinking about theory is a win for me. I’m hoping that the right design choices can make bmm useful not only for end users who just want to apply an established measurement model to their data for inferential purposes, but also for theoreticians who develop such models.


  1. thanks to the native brms and Stan support for the Wiener distribution ↩︎

  2. by “substantive” I mean models with ontological commitments whose parameters have a clear theoretical interpretation. In such models parameters are thought to represent measurable attributes of real entities and processes, as opposed to the data-agnostic off-the-shelf statistical models in which parameters are summary statistics. The line is sometimes blurry, of course. ↩︎

  3. I’m not trying to be… mean. Those are Paul’s own words :) ↩︎

  4. It is only trivially special due to historical reasons and traditional statistical notation in R. The formula syntax y ~ x1 does not mean that “the response y is distributed as x1”, as it might in Stan. A typical call such as lm(y ~ x) or brm(y ~ x, family = gaussian()) means that “y follows a normal distribution whose \mu parameter is a linear function of the predictor x”, or:

    y \sim N(\mu, \sigma) \\ \mu = \beta_0+\beta_1x

    In contrast to other regression functions in R, brms lets you specify a group of formulas, one for each distributional parameter. However, it has kept the resp ~ predictor convention to denote the formula for the location parameter. Relevant discussions here and here. For me, an ideal API would make it transparent that y follows a certain distribution, and would separate that from the regression formulas of the parameters. The current mixed convention of bf(y ~ x1, sigma ~ x1) is just confusing, which is why we eventually decided to build a wrapper around brmsformula() called bmmformula(), which makes all parameters explicit and specifies the response in the model object instead (see the small sketch below). ↩︎
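
    For illustration (a quick sketch with generic parameter names):

    # brms: the response formula doubles as the formula for mu
    bf(y ~ x1, sigma ~ x1)

    # bmm: every parameter formula is explicit; the response is declared in the model object
    bmf(mu ~ x1, sigma ~ x1)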

  5. …and they all speak math ↩︎

  6. Paul has also been incredibly open to changes in brms that made the integration easier ↩︎


Thanks for sharing. It’s so gratifying when people like our underlying tools enough to extend them. We’ve thought about these issues a lot, but came across a bunch of stumbling blocks that I ask about below. I’m really curious how you solve them, as brms itself doesn’t.

How is that possible if you’re targeting brms? My understanding is that there’s only a limited set of things that can be coded in brms compared to directly coding in Stan. The upside is that brms provides good code for the models it can express.

For example, @mitzimorris just extended brms to handle clinical measurement errors for diagnostic tests with differing sensitivity and specificity. This requires a custom link function and couldn’t be done directly in brms. And that’s a really really simple thing to do in Stan code.

Indeed, that’s most people’s experience. That’s one of the motivations for answering questions here. Coming from a CS and professional software development background, the Stan project has been sobering for me in realizing that people who do applied stats usually don’t have much experience writing code with loops, indexes, and types. I used to work on programming language theory when I was a professor and then I spent 10 years in industry. So my experience was really unusual among people who use Stan. I just designed Stan the way I’d want to code, based on my having used BUGS. When designing Stan’s language, I was still learning stats, so the bottleneck for me was modeling, not coding.

What makes you say that? In my experience, many if not most of the Stan users are concerned with calibrated posterior predictive inference. Presumably you’ll provide that, too.

This also surprised me, as I never found the problem that brms solves to be the bottleneck for me. But judging by the volume of queries on the forums and downloads, brms is more popular than writing models in Stan itself.

Paul started the project way back in the earliest days of Stan, but since then, there have been a wide range of contributors that made it what it is today. As far as I can tell, Paul’s mostly off working on ML-type amortized inference. We can’t keep up with all the brms questions on the forums, which are mostly of the form “how do I modify brms to do X”.

If possible, a way to get more leverage would be to build your system into brms itself. But if it’s an orthogonal syntax, that won’t work.

How do you achieve that without reproducing the complexity of Stan itself? I couldn’t tell from the links how you’d write custom models, like with sensitivity vs. specificity, or with ordinary differential equations or root finders, etc. Stan itself lets you go pretty far beyond standard GLMs.

This is indeed a huge bottleneck to what @andrewgelman calls “workflow,” by which I think he means practices for doing Bayesian stats on a computer. How does having a bunch of different bmmodel objects help?

Here’s the brms code that adds a custom link function to a binomial model:

library(brms)

# alldata is a dataframe of per-category binomial outcomes: number of tests, number of positive tests, categorical covariates sex, eth, age, time

alldata$spec = 0.95
alldata$sens = 0.7

hyper_data <- list(
  intercept_prior_mean = 0,
  intercept_prior_scale = 5,
  coef_prior_scale = 2.5
)

# define a *vectorized* custom family (no loop over observations)
binomial_sens_spec_vec <- custom_family(
"binomial_sens_spec", dpars = c("mu"),
links = c("logit"), lb = c(NA),
type="int", vars= c("trials", "vreal1", "vreal2"), loop = FALSE
)

# define the corresponding Stan density function
stan_density_vec <- "
real binomial_sens_spec_lpmf(int[] y, vector mu, int[] N, real[] vreal1, real[] vreal2) {
  return binomial_lpmf(y | N, mu * vreal1[1] + (1 - mu) * (1 - vreal2[1]));
}
"

stanvars_vec <- stanvar(scode = stan_density_vec, block = "functions")

priors <- c(
  set_prior(paste("normal(", hyper_data$intercept_prior_mean, ",", hyper_data$intercept_prior_scale, ")"), class = "Intercept"),
  set_prior(paste("normal(0,", hyper_data$coef_prior_scale, ")"), class = "b", coef = "sex"),
  set_prior(paste("normal(0,", hyper_data$coef_prior_scale, ")"), class = "sd", group = "eth"),
  set_prior(paste("normal(0,", hyper_data$coef_prior_scale, ")"), class = "sd", group = "age"),
  set_prior(paste("normal(0,", hyper_data$coef_prior_scale, ")"), class = "sd", group = "time")
)


fit = brm(cases | trials(tests) + vreal(sens, spec) ~ sex + (1 | eth) + (1 | age) + (1 | time),
          data = alldata,
          family = binomial_sens_spec_vec,
          prior = priors,
          stanvars = stanvars_vec)

More code was required to do projpred checks. The project team included @jonah, and it took a lot of digging on both our parts to get things running.


@Bob_Carpenter thank you for engaging in such depth. I think this is really important, so I’m really grateful we can discuss it. I’ve started to realize that, depending on background, different people have different conceptual frameworks of modeling, and we sometimes make incorrect assumptions about what we are communicating.

I’ll try my best to answer. First some general thoughts and then specific replies. (PS: This became much longer than I planned. I hope it’s not too basic. It helped me clear up how I think about this, and maybe it will be helpful for others who end up reading this discussion. I might write a blog post with more details and illustrations - consider this a rough starting point. I will add specific replies to your comments in a separate comment.)

Contents:

  • Model components in distributional regression
  • What do the terms “model”, “model comparison”, etc, refer to?
  • Understanding the bottleneck
  • How do brms and bmm solve the bottleneck?
  • Summary - role of brms and bmm

Model components in distributional regression

We (and brms more generally) implement a distributional regression approach to modeling. Within this framework, we can consider a model at two levels of abstraction:

  1. The core model, which specifies a likelihood function of the data given a closed set of parameters and their priors.

In a simple statistical model this is a distributional family - a normal, Poisson or whatever, with its parameters. A more complicated example could be a drift diffusion model, which gives the joint likelihood of response times and binary choices as a function of the parameters drift rate, bias, threshold and non-decision time. Another example could be a mixture of 3 von Mises distributions, which then has parameters for each distribution and the mixture weights. The parameters of this core model are a closed set and each plays a distinct part in the likelihood function (e.g. always 5 scalar parameters). In essence this is the part of the model that looks like this:

\begin{align*}
y &\sim f_Y(\theta_a, \theta_b, \theta_c, ...) && \text{likelihood of the data} \\
\theta_a &\sim f_{\theta_a}(...) && \text{prior on parameter } \theta_a \\
&... && \text{more priors}
\end{align*}

where y is the data (one or more variables), and \theta_a, \theta_b, etc. are the core parameters.

  2. A regression model layer, which specifies for a particular dataset how the core parameters change over observations.

This is a standard statistical formula of the type:

g(\theta_a) = X\beta

where g() is some link function that unconstrains a parameter, \beta are the regression parameters and X is the model matrix. Such a formula is specified for every core parameter of the model, and can include fixed and random effects at different levels. In contrast to the core model, which has a closed set of parameters, the regression model now has a vector of regression coefficients for each core parameter. The core model priors correspond to the intercepts, and the regression layer additionally needs to specify priors for the effects.
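
A tiny self-contained illustration of that layer in R (made-up data; a log link for a positive parameter):

dat  <- data.frame(condition = factor(c("A", "A", "B", "B")))
X    <- model.matrix(~ condition, dat)   # model matrix: intercept + condition effect
beta <- c(0.2, 0.5)                      # regression coefficients on the link (log) scale
theta_a <- exp(X %*% beta)               # inverse link gives one value of theta_a per observation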

What do the terms “model”, “model comparison”, etc, refer to?

This, I think, is where misunderstandings often occur, because different fields use these terms differently, and even the same field uses them differently depending on the context.

In the cognitive modeling literature, by “model” we often refer to (1) - a drift diffusion model, a linear ballistic accumulator, unequal-variance signal detection, a 3-parameter mixture model of memory, etc. In the bmm package we use the term “measurement model” in this sense - a core model without the regression layer. In the field, when we talk about “different models”, we usually mean different core models.

This corresponds to the idea of a “model family” in the statistical literature, where, in contrast, discussing alternative models usually means different regression-level specifications of the same model family (e.g. comparing a version of the core model in which a parameter \theta_a is allowed to vary with some predictor against the same model with \theta_a held fixed).

Understanding the bottleneck

To understand where the bottleneck and friction points in modeling for many of us come from, and how brms and bmm help in generating the Stan code for distributional models, we need to understand:

  • how the distinct components of such models map to Stan code
  • what goals modelers usually want to achieve
    • in a typical workflow, what do we have starting out and what do we want to get
  • how the modeling goals map to changes in these components
    • how do we move from what we have to what we want to get in Stan

Stan components

Here I don’t mean the syntactic blocks data {}, parameters {}, model {} - although there is some overlap, a single component is sometimes spread over multiple blocks.

1. Core model likelihood. One of:

  • a built-in function such as normal_lpdf(y | mu, sigma);
  • a complete custom function defined in the functions {} block of the type my_fun_lpdf(y | mu, sigma); This can either specify a series of equations from basic math functions, or combine them with built-in likelihood functions
  • a sequence of code in model {} that calls several functions storing values in intermediate variables. Equivalent to the second option, but messier

2. Data structure. Consists of:

  • Variables that contain:
    • observations
    • auxiliary information (size of other data vectors, number of observations, number of predictors, etc)
    • predictors for the regression layer

3. Parameter structure. Consists of:

  • Variables that define which model parameters are sampled (rather than fixed)
  • Typically a variable corresponds to a core model parameter, and its structure (a vector, array, whatever) contains the values for the regression components

4. Priors
5. Regression layer

  • code in model {} which defines how the model matrix is combined with the regression parameter vectors to produce a final parameter vector to be used in the model likelihood
  • this is needed because the likelihood function needs to know, for each (set of) observations, which parameters to use. Usually trickiest with random effects, because the most efficient ways of coding them are not the cleanest (e.g. just using linear algebra is slower than looping over observations and subsetting the parameter vector based on the random-effect indicator…)

6. Optimizations

  • code that plays no conceptual role in the model, but improves sampling or inference (reparameterization, centering predictors, applying link functions)
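
Putting these together, a hand-written Stan script for a simple case typically has a skeleton like the following (purely schematic; the numbers in the comments refer to the components above):

stan_skeleton <- "
data {
  int<lower=1> N;             // (2) auxiliary information: number of observations
  int<lower=1> K;             //     number of regression predictors
  vector[N] y;                //     observations
  matrix[N, K] X;             //     predictors for the regression layer
}
parameters {
  vector[K] b_mu;             // (3) regression weights for the core parameter mu
  real<lower=0> sigma;        //     a core parameter held constant over observations
}
model {
  vector[N] mu = X * b_mu;    // (5) regression layer: per-observation parameter values
  b_mu ~ normal(0, 1);        // (4) priors
  sigma ~ normal(0, 1);
  y ~ normal(mu, sigma);      // (1) core model likelihood, or a custom *_lpdf from functions {}
}
// (6) optimizations (non-centered parameterization, QR-decomposing X, ...) would come on top
"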

What a user has and what they want to achieve

This depends on conventions in each field, of course, but in the behavioral sciences the following is very common.

We have:

  • Data(sets): typically in a long-format data.frame, where one column stores observations (or more, but one for each distinct observation type, e.g. response times and accuracy), and other columns store potential predictor variables for fixed and random effects. This means that each row represents a single observation (or a sufficient statistic) and all of its predictors.
  • Potential core model candidates
  • Potential predictor candidates

All three of these might vary, and typically the goal is to do some form of inference:

  1. Inference about the regression layer

Definition: For a given dataset and core model, which set of predictors on which parameters leads to the best model fit.

Use case: Very typical with pure statistical models, such as testing the hypothesis that reaction times in condition A are faster than in condition B, using a default Gaussian or log-normal core model. But also applicable with complicated domain-specific models such as drift diffusion models, in which we test the hypothesis that the drift rate is slower in condition A vs B.

Assumptions: This type of data is described by a particular core model, and this particular dataset includes conditions that affect the parameters of the model.

What varies:

  • data: fixed in a particular instance, but we might want to apply the same regression approach for different experiments/data sources.
  • regression formula

  2. Inference about the core model

Definition: For data coming from a particular paradigm (e.g., memory decisions in delayed estimation), which core model is “the best”?

Use case: usually the aim is to compare different data-generating assumptions (process models) or to compare which model best describes this type of data (measurement models).

Assumptions: we include regression components to account for variability over datasets (random slopes for participants, fixed effects for conditions), so that we can do a fair comparison of different core models

Bottleneck: How do the goals map to changes in Stan code?

The big bottleneck is that much of the Stan code we need to write depends on details we don’t care about, introducing a lot of overhead. The names and sizes of the variables and parameters, and the regression-layer syntax, all depend on the specific dataset and regression formula.

When we want to compare several regression formulations of the same core model on the same dataset, we would need either 1) as many copies of the Stan code as there are variations, each with a lot of code duplication and idiosyncratic changes, or 2) a very generic script with an abstract structure, which somehow maps those choices onto the same script by varying the inputs. This is very difficult to write for the average user. You then also need to recreate a lot of the optimization code and regression plumbing for each project. These scripts can get hundreds of lines long for relatively simple cases.

A typical workflow without brms, and without much training in software engineering (which is true for most statistical users), would be:

  • if you’ve done this before, copy a script you have for a similar situation (e.g. another time I applied a logistic multi-level regression with random effects to a memory experiment; a regression-layer inference goal; or my 3 different custom-likelihood measurement models).
  • adapt variable names, sizes for the current data with the basic model
  • make multiple copies of the code, one for each regression version, and change relevant code
  • debug and test interactively
  • write complex R code to translate your tidy long-format data.frame into a stan-type data structure. Repeat as many times as you have alternative models, or write a custom function to do that for you
  • and a bunch of other things.

How do brms and bmm solve the bottleneck?

You could say at this point that a solution to all these problems is to write abstracted code that can be reused, and whose outputs will depend on the inputs, rather than having a different script for each input. Well, this is exactly what brms does.

Given:

  • a model family (core model; built-in or user-defined Stan code)
  • a long-format data frame
  • a standardized regression formula
  • priors specified via user-friendly syntax

It:

  • transforms the data into an appropriate format for Stan
  • generates boilerplate code that does not depend on the core or regression model
  • creates mappings between user-specified variables and standardized Stan code for data and parameter variables
  • defines the regression-layer syntax that feeds parameters to the core model
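
You can inspect all of this without fitting anything, which is also a good way to see how much boilerplate brms writes for you (illustrative variable names):

library(brms)

# the generated Stan program and the Stan-ready data list for a distributional model
make_stancode(bf(y ~ condition + (1 | id), sigma ~ condition),
              data = dat, family = gaussian())
make_standata(bf(y ~ condition + (1 | id), sigma ~ condition),
              data = dat, family = gaussian())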

🔥 Several things make it very flexible:

  • the brmsformula syntax:
    • via linear and non-linear formulas it can specify nearly arbitrary regression layer code, custom link functions, etc
    • turns out that it can do many things it was not designed for, which is what we take advantage of in bmm
  • custom families:
    • users can write a function in stan code and then pass that to brms via the custom_family() option. Through this you can define any custom core model you can code in Stan.
  • stan code injection
    • via the stanvars argument, you can specify arbitrary Stan code to be inserted into any code section.

Summary - role of brms and bmm

Based on the last section, this means that as long as you have a model that can be described in the above framework, you can generate the Stan code for it with brms. You still have to provide some Stan code yourself if 1) you are using a core model that is not a typical statistical family, or 2) you want to affect some of the code logic via stanvars. Thus, brms gives you three methods for injecting Stan code: custom families (tell brms to use your function to increment the target density); stanvars (inject code into arbitrary blocks, before or after the existing code); and non-linear regression formulas (inject code into how parameter vectors are combined to produce the final parameter passed to the core model).

All of this gives you a lot of control over the Stan code you care about changing manually, while brms automatically generates the parts of the code that can be treated as boilerplate.

This works, but it still requires more code than just fitting a typical mixed-effects Gaussian regression. What bmm then does is provide a framework that smooths this process for custom domain models, so that users don’t have to write even those things themselves. And for me as a model developer, I can use bmm to specify the common parts of a model I’m working on and then iterate on the code more easily, supplying changes only to the relevant code parts and letting the rest be generated automatically.


I hope the longer reply above gives more context on this. With the tools it offers, brms gives you a lot more control than people think. You can think of brms as composed of three elements:

  1. Default boilerplate for built-in models
  2. Default boilerplate for regression formulas
  3. Stan code injection tools

You have three tools to inject custom Stan code:

  • You can use custom_family() to pass arbitrary user-defined Stan likelihood functions that set up the core model
  • use non-linear formulas in brmsformula to put arbitrary custom link functions on parameters
  • use the stanvars argument to pass any additional stan code to be injected in a desired location

Yes, this is very true. Most scientists get very little formal training in proper software development practices. Me included - I’ve been learning them the hard way over time. But you have to realize that most R and Stan code in the wild is written interactively, with users writing a script out of commands they run in real time, linking together different outputs. Most R users rarely write functions, and have a bunch of scripts that generated some outputs but often cannot be replicated, because they have run-time dependencies on each other and how they were run is usually not documented. These scripts tend to be super project-specific, with large doses of code duplication. This is what my code used to be like as well, and I’m slowly improving (writing this package has been a big impetus to learn more proper techniques).

Knowing that this is the audience plays a big role in how we develop tools. But there is usually a big conceptual overhead in writing abstracted code. I think that the big strength of brms and bmm is that they take care of that abstraction and provide an interface in which you specify a model at the level at which your thoughts are happening.

I can’t say anything about the people who code Stan themselves. This was mostly in jest, based on what Paul told me in a recent conversation. While brms provides all of these complex tools, in his experience most users’ use case is to fit one of the common families (Gaussian, lognormal, Poisson) and only include regression predictors on the main location parameter. E.g. “Are reaction times faster in condition A than condition B?” - most people would use a normal or log-normal family, include a fixed and maybe a random effect of condition on mu, and test that model. But again, I am only relaying Paul’s impression of his users. Not that I think this is what people need to be doing :)

Hopefully my previous reply makes it clear what the bottleneck is for a lot of people and why brms helps. It removes the need to remember and write low-level code details you don’t care about, and reduces the number of coding steps needed to specify or change a core model, a regression formula or the data, while keeping everything else the same.

Don’t get me wrong - you still need to write Stan code if you want to implement custom models. You can use any of Stan’s complexity you need. The point is that you can focus on writing just that part of the code, while all the other boilerplate that connects the data, the regression layer, etc. is taken care of.

It is not completely flexible, of course. There are still parts you may not be able to code (although with the three different injection tools in brms that is becoming less and less likely). Whether this is useful or not depends on how much of your model code can be considered boilerplate consistent with a distributional regression framework.

It depends on whether you use bmm as an end user, applying built-in models to a particular dataset to answer a substantive question, or as a development framework to develop novel core models. For the first use case the answer is straightforward - you can vary bmmformula, dataset and bmmodel objects orthogonally, with very little code, to quickly specify the universe of model analyses you want to do.

For model development it’s a bit less straightforward. I am still crystallizing my thoughts around this. Let’s take a real case from my work last year. One of the models we now have in bmm is the “Signal Discrimination Model (SDM)”. It is a model for a specific memory task, introduced 2 years ago by Oberauer (2023), who worked out the likelihood function. This corresponds to what I called “a core model”. He used maximum likelihood to estimate the model, which required that the model be fit separately to different participants and memory conditions.

What I wanted was to implement a Bayesian version that allows us to fit the model in a multi-level setting - i.e., to add the regression layer component. I started working on it and soon had a working version as a Stan script, which I could fit to a simple design with no regression parameters and a small dataset. It took a long time to fit (hours for this simple case, many weeks for a complex dataset). I started to experiment with the code, working through the math and developing various approximations to the likelihood function, ending up with many different Stan files for every model version. I needed to compare how fast each version ran and how much error the various approximations had relative to the true model. It turned out that this depended on the parameter values that generated the data, the number of observations and predictors, etc.

So I had to develop many copies of the scripts for different test cases. I tried to keep the Stan script to a minimum and do as much as I could in R, but it still required a lot of extra code to coordinate everything. Long story short, it was a lot of work, and friction.

This is actually how I used to do it in past years, but for this project I had already figured out how to use brms to take care of a lot of the work. That helped immensely, because of modularity: I can define different datasets, brmsformulas and custom families and then write a simple script to iterate over their combinations and report results.

If I were to do it now with bmm, it would offer one more advantage. With the brms custom family approach, I need to define each model version as a separate custom family, still duplicating a bunch of code. With bmm I can instead treat bmmodel objects as model constructors. bmmodels have nested classes, where a model can inherit code and functions from a parent and just add whatever changes it needs, either via inputs or additional code. For example, in the package we have a hierarchy of models:

  • bmmodel
    • vwm
      • mixture_2p
      • non_targets
        • mixture_3p
        • imm
          • abc
          • bsc
          • full
      • sdm

where code and functions are always defined at the highest shared node. So if I now have an idea for a new model that extends the sdm model (which I do), I can already save myself a huge amount of effort by coding only the parts of the new model that are unique and inheriting everything else.
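
In S3 terms this is just a class vector that grows as you go down the tree, so method dispatch falls through to the nearest parent that defines a method (a schematic sketch, not the package’s actual internals):

# a hypothetical extension of the sdm model
new_model <- list(extra_option = TRUE)
class(new_model) <- c("my_sdm_extension", "sdm", "vwm", "bmmodel")

# any pipeline generic without a my_sdm_extension method falls back to the sdm,
# vwm or bmmodel method, so only the genuinely new parts need new code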

It is still really clean and straightforward code :) Having had to do the same, I would say it’s because the customization options in brms are not super well documented and more examples could help. Also, these abilities are evolving naturally in response to people’s demands. I know from @paul.buerkner that he never expected or planned for how many people would want to do so much customisation. I think there is still a bunch of things that can be done to smooth the process, and that will come through understanding the level of modularity necessary.

I know that, for example, @g.frischkorn is working on adding functionality to bmm to make specifying link functions a one-line option, which would be a great further simplification!

Btw, there is a much easier way to specify link functions directly with non-linear formulas with no need for a custom family. You could replace all of this:

# define a *vectorized* custom family (no loop over observations)
binomial_sens_spec_vec <- custom_family(
"binomial_sens_spec", dpars = c("mu"),
links = c("logit"), lb = c(NA),
type="int", vars= c("trials", "vreal1", "vreal2"), loop = FALSE
)

# define the corresponding Stan density function
stan_density_vec <- "
real binomial_sens_spec_lpmf(int[] y, vector mu, int[] N, real[] vreal1, real[] vreal2) {
  return binomial_lpmf(y | N, mu * vreal1[1] + (1 - mu) * (1 - vreal2[1]));
}
"

stanvars_vec <- stanvar(scode = stan_density_vec, block = "functions")

fit = brm(cases | trials(tests) + vreal(sens, spec) ~ sex + (1 | eth) + (1 | age) + (1 | time),
          data = alldata,
          family = binomial_sens_spec_vec,
          prior = priors,
          stanvars = stanvars_vec)

with this:

formula <- bf(cases | trials(tests) ~ mu1 * sens + (1 - mu1) * (1 - spec), nl = TRUE) +
  lf(mu1 ~ sex + (1 | eth) + (1 | age) + (1 | time))

fit <- brm(formula, 
           data = alldata,
           family = binomial(),
           prior = priors)

This works because:

  1. The first formula in a brmsformula is always for the mu parameter, which is just the first parameter of the family
  2. You can specify mu to be a non-linear function of an auxiliary parameter. This implements a link function.
  3. Then specify your regression formula on the auxiliary parameter

And for bmm models we add a bit more syntactic sugar by automatically detecting when a formula should be treated as a non-linear formula (when any of the right-hand-side terms of a formula appear as left-hand-side terms in another formula). So the above would be:

formula <- bmf(
   mu ~ mu1 * sens + (1 - mu1) * (1 - spec),
   mu1 ~ sex + (1 | eth) + (1 | age) + (1 | time)
)

(we only put model parameters in the formulas; the response is given separately.)

6 Likes