Brmspy: Python-first access to brms (cmdstanr backend, ArviZ output)

Hi all. I wanted to share a Python interface I’ve been building for brms that may be useful for anyone who works across both R and Python environments.

brmspy provides a Python-side API for fitting brms models via cmdstanr, returning results directly as ArviZ InferenceData while still exposing the underlying brmsfit object when needed.

It’s designed for production pipelines where models are authored in the brms formula language but the rest of the workflow (data processing, prediction, dashboards, etc.) is Python-based.


What it does

  • Calls brms functions through rpy2 with correct parameter translation
  • Delegates all modeling logic to real brms. No Python-side reimplementation, no divergence from native behavior. Opinionated wrappers that rebuild formulas or stancode in Python inevitably drift from brms and accumulate their own bugs.
  • Preserves full brms parameter names
    (e.g. b_Intercept, b_x, sd_group__Intercept, etc.)
  • Returns ArviZ InferenceData by default for downstream analysis in Python
  • Still provides access to the R-side brmsfit object (model.r)
  • Includes helpers like prior(), formula(), make_stancode(), and summary utilities
  • Tested against multiple brms workflows (priors, prediction functions, stancode generation, etc.)
  • Works with multiple inference algorithms: tested with NUTS sampling and with fullrank and meanfield variational inference.

I’m using this in production (several daily pipelines), so the focus has been stability and predictable behavior rather than feature breadth.


Test coverage & stability

The current release has:

  • 77% test coverage (Python-side)
  • 54 tests covering priors, model fitting, get_stancode, summary paths, predictions, and no-sampling modes
  • Continuous testing across Python 3.10–3.14

The interface is still early (0.1.x), but the core fitting, prior handling, and prediction paths have been hardened through use in production systems.


Example

pip install brmspy

from brmspy import brms, prior

epilepsy = brms.get_brms_data("epilepsy")

model = brms.fit(
    formula="count ~ zAge + zBase * Trt + (1|patient)",
    data=epilepsy,
    family="poisson",
    priors=[
        prior("normal(0, 1)", "b"),
        prior("exponential(1)", "sd", group="patient"),
        prior("student_t(3, 0, 2.5)", "Intercept"),
    ],
    chains=4,
    iter=2000
)

# Python-side analysis
idata = model.idata
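
From here everything is standard ArviZ; a minimal downstream sketch (plain ArviZ calls on the returned InferenceData, nothing brmspy-specific):

import arviz as az

# brms parameter names are preserved, so they can be referenced directly
az.summary(idata, var_names=["b_Intercept", "b_zAge"])
az.plot_trace(idata)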

Documentation

Docs:
https://kaitumisuuringute-keskus.github.io/brmspy/

Source:
https://github.com/kaitumisuuringute-keskus/brmspy/

Pypi:
https://pypi.org/project/brmspy/


Very cool! Thanks for letting us know about this.


Did some work on installations. The 0.1.9 release adds prebuilt runtimes for brms + cmdstanr + CmdStan, which bring installation on a clean machine down to ~20–60 seconds (with decent internet speed) instead of the usual ~20–30 minutes of compiling from source.

All binaries are built in GitHub Actions and signed with attestations, so they can be verified to have come from the Actions pipeline.

Quickstart:

from brmspy import brms

brms.install_brms(use_prebuilt_binaries=True)

This downloads and activates a precompiled set of R libraries and a compiled CmdStan environment. The default versions will always be the latest stable releases of brms and cmdstanr. In case of incompatibilities (e.g. R 4.5 and cmdstanr 0.8 on Windows), beta releases are used.

Platforms and requirements

Targeted toolchain / runtime assumptions:

R >= 4.0

Linux (x86_64)

  • glibc >= 2.27 (Ubuntu 18.04+, Debian 10+, RHEL 8+)
  • g++ >= 9.0

macOS (Apple Silicon)

  • Xcode Command Line Tools (xcode-select --install)
  • clang >= 11.0

Windows (x86_64)

  • Rtools 4.x with MinGW toolchain
  • g++ >= 9.0
  • The installer calls cmdstanr::check_cmdstan_toolchain() and installs the matching Rtools version when needed.
  • For R 4.5 the prebuilt binary contains cmdstanr 0.9

Current coverage

  • Prebuilt runtimes are currently built and tested for R 4.5 on:
    • Linux x86_64
    • macOS arm64
    • Windows x86_64
  • Runtimes for R 4.0–4.4 will be built next using the same pipeline (tomorrow, very likely).

Each runtime is tested automatically; I have also tested the Linux and macOS binaries by hand in various environments.

If you want to benchmark the cold-start time in a clean environment, here is a Colab notebook that demonstrates the prebuilt path:

Colab demo notebook – brmspy prebuilt runtime

Currently built runtimes can be viewed HERE


I’m actively moving towards a stable public API release. Currently implementing easy-to-use wrappers around common brms functions.

0.1.12

New Features

  • Added save_rds() for saving brmsfit or generic R objects.
  • Added load_rds_fit() for loading saved brmsfit objects and returning a FitResult with attached InferenceData.
  • Added load_rds_raw() for loading arbitrary R objects from RDS files.
  • Added fit alias brm.

Families

  • Added brmspy.families module with Python wrappers for brmsfamily() and family().
  • Implemented keyword-argument wrappers for the following families:
    student, bernoulli, beta_binomial, negbinomial, geometric,
    lognormal, shifted_lognormal, skew_normal, exponential, weibull,
    frechet, gen_extreme_value, exgaussian, wiener,
    Beta, dirichlet, logistic_normal, von_mises, asym_laplace, cox,
    hurdle_poisson, hurdle_negbinomial, hurdle_gamma, hurdle_lognormal,
    hurdle_cumulative, zero_inflated_beta, zero_one_inflated_beta,
    zero_inflated_poisson, zero_inflated_negbinomial,
    zero_inflated_binomial, zero_inflated_beta_binomial,
    categorical, multinomial, cumulative, sratio, cratio, acat.

Priors

  • Added default_prior() for retrieving default priors for a model formula and dataset (pd.DataFrame).
  • Added get_prior() for inspecting prior structure before fitting (pd.DataFrame).

API Organization

  • Reorganized brms wrappers into modular files under brmspy/brms_functions/
    (brm, diagnosis, families, formula, io, prediction, prior, stan).

Internal / Typing

  • Added RListVectorExtension protocol for return types that wrap R list-like structures.
    Enables automatic extraction of underlying R objects in py_to_r and kwargs_r.

Slightly expanded example:
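
As a stand-in, a minimal sketch of the 0.1.12 additions in use (call shapes follow the changelog above and should be treated as illustrative rather than authoritative):

from brmspy import brms, prior
from brmspy.families import negbinomial

epilepsy = brms.get_brms_data("epilepsy")

# Inspect the default prior structure before fitting (returns a pd.DataFrame)
print(brms.default_prior("count ~ zAge + zBase * Trt + (1|patient)", epilepsy))

model = brms.brm(  # brm is the new alias for fit
    formula="count ~ zAge + zBase * Trt + (1|patient)",
    data=epilepsy,
    family=negbinomial(),  # keyword-argument family wrapper
    priors=[prior("normal(0, 1)", "b")],
    chains=4,
    iter=2000,
)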

Repo: https://github.com/kaitumisuuringute-keskus/brmspy

Future stable release (0.2)

There are a couple of things to do before the public API is frozen and I can actually recommend using the library without pinning a version:

  • Expand diagnostics functions: fixef, ranef, loo, loo_compare, posterior_summary, validate_newdata
  • Criterion support: add_criterion, add_loo, add_waic, add_ic
  • Proper error model (get rid of generic exceptions, make sure all R errors are readable)
  • Test coverage 85%+ (critical functions first)
  • R dependencies test coverage 80%+
  • Runtimes built for R 4.0–4.5 for all 3 major OSes (Windows only has 4.5 at the moment).

Thanks for the update, definitely keep us posted. Hopefully continuing to post updates here will also help get you some new users.

Glad to hear updates are welcome :) I’ll still keep posts to bigger releases rather than every small change.

Right now I’m doing a bigger refactor for the first “stable” API release - about 95% there, mostly chasing edge cases with the R environment and dependency management. That won’t land today, so here’s yesterday’s larger update and a few QoL things that are coming with the stable release.

Yesterday’s release added:

  • summary() - full summary object with pretty printing (previously only fixed effects)
  • fixef()
  • ranef()
  • posterior_summary()
  • prior_summary()
  • loo()
  • loo_compare()
  • validate_newdata()
  • call() - a generic wrapper to call arbitrary brms or R functions

On the quality-of-life side I’m trying to stay as close as possible to brms’ formula interface.

Both of these will be valid formula definitions in the stable release:

from brmspy import bf, set_rescor, nlf, lf

formula1 = (
    bf("y ~ 1") +
    nlf("sigma ~ a * exp(b * x)") +
    lf("a ~ x", "b ~ z + (1|g)", dpar="sigma")
)

formula2 = bf("""
mvbind(tarsus, back) ~
    sex + 
    hatchdate + 
    (1|p|fosternest) + 
    (1|q|dam)
""") + set_rescor(rescor=True)

There’s no heavy Python-side machinery behind the + operator - it just delegates to R’s +:

from typing import Callable, Union, cast

import rpy2.robjects as robjects

# Methods on FormulaResult; the + operator delegates to R's own `+`.
@classmethod
def _formula_parse(cls, r: Union[str, robjects.ListVector]) -> "FormulaResult":
    from .helpers.conversion import r_to_py
    if isinstance(r, str):
        r_fun = cast(Callable, robjects.r("brms::bf"))
        r = cast(robjects.ListVector, r_fun(r))
    return cls(r=r, dict=cast(dict, r_to_py(r)))


def __add__(self, other):
    if isinstance(other, (robjects.ListVector, str)):
        # for performance reasons, we shouldn't do a full parse here
        other = FormulaResult._formula_parse(other)

    if not isinstance(other, FormulaResult):
        raise ValueError(
            "When adding values to formula, they must be FormulaResult "
            "or parseable to FormulaResult"
        )

    plus = cast(Callable, robjects.r("function (a, b) a + b"))
    combo = plus(self.r, other.r)

    return FormulaResult._formula_parse(combo)

brmspy update: 0.2.0 is on PyPI.

The main changes are:

  • new formula DSL that mirrors brms’ multivariate / distributional interface
  • a refactored runtime / installation layer
  • removal of the embedded loo/add_criterion helpers in favour of ArviZ
  • The public API is now stable: parameters, functions etc. won't be removed outright; they may be deprecated and then removed after a transition period. Internals (underscored modules or methods) may change without warning.

Repo: https://github.com/kaitumisuuringute-keskus/brmspy
Docs: https://kaitumisuuringute-keskus.github.io/brmspy/

For now I’m done with major changes and will mainly be focusing on bugfixes, performance and memory improvements.


Breaking changes

  • Removed loo, loo_compare, add_criterion. In embedded R mode I kept running into hard crashes around add_criterion + large models (especially on Windows). The intended path now is to use arviz.loo / arviz.compare on the .idata attribute.
  • install_brms(use_prebuilt_binaries=...) is now install_brms(use_prebuilt=...).
  • The “canonical” names going forward are brm and bf. Older aliases fit / formula are still exported but effectively deprecated.

Formula DSL + multivariate example

The formula side is now much closer to brms. You can build multivariate, distributional and non-linear models in Python using the same structure and just hand them straight to brms under the hood.

Runnable example on Google Colab.

A concrete example (BTdata multivariate model), side by side:

Python (brmspy):
from brmspy import brms, bf, lf, set_rescor
from brmspy import skew_normal, gaussian
import arviz as az

df = brms.get_data("BTdata", package="MCMCglmm")

bf_tarsus = (
    bf("tarsus ~ sex + (1|p|fosternest) + (1|q|dam)")
    + lf("sigma ~ 0 + sex")
    + skew_normal()
)

bf_back = (
    bf("back ~ s(hatchdate) + (1|p|fosternest) + (1|q|dam)")
    + gaussian()
)

fit = brms.brm(
    bf_tarsus + bf_back + set_rescor(False),
    data=df,
    chains=2,
    cores=2,
    control={"adapt_delta": 0.95},
    silent=2,
    refresh=0,
)

for resp in fit.idata.posterior_predictive.data_vars:
    print(resp)
    print(az.loo(fit.idata, var_name=resp))
    print()
R (brms):

library(brms)
library(MCMCglmm)
library(loo)

df <- get(data("BTdata", package = "MCMCglmm"))

bf_tarsus <- bf(
  tarsus ~ sex + (1 | p | fosternest) + (1 | q | dam)
) + lf(sigma ~ 0 + sex) + skew_normal()

bf_back <- bf(
  back ~ s(hatchdate) + (1 | p | fosternest) + (1 | q | dam)
) + gaussian()

fit <- brm(
  bf_tarsus + bf_back + set_rescor(FALSE),
  data = df,
  chains = 2,
  cores = 2,
  control = list(adapt_delta = 0.95)
)

loo_tarsus <- loo(fit, resp = "tarsus")
loo_back   <- loo(fit, resp = "back")

On the Python side, all the “interesting” analysis is intended to go through ArviZ using the InferenceData:

import arviz as az

az.plot_ppc(fit.idata, var_names=["tarsus"])
az.plot_ppc(fit.idata, var_names=["back"])

loo_tarsus = az.loo(fit.idata, var_name="tarsus")
loo_back   = az.loo(fit.idata, var_name="back")

Formula objects are just thin wrappers around real brms formula objects. The + operator calls an R function function(a, b) a + b and re-parses the result, so behaviour should match brms as closely as possible.


Runtime / installation changes

The runtime / installation layer got a full cleanup:

  • split into _config, _r_env, _platform, _install, _state, etc., so importing brmspy doesn’t mutate the R environment

  • activate_runtime() now checks a manifest and a system fingerprint before touching .libPaths() or cmdstan paths, and either completes or rolls back

  • brmspy.runtime.status() returns a dataclass with:

    • active runtime path and version
    • system fingerprint / toolchain info
    • prebuilt compatibility
    • installed versions of brms, cmdstanr, rstan

There’s also a generic get_data() helper that loads example datasets from any installed R package, alongside the older get_brms_data().
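
A quick sketch of these inspection helpers together (status() returns the dataclass described above; printing it is the simplest way to see its fields):

import brmspy.runtime as runtime
from brmspy import brms

print(runtime.status())  # active runtime, fingerprint, brms/cmdstanr/rstan versions

df = brms.get_data("BTdata", package="MCMCglmm")  # example data from any installed R package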


Windows caveats

Windows still behaves differently:

  • Installing or reinstalling R packages / cmdstanr in the same embedded R session after they’ve been used is unreliable because DLLs stay locked and some packages don’t detach cleanly.

  • Because of that, automatic reuse of a previously used prebuilt runtime is disabled on Windows. The intended pattern there is:

    • start a fresh Python process
    • call activate_runtime() explicitly
    • run models, avoid doing heavy install work after the runtime has been used

There is an optional install_rtools=True flag on install_brms() which runs the official Rtools installer, but it’s off by default.
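
Put together, the Windows-safe pattern looks roughly like this (the import location of activate_runtime is an assumption on my part; the docs are authoritative):

# In a fresh Python process, before any other brmspy R activity:
from brmspy import brms
from brmspy.runtime import activate_runtime  # import path is an assumption

activate_runtime()  # explicitly activate the prebuilt runtime first

fit = brms.brm("y ~ x", data=df, family="gaussian")  # df defined elsewhere
# Avoid heavy install/reinstall work in this session once the runtime is in use.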


After using brmspy in various environments, I have found some serious remaining pain points that make it both brittle and unpredictable.

The main ones are unexpected crashes that come either from segfaults in R itself or from the rpy2 bridge the library uses to make R calls. Because the R session is embedded within the Python process, the Python environment crashes in an unrecoverable way too; in some cases it even takes down the IDE. My choice to force ABI mode did make things better, but it's far from perfect, and at least for me it is not production grade: it requires careful retry strategies in my AWS pipelines.

The lack of isolation for R is also the main culprit behind brittle tests and the serious OS-specific workarounds I have had to build into brmspy. It also makes the R session effectively unchangeable after you have imported some packages.

Since there's no way I can solve every unpredictable behaviour of R, rpy2 or various OS quirks, I took a step back to think about how the R session could be isolated without significant overhead.

Right now the architecture can be thought of as a single process of Python + R (embedded). A crash in one immediately leads to problems in the other. So the logical choice seems to be three nodes: Main (py) + Worker (py) + R (embedded). The obvious problem with this is the immediate overhead of memory use and data copying. The minimal requirement for such a setup is that it uses no more memory than the two-node architecture.

I have an experiment branch where I have managed to get the idea fully working.

The 3-node idea has these core features:

  • Uses shared memory to minimise data copying and avoid memory-use peaks. This means we operate on the same memory-use assumptions as with direct rpy2 use
  • A module session that proxies regular-looking brms module calls to a worker (the IDE doesn't know the difference). This allows doing the refactor without any significant API rework
  • Encoders/decoders for rpy2 that never copy data into the worker's Python heap; they copy matrices, DataFrames and other R data-type buffers directly into shared memory. No transformations are done on matrices, for example: we can reconstruct the buffer without first converting it into a NumPy shared-memory array
  • Uses a context manager for any changes to the R session. The R session can have its packages removed or reinstalled, R versions hot-swapped, etc. with no side effects on any OS and no need to restart Python
  • Environments and runtimes are isolated from one another. Runtimes are never mutated by brmspy and can be reused across environments

A major side effect is that, since the main process's idiosyncrasies no longer indirectly affect the Python process running the embedded R, previously easy-to-trigger segfaults have gone down significantly. The loo functions, for example, cause zero issues with this setup.

An example of what the WIP process for managing an R environment looks like:


# Stops the current session and starts it fresh. Automatically detects R_HOME,
# LD_LIBRARY_PATH etc. and sets an isolated location for installing user-managed packages.
with brms.manage(environment_name="mrp") as ctx:
    # Downloads cmdstan/brms/rstan into ~/.brmspy/runtime/{fingerprint}/
    # and adds the runtime to libPaths
    ctx.install_brms(use_prebuilt=True)

    # Installs MCMCglmm into ~/.brmspy/environment/{environment_name}/Rlib
    ctx.install_rpackage("MCMCglmm")

# Let's assume for some reason we want to switch to R 4.4.
# Stops the current session and starts it fresh; automatically detects LD_LIBRARY_PATH etc.
with brms.manage(environment_name="legacy", r_home="path/to/r/4.4") as ctx:
    ctx.install_brms(use_prebuilt=True)

After these two steps, on macOS, the folder structure for environments and runtimes would look like this:

.brmspy
├── environment
│   ├── default
│   │   ├── config.json
│   │   └── Rlib
│   ├── mrp
│   │   ├── config.json
│   │   └── Rlib
│   └── legacy
│       ├── config.json
│       └── Rlib
│           └── MCMCglmm
├── environment_state.json
├── runtime
│   ├── macos-arm64-r4.5-0.2.0
│   │   ├── cmdstan
│   │   ├── hash
│   │   ├── manifest.json
│   │   └── Rlib
│   └── macos-arm64-r4.4-0.2.0
│       ├── cmdstan
│       ├── hash
│       ├── manifest.json
│       └── Rlib
└── runtime_state.json

This whole process “just works” thanks to isolating the R session.

To illustrate the roundtrip a bit better, this is what it looks like for the brm function:

main:
-> brm(...)
-> RModuleSession.__getattribute__ - checks if the module has the function, then caches its call if it does
-> RModuleSession._call_remote
  -> codec.encode - for args and kwargs. Puts numpy, arviz, pandas and xarray objects into shared memory buffers that require no transformation to reconstruct. If already on shared memory, just uses the shared memory information
  -> R objects are stored on the worker side and exchanged as simple SexpWrapper(rid, repr) handles to and from the main side (formula, priors in this case).
  -> ships only the metadata and memory addresses to worker

worker:
-> codec.decode - for args and kwargs. reconstructs objects from buffers without transformations. Zero copy.
-> gets Sexp (R objects) from Sexp cache by rid (formula, priors).
-> brms.brm(...) - calls the embedded R's brms::brm().
  -> For any R Matrix, DataFrame returned, r_to_py(obj) copies buffers DIRECTLY into shared memory for reconstruction as Python objects. Respects column-major, row-major etc.
  -> Create arviz objects using existing buffers
<- back in worker main, turns any Sexp objects into SexpWrapper
<- codec.encode - stores metadata needed for reconstruction and shm information
<- sends minimal data representation back to main (e.g. three 512 MB matrices amount to a couple hundred bytes)

main:
<- codec.decode - reconstructs arviz objects from received buffers. No copying is done again; the underlying data points to the same memory addresses as in the worker
<- Result of codec.decode is given to the user

There is some metadata-construction and JSON-parsing overhead, but it's minimal and close to unnoticeable next to the actual runtime of the large majority of brms functions. Since the codecs and rpy2 converters are per R or Python type, we don't need bespoke logic for every function: a few codecs and converters already cover the majority of data exchanged with R automatically. Any special or uncovered types fall back to pickle, but nothing larger than 1 MB should ever take that path in normal use.

Keeping memory use low is critical for my use cases; some of the posteriors I work with are around 5 GB, and any extra copy of that is very expensive in an automated pipeline.

Hence I might even need to introduce parameters to copy the R data in a chunked manner: once a chunk is copied, it is partially freed on the R side from the matrix or DataFrame, and the Sexp output from brm() (or any other wrapper) would be NULL. GC behaviour would obviously need investigating for that case; not a priority at the moment.

Also, the library stays usable with no extra worker process when the BRMSPY_WORKER="1" env var is set. That mode is less stable and I see little point in using it directly, but it's there.

I don't think I'll be shipping this for a couple more days; there are a ton of edge cases I want to try with the experimental architecture. But so far it's shockingly stable, and I haven't even seen worker crashes.

This might look like over-engineering 101 :) but I finally have some OSS time, and I want to use brms from Python without worrying that some edge case will blow up my session.


Well, it’s done.

It's a silly example, but it demonstrates how deeply R can be "abused" now with the new architecture.

from brmspy import brms

models = []

for _ in range(4):
    # Triggers a restart for R and SharedMemoryManager shutdown
    with brms.manage(environment_name="_test") as ctx:
        ctx.install_brms(use_prebuilt=True)
        if ctx.is_rpackage_installed("MCMCglmm"):
            ctx.uninstall_rpackage("MCMCglmm")
        ctx.install_rpackage("MCMCglmm")
        ctx.import_rpackages("MCMCglmm")

    df = brms.get_data("BTdata", package="MCMCglmm")

    model = brms.brm(
        formula="back ~ sex",
        data=df,
        family="gaussian",
        iter=100,
        warmup=50,
        chains=2,
        silent=2,
        refresh=0,
    )
    models.append(model)

# still uses shm
print(models[0].idata)

Doing some memory-usage optimization and prediction-function standardisation work tomorrow. I noticed inconsistencies in the arviz outputs between brm() and the prediction functions for multivariate vs univariate models, but that requires separate work, so I'll keep it for 0.3.1.

0.3.0 - Process-Isolated R & Hot-Swappable Runtimes

25.12.14

Colab example

This release introduces a redesigned main–worker–R architecture to address stability issues caused by embedding R directly in the Python process. In real-world use, unpredictable failures, ranging from R segfaults to rpy2 crashes, could take down the entire Python runtime, invalidate IDE sessions, and make test behavior OS-dependent. The old single-process model also made R state effectively immutable after import, limited runtime switching, and required brittle workarounds for package management and CI isolation. These issues were not fixable at the level of defensive coding alone.

The new architecture jails R inside a dedicated worker process with shared-memory transport, zero-copy codecs, and a proxy module session that preserves the public API. All R data structures (matrices, data frames, ArviZ objects) transfer via shared memory without heap duplication, keeping memory use equivalent to the original design even for multi-gigabyte posteriors. R runtimes and environments are now fully isolated, hot-swappable, and safely mutable through a context manager. Worker-side crashes no longer affect the main interpreter, and previously fragile operations (e.g., loo functions) run without instability. The result is a predictable, reproducible, OS-agnostic execution model with significantly reduced failure modes.

Breaking Changes

  • Removal of top-level functions: brmspy.fit, brmspy.install_brms, and other direct exports from the root package have been removed. Users should import the brms module (e.g., from brmspy import brms or from brmspy.brms import bf).
  • install_brms API change: Global installation functions (install_brms, install_runtime, install_rpackage) are removed from the public namespace. Installation and environment modification must now be performed inside a brms.manage() context to ensure safe worker restarts.
  • Opaque R handles: The .r attribute on result objects (e.g., FitResult.r) is no longer a live rpy2 object but a SexpWrapper handle. These handles cannot be manipulated directly in the main process but can be passed back into brms functions for processing in the worker. They retain the R repr for debugging purposes.
  • Runtime module internalization: brmspy.runtime has been moved to brmspy._runtime and is considered internal. Public runtime interactions should occur via brms.manage().
  • Formula logic: FormulaResult has been replaced by FormulaConstruct. Formulas are now built as pure Python DSL trees and only converted to R objects during execution in the worker.
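
To illustrate the opaque-handle change, a small sketch (fixef is one of the wrappers from earlier releases; the exact repr shown is illustrative):

fit = brms.brm("count ~ zAge + (1|patient)", data=df, family="poisson")

print(fit.r)     # SexpWrapper handle: not a live rpy2 object, but keeps the R repr
brms.fixef(fit)  # handles are rehydrated into real R objects inside the worker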

New Features

  • Context-managed environments: Added brms.manage(), a context manager that spins up a dedicated worker for administrative tasks. Exposes methods ctx.install_brms(), ctx.install_runtime(), and ctx.install_rpackage() among others which persist changes to the active environment.
  • Multi-environment support: Users can create and switch between named environments (e.g., with brms.manage(environment_name="dev"): ...) which maintain separate user libraries (Rlib) layered on top of the base runtime.
  • Environment persistence: Active environment configurations are saved to ~/.brmspy/environment_state.json and ~/.brmspy/environment/<name>/config.json.
  • Status API: Added brms.environment_exists() and brms.environment_activate() helpers for managing the lifecycle of R environments programmatically.
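
A small sketch of the environment lifecycle helpers (names as listed above; the exact argument shapes are assumptions):

from brmspy import brms

if not brms.environment_exists("dev"):
    with brms.manage(environment_name="dev") as ctx:
        ctx.install_brms(use_prebuilt=True)

brms.environment_activate("dev")  # switch the active environment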

Environments & Runtime

  • Process Isolation: R now runs in a dedicated spawned worker process. Calls from the main process are serialized and sent via IPC.
  • Shared Memory Transport: Implemented a custom SharedMemoryManager based transport layer. Large numeric payloads (NumPy arrays, pandas DataFrames, ArviZ InferenceData) are written to shared memory buffers, avoiding serialization overhead.
  • Hot-swappable sessions: The R worker can be restarted with a different configuration (R_HOME, library paths) on the fly without restarting the Python interpreter.
  • Zero-copy codecs: Added internal codecs (NumpyArrayCodec, PandasDFCodec, InferenceDataCodec) that handle SHM allocation and view reconstruction transparently.
  • Sexp Cache: Implemented a worker-side cache for R objects (Sexp). The main process holds lightweight SexpWrapper references (by ID) which are rehydrated into real R objects when passed back to the worker.

API & Behaviour

  • Pure Python Formulas: Formula helpers (bf, lf, nlf, etc.) now return FormulaConstruct dataclasses. This allows formula composition (+ operator) to happen in Python without requiring a running R session until fit time.
  • Worker Proxy: The brmspy.brms module is now a dynamic proxy (RModuleSession). Accessing attributes triggers remote lookups, and calling functions triggers IPC requests.
  • Logging Bridge: Worker-side logs (including R console output) are captured and forwarded to the main process’s logging handlers via a QueueListener.

Documentation & Infrastructure

  • Versioned Documentation: Added mike support for deploying versioned docs (e.g., /0.3/, /stable/) to GitHub Pages via the docs-versioned workflow.
  • Architecture enforcement: Added import-linter with strict contracts to prevent leakage of internal layers (e.g., ensuring rpy2.robjects is never imported in the main process).
  • Internal Docs Generation: Added scripts to auto-generate API reference stubs for internal modules (_runtime, _session, etc.) to aid development.

Testing & CI

  • Hot-swap stress tests: Added tests that repeatedly restart the entire R runtime and SharedMemoryManager in a loop, then immediately access old SHM-backed arrays and InferenceData. These scenarios would crash instantly if any lifetime or reference handling were incorrect, making them an effective torture test of the new architecture.
  • Worker Test Marker: Introduced @pytest.mark.worker and a worker_runner fixture to execute specific tests inside the isolated worker process.
  • Coverage Aggregation: Updated CI to merge coverage reports from the main process and the spawned worker process.
  • R Dependency Tests: Switched r-dependencies-tests workflow to use the new isolated test runner script.

0.3.1 - Standardized and modular InferenceData, improved memory management

25.12.23

This release standardizes the InferenceData structure across all prediction methods, ensuring consistent dimensions (chain, draw, obs_id) and variable naming conventions. It also improves shared-memory transport for Pandas DataFrames, enabling high-fidelity roundtripping of Categoricals and other types between R and Python.

Let's define:

  • C = n_chain (e.g. 4)
  • D = n_draw (per chain, e.g. 1000)
  • N = in-sample rows
  • M = newdata (out-of-sample) rows
  • canonical observation dim name: obs_id

brm() idata returns:

  • posterior
  • posterior_predictive
  • log_likelihood
  • observed_data
  • constant_data (new)

More specifically:

| Group | What it contains | Variables | Dims | Shape |
| --- | --- | --- | --- | --- |
| posterior | posterior draws of model parameters | params (e.g. b_*, sd_*, …) | (chain, draw, …) | (C, D, …) |
| posterior_predictive | in-sample replicated outcomes (PPC) | one var per response (e.g. count, Base) | (chain, draw, obs_id) | (C, D, N) |
| log_likelihood | pointwise log-likelihood for observed outcomes | one var per response (e.g. count, Base) | (chain, draw, obs_id) | (C, D, N) |
| observed_data | observed responses used for fitting | one var per response (e.g. count, Base) | (obs_id,) | (N,) |
| constant_data | training covariates/ids used to generate in-sample quantities | predictors/ids (e.g. zAge, zBase, Trt, visit, patient, …) | (obs_id, …) | (N, …) |

Prediction functions

The goal of prediction functions is to keep their outputs composable. When working with very large datasets, we might want only SOME of the idata features and be able to join them easily.

Hence, this is perfectly valid:


import xarray as xr

from brmspy import brms

# formula and df as defined in the earlier examples
m = brms.brm(formula, data=df, family="poisson", return_idata=False)

idata_post = brms.posterior(m).idata         # groups: posterior + constant_data
idata_obs  = brms.observed_data(m).idata     # groups: observed_data + constant_data
idata_ll   = brms.log_lik(m).idata           # groups: log_likelihood + constant_data

# optional: in-sample mean + linpred (stored in posterior)
idata_mu   = brms.posterior_epred(m).idata   # groups: posterior + constant_data   (var: count_mean)
idata_eta  = brms.posterior_linpred(m).idata # groups: posterior + constant_data   (var: count_linpred)

# Compose groups that don't overlap using extend()
idata = idata_post
idata.extend(idata_obs)   # adds observed_data
idata.extend(idata_ll)    # adds log_likelihood

# Compose within-group posterior variables using xarray merge
# (because extend() won't merge inside an existing group)
idata.posterior = xr.merge([idata.posterior, idata_mu.posterior, idata_eta.posterior])

print(idata.groups())
# -> ['posterior', 'observed_data', 'log_likelihood', 'constant_data']
idata
# -> Inference data with groups:
#	> posterior
#	> log_likelihood
#	> observed_data
#	> constant_data

Univariate model (count)

| Function | newdata? | Groups returned | Draw vars | Draw dims | Draw shape | Const group returned |
| --- | --- | --- | --- | --- | --- | --- |
| posterior(m) | – | posterior | parameters… | (chain, draw, …) | (C, D, …) | constant_data (obs_id, …) -> (N, …) |
| observed_data(m) | – | observed_data | count | (obs_id,) | (N,) | constant_data (obs_id, …) -> (N, …) |
| posterior_epred(m) | No | posterior | count_mean | (chain, draw, obs_id) | (C, D, N) | constant_data (obs_id, …) -> (N, …) |
| posterior_epred(m, newdata) | Yes | predictions | count_mean | (chain, draw, obs_id) | (C, D, M) | predictions_constant_data (obs_id, …) -> (M, …) |
| posterior_predict(m) | No | posterior_predictive | count | (chain, draw, obs_id) | (C, D, N) | constant_data (obs_id, …) -> (N, …) |
| posterior_predict(m, newdata) | Yes | predictions | count | (chain, draw, obs_id) | (C, D, M) | predictions_constant_data (obs_id, …) -> (M, …) |
| posterior_linpred(m) | No | posterior | count_linpred | (chain, draw, obs_id) | (C, D, N) | constant_data (obs_id, …) -> (N, …) |
| posterior_linpred(m, newdata) | Yes | predictions | count_linpred | (chain, draw, obs_id) | (C, D, M) | predictions_constant_data (obs_id, …) -> (M, …) |
| log_lik(m) | No | log_likelihood | count | (chain, draw, obs_id) | (C, D, N) | constant_data (obs_id, …) -> (N, …) |
| log_lik(m, newdata) | Yes | log_likelihood | count (requires response in newdata) | (chain, draw, obs_id) | (C, D, M) | predictions_constant_data (obs_id, …) -> (M, …) |

Multivariate model (count, Base)

| Function | newdata? | Groups returned | Draw vars | Draw dims | Draw shape | Const group returned |
| --- | --- | --- | --- | --- | --- | --- |
| posterior(m) | – | posterior | parameters… | (chain, draw, …) | (C, D, …) | constant_data (obs_id, …) -> (N, …) |
| observed_data(m) | – | observed_data | count, Base | each (obs_id,) | each (N,) | constant_data (obs_id, …) -> (N, …) |
| posterior_epred(m) | No | posterior | count_mean, Base_mean | each (chain, draw, obs_id) | each (C, D, N) | constant_data (obs_id, …) -> (N, …) |
| posterior_epred(m, newdata) | Yes | predictions | count_mean, Base_mean | each (chain, draw, obs_id) | each (C, D, M) | predictions_constant_data (obs_id, …) -> (M, …) |
| posterior_predict(m) | No | posterior_predictive | count, Base | each (chain, draw, obs_id) | each (C, D, N) | constant_data (obs_id, …) -> (N, …) |
| posterior_predict(m, newdata) | Yes | predictions | count, Base | each (chain, draw, obs_id) | each (C, D, M) | predictions_constant_data (obs_id, …) -> (M, …) |
| posterior_linpred(m) | No | posterior | count_linpred, Base_linpred | each (chain, draw, obs_id) | each (C, D, N) | constant_data (obs_id, …) -> (N, …) |
| posterior_linpred(m, newdata) | Yes | predictions | count_linpred, Base_linpred | each (chain, draw, obs_id) | each (C, D, M) | predictions_constant_data (obs_id, …) -> (M, …) |
| log_lik(m) | No | log_likelihood | count, Base | each (chain, draw, obs_id) | each (C, D, N) | constant_data (obs_id, …) -> (N, …) |
| log_lik(m, newdata) | Yes | log_likelihood | count, Base (requires both in newdata) | each (chain, draw, obs_id) | each (C, D, M) | predictions_constant_data (obs_id, …) -> (M, …) |

Standardised idata

All idata returned from brmspy functions is now standardised: the groups are joinable with one another, DataFrame indexes are kept correctly in obs_id, and everything works uniformly for univariate and multivariate models.

  • brm(): Optional return_idata: bool argument. For large models, passing return_idata=False and calling only the methods you need can be better for memory management (e.g. brms.posterior_predict(fit)). When return_idata=True, the result now also includes constant_data
  • posterior(): Returns draws in posterior and constant_data as idata.
  • observed_data(): Returns observed_data and constant_data as idata
  • posterior_epred(): Returns predictions and predictions_constant_data when there is newdata, and posterior and constant_data when there is none. Target variables are now suffixed with _mean.
  • posterior_predict(): Returns predictions and predictions_constant_data when there is newdata, and posterior_predictive and constant_data when there is none.
  • posterior_linpred(): Returns predictions and predictions_constant_data when there is newdata, and posterior and constant_data when there is none. Target variables are now suffixed with _linpred.
  • log_lik(): Always returns log_likelihood and, depending on whether newdata is given, predictions_constant_data or constant_data.
  • Added newdata-kwarg-based overloads so static type checkers automatically recognise the correct idata groups returned

This change allows composable architectures, where the user picks only the parts of idata they need for their analysis.

Pandas & R Type Conversion

  • Columnar SHM Transport: Improved ShmDataFrameColumns to transport DataFrames with mixed types via shared memory. Numeric and categorical columns now move between processes with zero-copy overhead, while complex object columns fall back to pickling individually.
  • Categorical Fidelity: R factors now correctly roundtrip to pandas.CategoricalDtype, preserving categories, integer codes, and ordered status across the main-worker boundary.
  • Broad Dtype Support: Enhanced converters to robustly handle pandas nullable integers (Int64), nullable floats, and strings during R conversion.

Bug fixes and enhancements

  • Worker crash recovery: Added automatic recovery for R worker crashes
    (segfaults, BrokenPipeError, ConnectionResetError). The worker is restarted
    transparently and the call raises RWorkerCrashedError. The exception
    includes a recovered: bool flag indicating whether a clean worker session
    was successfully started, allowing pipelines to distinguish retryable
    crashes (recovered=True) from hard failures (recovered=False); see the
    retry sketch after this list.
  • Numpy Encoding: Standardised encoding for object arrays. String arrays are now optimized as ShmArray; mixed object arrays gracefully fall back to pickling.
  • Improved SHM memory management: Introduced explicit temporary buffers that are cleaned up immediately after use, while non-temporary buffers are now tracked by ShmPool only until the next main ↔ worker exchange; buffer lifetime is then transferred to CodecRegistry, which ties shared-memory mappings to reconstructed objects via weakrefs, minimizing the number of active mappings and allowing timely resource release once those objects are garbage-collected.
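
A minimal retry sketch built on that flag (the import path of RWorkerCrashedError is an assumption; only the recovered semantics described above are relied on):

from brmspy import brms, RWorkerCrashedError  # import path is an assumption

def fit_with_retry(formula, data, retries=1, **kwargs):
    # Retry only when brmspy reports a cleanly recovered worker session.
    for attempt in range(retries + 1):
        try:
            return brms.brm(formula, data=data, **kwargs)
        except RWorkerCrashedError as err:
            if not err.recovered or attempt == retries:
                raise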