# Speeding up predictions in brms

I am wondering if there is a faster way to accomplish something in brms for a project I'm working on. I am developing a patient-level microsimulation that includes a number of brms models using splines on several continuous variables. The microsimulation requires me to:

1. Sample a cohort of patients (e.g., n = 8k)
2. Get predicted outcomes based on linked regression equations.
3. Repeat this process 1-2k times to incorporate first-order (microsimulation) and second-order (parameter) uncertainty.

When models don't have splines this can be very fast, since calculating the linear predictor for new data is straightforward. My solution so far has been to use the fitted function with nsamples = 1, but I find this to be very slow. For example, running all 10 fitted calls takes about 6 seconds right now, which gets out of hand quickly. My guess is that nsamples = 1 is very inefficient, since it takes almost as much time as getting a fitted object for the full posterior.
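For concreteness, the current (slow) approach looks roughly like this — `fit1` and `nd` are placeholders for one of the fitted brmsfit objects and a simulated cohort:

```r
# One posterior draw per call via fitted(); summary = FALSE returns the
# raw draw-level matrix rather than posterior summaries. This re-prepares
# the spline design matrices on every call, which is where the time goes.
p <- fitted(fit1, newdata = nd, nsamples = 1, summary = FALSE)
```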

Is there something obvious that will allow me to quickly create the required design matrix for new data, so I can calculate predictions by hand?
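One possibility (a sketch, not a tested solution): `standata()` with a `newdata` argument rebuilds the Stan data for new observations, including the spline basis matrices, which you could then combine with posterior draws by hand. Here `fit1` and `nd` are placeholders, and the element names (`X`, `Xs`, `Zs_*`) are brms internals that may differ across versions, so inspect the returned list before relying on them:

```r
library(brms)

# Rebuild the Stan data (design matrices included) for the new cohort.
sd_new <- standata(fit1, newdata = nd)
str(sd_new)  # look for X (linear terms) and Xs / Zs_* (spline bases)

# Posterior draws, one row per draw; column names like "b_v3",
# "bs_sv1_1", "s_sv1_1[1]" identify which coefficients go with which
# design matrix -- check them with colnames(draws).
draws <- as.matrix(fit1)
```

Note that brms centers the population-level design matrix internally, so a hand-rolled linear predictor needs to handle the intercept accordingly.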

If you can share your model, it may help others offer advice. Also, unless I’m misunderstanding, just running 1 sample will not give you a reliable estimate of the posterior.

Thanks for the prompt. Essentially I have something like:

```r
brm(out1 ~ s(v1) + s(v2) + v3, family = bernoulli())
```

There are 10 such outcomes, each model with 4k posterior draws. What I need to do is essentially send 4k simulated datasets through the fitted function, so that I end up with predictions for 4k datasets, each based on one posterior draw. The default behaviour of predict is (as expected) to predict from the entire posterior for one dataset.
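The pairing of one draw per dataset can be expressed directly: recent brms versions let you select specific posterior draws via `draw_ids` (older releases call this argument `subset`). A sketch, with `fit1` and `sim_data` (a list of simulated data frames) as placeholders:

```r
# Pair simulated dataset i with posterior draw i. This guarantees one
# distinct draw per dataset, though each call still re-prepares the
# design matrices, so it may not be faster than nsamples = 1.
preds <- lapply(seq_along(sim_data), function(i) {
  posterior_epred(fit1, newdata = sim_data[[i]], draw_ids = i)
})
```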

When I fit a simpler model in Stan/JAGS etc., this is easy to do quickly, since I can just index the posterior and calculate the linear predictor by hand. Introducing splines was nice in terms of the face validity of how some of these variables are expected to behave, but it makes calculating fitted values by hand more arduous because of the transformations of the data.