Priorsense package powerscale_sensitivity function on brmsfit object returns error

I have just discovered the newly updated priorsense (version 1.0.1) package.

I have fitted a Gaussian-family meta-analysis using brms with the cmdstanr backend.

I want to assess the sensitivity of the model to variations in the power scaled prior and likelihood.

The model runs successfully with Rhat = 1.00 and Bulk and Tail ESS > 10000.

The loo function with moment_match = TRUE returns

         Estimate  SE
elpd_loo     -5.2 2.5
p_loo        14.6 1.9
looic        10.3 5.1
------
MCSE of elpd_loo is 0.1.
MCSE and ESS estimates assume MCMC draws (r_eff in [0.5, 0.7]).

All Pareto k estimates are good (k < 0.7).

The call to the priorsense function returns:

powerscale_sensitivity(lipo.smd.ls.brms.0, variable = c('mu', 'sigma'))

Error in UseMethod("log_prior_draws") : 
  no applicable method for 'log_prior_draws' applied to an object of class "brmsfit"

This appears to be the correct call to the priorsense function.

The model code is as follows. I needed to adjust the step_size and adapt_delta.

priors <- c(prior(normal(0, 1), class = Intercept),
            prior(normal(0, 0.5), class = sd))

formula0 <- bf(formula = yi|se(vi) ~ 1 + (1|slab),
               family = gaussian())

lipo.smd.ls.brms.0 <- brm(formula = formula0,
                          data = tmp.smd.ls.df,
                          prior = priors,
                          seed = 12345,
                          sample_prior = TRUE,
                          save_pars = save_pars(all = TRUE),
                          save_cmdstan_config = TRUE,
                          control = list(adapt_delta = 0.9,
                                         step_size = 0.005),
                          iter = 32000,
                          chains = 4,
                          cores = 4)


Here are the data. There are 13 studies included in the meta-analysis. The effect size is the standardized mean difference, with yi and vi calculated in the metafor package.

                slab      yi     vi 
1   Chevrollier 2022 -0.1567 0.0528 
2        Colvin 2018  0.1614 0.0749 
3        Curtis 2018 -0.8635 0.4914 
4         Fafaj 2020  0.3031 0.0361 
5     Fidkowski 2021 -0.2320 0.0596 
6            Ha 2019 -0.0519 0.0909 
7     Herskovic 2018 -2.1493 1.4966 
8      Hutchins 2015 -0.7036 0.0745 
9      Hutchins 2016 -0.3617 0.0689 
10 Nedelijkovic 2020 -0.3414 0.0299 
11     Perdikis 2022 -0.5946 0.0696 
12       Truong 2021 -0.4042 0.0404 
13         Yeap 2022  0.2106 0.0516

  • Operating System: macOS 14.5
  • brms Version: 2.21.0
  • priorsense Version: 1.0.1

I haven’t had a chance to try priorsense with a brms model yet, but @n-kall may be able to help. @n-kall Is this the right way to use this with a brmsfit object?

Thanks for reporting this issue. The required functions were recently moved to brms, so you’ll need to use the development version of brms until the next release on CRAN. You could do this via remotes::install_github("paul-buerkner/brms").

With a newer brms version, your code should work!
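A quick way to confirm the development version is active after installing (this is just a sketch; `packageVersion()` is from base R utils):

```r
# install the development version of brms, then check which version is loaded
remotes::install_github("paul-buerkner/brms")
packageVersion("brms")  # should report a version newer than 2.21.0
```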


Thanks for responding to my query.

I installed the GitHub version of brms, labelled 2.21.6. The installation also pulled in newer versions of rstan, loo, etc.

I reran my models with new updated packages using the same code and same data.

I still get an error message, but a different one.

powerscale_sensitivity(lipo.smd.ls.brms.0)
Error: The following variables are missing in the draws object: {‘lprior’}

Nathan

Hmm, I can’t seem to reproduce the error. Are you sure you refit the model with the latest brms? The lprior variable should be available in brms 2.21.6.

For reference, the output I get from your code is:

   variable                            prior likelihood diagnosis
   <chr>                               <dbl>      <dbl> <chr>
 1 b_Intercept                      0.00986     0.0268  -
 2 sd_slab__Intercept               0.0436      0.0331  -
 3 sigma                          NaN         NaN       NA
 4 Intercept                        0.00986     0.0268  -
 5 r_slab[Chevrollier,Intercept]    0.00915     0.0314  -
 6 r_slab[Colvin,Intercept]         0.00871     0.0514  -
 7 r_slab[Curtis,Intercept]         0.00709     0.0673  -
 8 r_slab[Fafaj,Intercept]          0.00989     0.0346  -
 9 r_slab[Fidkowski,Intercept]      0.00880     0.0293  -
10 r_slab[Ha,Intercept]             0.00777     0.0442  -
11 r_slab[Herskovic,Intercept]      0.00891     0.0416  -
12 r_slab[Hutchins,Intercept]       0.00926     0.0215  -
13 r_slab[Nedelijkovic,Intercept]   0.00971     0.0268  -
14 r_slab[Perdikis,Intercept]       0.00706     0.0205  -
15 r_slab[Truong,Intercept]         0.00862     0.0254  -
16 r_slab[Yeap,Intercept]           0.00959     0.0395  -
17 prior_Intercept                  0.000154    0.00303 -
18 prior_sd_slab                    0.000325    0.00789 -
19 z_1[1,1]                         0.00278     0.0420  -
20 z_1[1,2]                         0.00676     0.0374  -
21 z_1[1,3]                         0.00174     0.0765  -
22 z_1[1,4]                         0.0117      0.0127  -
23 z_1[1,5]                         0.00511     0.0396  -
24 z_1[1,6]                         0.00179     0.0497  -
25 z_1[1,7]                         0.00168     0.0442  -
26 z_1[1,8]                         0.0134      0.0377  -
27 z_1[1,9]                         0.00955     0.0424  -
28 z_1[1,10]                        0.0135      0.0250  -
29 z_1[1,11]                        0.0112      0.0413  -
30 z_1[1,12]                        0.00935     0.0231  -

I appreciate hearing from you.

I just reran the model, same code and same data.

I get the same error when trying powerscale_sensitivity.

I note the following:

lipo.smd.ls.brms.0$version

$brms
[1] '2.21.6'

$rstan
[1] '2.35.0.9000'

$stanHeaders
[1] '2.35.0.9000'

$cmdstanr
[1] '0.8.1'

$cmdstan
[1] '2.35.0'

It certainly appears that the latest brms version is being used.

Nathan

If you access the Stan code of the model fit with stancode(lipo.smd.ls.brms.0), does it contain the lines

 // priors including constants
  target += lprior;

?

Here is the Stan code:

// generated with brms 2.21.6
functions {
}
data {
  int<lower=1> N;  // total number of observations
  vector[N] Y;  // response variable
  vector<lower=0>[N] se;  // known sampling error
  // data for group-level effects of ID 1
  int<lower=1> N_1;  // number of grouping levels
  int<lower=1> M_1;  // number of coefficients per level
  array[N] int<lower=1> J_1;  // grouping indicator per observation
  // group-level predictor values
  vector[N] Z_1_1;
  int prior_only;  // should the likelihood be ignored?
}
transformed data {
  vector<lower=0>[N] se2 = square(se);
}
parameters {
  real Intercept;  // temporary intercept for centered predictors
  vector<lower=0>[M_1] sd_1;  // group-level standard deviations
  array[M_1] vector[N_1] z_1;  // standardized group-level effects
}
transformed parameters {
  real sigma = 0;  // dispersion parameter
  vector[N_1] r_1_1;  // actual group-level effects
  r_1_1 = (sd_1[1] * (z_1[1]));
}
model {
  real lprior = 0;  // prior contributions to the log posterior
  // likelihood not including constants
  if (!prior_only) {
    // initialize linear predictor term
    vector[N] mu = rep_vector(0.0, N);
    mu += Intercept;
    for (n in 1:N) {
      // add more terms to the linear predictor
      mu[n] += r_1_1[J_1[n]] * Z_1_1[n];
    }
    target += normal_lupdf(Y | mu, se);
  }
  // priors not including constants
  lprior += normal_lupdf(Intercept | 0, 1);
  lprior += normal_lupdf(sd_1 | 0, 0.5);
  target += lprior;
  target += std_normal_lupdf(z_1[1]);
}
generated quantities {
  // actual population-level intercept
  real b_Intercept = Intercept;
}

So

target += lprior;

is present.

I still haven’t been able to reproduce this, even after ensuring I am using the same versions.
It’s strange that the Stan code shows lprior but it is not available in the fit.

Does "lprior" %in% posterior::variables(lipo.smd.ls.brms.0) return TRUE (I am guessing in your case it is FALSE)?

The error is occurring because the model has been specified with normalize = FALSE being passed to brms, which drops normalizing constants (using _lupdf distributions) and specifies lprior in the model block instead of transformed parameters (which is why it isn’t seen in the fit).

@nlpacestan check whether you’re specifying normalize = FALSE in the call to brms, or whether you have the brms.normalize global option set
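For example, a quick pre-flight check before refitting (a sketch; here `fit` stands for your brmsfit object):

```r
# Both of these must hold for priorsense to find the prior draws:
getOption("brms.normalize")              # should be TRUE or NULL (the default)
"lprior" %in% posterior::variables(fit)  # should be TRUE
```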


Good catch! This is related to this issue. Maybe it should be changed in brms, such that the lprior is then created in the generated quantities (using *_lpdf I suppose) if normalize = FALSE? @paul.buerkner


Can you open an issue for that? I must admit, I am not sure we can support normalize = FALSE together with lprior but at least we can set an informative error if we cannot find a proper fix.


Two questions were asked.

1. Yes, normalize = FALSE was set with options(brms.normalize = FALSE).

2. Yes, "lprior" %in% posterior::variables(lipo.smd.ls.brms.0) returns FALSE.

Nathan

I reran my model with brms.normalize = TRUE.

powerscale_sensitivity(object,
                       # moment_match = TRUE,
                       variable = c('b_Intercept', 'sd_slab__Intercept'))

Sensitivity based on cjs_dist:
# A tibble: 2 × 4
  variable            prior likelihood diagnosis
1 b_Intercept        0.0116     0.0390 -
2 sd_slab__Intercept 0.0466     0.228  -

From reading Kallioinen et al., I would say there is no sensitivity to the prior for either parameter, but there is likelihood sensitivity (likelihood domination) for sd_slab__Intercept. The powerscale_plot_dens and powerscale_plot_ecdf plots show the same pattern.
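For reference, the plots mentioned can be produced along these lines (the variable selection mirrors the sensitivity call above):

```r
# Power-scaling diagnostic plots from priorsense
powerscale_plot_dens(lipo.smd.ls.brms.0,
                     variable = c("b_Intercept", "sd_slab__Intercept"))
powerscale_plot_ecdf(lipo.smd.ls.brms.0,
                     variable = c("b_Intercept", "sd_slab__Intercept"))
```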

b_Intercept is the summary value of the SMD across studies; sd_slab__Intercept is the between-study variability in this meta-analysis.

Questions:

  1. No diagnosis label was assigned for either variable, and no error message asked for additional packages. I did load testthat (3.2.1.1) in the session. Is there something else I need to do?

  2. When I run loo(model object) there is a warning message:
    Warning message:
    Found 4 observations with a pareto_k > 0.7 in model ‘lipo.smd.ls.brms.0’. We recommend to set ‘moment_match = TRUE’ in order to perform moment matching for problematic observations.

I rerun with moment_match = TRUE and the loo object is returned with all pareto k estimates < 0.7.

There are no warning messages in the call to priorsense functions. There are no dashed lines in the plot functions.

I am unable to run powerscale_sensitivity with moment_match = TRUE, as the iwmm package is needed.

The version of iwmm on GitHub is not available for R 4.4.1.

I appreciate the help from the package developers via brms forum.