Simulation-Based Calibration (SBC) for a non-linear model

The upcoming SBC event led me to try implementing an SBC approach for a non-linear model I’m working on.

The predictors are matrices of values observed over time, with each row paired with an observed outcome.

As the model is non-linear, the parameters depend on the observed predictor data, so I have coded the number of simulations as the number of observations. I’m curious whether SBC can (or should) be used to calibrate this kind of model.

Here is what I have coded as an initial attempt:

data {
  int<lower=1> N;          // total number of observations
  vector[N] y;             // observed response (not used while running SBC)
  int<lower=1> N_Times;    // number of times in the time series
  matrix[N, N_Times] X1;
  matrix[N, N_Times] X2;
  vector[N] X3;
}
transformed data {
  // draws from the priors; a, b, c are constrained positive in the model,
  // so match the implied truncated normal priors by rejection sampling
  real a_sim = normal_rng(5, 5);
  real b_sim = normal_rng(5, 5);
  real c_sim = normal_rng(5, 5);
  real kappa_sim = exponential_rng(1);
  vector[N] y_pred;
  vector[N] mu_sim;  // may be any sign; inv_logit maps it to (0, 1)
  while (a_sim <= 0) a_sim = normal_rng(5, 5);
  while (b_sim <= 0) b_sim = normal_rng(5, 5);
  while (c_sim <= 0) c_sim = normal_rng(5, 5);
  for (n in 1:N) {
    mu_sim[n] = sum(row(X1, n) .* (a_sim * exp(b_sim * row(X2, n)) + c_sim)) / X3[n];
    y_pred[n] = beta_rng(inv_logit(mu_sim[n]) * kappa_sim,
                         (1 - inv_logit(mu_sim[n])) * kappa_sim);
  }
}
parameters {
  real<lower=0> a;  
  real<lower=0> b;  
  real<lower=0> c;  
  real<lower=0> kappa;  
}
model {
  vector[N] mu;
  // priors (truncated at zero by the parameter constraints)
  a ~ normal(5, 5);
  b ~ normal(5, 5);
  c ~ normal(5, 5);
  kappa ~ exponential(1);
  // location for the beta likelihood
  for (n in 1:N) {
    mu[n] = sum(row(X1, n) .* (a * exp(b * row(X2, n)) + c)) / X3[n];
  }
  y_pred ~ beta(inv_logit(mu) * kappa, (1 - inv_logit(mu)) * kappa);
}
generated quantities {
  // indicators of posterior draws falling below the prior draws; summed
  // over (thinned) posterior draws these give the SBC rank statistics
  int<lower = 0, upper = 1> lt_sim[4]
    = { a < a_sim, b < b_sim, c < c_sim, kappa < kappa_sim };
}

The code compiles, runs, and returns results. I still need to tackle thinning and the remaining steps, but I’m wondering whether this is a valid approach.
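
For the remaining steps, my rough plan is to sum the lt_sim indicators over thinned posterior draws to get one rank per parameter, something like this R sketch (the thinning interval is arbitrary, and lt stands for the lt_sim draws matrix however it gets extracted from a fit):

# Sketch: convert the lt_sim indicator draws from one fit into SBC
# rank statistics. `lt` is an (iterations x 4) matrix of lt_sim draws;
# thinning reduces autocorrelation between successive draws.
sbc_ranks <- function(lt, thin = 10) {
  kept <- seq(1, nrow(lt), by = thin)
  # one rank per parameter, each between 0 and length(kept)
  colSums(lt[kept, , drop = FALSE])
}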

Any feedback would be greatly appreciated!


SBC is fine, you just have to be careful about the interpretation.

Given a model \pi(y, \theta), SBC provides some evidence as to whether or not a given computational tool can accurately quantify the posteriors \pi(\theta | y) arising from prior predictive data

\theta \sim \pi(\theta) \\ y \sim \pi(y | \theta).
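
As a toy illustration of that scheme, here is a pure-R sketch with a conjugate normal model, so that the “computational tool” is exact and the ranks should come out uniform; all of the sizes here are arbitrary:

# SBC loop for pi(theta) = normal(0, 1), pi(y | theta) = normal(theta, 1).
# Because the posterior is conjugate we can draw from it exactly; a real
# application would replace the exact draws with MCMC output.
set.seed(1)
S <- 1000  # number of simulations
L <- 99    # posterior draws per simulation
n <- 10    # observations per simulated data set
ranks <- integer(S)
for (s in 1:S) {
  theta_sim <- rnorm(1, 0, 1)              # theta ~ pi(theta)
  y <- rnorm(n, theta_sim, 1)              # y ~ pi(y | theta)
  post_var <- 1 / (1 + n)                  # conjugate normal posterior
  post_mean <- post_var * sum(y)
  theta_post <- rnorm(L, post_mean, sqrt(post_var))
  ranks[s] <- sum(theta_post < theta_sim)  # rank statistic in 0..L
}
hist(ranks, breaks = seq(-0.5, L + 0.5, 1))  # should look uniform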

In the presence of covariates there are two ways to apply SBC. One is to condition on a particular set of observed covariate values. In this case SBC provides some evidence as to whether or not a given computational tool can accurately quantify the posteriors \pi(\theta | y, x) arising from prior predictive data

\theta \sim \pi(\theta) \\ y \sim \pi(y | x, \theta).

In other words, any information gleaned from SBC will be applicable only to that particular set of covariate values and no others! That can still be useful information, it just can’t be generalized to other covariate configurations.

In order to consider how robust a computational tool is to other covariate values, one has to explicitly model the covariates,

\theta \sim \pi(\theta) \\ x \sim \pi(x | \theta) \\ y \sim \pi(y | x, \theta).

Here \pi(x | \theta) can correspond to an actual generative process or even just some heuristic quantification of what kind of covariate values might be reasonable. In this case the SBC results apply only to the context of that assumed distribution.
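
Continuing the toy sketch from above, this second scheme just redraws x inside the loop; the standard normal \pi(x) here is purely a heuristic choice, and fixing x across iterations instead would recover the first scheme:

# SBC loop that also simulates covariates: theta ~ pi(theta),
# x ~ pi(x), y ~ pi(y | x, theta), again with a conjugate posterior
# for the regression coefficient so the posterior draws are exact.
set.seed(2)
S <- 1000; L <- 99; n <- 10
ranks <- integer(S)
for (s in 1:S) {
  theta_sim <- rnorm(1)              # theta ~ pi(theta) = normal(0, 1)
  x <- rnorm(n)                      # x ~ pi(x), an assumed covariate model
  y <- rnorm(n, theta_sim * x, 1)    # y ~ pi(y | x, theta)
  post_prec <- 1 + sum(x^2)          # conjugate posterior given (y, x)
  post_mean <- sum(x * y) / post_prec
  theta_post <- rnorm(L, post_mean, sqrt(1 / post_prec))
  ranks[s] <- sum(theta_post < theta_sim)
}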


Thank you, @betanalpha!

In this particular application, I’m interested in checking calibration for the current data set (which is messy) and not really concerned with generalizing the model beyond the current data.

I’d kindly invite you to try out the SBC package we developed (and will showcase at the event) with your model. We kind of wanted to advertise it only after the event, but since you explicitly asked :-)

It currently does not support models where you use the transformed data block to generate the data. I’ve found it most practical to have the simulation code in R, so that I can use the same Stan file for SBC and for the final fitting.
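
For your model, the R-side simulator could look roughly like the sketch below. The covariate distributions (uniform X1, standard normal X2, positive X3) are pure placeholders for whatever is realistic in your application, and the shape of the returned list should be adapted to the format described in the basic_usage vignette:

# Sketch: simulate one data set in the format the Stan model expects.
simulate_one <- function(N = 50, N_Times = 10) {
  # rejection sampling to match the truncated normal priors
  a_sim <- -1; while (a_sim <= 0) a_sim <- rnorm(1, 5, 5)
  b_sim <- -1; while (b_sim <= 0) b_sim <- rnorm(1, 5, 5)
  c_sim <- -1; while (c_sim <= 0) c_sim <- rnorm(1, 5, 5)
  kappa_sim <- rexp(1, 1)
  # placeholder covariate distributions
  X1 <- matrix(runif(N * N_Times), N, N_Times)
  X2 <- matrix(rnorm(N * N_Times), N, N_Times)
  X3 <- runif(N, 0.5, 2)
  mu <- rowSums(X1 * (a_sim * exp(b_sim * X2) + c_sim)) / X3
  p <- plogis(mu)  # inverse logit
  y <- rbeta(N, p * kappa_sim, (1 - p) * kappa_sim)
  list(
    parameters = list(a = a_sim, b = b_sim, c = c_sim, kappa = kappa_sim),
    data = list(N = N, y = y, N_Times = N_Times,
                X1 = X1, X2 = X2, X3 = X3)
  )
}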

The package is still somewhat early in its development, but I think it is already useful and will handle some useful stuff for you (parallelizing all the fits, the neat new ECDF plots from Teemu).

The basic_usage vignette shows what the code can look like in a very bare-bones setting. The other vignettes (SBC/vignettes at master · hyunjimoon/SBC · GitHub) show (somewhat underdocumented) use cases of using SBC to develop and diagnose models, so you are welcome to check those out as well.

Any feedback is welcome!


Just keep in mind that “calibration” as a general concept concerns behavior across some distribution of observations, not any one observation. SBC is a form of algorithmic calibration that can identify inconsistencies of a computational method across such a distribution of observations, which may apply to your observation if it’s “typical” of that distribution.

If you really only care about how well you can quantify the posterior derived from one observation, it may be more effective to study that posterior directly, for example with some of the methods discussed in Identity Crisis.