Performance issue with an outcome measurement error model with data transformation

I actually don’t think you need the loop at all, since this:

for (i in 1:N) {
  theta[i] ~ normal(alpha + beta * d[i], sigma_model);
}

can certainly be changed to:

theta ~ normal(alpha + beta * d, sigma_model);

and, even faster:

theta ~ normal_id_glm(d, alpha, beta, sigma_model);
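
One caveat: normal_id_glm takes the covariates as a matrix and the coefficients as a vector, so d needs to be an N x 1 matrix and beta declared as vector[1]. If d comes in as a vector you can convert it once; a minimal sketch (d_mat is just a placeholder name):

transformed data {
  matrix[N, 1] d_mat = to_matrix(d);  // to_matrix(vector) yields an N x 1 matrix
}

Then pass d_mat wherever d appears below.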

Then, if you define your linear_interpolation_v function with a vector signature, you can avoid creating temporary variables (which, as I recently learned, does slow things down).
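
I don’t know what your scalar version looks like, but a vectorized signature could look something like this. This is just a sketch, not your implementation: it assumes calBP is sorted in increasing order and that every theta stays inside its range (the binary search is my own illustration):

functions {
  // Piecewise-linear interpolation: evaluates the curve defined by
  // (xs, ys) at every element of x and returns a vector of the same length.
  // Assumes xs is sorted in increasing order and each x[n] lies within
  // [xs[1], xs[rows(xs)]].
  vector linear_interpolation_v(vector x, vector xs, vector ys) {
    int N = rows(x);
    int M = rows(xs);
    vector[N] y;
    for (n in 1:N) {
      int lo = 1;
      int hi = M;
      // Binary search for the pair of knots bracketing x[n]:
      while (hi - lo > 1) {
        int mid = (lo + hi) %/% 2;
        if (xs[mid] <= x[n]) {
          lo = mid;
        } else {
          hi = mid;
        }
      }
      y[n] = ys[lo]
             + (x[n] - xs[lo]) / (xs[hi] - xs[lo]) * (ys[hi] - ys[lo]);
    }
    return y;
  }
}

With a function like that in place, the rest would be: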

...
transformed data {
  // Precompute the squared measurement errors once, since cra_error is data:
  vector[N] cra_error_sq = pow(cra_error, 2);
}
...
model {

  // Priors:
  alpha ~ normal(prior_intercept_mean, prior_intercept_sd);
  beta ~ normal(0, 0.5);
  sigma_model ~ exponential(0.001);

  // Core model:
  theta ~ normal_id_glm(d, alpha, beta, sigma_model);
  cra ~ normal(
    linear_interpolation_v(theta, calBP, c14BP),
    sqrt(cra_error_sq + square(linear_interpolation_v(theta, calBP, c14_err)))
  );

}