Hi, I’m new to Stan and Bayesian inference in general. I’m fitting a model to a relatively small dataset: a vector of 40 elements and a 40x40 covariance matrix. The model involves a 1D integration, though, and it’s been running very slowly. I would appreciate suggestions on how I can improve the model’s performance.
Gradient evaluation took 2.34406 seconds
1000 transitions using 10 leapfrog steps per transition would take 23440.6 seconds.
IIUC, this means it should take about 10 hours to run with 1500 iterations (500 of which are warmup). But the code has now been running for close to 24 hours.
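For completeness, the back-of-the-envelope arithmetic behind that estimate (assuming the nominal 10 leapfrog steps per transition; NUTS can take more steps per iteration in practice, which may be part of the discrepancy):

```python
# Rough runtime estimate from Stan's gradient-timing diagnostic.
grad_time = 2.34406      # seconds per gradient evaluation (from the output above)
leapfrog_steps = 10      # nominal leapfrog steps per transition
iterations = 1500        # 500 warmup + 1000 sampling

est_seconds = grad_time * leapfrog_steps * iterations
print(f"{est_seconds:.0f} s = {est_seconds / 3600:.1f} h")  # prints "35161 s = 9.8 h"
```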
I have attached text files containing the Stan model and data.
cov.txt (39.9 KB)
data.txt (2.0 KB)
lcdm.stan (2.4 KB)
Link to a Jupyter notebook that loads the data and runs the model. The notebook also walks through the math, to make it easier to understand what the model is doing: Google Colab
Note that the notebook was run locally on my Mac. The cell outputs are from the local run.
Also posting the code here for convenience:
functions {
  // E(z)
  real e_func(real z, real Om, real Ode, real Ok) {
    real zp1, x;
    zp1 = 1 + z;
    x = zp1 * zp1 * (Ok + Om * zp1) + Ode;
    return sqrt(x);
  }
  // integrand function
  real integrand_fz(real z, real zc, array[] real theta,
                    array[] real z_r, array[] int z_i) {
    real ef;
    ef = e_func((zc > 0 && zc < z) ? zc : z, theta[1], theta[2], theta[3]);
    return 1.0 / ef;
  }
  // f(z; params)
  real f_func(real z, real H, real Om, real Ode, real Ok, real c,
              data array[] real x_r, data array[] int x_i) {
    return (c / H) * integrate_1d(integrand_fz, 0.0, z, {Om, Ode, Ok}, x_r, x_i, 1.0e-8);
  }
  // r(z; params) - comoving distance for a given redshift and model params
  real r_z(real z, real H, real Om, real Ode, real Ok, real c,
           data array[] real x_r, data array[] int x_i) {
    real fz;
    fz = f_func(z, H, Om, Ode, Ok, c, x_r, x_i);
    return Ok > 0 ? sin(fz) : (Ok < 0 ? sinh(fz) : fz);
  }
  // d_L(z; params) - luminosity distance for a given redshift and model params
  real dl_z(real z, real H, real Om, real Ode, real Ok, real c,
            data array[] real x_r, data array[] int x_i) {
    return (1 + z) * r_z(z, H, Om, Ode, Ok, c, x_r, x_i);
  }
  // mu(z; params) - predicted apparent magnitude for a given redshift and model params
  real model_apparent_mag(real z, real H, real Om, real Ode, real M, real Ok, real c,
                          data array[] real x_r, data array[] int x_i) {
    real dlz, dist_mod;
    dlz = dl_z(z, H, Om, Ode, Ok, c, x_r, x_i);
    dist_mod = 5.0 * log10(dlz) - 5.0;
    return dist_mod + M;
  }
}
data {
  int N; // number of data points
  vector[N] z; // redshifts
  vector[N] m; // apparent magnitudes
  matrix[N, N] C; // covariance matrix
}
transformed data {
  array[0] real x_r;
  array[0] int x_i;
}
parameters {
  real H0; // Hubble parameter today
  real Om0; // matter density today
  real Ode0; // dark energy density today
  real M0; // absolute supernova magnitude
}
transformed parameters {
  real Ok0; // curvature density today
  Ok0 = 1.0 - Om0 - Ode0;
}
model {
  vector[N] mu; // model-predicted apparent magnitudes
  real c; // speed of light in km/s
  c = 299792.458;
  H0 ~ uniform(0.0, 100.0);
  Om0 ~ uniform(0.0, 1.0);
  Ode0 ~ uniform(0.0, 1.0);
  M0 ~ uniform(-50.0, 30.0);
  for (n in 1:N) {
    mu[n] = model_apparent_mag(z[n], H0, Om0, Ode0, M0, Ok0, c, x_r, x_i);
  }
  m ~ multi_normal(mu, C);
}
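For reference, here is a plain Python/SciPy sketch of the same calculation, which can be used to sanity-check the likelihood pieces outside Stan. It mirrors the Stan functions above (including the ternary on the sign of Ok); the parameter values at the bottom are made up purely for illustration:

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458  # speed of light in km/s, as in the model block

def e_func(z, Om, Ode, Ok):
    # E(z) = sqrt((1+z)^2 * (Ok + Om*(1+z)) + Ode)
    zp1 = 1.0 + z
    return np.sqrt(zp1 * zp1 * (Ok + Om * zp1) + Ode)

def f_func(z, H0, Om, Ode, Ok):
    # (c/H0) * integral of 1/E(z') from 0 to z
    integral, _ = quad(lambda zp: 1.0 / e_func(zp, Om, Ode, Ok), 0.0, z)
    return (C_KM_S / H0) * integral

def r_z(z, H0, Om, Ode, Ok):
    # comoving distance, mirroring the Stan ternary on the sign of Ok
    fz = f_func(z, H0, Om, Ode, Ok)
    return np.sin(fz) if Ok > 0 else (np.sinh(fz) if Ok < 0 else fz)

def model_apparent_mag(z, H0, Om, Ode, Ok, M):
    dlz = (1.0 + z) * r_z(z, H0, Om, Ode, Ok)  # luminosity distance
    return 5.0 * np.log10(dlz) - 5.0 + M       # distance modulus + absolute mag

# Made-up illustrative values: flat case (Ok = 0), H0 = 70, M = -19.3, z = 0.5.
print(model_apparent_mag(0.5, 70.0, 0.3, 0.7, 0.0, -19.3))
```

Comparing this against the `mu` values Stan computes for a fixed parameter draw is a quick way to confirm the model block is doing what the math intends.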
A couple of pointers about the code that may prove useful:

I use the argument xc in my integrand in place of x when 0 < xc < x. Without this I was getting a lot of errors like
Error in function tanh_sinh<double>::integrate: The tanh_sinh quadrature evaluated your function at a singular point and got nan
With this change I still get these errors, but less often.
I set a relative tolerance of 1e-8 because otherwise I was getting errors like
integration error of 2.3e-9 is less than relative tolerance times the norm of the integral
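As a cross-check on the integral itself (outside Stan's tanh_sinh quadrature), SciPy's quad evaluates the same integrand at a tight relative tolerance without complaint, at least for parameter values away from the prior boundaries. The values here are made up for illustration:

```python
import numpy as np
from scipy.integrate import quad

def inv_E(z, Om, Ode, Ok):
    # integrand 1/E(z), as in integrand_fz above
    zp1 = 1.0 + z
    return 1.0 / np.sqrt(zp1 * zp1 * (Ok + Om * zp1) + Ode)

# Made-up flat-case values: Om = 0.3, Ode = 0.7, Ok = 0, upper limit z = 0.5.
val, err_est = quad(inv_E, 0.0, 0.5, args=(0.3, 0.7, 0.0), epsrel=1e-8)
print(val, err_est)  # the reported error estimate is far below epsrel * val
```

So the integrand itself seems well behaved in this region; the tanh_sinh warnings appear tied to how the quadrature probes the endpoints.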
Operating System: macOS Catalina 10.15.6
Interface Version: PyStan 3.4.0. This is the version I got when I installed it via pip last week.
Happy to provide any more information. Any suggestions would be appreciated. Thanks in advance!