Thank you for your explanation, once again, this time on makefiles!
Compiling the program with -fsanitize=undefined and -fsanitize=address
reveals the following output:
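(In case it helps anyone reproducing this: here is a sketch of how the sanitizer flags can be wired into a CmdStan-style build via make/local — the exact variable names below are my assumption, so double-check them against your setup.)

```shell
# Sketch: enable sanitizers for a CmdStan-style build (assumes make reads make/local).
# Both the compile and link steps need the flags.
echo "CXXFLAGS += -fsanitize=address -fsanitize=undefined" >> make/local
echo "LDFLAGS += -fsanitize=address -fsanitize=undefined" >> make/local
make tmp/model   # rebuild the model binary with sanitizers enabled
```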
method = sample (Default)
  sample
    num_samples = 1000 (Default)
    num_warmup = 1000 (Default)
    save_warmup = 0 (Default)
    thin = 1 (Default)
    adapt
      engaged = 1 (Default)
      gamma = 0.050000000000000003 (Default)
      delta = 0.80000000000000004 (Default)
      kappa = 0.75 (Default)
      t0 = 10 (Default)
      init_buffer = 75 (Default)
      term_buffer = 50 (Default)
      window = 25 (Default)
    algorithm = hmc (Default)
      hmc
        engine = nuts (Default)
          nuts
            max_depth = 10 (Default)
        metric = diag_e (Default)
        metric_file = (Default)
        stepsize = 1 (Default)
        stepsize_jitter = 0 (Default)
    num_chains = 1 (Default)
id = 1 (Default)
data
  file = tmp/data.json
init = 2 (Default)
random
  seed = 2126629878 (Default)
output
  file = output.csv (Default)
  diagnostic_file = (Default)
  refresh = 100 (Default)
  sig_figs = -1 (Default)
  profile_file = profile.csv (Default)
num_threads = 1 (Default)
Rejecting initial value:
Error evaluating the log probability at the initial value.
Exception: Error in function boost::math::lambert_w0<double>: Expected z >= -e^-1 (-0.367879...) but got -1.5587280086931916. (in 'tmp/model.stan', line 64, column 2 to column 80)
Rejecting initial value:
Error evaluating the log probability at the initial value.
Exception: Error in function boost::math::lambert_w0<double>: Expected z >= -e^-1 (-0.367879...) but got -1.2707744865573449. (in 'tmp/model.stan', line 64, column 2 to column 80)
Rejecting initial value:
Error evaluating the log probability at the initial value.
Exception: Error in function boost::math::lambert_w0<double>: Expected z >= -e^-1 (-0.367879...) but got -1.0979054103223893. (in 'tmp/model.stan', line 64, column 2 to column 80)
Rejecting initial value:
Error evaluating the log probability at the initial value.
Exception: Error in function boost::math::lambert_w0<double>: Expected z >= -e^-1 (-0.367879...) but got -2.6601930641890781. (in 'tmp/model.stan', line 64, co
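As a side note on the exception itself: the real principal branch of the Lambert W function is only defined for z >= -1/e ≈ -0.3679, so arguments like -1.5587 are simply outside its domain. A pure-Python sketch (my own Newton iteration, not Boost's implementation) illustrates the same check:

```python
import math

def lambert_w0(z, tol=1e-12):
    """Principal branch W0 of the Lambert W function via Newton iteration.

    Real-valued only for z >= -1/e, which is exactly the domain check that
    boost::math::lambert_w0<double> enforces.
    """
    branch_point = -math.exp(-1)  # ≈ -0.367879441
    if z < branch_point:
        raise ValueError(
            f"Expected z >= -e^-1 ({branch_point:.6f}...) but got {z}")
    w = 0.0 if z < math.e else math.log(z)  # rough starting guess
    for _ in range(100):
        ew = math.exp(w)
        w_new = w - (w * ew - z) / (ew * (w + 1.0))
        if abs(w_new - w) < tol:
            return w_new
        w = w_new
    return w

# A valid argument converges to w satisfying w * e^w == z ...
w = lambert_w0(0.5)
print(abs(w * math.exp(w) - 0.5) < 1e-9)  # True

# ... while the first value from the log above is below the branch point
# and gets rejected, mirroring the Boost error message.
try:
    lambert_w0(-1.5587280086931916)
except ValueError as e:
    print(e)
```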
However, as I was preparing a bug report in the Stan math GitHub repository, I thought about writing an MWE, just for the sake of making the problem easy for others to understand.
To my surprise, the MWE compiles and runs properly!
Here’s the MWE in Stan:
functions {
  vector ode(real t, vector y, array[] real theta) {
    real Omega_m = theta[1];
    real Omega_r = theta[2];
    real lambda = theta[3];
    real u = y[1];
    real f1 = exp(-lambda*u^2) / (lambda*u*(u^(-2) - 2*lambda) - u^(-3));
    real f2 = 1.5*Omega_m*(1+t)^2 + 2*Omega_r*(1+t)^3;
    vector[1] uderiv;
    uderiv[1] = f1*f2;
    return uderiv;
  }
}
data {
  array[10] real t;
  array[10] real uobs;
  array[10] real error;
}
transformed data {
}
parameters {
  real Omega_m;
  real Omega_r;
  real lambda;
}
transformed parameters {
  // parameter array
  array[3] real theta = {Omega_m, Omega_r, lambda};
  // initial conditions
  real t0 = 0;
  vector[1] init;
  init[1] = 1;
  // obtain u(t) using ODE solver
  array[10] real u;
  array[10] vector[1] sol = ode_rk45(ode, init, t0, t, theta);
  for (i in 1:10) {
    u[i] = sol[i][1];
  }
}
model {
  // priors
  Omega_m ~ normal(0.3, 10);
  Omega_r ~ normal(0, 10);
  lambda ~ normal(1, 10);
  // likelihood
  u ~ normal(uobs, error);
}
generated quantities {
}
The changes I made were: defining a single new parameter \Omega_m in place of the sum of two other parameters, \Omega_b + \Omega_c, and getting rid of everything that wasn’t related to the ODE, which meant dropping the parameter h and the integral I was computing before.
And here’s how it’s being run:
# imports
from random import gauss, uniform
import numpy as np
#import arviz as az
#import pandas
import stan

# model file
with open("tmp/mwe.stan", "r") as file:
    program = file.read()

# data file
data = {"t": np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]),
        "uobs": np.array([0.58827315, 0.34228862, 0.22566855, 0.16229165, 0.1237196, 0.09827532, 0.08047538, 0.06745748, 0.05760086, 0.04992747]),
        "error": np.array([0.0001, 0.0001, 0.0001, 0.0001, 0.0001, 0.0001, 0.0001, 0.0001, 0.0001, 0.0001])}

# configure number of chains/samples/warmup
chains = 4
samples = 1000
warmup = 500

# initial conditions for each chain
init = []
for i in range(0, chains):
    init.append({})
    init[i]["Omega_m"] = float(0.3)
    init[i]["Omega_r"] = float(0.0001)
    init[i]["lambda"] = float(0.4)

# run!
posterior = stan.build(program, data=data)
fit = posterior.sample(num_chains=chains, num_samples=samples, num_warmup=warmup, init=init, save_warmup=True)

# do stuff after the program runs
from IPython import embed
embed()
I created the dummy dataset by solving the differential equation with scipy.integrate.odeint,
using the parameter values \Omega_m = 0.3, \Omega_r = 0.001 and \lambda = 0.4.
I should mention that I’m able to recover the parameter values I used when creating the mock dataset, which means the ODE itself should be well coded.
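For reference, the generation of the mock dataset can be sketched like this (my reconstruction, assuming the right-hand side is transcribed faithfully from the Stan function above; your numbers may differ slightly):

```python
import numpy as np
from scipy.integrate import odeint

def ode(u, t, Omega_m, Omega_r, lam):
    # same right-hand side as the Stan 'ode' function
    f1 = np.exp(-lam * u**2) / (lam * u * (u**-2 - 2 * lam) - u**-3)
    f2 = 1.5 * Omega_m * (1 + t)**2 + 2 * Omega_r * (1 + t)**3
    return f1 * f2

# include t0 = 0 so odeint starts from the initial condition u(0) = 1
t = np.arange(0, 11)
u = odeint(ode, y0=1.0, t=t, args=(0.3, 0.001, 0.4)).flatten()

uobs = u[1:]   # values at t = 1..10, to be passed to Stan as data
print(uobs)
```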
So, it seems that this quest isn’t over… something in my model is, somehow, causing a null-pointer dereference…
Could it be that the integral was forcing the ODE solver into a region it didn’t like, and that switching to a different ODE solver lets it step over that point and somehow stay safe?
This doesn’t feel right, because if it were something like that, then most likely an infinity would show up, and Stan would reject that sample instead of crashing with a segmentation fault due to a null-pointer dereference.
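To make that last point concrete: in IEEE-754 floating point (which both Stan’s C++ and numpy use), a vanishing denominator produces inf or nan, which Stan flags as a non-finite log probability and rejects, rather than touching invalid memory:

```python
import numpy as np

# A vanishing denominator gives inf/nan under IEEE-754 arithmetic,
# not a crash; errstate only silences the runtime warnings.
with np.errstate(divide="ignore", invalid="ignore"):
    bad = np.float64(1.0) / np.float64(0.0)     # inf
    worse = np.float64(0.0) / np.float64(0.0)   # nan

print(np.isinf(bad), np.isnan(worse))
```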