Speed of evaluating (gradients of) log probabilities in pystan 2.x vs 3

Hi,

OK, I had some time to test this.

For this application you should probably tap into httpstan directly.

In the examples, feel free to use different values for the input; I just used the first sampled draw as an example. (Note that param_constrained needs the parameters in the correct shape: an n-dimensional parameter has to be passed as an n-dimensional object.)
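
For reference, the snippets below assume something like the standard eight schools example; any model will do, but schools_code and schools_data need to be defined before the build call. A minimal sketch (centered parameterization, kept to a plain parameters block so the param_constrained dict below maps one-to-one onto the model parameters):

import stan

schools_code = """
data {
  int<lower=0> J;           // number of schools
  vector[J] y;              // estimated treatment effects
  vector<lower=0>[J] sigma; // standard errors of the estimates
}
parameters {
  real mu;
  real<lower=0> tau;
  vector[J] theta;
}
model {
  mu ~ normal(0, 5);
  tau ~ cauchy(0, 5);
  theta ~ normal(mu, tau);
  y ~ normal(theta, sigma);
}
"""

schools_data = {
    "J": 8,
    "y": [28, 8, -3, 7, -1, 1, 18, 12],
    "sigma": [15, 10, 16, 11, 9, 11, 4, 6],
}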

posterior = stan.build(schools_code, data=schools_data)
fit = posterior.sample(num_chains=4, num_samples=1000)

param_constrained = {key: fit[key][:, 0].tolist() for key in fit.param_names}
param_unconstrained = posterior.unconstrain_pars(param_constrained)
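
With the eight schools sketch above, the result looks roughly like this (structure only, actual values depend on the run):

param_constrained
# {'mu': [...], 'tau': [...], 'theta': [...]}  -> one value each for mu and tau, J values for theta

len(param_unconstrained)
# 10 -> mu, log(tau), and theta[1]..theta[8] on the unconstrained scale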

pystan solution (calling log_prob from pystan)

%timeit -n 1000 posterior.log_prob(param_unconstrained)
# 2.71 ms ± 38.8 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

%timeit -n 1000 posterior.grad_log_prob(param_unconstrained)
# 2.7 ms ± 46.6 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

httpstan solution

import httpstan
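# import the compiled services extension module that httpstan built for this model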
module = httpstan.models.import_services_extension_module(posterior.model_name)

%timeit -n 1000 module.log_prob(posterior.data, param_unconstrained, True)
# 38.5 µs ± 5.64 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

%timeit -n 1000 module.log_prob_grad(posterior.data, param_unconstrained, True)
# 43.3 µs ± 7.75 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
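
So calling the compiled services extension module directly is roughly 60-70x faster per call than going through the pystan wrappers.

If you want to hand these functions to an external optimizer or sampler, a thin wrapper keeps the call sites tidy. This is only a sketch: make_logp_fns is a hypothetical helper, and it assumes (as the calls above suggest) that log_prob returns a float and log_prob_grad a sequence of partial derivatives.

import numpy as np

def make_logp_fns(posterior, module, adjust_transform=True):
    """Return closures that evaluate the log density and its gradient
    on the unconstrained scale via the services extension module."""
    data = posterior.data

    def logp(theta_unc):
        # theta_unc: unconstrained parameter values, e.g. param_unconstrained
        return module.log_prob(data, list(theta_unc), adjust_transform)

    def grad_logp(theta_unc):
        return np.asarray(module.log_prob_grad(data, list(theta_unc), adjust_transform))

    return logp, grad_logp

logp, grad_logp = make_logp_fns(posterior, module)
logp(param_unconstrained), grad_logp(param_unconstrained)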