Hi Sebastian,
Thanks for your message. I have tried the stiff solver and it doesn't seem to
make much difference (unfortunately I am having trouble benchmarking it,
because expose_stan_functions doesn't seem to work in this case).
Loosening the error tolerances does make a difference to runtime: I see a
factor-of-three decrease when using 10^-3 compared to the default (10^-6 for
both the absolute and relative tolerances). However, with looser tolerances I
do see a visible difference in the solution; there are the wavy lines that are
often indicative of numerical instability. Perhaps around 10^-5 is best for my
case, which still gives around half of that speed-up.
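For concreteness, this is the kind of call I mean — a sketch using the array-based integrate_ode_bdf interface, with placeholder names (my_ode, y0, theta, x_r, x_i) standing in for my actual system:

```stan
// Sketch: passing explicit tolerances to the stiff (BDF) solver.
// The last three arguments are rel_tol, abs_tol, max_num_steps.
y_hat = integrate_ode_bdf(my_ode, y0, t0, ts, theta, x_r, x_i,
                          1e-5,   // relative tolerance
                          1e-5,   // absolute tolerance
                          1e6);   // maximum number of steps
```

The same three control arguments can be appended to integrate_ode_rk45 for the non-stiff solver.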
I had missed the vectorisation of the likelihood, thank you. However, I don't
think that is really the bottleneck here.
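(For anyone following along: by vectorising the likelihood I mean replacing the observation loop with a single sampling statement — a sketch, assuming a normal observation model on the first state:)

```stan
// Instead of looping over the N observations:
//   for (n in 1:N)
//     y[n] ~ normal(y_hat[n, 1], sigma);
// use one vectorised sampling statement over the whole array:
y ~ normal(y_hat[, 1], sigma);
```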
If there are any other ideas here, let me know!
Best,
Ben