Stan 2.25 release candidate!

I am very happy to announce that the latest release candidate of CmdStan is now available on GitHub! This release cycle is dedicated to some nice “slice of life” features for Stan users and developers, which I’ll go over below. You can find the release candidate for CmdStan here.

Vectorized binary functions

First, for users, we’ve started adding vectorized binary functions to the language. This means that users can now write code such as

 matrix[17, 93] u[12];
 matrix[17, 93] z[12];
 z = pow(u, 2.0);

which provides the same results as calling

for (k in 1:12) {
  for (i in 1:17) {
    for (j in 1:93) {
      z[k, i, j] = pow(u[k, i, j], 2.0);
    }
  }
}

The official docs will not be updated from 2.24 until 2.25 is fully released. Until then, this list of the newly vectorized functions should suffice:

  • bessel_first_kind, bessel_second_kind
  • beta, lbeta
  • binary_log_loss
  • binomial_coefficient_log
  • choose
  • falling_factorial, rising_factorial, log_falling_factorial, log_rising_factorial
  • fdim, fmax, fmin, fmod
  • gamma_p, gamma_q
  • hypot
  • ldexp
  • lmgamma
  • log_diff_exp, log_inv_logit_diff
  • log_modified_bessel_first_kind, modified_bessel_first_kind, modified_bessel_second_kind
  • multiply_log
  • owens_t
  • pow
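
Like the pow example above, these functions accept mixes of scalar and container arguments of matching shape, with a scalar argument broadcast elementwise. A minimal sketch using fmax (assuming it follows the same signature pattern as pow; a, b, c, and d are placeholder variables):

vector[3] a = [1, 2, 3]';
vector[3] b = [3, 2, 1]';
vector[3] c = fmax(a, b);   // elementwise maximum of the two vectors
vector[3] d = fmax(a, 2.0); // the scalar is paired with every element of a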

Improved reliability and minor CmdStan user-facing improvements

  • C0 in gaussian_dlm_obs_lpdf and gaussian_dlm_obs_rng may now be a positive semidefinite matrix.
  • binomial_lpmf now works more reliably when the probability parameter is 0.0 or 1.0.
  • We’ve added an option to control the number of significant figures in the CmdStan output CSV as well as when working with stansummary.
  • Users can now download a specific version of stanc3, not only the most recent one.
  • We fixed a bug when building the Boost library on macOS.

User-controlled unnormalized distribution syntax for target +=

As you are probably aware

target += normal_lpdf(x | mu, sigma);

and

x ~ normal(mu, sigma);

behave differently. The functional form, and hence target +=, includes the normalizing constants (like log√2π in normal_lpdf). The sampling statement form (with ~) drops normalizing constants and anything else not needed to compute gradients in the samplers and optimizers.
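
Concretely, the full normal log density for a single observation is

normal_lpdf(x | mu, sigma) = -log(sigma) - log√2π - (x - mu)^2 / (2 * sigma^2)

while the sampling statement contributes only the terms that depend on parameters: -log√2π is always dropped, and -log(sigma) is dropped as well whenever sigma is data.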

We have now added the option of using unnormalized densities with the target += syntax as well. This is done by using the _lupdf or _lupmf suffixes. So, for example,

target += normal_lupdf(x | mu, sigma);

is now equivalent to the sampling statement above. Official documentation for this feature is still a work in progress, but in the meantime you can read more on this here.

This feature will be especially useful with reduce_sum, where sampling statements cannot be used.
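
For instance, a reduce_sum partial sum function can call the unnormalized density directly. A minimal sketch only: partial_normal_lpdf, y, grainsize, mu, and sigma are placeholder names, and the exact suffix rules are the ones described in the documentation linked above.

functions {
  // the _lupdf call inside drops the normalizing constants,
  // just like a sampling statement would
  real partial_normal_lpdf(real[] y_slice, int start, int end,
                           real mu, real sigma) {
    return normal_lupdf(y_slice | mu, sigma);
  }
}
data {
  int<lower=1> N;
  real y[N];
  int<lower=1> grainsize;
}
parameters {
  real mu;
  real<lower=0> sigma;
}
model {
  // invoking the partial sum function by its _lupdf name keeps the slice unnormalized
  target += reduce_sum(partial_normal_lupdf, y, grainsize, mu, sigma);
}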

Simplified makefile access to C++ compiler optimizations

The backend Stan Math library is in the middle of a large refactor; the details are given below. Due to some of the changes in the backend, users who use the ODE solvers in Stan may see a small performance decrease in some cases. To fix that, you can add the STAN_COMPILER_OPTIMS flag to make/local to turn on link-time optimization for Stan, which should remove the performance issue. Turning on these optimizations can actually lead to speedups in other models as well. We are still investigating where and when this is beneficial so that we can handle these optimizations automatically in future releases.

OpenCL support

Users can now use the GLM functions with OpenCL on GPUs even in cases where some arguments are parameters; we’ve rewritten them to accept either parameters or data for any of their input arguments. With the newest release of brms, which can use the CmdStan backend, it should be easier for users to access these methods.
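
For example, a logistic regression GLM whose coefficients are parameters can now be offloaded through OpenCL when CmdStan is built with STAN_OPENCL. A minimal sketch, with placeholder data and parameter names:

data {
  int<lower=0> N;
  int<lower=0> K;
  matrix[N, K] x;
  int<lower=0, upper=1> y[N];
}
parameters {
  real alpha;
  vector[K] beta;
}
model {
  // with STAN_OPENCL enabled, this GLM likelihood is evaluated via OpenCL
  // even though alpha and beta are parameters rather than data
  y ~ bernoulli_logit_glm(x, alpha, beta);
}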

Changes in the Stan backend

The Stan Math backend is undergoing a lot of changes at the moment (we’ve had 99 PRs since the last release!). There are three larger projects, led by Ben Bales, Steve Bronder, and Tadej Ciglarič. These are:

  • Better handling and use of Eigen expressions

Almost all functions in the Stan Math library were refactored to handle Eigen expressions and to use Eigen expressions internally. This will lead to better efficiency in the future, and for some functions we have already observed significant speedups.

  • More efficient matrix algebra

We have reworked some major parts of Stan so that we can be much more efficient at matrix algebra. This is still a work in progress, but you can read more about it in this thread. While this has not yet been exposed to the Stan language, it required changes to backend code that current Stan programs use as well. We made sure there was no serious performance hit for current Stan programs and that the faster code we are writing now gives the same numeric answers as our current methods.

  • Refactored reverse mode autodiff functions

Tadej figured out a wonderfully nice pattern for writing reverse mode autodiff functions, which we call reverse_pass_callback(). reverse_pass_callback() breaks reverse mode autodiff into three steps:

  1. Running the regular function
  2. Saving the data
  3. Adding a callback to a stack to calculate the adjoints in the reverse pass.

The pattern leads to some rather pretty code. It also gives a speedup of around 15% in some cases, which is nice.

We would also like to note that we have put a lot of effort into testing these backend changes. We run function-level performance tests and also check all Math functions for memory leaks with an address sanitizer. But we still need your help to make sure none of these refactors affected your Stan models. So please try your models and report back if you see any improvements or, more importantly, any performance regressions.

Please test the release candidate with your models and report back your findings. The Stan development team appreciates your time and help in making Stan more efficient while maintaining a high level of reliability.

If everything goes according to plan, the 2.25 version will be released next Thursday.

How to install?

Download the tar.gz file from the link above, extract it, and use it the way you use any CmdStan release. We also now have an online CmdStan guide available at https://mc-stan.org/docs/2_24/cmdstan-guide/

If you are using cmdstanpy, make sure you point it to the folder where you extracted the tar.gz file with

set_cmdstan_path(os.path.join('path','to','cmdstan'))

With cmdstanr you can install the release candidate using

install_cmdstan(release_url = "https://github.com/stan-dev/cmdstan/releases/download/v2.25.0-rc1/cmdstan-2.25.0-rc1.tar.gz", cores = 4)

And then select the RC with

set_cmdstan_path(file.path(Sys.getenv("HOME"), ".cmdstanr", "cmdstan-2.25.0-rc1"))

Oh neat I totally missed that this went in. Thanks @rok_cesnovar, @nhuurre, @wds15 and @seantalts and whoever else helped push that along.


Awesome, thanks everyone for all your hard work on this!


Nice! Looking forward to testing the _lupdf stuff.
Are the vectorized binary functions syntax improvements only or are they also faster (I guess not?).

Yeah, they’re (more or less) just a concise way of writing the function, rather than any optimisation.


Just to confirm that the release candidate installed and ran successfully with a bernoulli_logit_glm and regularized horseshoe prior, on

R version 3.5.1 (2018-07-02)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Ubuntu 16.04.7 LTS

Wooof, Ubuntu 18.04 and I feel like I’m outdated. Time flies.


Looks awesome! Thanks to all who contributed!

Is this exposed to the Stan language (yet)? I remember checking some time ago and found that it was not (don’t remember when I checked, though).

I mean, having those functions vectorized only makes sense if they are exposed, so I guess they all are.


Yes. The following signatures are supported:

real log_inv_logit_diff(int, int)
real log_inv_logit_diff(int, real)
real log_inv_logit_diff(real, int)
real log_inv_logit_diff(real, real)
vector log_inv_logit_diff(int, vector)
vector log_inv_logit_diff(real, vector)
vector log_inv_logit_diff(vector, int)
vector log_inv_logit_diff(vector, real)
vector log_inv_logit_diff(vector, vector)
row_vector log_inv_logit_diff(int, row_vector)
row_vector log_inv_logit_diff(real, row_vector)
row_vector log_inv_logit_diff(row_vector, int)
row_vector log_inv_logit_diff(row_vector, real)
row_vector log_inv_logit_diff(row_vector, row_vector)
matrix log_inv_logit_diff(int, matrix)
matrix log_inv_logit_diff(real, matrix)
matrix log_inv_logit_diff(matrix, int)
matrix log_inv_logit_diff(matrix, real)
matrix log_inv_logit_diff(matrix, matrix)
real[...] log_inv_logit_diff(int, int[...])
real[...] log_inv_logit_diff(int, real[...])
real[...] log_inv_logit_diff(real, int[...])
real[...] log_inv_logit_diff(real, real[...])
real[...] log_inv_logit_diff(int[...], int)
real[...] log_inv_logit_diff(int[...], real)
real[...] log_inv_logit_diff(int[...], int[...])
real[...] log_inv_logit_diff(real[...], int)
real[...] log_inv_logit_diff(real[...], real)
real[...] log_inv_logit_diff(real[...], real[...])
vector[...] log_inv_logit_diff(int, vector[...])
vector[...] log_inv_logit_diff(real, vector[...])
vector[...] log_inv_logit_diff(vector[...], int)
vector[...] log_inv_logit_diff(vector[...], real)
vector[...] log_inv_logit_diff(vector[...], vector[...])
row_vector[...] log_inv_logit_diff(int, row_vector[...])
row_vector[...] log_inv_logit_diff(real, row_vector[...])
row_vector[...] log_inv_logit_diff(row_vector[...], int)
row_vector[...] log_inv_logit_diff(row_vector[...], real)
row_vector[...] log_inv_logit_diff(row_vector[...], row_vector[...])
matrix[...] log_inv_logit_diff(int, matrix[...])
matrix[...] log_inv_logit_diff(real, matrix[...])
matrix[...] log_inv_logit_diff(matrix[...], int)
matrix[...] log_inv_logit_diff(matrix[...], real)
matrix[...] log_inv_logit_diff(matrix[...], matrix[...])

Where [...] means an array of any dimension.


Thank you @rok_cesnovar! Your answer came quicker than my realization that I should have looked it up myself. Thanks! :)

If you are ever in doubt about which signatures are supported and need a fast check: https://rok-cesnovar.github.io/stanc3js-demo/signatures.html


In which repo should I make an issue for the following bug in the *_lupdf syntax?
This works

  z ~ std_normal();

and this works

  target += std_normal_lpdf(z);

but this doesn’t work

  target += std_normal_lupdf(z);

but gives an error

Semantic error in '/tmp/Rtmp3VVrhb/model-61026b91fcfe.stan', line 36, column 12 to column 31:
   -------------------------------------------------
    34:  model {
    35:    // half-t priors for lambdas and tau, and inverse-gamma for c^2
    36:    target += std_normal_lupdf(z);
                     ^
    37:    target += student_t_lupdf(lambda | nu_local, 0, 1);
    38:    target += student_t_lupdf(tau | nu_global, 0, scale_global*2);
   -------------------------------------------------

Probabilty functions with suffixes _lpdf, _lupdf, _lpmf, _lupmf, _lcdf and _lccdf, require a vertical bar (|) between the first two arguments.

Agh, this again. It’s a stanc3 bug.
This works

  target += std_normal_lupdf(z|);

Added an issue: Error in *_lupdf syntax when there is just one argument · Issue #720 · stan-dev/stanc3 · GitHub


We had to special-case the parsing of this, and this is what happens with special cases :) Thanks Aki! This should fix it: https://github.com/stan-dev/stanc3/pull/722


I am guessing this does not require a 2.25.0-rc2? It’s a minor fix that does not affect other code.


I think there might be a CmdStan makefile issue w/r/t pre-compiled headers - I just bumped into https://github.com/stan-dev/cmdstan/issues/932 - still investigating.

This one is weird. The precompiled headers logic for Macs hasn’t been touched for a few versions. We added g++ precompiled headers in 2.24. The clang ones have been there since 2.21, I think.

Can’t recreate; will update the issue accordingly.