 # Generalized Ziggurat Algorithm for Unimodal and Unbounded Probability Density Functions with Zest

#1

This might be useful for adding more univariate RNGs, possibly efficient truncated-distribution RNGs, and a way to generate random numbers given just a user-defined `_lpdf`.

https://arxiv.org/abs/1810.04744

Generalized Ziggurat Algorithm for Unimodal and Unbounded Probability Density Functions with Zest
by Jalalvand and Charsooghi

> We present a modified Ziggurat algorithm that could generate a random number from all unimodal and unbounded PDFs. For PDFs that have unbounded density and/or unbounded support we use a combination of nonlinear mapping function and rejection sampling to generate a random number from the peak and/or the tail distribution. A family of mapping functions and their corresponding acceptance probability functions are presented (along with the criteria for their use and their efficiency) that could be used to generate random numbers from infinite tails and unbounded densities.
>
> The Zest library, which is a C++ implementation of this algorithm, is also presented. Zest can efficiently generate normal, exponential, Cauchy, gamma, Weibull, log-normal, chi-squared, Student's t and Fisher's f variates. The user can also define their custom PDF as a class and supply it as a template argument to our library's class without modifying any part of the library. Performance of Zest is compared against performance of random modules of (GCC's implementation of) the Standard Template Library (STL) and Boost. The presented results show that Zest is faster than both in most cases, sometimes by a factor of more than 10.
>
> We also present a C++ implementation of a uniform floating-point random number generator (RNG) which is capable of producing all representable floating-point numbers in [0, 1), which will be used in the Ziggurat algorithm near unbounded peaks and tails. The common method of dividing a random integer by the range of the RNG cannot produce random floating-point numbers with fully random fraction bits and very small random numbers. The presented uniform floating-point RNG is very efficient and in the case of producing double-precision floating-point numbers it is even faster than simply multiplying a 64-bit integer by 2^-64.
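For a flavor of the tail handling the abstract describes, here is the classic Marsaglia method for sampling the standard-normal tail x > a: an exponential proposal thinned by an accept/reject step. This is a standard textbook technique shown purely for illustration, not the paper's specific family of mapping functions.

```cpp
#include <cassert>
#include <cmath>
#include <random>

// Illustration only: Marsaglia's method for sampling the standard-normal
// tail x > a (a > 0). An exponential proposal with rate a is thinned by
// rejection -- the same "proposal + acceptance probability" flavor the
// paper uses for unbounded tails, though not its actual mapping functions.
double normal_tail_sample(double a, std::mt19937_64& rng) {
  std::uniform_real_distribution<double> unif(0.0, 1.0);
  for (;;) {
    double u1 = 1.0 - unif(rng);        // in (0, 1], avoids log(0)
    double u2 = 1.0 - unif(rng);
    double x = -std::log(u1) / a;       // exponential(a) proposal
    double y = -std::log(u2);           // exponential(1) for the test
    if (2.0 * y > x * x) return a + x;  // accept with prob exp(-x^2 / 2)
  }
}
```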

#2

It is GPL, so it would have to go in as an rstan add-on. Also, from the paper:

> Finally it should be mentioned that although the Ziggurat algorithm is very fast, it has long setup times. This is not a problem for Cauchy, normal and exponential distributions, as every distribution of these kinds can be generated by shifting and scaling the corresponding distribution with standard parameters. But for applications requiring log-normal, gamma, Weibull, Student's t or Fisher's f variates with frequently changing shape parameters, the Ziggurat algorithm is not a suitable choice.

We could possibly implement the uniform PRNG from the text of Section 4.
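A sketch of that idea (the general technique, not Zest's actual Section 4 code): choose the result's binade geometrically from the random bit stream, then fill all 52 fraction bits with fresh random bits, so even very small results keep fully random fractions. Function name here is hypothetical.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <random>

// Sketch (not Zest's implementation): draw a double uniformly on [0, 1)
// with fully random fraction bits. Each leading zero bit in the random
// stream halves the value (geometric choice of binade), and the 52
// fraction bits are then drawn independently.
double uniform_double_full_precision(std::mt19937_64& rng) {
  int zeros = 0;                    // leading zero bits seen so far
  std::uint64_t bits = rng();
  while (bits == 0) {               // a whole word of zero bits
    zeros += 64;
    if (zeros > 1074) return 0.0;   // below the subnormal range
    bits = rng();
  }
  while (!(bits >> 63)) {           // count zeros in the nonzero word
    bits <<= 1;
    ++zeros;
  }
  // The result lies in the binade [2^-(zeros+1), 2^-zeros), which is
  // chosen with probability 2^-(zeros+1) -- exactly its width, as a
  // uniform draw on [0, 1) requires.
  std::uint64_t frac = rng() >> 12;  // 52 fresh fraction bits
  return std::ldexp(1.0 + frac * 0x1p-52, -(zeros + 1));
}
```

Dividing one 64-bit integer by 2^64 would instead tie the fraction bits to the magnitude: a tiny result would have mostly zero fraction bits.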

#3

How efficient would it be to define inverse CDFs via our algebraic solvers? Then we could do all the unimodal ones pretty easily.

Even if we can’t use the code due to GPL, we could use the algorithms if they are useful.

#4

Not efficient enough. However, the Boost root-finders would work nicely in the cases where the Boost distributions do not already have a quantile method (which we are not currently exposing).

#5

We’d expose the inverse CDFs if we could figure out how to compute the derivatives.

#6

With respect to the distribution’s parameters? That is the same logic as in the algebraic solvers, but the Boost solvers are more appropriate because they:

1. Are one-dimensional
2. Guard against the solution leaving the appropriate interval
3. Can use second or third derivatives with respect to the unknown when finding the root
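A concrete sketch of computing a quantile by root-finding, using the exponential distribution so the answer can be checked against the closed form. Plain Newton is used here to keep the example self-contained; Boost's bracketed `newton_raphson_iterate` / `halley_iterate` (in `<boost/math/tools/roots.hpp>`) add the interval guards and higher-order derivatives from points 2 and 3.

```cpp
#include <cassert>
#include <cmath>

// Solve cdf(x; lambda) - u = 0 for x, where cdf(x) = 1 - exp(-lambda x).
// The derivative of the objective is just the PDF, so Newton's method
// applies directly. A plain loop for illustration, not Boost's API.
double exponential_quantile(double u, double lambda) {
  double x = 1.0 / lambda;                           // crude initial guess
  for (int i = 0; i < 100; ++i) {
    double f = 1.0 - std::exp(-lambda * x) - u;      // cdf(x) - u
    double fprime = lambda * std::exp(-lambda * x);  // pdf(x)
    double step = f / fprime;
    x -= step;
    if (x < 0.0) x = 0.0;                            // stay in the support
    if (std::fabs(step) < 1e-14 * (1.0 + std::fabs(x))) break;
  }
  return x;
}
```

For the exponential this matches the closed-form quantile -log(1 - u) / lambda; for a distribution without a closed form, the same loop needs only CDF and PDF evaluations.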

#7

Yes, w.r.t. all the arguments. If we can do that, let’s add them. Could you open a math issue with a hint as to how to proceed?

#8

Not sure if this got answered elsewhere, but derivatives of quantile functions are easy if you can compute derivatives of the CDF. Quick derivation follows:

If inverse_cdf(cdf(x; θ); θ) = x, then by total differentiation
∂inverse_cdf(cdf(x; θ); θ)/∂θ + ∂inverse_cdf(cdf(x; θ); θ)/∂u * ∂cdf(x; θ)/∂θ = 0
∂inverse_cdf(cdf(x; θ); θ)/∂θ + pdf(x; θ)^{-1} * ∂cdf(x; θ)/∂θ = 0
∂inverse_cdf(cdf(x; θ); θ)/∂θ = -pdf(x; θ)^{-1} * ∂cdf(x; θ)/∂θ

This uses the identity ∂inverse_cdf(u; θ)/∂u = (∂cdf(x; θ)/∂x)^{-1} = pdf(x; θ)^{-1}.
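The identity is easy to sanity-check numerically with a distribution where everything has a closed form. For the exponential distribution with rate l: cdf(x; l) = 1 - exp(-l x), pdf(x; l) = l exp(-l x), inverse_cdf(u; l) = -log(1 - u) / l, so the derivation above predicts d inverse_cdf / d l = log(1 - u) / l^2.

```cpp
#include <cassert>
#include <cmath>

// Closed-form derivative: d/dl [ -log(1 - u) / l ] = log(1 - u) / l^2.
double quantile_deriv_closed_form(double u, double l) {
  return std::log(1.0 - u) / (l * l);
}

// Same derivative via the identity derived above:
//   d inverse_cdf / d l = -(d cdf / d l) / pdf,
// both evaluated at x = inverse_cdf(u; l).
double quantile_deriv_via_identity(double u, double l) {
  double x = -std::log(1.0 - u) / l;      // inverse_cdf(u; l)
  double dcdf_dl = x * std::exp(-l * x);  // d cdf / d lambda at x
  double pdf = l * std::exp(-l * x);      // pdf at x
  return -dcdf_dl / pdf;
}
```

The two agree, so exposing quantile derivatives only requires derivatives of the CDF (which we already have) and the PDF at the quantile.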