I implemented generalized linear models on the GPU. Some discussion on GPU GLMs and graphs with speedups are in issue 1184. Then the idea came up that the GPU implementation could use float calculations instead of double to be even faster. As a prototype I implemented poisson_log_glm_lpmf with floats. Here are the speedups compared to the implementation with doubles (K is the number of attributes and N is the number of instances):

An important thing to consider is also the numerical error due to reduced precision. So I compared both the float and double implementations with the CPU implementation. The next graph shows the maximum relative errors among the log posterior and all derivatives. I generated three test cases for each size. y is generated between 0 and 100, all other inputs between -1 and 1.
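For readers who want to reproduce this kind of comparison, here is a minimal NumPy sketch (not the Stan Math / OpenCL code) of evaluating a Poisson-log-GLM log density in single vs. double precision and measuring the relative error. The function name `poisson_log_glm_lp` and the sizes are made up for the example; inputs follow the setup above (y in [0, 100], other inputs in [-1, 1]).

```python
import numpy as np

def poisson_log_glm_lp(y, X, alpha, beta, dtype):
    """Poisson-log GLM log density (up to constants), evaluated in `dtype`."""
    y, X = y.astype(dtype), X.astype(dtype)
    alpha, beta = dtype(alpha), beta.astype(dtype)
    eta = X @ beta + alpha            # linear predictor (log intensity)
    return float(np.sum(y * eta - np.exp(eta)))

rng = np.random.default_rng(0)
N, K = 1000, 10                       # instances, attributes (example sizes)
X = rng.uniform(-1, 1, size=(N, K))
beta = rng.uniform(-1, 1, size=K)
alpha = rng.uniform(-1, 1)
y = rng.integers(0, 101, size=N).astype(np.float64)   # y between 0 and 100

lp64 = poisson_log_glm_lp(y, X, alpha, beta, np.float64)
lp32 = poisson_log_glm_lp(y, X, alpha, beta, np.float32)
rel_err = abs(lp32 - lp64) / abs(lp64)
print(rel_err)                        # small, roughly in the 1e-7 to 1e-4 range
```

The same pattern extends to the derivatives by comparing analytic gradients computed in each precision.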

@syclik, @seantalts, @Bob_Carpenter what's your take on this? This has broader implications, because it can be done for all GLMs and possibly other types of models.

Does anyone have an idea how these errors would affect HMC and if they would translate into errors in parameter estimates?

What would be the criteria for adding this to Stan: if it passes the tests, that's it? Or do we need more extensive experiments?

We definitely need more tests. Based on what you're reporting, running on CPU vs. GPU with floats would produce different results conditioned on the same seed. So far, we haven't dealt with this sort of behavior. (If you set the seed, it's reproducible on the same machine with the same compiler, etc. This still falls within that definition, but for any given machine / compiler combination we'd assume it was ok.)

I'm not sure yet what would be enough testing. (I want to set some expectations… I can't prespecify what's enough here; as we learn more, we can be more precise.)

What I'd like to see is an argument / justification that explains the trade-off between performance and correctness in this case, how it'll affect users, and how we (the developers) will address misconceptions about floating point errors — essentially something well thought through, plus one proposal for how we would pull off this change in expected behavior. If it's well justified and well thought through, I can support it (if the results are right… if they're wrong, that's a hard pass from me).

Does that make sense? Those are just suggestions… I'm just trying to think through what would make the decision easier. Right now, I've been told that it's faster and there's error. That's not quite enough.

The GPU will in general not reproduce results from the CPU with the same seed even with doubles, no? So that's not an argument (just) against floats.

I don't think there is a way to comprehensively test everything that could go wrong. And tests would have to include how HMC interacts with these errors.

What if using floats is just an (unsafe) option on the GPU?

Definitely true. OpenCL uses a different compiler and math library (made by the device vendor). There are also some small differences in the code, such as the order of summation when combining values calculated by different threads.
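To see why summation order alone changes the result, here is an illustrative NumPy sketch (not the OpenCL kernel code) comparing a strictly sequential float32 sum with a partial-sums reduction like the one a GPU would do across threads; the choice of 8 "threads" is arbitrary for the example:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(100_000).astype(np.float32)

# One order: a single left-to-right accumulation.
one_pass = np.float32(0)
for v in x:
    one_pass += v

# Another order: per-"thread" partial sums combined at the end,
# mimicking a parallel reduction on the GPU.
partials = [chunk.sum(dtype=np.float32) for chunk in np.array_split(x, 8)]
tree = np.float32(0)
for p in partials:
    tree += p

ref = x.astype(np.float64).sum()    # double-precision reference
print(one_pass - tree)              # generally nonzero in float32
print(abs(float(one_pass) - ref), abs(float(tree) - ref))
```

Both orders are "correct" to within rounding, but they round differently, so bitwise reproducibility across CPU and GPU is lost even before switching to floats.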

@rok_cesnovar or @tadej can probably give more background on why, but yes, you'll get different results. From Tadej's plots above you can already see that doubles also have an error (relative to the CPU).

I've been running a lot of end-to-end tests recently and I can confirm that you get numerically different parameter estimates on different hardware (even different GPUs give different results). But the differences are not beyond what you would expect from using a different seed (that is, it's within MCMC error).

You also get different parameter estimates on the same CPU with the same seed if you use a GLM primitive vs. doing your own linear term. For example, bernoulli_logit_lpmf with the linear term done yourself vs. bernoulli_logit_glm_lpmf(…).

A bunch of other people are relevant when the discussion is numerics for HMC, but I'd also like to hear from @anon75146577, @yizhang, and @betanalpha.

We definitely need the double-precision for Cholesky decomposition, but we might be able to get away without it for matrix multiplies.

If models were just Poisson GLMs, I think it'd be fine, as 10^{-5} is good enough for HMC when the geometry's simple, as it would be in a GLM prior with independent predictors. In our ODE solvers, we have found issues when precision gets down around 10^{-4} or 10^{-5} (is that right, @charlesm93 or @yizhang?). But for simple sampling from normals, I've found even 10^{-2} accuracy is sufficient (in that the error on expectation estimates is indistinguishable from that at 10^{-14} precision).

But I'm not sure what increasing the relative error from 10^{-14} to 10^{-5} is going to do in general when the GLM is just one component of a model. I would imagine that unless something interacts directly with it, like a hierarchical prior, it shouldn't matter.

This is the easiest possible test case, with independent, unit-scale, centered predictors. It'd be interesting to see what happens with more structured prediction matrices. One way to do that is to generate different scales, different centerings, and different degrees of correlation among the parameters.
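As a starting point, one way such a structured design matrix could be generated is sketched below in NumPy (all choices here — the covariance construction, the scale range of 10^{-2} to 10^{2}, the centering range — are hypothetical, not a prescribed test protocol):

```python
import numpy as np

rng = np.random.default_rng(2)
N, K = 1000, 10

# Correlated columns via a random Cholesky factor.
A = rng.standard_normal((K, K))
cov = A @ A.T + K * np.eye(K)         # well-conditioned covariance
L = np.linalg.cholesky(cov)
Z = rng.standard_normal((N, K))
X = Z @ L.T                           # correlated, zero-centered predictors

# Then vary column scales and centerings.
scales = 10.0 ** rng.uniform(-2, 2, size=K)   # spans 1e-2 .. 1e2
centers = rng.uniform(-10, 10, size=K)
X = X * scales + centers

corr = np.corrcoef(X, rowvar=False)
off = np.abs(corr - np.diag(np.diag(corr)))   # off-diagonal correlations
print(X.shape, off.max())
```

Badly scaled and strongly correlated columns are exactly where float32 accumulation in the matrix-vector products is most likely to show amplified error relative to double.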

Another test would be to add hierarchical structure to the priors and see if that induces any bad numerical interactions. That's the situation we normally use these in.

Are there other contexts in which a matrix multiply is just one operation in a sequence of operations that might degrade if the matrix multiply is too imprecise? I think that's the real question we need to answer.

In my experience I've seen problems arise for errors as low as 10^{-8} in complex models, so I think the general issue is that there will be no universal threshold unless the error in the calculation farmed off to the GPU can be uniformly bounded across all inputs.

The characteristic sign of numerical problems in the gradients in HMC is the acceptance probability proxy statistic dropping despite changes to the step size (equivalently, for the Stan adaptation, changes to adapt_delta). See https://arxiv.org/abs/1502.01510 for more details.

The challenge with general numerical accuracy questions is that the log target density will be an approximation in addition to the gradient, so the acceptance probability proxy statistic might itself not be accurate enough to capture drifts in the numerical integration.

That's super helpful. I must not have read that paper on subsampling carefully enough!

With really easy models like high-dimensional standard Gaussians, 10^{-2} is sufficient (at least as evaluated by the distribution of square errors across runs). But the real question is how this scales to harder problems.

Making easy problems fast isn't my highest priority for Stan; I'd like to focus on making hard problems tractable.

Do you know whether it's possible to use float with specific summation (and similar) algorithms in which the floating point error does not propagate, or is there too much overhead compared to just using double?
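One well-known example of such an algorithm is compensated (Kahan) summation, which keeps the accumulated rounding error in a separate correction term. A hedged NumPy sketch of the idea (a serial illustration, not a GPU kernel; on a GPU the extra float operations and the serial dependency are the overhead in question):

```python
import numpy as np

def kahan_sum(values):
    """Compensated (Kahan) summation in float32: the running
    compensation c recaptures the low-order bits lost at each add,
    so error stays near one ulp of the result instead of growing
    with the number of terms."""
    total = np.float32(0)
    c = np.float32(0)                 # running compensation
    for v in values:
        y = v - c
        t = total + y
        c = (t - total) - y           # the part of y that was lost
        total = t
    return total

rng = np.random.default_rng(3)
x = rng.standard_normal(100_000).astype(np.float32)
ref = x.astype(np.float64).sum()      # double-precision reference

naive = np.float32(0)                 # plain sequential float32 sum
for v in x:
    naive += v

print(abs(float(naive) - ref), abs(float(kahan_sum(x)) - ref))
```

Kahan summation roughly doubles the floating point work per element, so whether it beats simply using double on a given GPU (where float throughput is often much higher than double throughput) is an empirical question.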

The amount of code would be doubled. Also, I don't think we can somehow automatically select which implementation to use for a particular model, so that falls on the user.

Double versions are already implemented (but not yet merged into Stan Math). Some more programming effort is needed to implement float versions of the remaining 3 models. Maintenance effort doubles.