Question on spline versus Gaussian process

I understand that a smoothing spline is a special case of Gaussian process regression, and in practice it is often used as a finite-basis approximation when the problem is too big to fit a more flexible Gaussian process. What I wonder is whether there is a rule-of-thumb way to compare the computational cost of a smoothing spline and a full-rank Gaussian process regression in Stan. That is, would a full-rank squared-exponential-kernel GP be O(X) times more expensive than fitting a B-spline with K knots and sample size n, where X = X(n, K)?


Hi @yuling, this is a good summary of the similarities between GPs and other models. However, I don't recall whether they discuss computational costs…


We discuss the computational cost of splines, basis-function GPs, and covariance-matrix GPs in Practical Hilbert space approximate Bayesian Gaussian processes for probabilistic programming, arXiv preprint arXiv:2004.11408. In Stan there is a big difference between the theoretical O() cost and the practical cost, as autodiff can have a large effect on the constant factors. For example, the current Stan is really slow with the covariance-matrix GP representation; that will get faster at some point, but then the basis-function splines and GPs will likely get faster, too.