# Restricted cubic splines in Stan?

#1

A year ago on the Stan Google group, a user named Marise asked for a check of her implementation of restricted cubic splines, a method popularized (I think) by Frank Harrell. She apparently hadn't coded it well and was told to learn the basics before trying to develop her own implementation, so the post didn't help me much.

Has this been implemented in Stan since then? My group is looking to port some of our R code, which relies on the R implementation of RCS, to Stan if possible.

#2

There is no function in the Stan Math library that “does” a restricted cubic spline.
You can use pretty much any kind of spline supported by the mgcv package using stan_gamm4 in rstanarm or brm in brms. Or there is a case study about splines at
http://mc-stan.org/users/documentation/case-studies/splines_in_stan.html

#3

A while ago I implemented Natural Cubic Splines (and their derivative) in Stan, as defined in Royston & Parmar.

I haven't tested it very thoroughly, but it could be a starting point. Any feedback is highly welcome. Below is a snippet that should be enough to get started:

```stan
functions {
  /**
   * Return the natural cubic spline basis function at a specific knot,
   * evaluated at a set of x-values
   *
   * @param xs An array of reals: x-values at which to evaluate the basis function
   * @param l A real denoting lambda_j for the j'th basis function in Royston & Parmar
   * @param k A real denoting k_j, i.e. the position of the j'th knot
   * @param kmax A real denoting the position of the right boundary knot
   * @param kmin A real denoting the position of the left boundary knot
   * @return A vector of the values of the spline basis function evaluated at xs
   */
  vector basis_function_v(real[] xs, real l, real k, real kmax, real kmin) {
    vector[size(xs)] vs;
    for (i in 1:size(xs))
      vs[i] = pow(max({0, xs[i] - k}), 3) - l * pow(max({0, xs[i] - kmin}), 3)
              - (1 - l) * pow(max({0, xs[i] - kmax}), 3);
    return vs;
  }

  /**
   * Return the derivative of the natural cubic spline basis function at a
   * specific knot, evaluated at a set of x-values
   *
   * @param xs An array of reals: x-values at which to evaluate the basis function
   * @param l A real denoting lambda_j for the j'th basis function in Royston & Parmar
   * @param k A real denoting k_j, i.e. the position of the j'th knot
   * @param kmax A real denoting the position of the right boundary knot
   * @param kmin A real denoting the position of the left boundary knot
   * @return A vector of the values of the derivative of the spline basis
   *         function evaluated at xs
   */
  vector deriv_basis_function_v(real[] xs, real l, real k, real kmax, real kmin) {
    vector[size(xs)] vs;
    for (i in 1:size(xs))
      vs[i] = 3 * pow(max({0, xs[i] - k}), 2) - 3 * l * pow(max({0, xs[i] - kmin}), 2)
              - 3 * (1 - l) * pow(max({0, xs[i] - kmax}), 2);
    return vs;
  }
}
data {
  int<lower=1> N;                       // number of data points
  int<lower=0> m;                       // number of internal knots
  ordered[m + 2] knots;                 // locations of the knots
  int<lower=1> NC;                      // number of covariates
  matrix[N, NC] X;                      // design matrix
  int<lower=0, upper=1> is_censored[N]; // delta in the paper
  vector<lower=0>[N] times;             // t in the paper
}
transformed data {
  vector[N] log_times;                  // x in the paper
  vector[m] lambdas;
  matrix[m + 1, N] basis_evals;
  matrix[m, N] deriv_basis_evals;

  log_times = log(times);
  for (i in 1:m)
    lambdas[i] = knots[m + 2] - knots[1 + i];
  lambdas /= (knots[m + 2] - knots[1]);

  basis_evals[1, ] = log_times';        // relative to coefficient gamma_1
  for (j in 1:m) {
    basis_evals[j + 1, ] = basis_function_v(to_array_1d(log_times),
                                            lambdas[j], knots[j + 1],
                                            knots[m + 2], knots[1])';
    deriv_basis_evals[j, ] = deriv_basis_function_v(to_array_1d(log_times),
                                                    lambdas[j], knots[j + 1],
                                                    knots[m + 2], knots[1])';
  }
}
```
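As a sanity check on the basis definition above, here is a minimal numpy translation (Python used purely for illustration; the function name and knot values are mine) verifying the defining property of this basis: with lambda_j = (k_max - k_j) / (k_max - k_min), each basis function is exactly linear beyond the boundary knots.

```python
import numpy as np

def ncs_basis(xs, lam, k, kmax, kmin):
    """Royston & Parmar natural cubic spline basis function, as in the
    Stan snippet above: (x-k)+^3 - lam*(x-kmin)+^3 - (1-lam)*(x-kmax)+^3."""
    xs = np.asarray(xs, dtype=float)
    return (np.maximum(0, xs - k) ** 3
            - lam * np.maximum(0, xs - kmin) ** 3
            - (1 - lam) * np.maximum(0, xs - kmax) ** 3)

# illustrative knots: boundary knots at 0 and 2, one internal knot at 1
kmin, k, kmax = 0.0, 1.0, 2.0
lam = (kmax - k) / (kmax - kmin)          # 0.5, matching lambdas[] in the Stan code

# beyond the right boundary knot the basis must be linear, so on an
# equispaced grid the second differences vanish
x = np.array([3.0, 4.0, 5.0])
v = ncs_basis(x, lam, k, kmax, kmin)
assert np.allclose(np.diff(v, 2), 0.0)
```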


#4

I’d like to learn what the best medium is for adding new easy-to-use spline-fitting capabilities, i.e., should this become part of brms, rstanarm, or base Stan?

I’ve liked the restricted cubic spline function because it’s easy to implement and you obtain a simple prediction equation at the end. With the growing use of Bayes, and with more thinking about entire posterior distributions rather than just point estimates, I wonder whether the value of having a simply stated point estimate is still as high.

#5

What precisely do you mean?

#6

@harrelfe A case could be made for including spline-related functions in the Stan Math C++ library, so that they could be used by all interfaces to Stan. This is somewhat involved, but there are plenty of people who are capable of implementing it. The main reason I think no one has done anything on this front is that there are so many types of splines that it is not obvious which ones should be implemented and with what priority.

What brms and rstanarm do is to use functions in the mgcv R package to create design matrices that correspond to various types of splines. Then we pass those design matrices to the Stan programs and it proceeds just like a GLM that has a non-spline design matrix. This is easier (for us) because it pushes the responsibility for the implementation onto mgcv, but its maintainer Simon Wood has been doing this for decades and has two editions of a textbook about it.

So, I guess for me the question is: What (if anything) do you find to be lacking about how mgcv does splines? If there is something, then maybe it would be easiest to add it to mgcv. Or is there some function in another R package that creates a design matrix for a type of spline that brms or rstanarm should have an option to call?

#7

Seconding what Ben said but also: to get adoption outside of the R universe I’m supportive of getting (the right) splines implemented in the Stan Math C++ library. Some arguments from your end about which splines to implement (whether they’re in mgcv or not) would be helpful.

#8

I think that implementing a variety of splines deep within Stan is a good idea, for efficiency and precision.

I have nothing against using mgcv as long as I can fit a natural spline (restricted cubic spline). How does one obtain the design matrix for such a spline function from mgcv without using the package’s gam function? This is as far as I’ve gotten, but I’m not seeing how to get the natural spline: I’m getting 5 columns in the design matrix when I should have 4.

```r
require(mgcv)
set.seed(1)
n <- 30
x <- exp(rnorm(n))
kn <- quantile(x, c(.05, .2, .5, .8, .95))

z <- s(x, bs = 'cr', fx = TRUE, k = 5, m = 2)
X <- smooth.construct(z, data = data.frame(x), knots = list(x = kn))
X$X
```
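For comparison, the 4-column basis in question can be written down directly from the truncated power representation: x plus k - 2 nonlinear terms, each constrained to be linear beyond the last knot. A minimal numpy sketch (the function name is mine, and the (t_k - t_1)^2 scaling is a convenient choice here, not necessarily Hmisc's exact default normalization):

```python
import numpy as np

def rcs_basis(x, knots):
    """Truncated power basis for a linear-tail-restricted cubic spline:
    columns are x plus k-2 restricted cubic terms (k = number of knots)."""
    x = np.asarray(x, dtype=float)
    t = np.asarray(knots, dtype=float)
    k = len(t)
    norm = (t[-1] - t[0]) ** 2            # scale factor to tame magnitudes
    cols = [x]
    for j in range(k - 2):
        term = (np.maximum(0, x - t[j]) ** 3
                - np.maximum(0, x - t[-2]) ** 3 * (t[-1] - t[j]) / (t[-1] - t[-2])
                + np.maximum(0, x - t[-1]) ** 3 * (t[-2] - t[j]) / (t[-1] - t[-2]))
        cols.append(term / norm)
    return np.column_stack(cols)

knots = np.array([0.05, 0.2, 0.5, 0.8, 0.95])
X = rcs_basis(np.linspace(0, 1, 30), knots)
assert X.shape == (30, 4)                 # 5 knots -> 4 columns, as expected

# linear tails: second differences vanish on an equispaced grid past the last knot
g = rcs_basis(np.array([1.0, 1.1, 1.2]), knots)
assert np.allclose(np.diff(g, 2, axis=0), 0.0)
```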

If someone can answer a background question, it would shed some light on the path forward: the easy-to-use truncated power basis for the linear-tail-restricted cubic spline function has terms in the design matrix that can be highly collinear. Does this create any convergence problems in Stan? If so, I’d be quicker to abandon this basis and would spend time writing back ends that take Stan results and rephrase the regression model using ordinary cubic spline notation, including linearly redundant terms as needed to bring about the linear tail constraints.

#9

Whether or not it creates problems in a simple model, it does make sampling harder for the algorithm, so if other parts of the model introduce further problems, the overall model might fail where a less collinear spline implementation would work fine. Model problems like collinearity add up in a non-linear fashion, so I don’t know whether you’d be able to tell ahead of time in general.

#10

Could something like the QR decomposition strategy for design matrices with collinearity problems also work in this case?

#11

I think that can definitely help. The user just has to transform back and forth, and prior specification can be tricky as mentioned in the first link you provided.
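The back-and-forth transform is cheap: if X = QR, then fitting on the orthonormal Q and mapping coefficients back to the X scale is just a triangular solve against R. A small numpy sketch with made-up, deliberately collinear truncated-power columns:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.normal(size=100))
# three fairly collinear columns of the kind a truncated power basis produces
X = np.column_stack([x,
                     np.maximum(0, x - 0.1) ** 3,
                     np.maximum(0, x - 0.2) ** 3])

Q, R = np.linalg.qr(X)                 # thin QR: Q has orthonormal columns
theta = rng.normal(size=3)             # stand-in for coefficients fit on Q
beta = np.linalg.solve(R, theta)       # map back to the original X scale

# same fitted values either way, and Q is perfectly conditioned
assert np.allclose(Q @ theta, X @ beta)
assert np.allclose(Q.T @ Q, np.eye(3))
```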

#12

For natural cubic splines (a.k.a. restricted cubic splines) I used the ns function from the core R splines package (e.g., for a recent Royston & Parmar implementation):

https://www.rdocumentation.org/packages/splines/versions/3.5.0/topics/ns

Would that work?

#13

That works, although I think it is significantly slower than my truncated basis function rcs in the R rms package. But the earlier discussion expressed a desire to rely on the mgcv package, not the splines package.

#14

Either one is fine, particularly since they are both among CRAN’s Recommended packages.

#15

mgcv mentions in the help file that it has a cyclic natural spline, so I’m wondering why we can’t find a natural non-cyclic (regular) spline.

For large n (e.g., 200,000) ns takes about 4 times longer than Hmisc::rcspline.eval (ns takes about 0.46s).

When n=1,000,000 ns takes only twice as long as the simpler truncated power basis, so I don’t think computing time is as much of an issue unless computations have to be repeated hundreds of times.

I would like to find out how to construct the matrix multiplication that converts between the natural B-spline basis and the truncated power basis.
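Since the two bases span the same function space, they are related by an invertible change-of-basis matrix, which can be recovered numerically: evaluate both bases at enough distinct points and solve a least-squares problem. An illustrative sketch with two polynomial bases standing in for the spline bases (the same recipe applies to an ns design matrix versus a truncated power one):

```python
import numpy as np

x = np.linspace(0, 1, 50)
# two bases of the same 4-dimensional space (cubic polynomials):
# the power basis and a shifted power basis
B1 = np.column_stack([np.ones_like(x), x, x**2, x**3])
B2 = np.column_stack([np.ones_like(x), x - 0.5, (x - 0.5)**2, (x - 0.5)**3])

# change-of-basis matrix M with B2 = B1 @ M, recovered by least squares
M, *_ = np.linalg.lstsq(B1, B2, rcond=None)
assert np.allclose(B1 @ M, B2)
```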

#16

On the topic of QR decomposition & splines, see here: Spline fitting demo inc comparison of sparse vs non-sparse

#17

If we want to use any old basis function such as the convenient truncated power basis, we can easily convert back and forth to an orthonormal basis. Don’t we want SVD instead of QR?
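Either factorization yields orthonormal columns spanning the column space of the design matrix, so in that sense both work; QR's triangular R factor makes mapping coefficients back a cheap triangular solve, while SVD additionally orders directions by singular value. A quick numpy check on made-up data:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))

U, s, Vt = np.linalg.svd(X, full_matrices=False)  # thin SVD
Q, R = np.linalg.qr(X)                            # thin QR

# both U and Q have orthonormal columns spanning col(X)
assert np.allclose(U.T @ U, np.eye(4))
assert np.allclose(Q.T @ Q, np.eye(4))
```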