Generalised Additive Models (GAMs) in Stan

Dear all,

I hope this message finds you well. I am a fairly new RStan user and I have a general question: is there a complete example of a simple Generalised Additive Model (GAM) written directly in Stan code, rather than in INLA or brms?

I have found it quite difficult to find such an example, which makes me wonder whether it is even possible to fit a GAM directly in Stan code. If so, could someone please provide a motivating example?

Thank you very much.


Hi, and welcome!

brms uses Stan on the back end, so any model that brms fits is most certainly expressible in Stan code. You can see the Stan code that brms is using with the function brms::stancode. The resulting code is consistently very elegant and generally pretty efficient. However, this code can also often be a bit dense and opaque, particularly because it uses variable names for parameters and data that are constructed internally by brms and aren't obviously related to the formula terms and data column names that you pass as input or receive as output. In addition to renaming variables, by default brms also rescales some variables internally and then unscales the output at the very end, which can add a layer of opacity (but, to be clear, is a very smart thing for brms to be doing). Despite these challenges, examining brms code is one of the very best ways to learn how to code up a startlingly wide array of models in Stan, and it's the approach I'd recommend.
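For example, a minimal (untested) illustration, where the simulated data set and formula are just placeholders:

```r
library(brms)

## toy data from mgcv's simulator; any data frame with a smooth term would do
dat <- mgcv::gamSim(1, n = 200, scale = 2, verbose = FALSE)

## print the Stan program brms would generate for a simple GAM, without fitting it
make_stancode(y ~ s(x2), data = dat, family = gaussian())

## for an already-fitted brmsfit object `fit`, stancode(fit) does the same
```

Reading through the printed program is a good way to see exactly how brms sets up the spline basis, its coefficients, and the associated priors.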

When dealing with splines, the other challenge is to understand conceptually how the GAM is parameterized and fit. brms treats splines as random effects, and the best accessible treatment of this that I’m aware of is here:

You might also get something out of this discourse post: Better priors (non-flat) for GAMs (brms) - #4 by ucfagls

I find this stuff pretty challenging conceptually, and I'd be happy to do my best to work through specific snippets of brms Stan code with you as a follow-up.


Hello Jacob,

Thank you very much for replying to my post, and for pointing me to brms::stancode and the other links on the subject. I never realised this could be done.

This is much appreciated - this solution is spot on!

Best wishes,
Anwar.

You don't have to represent the GAM in random-effect form; the code would likely be easier to implement if you start from the unpenalised GAM (you're just forming basis functions, and then the model is a GLM with a standard likelihood). You can then add the penalty on wiggliness to the likelihood to get a penalised log-likelihood

$$\mathcal{L}_p(\boldsymbol{\beta}) = \mathcal{L}(\boldsymbol{\beta}) - \frac{1}{2 \phi} \lambda \boldsymbol{\beta}^{\mathsf{T}}\mathbf{S}\boldsymbol{\beta}$$

where $\mathcal{L}_p$ is the penalised log-likelihood and $\mathcal{L}$ is the unpenalised log-likelihood.

Doing this requires adding the smoothing parameter $\lambda$ and the penalty itself, which is $\boldsymbol{\beta}^{\mathsf{T}}\mathbf{S}\boldsymbol{\beta}$ (for a single smooth with a single penalty/smoothing parameter), where $\mathbf{S}$ is the penalty matrix for the basis expansion you created. You can then place a prior on the smoothing parameter $\lambda$ (you want to constrain it to be non-negative) alongside the other priors on the $\boldsymbol{\beta}$. $\phi$ is the dispersion (scale) parameter of whatever distribution you are using ($\sigma^2$ for the Gaussian, 1 for the Poisson, binomial, etc.); I'm not sure how best to handle that term with Stan's built-in functions.

You can generate the basis expansion (with identifiability constraints) and penalty matrix for that expansion using mgcv::smoothCon().
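An untested sketch of what that might look like in R (the data, basis dimension, and basis type are just placeholders):

```r
library(mgcv)

## toy data; names and sizes are purely illustrative
df <- data.frame(x = runif(200))
df$y <- sin(2 * pi * df$x) + rnorm(200, sd = 0.3)

## basis + penalty, with the sum-to-zero identifiability constraint absorbed
sm <- smoothCon(s(x, k = 10, bs = "cr"), data = df, absorb.cons = TRUE)[[1]]
X  <- sm$X       # N x K matrix of basis functions evaluated at the data
S  <- sm$S[[1]]  # K x K penalty matrix for those basis functions
```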

As you can tell, I've never actually done this myself (otherwise there would be proper, tested code rather than rough sketches). But it is something I'm sure someone has done, and something I want to do for the courses I teach on GAMs so I don't have to learn JAGS to mess with the code that mgcv::jagam() produces. Starting from a Gaussian GLM and then building up the pieces should get you a long way towards something workable that demonstrates what these models are doing. It's unlikely to be the most efficient Stan program, but does that matter at this stage?
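Just to sketch the shape of the Stan program (untested; a single Gaussian smooth, with the X and S from above passed in as data, and all priors purely placeholders):

```stan
data {
  int<lower=1> N;        // number of observations
  int<lower=1> K;        // basis dimension (after constraints)
  matrix[N, K] X;        // spline basis evaluated at the data
  matrix[K, K] S;        // penalty matrix for that basis
  vector[N] y;           // response
}
parameters {
  real alpha;            // intercept
  vector[K] beta;        // basis coefficients
  real<lower=0> lambda;  // smoothing parameter
  real<lower=0> sigma;   // Gaussian scale, so phi = sigma^2
}
model {
  // wiggliness penalty from above: -(1 / (2 * phi)) * lambda * beta' S beta
  target += -0.5 * lambda / square(sigma) * quad_form(S, beta);
  // NB: to infer lambda properly you also need the normalising constant of
  // the implied Gaussian prior on beta (it depends on the rank of S), or you
  // can switch to the random-effect parameterisation mentioned earlier.
  alpha  ~ normal(0, 5);        // placeholder priors
  lambda ~ gamma(0.05, 0.005);  // placeholder; lambda is constrained positive
  sigma  ~ exponential(1);
  y ~ normal(alpha + X * beta, sigma);
}
```

The data list for this program would just be the X, S, and y produced by smoothCon() above, plus N and K.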


Hello Gavin, thank you very much for sharing this information. As for your question about efficiency: honestly, it's not a big deal at this stage. I am implementing an incredibly simple (1+1 difficulty tier) GAM in brms and studying the Stan code it produces.

Best wishes,
Anwar.