Thanks, will check it. While implementing Royston & Parmar, I realised that the Bayesian formulation might run into problems with their choice of natural cubic splines; in particular, have a look at the following:

Here \ell is the factor of the likelihood contributed by a specific observation. The problem I see is that for an uncensored observation \ell can be negative, due to the ds/dx factor coming from the derivative of s. I presume this is no problem for an MLE treatment, since optimisation doesn't care whether \ell is positive or negative, but when building our likelihood in Stan with `target +=` increments we have to take the log of \ell.
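To make this concrete, here is a rough sketch of how those `target +=` increments might look for the PH variant; all names (`s_val`, `dsdx`, `log_t`, `is_uncens`) are made up for illustration, not from the actual model:

```stan
model {
  for (n in 1:N) {
    // every observation contributes log S(t_n) = -exp(s(x_n; gamma))
    target += -exp(s_val[n]);
    if (is_uncens[n] == 1)
      // uncensored observations additionally contribute
      // log h(t_n) = log(ds/dx) + s - log t;
      // log(dsdx[n]) is undefined whenever dsdx[n] <= 0
      target += log(dsdx[n]) + s_val[n] - log_t[n];
  }
}
```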

I'm not sure how to guarantee that \ell will be positive: either one uses very tight priors on the \gamma parameters, or one should think about monotone splines, for which the derivative of s is non-negative by construction.

Actually, Royston & Parmar state:

The estimate of g[S_0(t)] must theoretically be monotone in t, whereas natural cubic splines, which are constrained to be linear beyond certain extreme observations, are not globally monotone. However, the linearity constraint imposes monotonicity in the tail regions where the observed data are sparse, whereas in regions where data are dense (and provided the sample size is not too small), monotonicity is effectively imposed by the data themselves.

Note that with x=\log t

g[S_0(t)] = s(x;\gamma)
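Spelling that out under the PH link g(S) = \log(-\log S) (my reconstruction, since the equation referred to above isn't reproduced here):

\[
H_0(t) = e^{s(x;\gamma)}, \qquad
h_0(t) = \frac{1}{t}\,\frac{ds}{dx}\,e^{s(x;\gamma)}, \qquad
\ell = f_0(t) = h_0(t)\,S_0(t) = \frac{1}{t}\,\frac{ds}{dx}\,\exp\!\big(s(x;\gamma) - e^{s(x;\gamma)}\big),
\]

so \ell < 0 exactly when ds/dx < 0.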

but then they go on:

The use of monotone splines, as in Shen [14], makes the computational problem much more difficult and awkward, a cost we do not feel is in general justified.

I was wondering whether defining the vector of derivatives ds/dx (over all uncensored data points) as positive-constrained in the transformed parameters block would solve this problem here (and not add so much complexity to the posterior that it complicates inference using HMC)?
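For what it's worth, the constraint could be written down like this (a sketch; `D_uncens` as the derivative-basis matrix evaluated at the uncensored log-times is my invention):

```stan
transformed parameters {
  // a lower bound on a *transformed* parameter is only validated,
  // not reparameterised: any draw with dsdx[n] <= 0 is rejected
  vector<lower=0>[N_uncens] dsdx = D_uncens * gamma;
}
```

My understanding is that Stan only checks constraints on transformed parameters rather than mapping them to an unconstrained space, so this would effectively truncate the posterior, and HMC could see rejections/divergences near the boundary instead of a smooth geometry.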