Using stan_jm for parametric proportional hazards regression only

I haven’t seen an rstanarm or brms implementation of parametric proportional hazards regression, and so I was wondering whether the one implemented by @sambrilleman in stan_jm could be used completely separately from the longitudinal sub-models (I know that misses the point of the function)?

I read that you can choose to use no association structure, thus making the event model independent of the longitudinal sub-models. Would you still have to call stan_jm with at least one longitudinal model, or is there a way to omit it entirely?

  • If the latter holds, wouldn’t this potentially give us an rstanarm implementation of parametric proportional hazards regression, similar to coxph, maybe even leading to stan_coxph?

  • If the former holds, would it make sense to call stan_jm with a dummy longitudinal model and data?

How about using a semi-parametric piece-wise constant proportional hazards model, which you can cast into the form of a Poisson regression?

That’s tedious to code up from the data side (messing with time-offsets), but other than that it works nicely.
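To make the data-fiddling concrete, here is a minimal sketch of that approach, assuming a hypothetical data frame `df` with columns `time`, `status` (1 = event, 0 = censored) and a single covariate `x`:

```r
library(survival)

## Split each subject's follow-up at every observed event time, so that each
## resulting row is one interval containing at most one event.
cuts  <- sort(unique(df$time[df$status == 1]))
split <- survSplit(Surv(time, status) ~ ., data = df,
                   cut = cuts, episode = "interval")
split$exposure <- split$time - split$tstart  # interval length

## Piecewise-constant baseline hazard: one rate per interval, plus a
## log-exposure offset; `status` now counts events (0 or 1) per interval.
fit <- glm(status ~ factor(interval) + x + offset(log(exposure)),
           family = poisson(), data = split)
```

Swapping `glm` for `rstanarm::stan_glm` with the same formula and family should give the Bayesian version.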


Following up on the previous comment: there is a discussion about proportional hazards models via Poisson regression in brms at https://github.com/paul-buerkner/brms/issues/230.


This is an awesome idea! Just to make sure I understand correctly (and for future readers), let me try to formulate the idea in my own words (and mathematical notation):

  • W.l.o.g. set the origin of time to 0.
  • Suppose individual i is observed until time t_i, where the event indicator (either 0 or 1) is recorded.
  • Suppose there are n_i unique event times before t_i and denote them by t_1 < t_2 < \dots <t_{n_i}.
  • For individual i we construct n_i+1 observations of the form (t_1, 0), (t_2 - t_1, 0), (t_3 - t_2, 0), …, (t_{n_i} - t_{n_i - 1}, 0) and (t_i - t_{n_i}, 1) (with a 0 in the last tuple instead if individual i was censored). The first value in each tuple is the length of the interval and the second is the number of events from individual i in the corresponding interval.
  • We model these induced (or transformed) observations for individual i as n_{i} + 1 independent Poisson counts with rates \lambda(t) = \lambda_0(t) e^{x_i' \beta} and an additional offset accounting for the different interval lengths (aka “exposure times” in Poisson regression language). Here t is any reference point in the corresponding interval (e.g. the start or the end of the interval), and the functional form of the baseline hazard rate \lambda_0 can be modelled through a smoothing spline; a code sketch follows below. [Here x_i is a vector containing covariates for individual i and \beta a latent parameter column vector.]
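If I have this right, the whole construction can then be written down rather compactly; a sketch in brms, reusing the hypothetical `split` data frame from the earlier post, with the interval start `tstart` as the reference point t:

```r
library(brms)

## Poisson likelihood with a log-exposure offset; the smooth term s(tstart)
## plays the role of the log baseline hazard \lambda_0(t).
fit <- brm(status ~ s(tstart) + x + offset(log(exposure)),
           family = poisson(), data = split)
```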

If my understanding is correct, one could actually pool induced observations (from different individuals) for any given induced interval, using the following nice property of Poisson random variables:

If X_i \sim \operatorname{Pois}(\lambda_i), i=1,\dotsc,n, are statistically independent and \lambda=\sum_{i=1}^n \lambda_i, then Y = \sum_{i=1}^n X_i \sim \operatorname{Pois}(\lambda).

For each interval under consideration, if there are k induced observations, you could sum the covariate vectors \sum x_i, consider just this one covariate vector for the interval, and replace the count of events by the number of the k induced observations that actually had an event.

I think you got it all right… and sure, you can pool, within each time interval, all the subjects that share the same covariate set; the accumulated events go on the left-hand side, and the offset on the right-hand side must correspond to the total (summed) exposure time.
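In code, that pooling might look like this (a sketch, again assuming the hypothetical `split` data frame and single covariate `x` from above):

```r
library(dplyr)

## Pool rows that share the interval and the covariate set: by the Poisson
## additivity property above, event counts and exposure times simply add up.
pooled <- split %>%
  group_by(interval, x) %>%
  summarise(events   = sum(status),
            exposure = sum(exposure),
            .groups  = "drop")

fit <- glm(events ~ factor(interval) + x + offset(log(exposure)),
           family = poisson(), data = pooled)
```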

I thought this was fairly standard… anyway, a cool thing about this approach is that you can use spline regression for the covariates if you think the impact of a regressor on the outcome needs transformation. So this can be done using GAMMs… however, having played with these, I think they should be used with caution.

Overall, this is very powerful, but the data-fiddling is tedious.

Might be, but I wasn’t aware of it. Thanks anyways.

I think I overlooked something here: this works if the Poisson regression is used with the identity link, but it doesn’t work with the log link, unless you find a way to write

\log{\left[e^{x_1' \beta} + \cdots + e^{x_n'\beta}\right]} = f(x_1,\cdots,x_n) \beta

for any choice of \beta, n and x_i, where f is some (row-)vector-valued function.

@ermeel Yeah, specifying assoc = NULL in stan_jm means that the likelihoods for the longitudinal and survival submodels are effectively independent, so in theory you should get estimates for the survival submodel that are equivalent to fitting the survival model alone. But you must provide at least one longitudinal measurement per individual, so you would have to supply dummy data, and because the joint posterior of all parameters is being sampled, a lot of unnecessary work is happening (maybe even influencing the quality of the estimates?). In short, it would be an undesirable way to fit a standard survival model!
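For the record, the workaround would look roughly like this (a sketch with hypothetical data and variable names; not recommended, for the reasons above):

```r
library(rstanarm)

## Dummy longitudinal data: one noise measurement per individual at baseline.
dataLong <- data.frame(id       = surv_data$id,
                       obs_time = 0,
                       dummy    = rnorm(nrow(surv_data)))

fit <- stan_jm(formulaLong  = dummy ~ 1 + (1 | id),
               dataLong     = dataLong,
               formulaEvent = survival::Surv(time, status) ~ x,
               dataEvent    = surv_data,
               time_var     = "obs_time",
               assoc        = NULL)  # no association structure
```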

Re the Poisson time-splitting: yeah, you can get a model equivalent to the Cox model by splitting at all event times, as you suggested. Here is another reference on this approach. It’s been around a long time, but the splitting means you have lots of rows of data, so the computations weren’t really feasible until more recently, I guess. But I find it a pretty clunky approach, having to split/expand your data like that.

Royston–Parmar flexible parametric models are much better, I think. They model the log cumulative baseline hazard as a smooth function using cubic splines, and they don’t require splitting your data. I’ve wanted to implement them (and standard parametric survival models) either in rstanarm or a new package, but haven’t got started yet. I am tending towards a package separate from rstanarm, because there is scope to include competing risks, multi-state models, etc., and incorporating all of it in rstanarm starts to seem infeasible.
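For reference, the proportional-hazards version of the Royston–Parmar model writes the log cumulative hazard as a (natural cubic) spline s in log time, with the hazard then available in closed form by differentiation:

\log H(t \mid x_i) = s(\log t;\, \gamma) + x_i' \beta, \qquad h(t \mid x_i) = \frac{s'(\log t;\, \gamma)}{t}\, H(t \mid x_i),

so no splitting of the data is needed.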


Either way is fine by me. If it doesn’t fit nicely with your code already in place for jm and mvmer, then a separate package could certainly be more feasible. But if you end up wanting to add them to rstanarm, that’s fine too.

Hmm… you are right. I recall that this approach is somehow “between the lines” in the textbooks, but never explicitly discussed (at least in my textbooks on the matter).

Interesting approach. However, while it is common practice to model hazards, these are conditional on post-baseline risk sets. Thus, their causal interpretation can be challenged.

A package like this is really missing from the Stan universe! If you have the means for it, then go for it… the user base is probably not huge, but they would be very thankful for this.

I’d be interested in this and would be happy to collaborate/contribute.

What precisely do you mean by post-baseline risk sets? Sorry in advance if this is again an effectively well-known concept/nomenclature that I’m not aware of.

The hazard is defined by conditioning on the risk set at some time point t past baseline. We all know that messing with post-baseline quantities is usually not a good idea. It can be OK, but it does rely on assumptions, and I would argue that most of us keep forgetting this fact in time-to-event (TTE) analyses. It is a bit like using post-baseline time-varying covariates in your analysis, which makes your analyses rest on assumptions (assumptions are fine as long as you know them and know their limitations, etc.).

OK. So you are actually referring to something like the partial likelihood in Eq. (9.38), chapter 9.4, page 145 of Computer Age Statistical Inference, which is based on the lemma above Eq. (9.37)? See also The Efficiency of Cox’s Likelihood Function for Censored Data.

You are right. Sorry for any confusion (the parametric approaches above should not be affected by this… but they are usually heavily criticized for being parametric).

Yes! I’ve been lamenting the lack of user-friendly survival models in Stan. There are a lot of potential models to add (parametric AFT and PH models, flexible-baseline-hazard PH models, competing risks, semi-competing risks/multi-state models, etc.), and there is enough machinery shared among them that shoving it all into rstanarm seems infeasible. I wish there were a better way to borrow the awesome recentering and other tricks from rstanarm without adding a new model there.

@ermeel, if you’re willing to consider non-Stan options and want an answer today, the BayesSurv_HReg function in the SemiCompRisks package implements a Bayesian Cox-like proportional hazards model with a piecewise exponential baseline hazard. (Gory details on the flexibility: it’s a time-homogeneous Poisson process, meaning it assumes the number of intervals for chunking the baseline hazard time scale is Poisson-distributed. The values of the log baseline hazard on those intervals are then MVN-ICAR, which smooths everything a bit by making intervals closer together in time have more similar baseline hazard values than intervals farther apart in time. I do not think Stan could fit this model because of the discrete parameter for the number of intervals, but splines for the cumulative hazard could give similar answers.)

@wds15, are you referring to the “hazards of hazard ratios”? I see no inherent problem with using hazard models, even for causal contrasts. The problem is interpreting hazard ratios causally, because they have selection bias baked in through that troublesome conditioning-on-survival in the definition. Even so, you can still use a hazard model to estimate other (causal) quantities.

Funny way of putting it! So yes, this selection bias is what I am concerned about. It does not defeat a causal interpretation if you estimate things right.

A Stan survival package would be awesome, but a major effort!

This has been on my to-do list for a while but, realistically, I probably won’t have time to work on it anytime soon. brms can actually already fit some survival models, but I was hoping to have precompiled versions at some point. The functions in the survival package can be used to parse formulas and prepare the data, and these models can be precompiled, so all of that fits well with the rstanarm paradigm. Someone just needs to add it to rstanarm or copy the rstanarm strategy in a separate package. If anyone is interested in working on that, we can definitely provide guidance along the way.
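For instance, a right-censored parametric Weibull model is already possible along these lines (a sketch with hypothetical variable names):

```r
library(brms)

## cens() expects 1 for right-censored observations and 0 for observed events.
df$censored <- 1 - df$status
fit <- brm(time | cens(censored) ~ x, family = weibull(), data = df)
```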

Until then there are a few options that don’t do exactly this but are related:

Actually, I would be interested in working on it (as a side project). It would be a nice way for me to give something back to the Stan community and to learn more about survival models. And I’d really appreciate any guidance!

Just to add that survHE is also available as a development version from https://github.com/giabaio/survHE. The version on CRAN is OK for the purpose of solely estimating the survival parameters. There are other functions (specific to health economic evaluation) that have been updated slightly in the development version, though.

In fact, we’re working on expanding survHE, so comments and/or suggestions are more than welcome!
