Manual absolute predictions from Cox model in brms?

Dear all,

I’m interested in replicating a frequentist analysis using Bayesian methods (first using essentially non-informative priors, then using informative priors, and comparing the two). I am planning to use the brms package, but am open to other options if necessary.

In brief, the original analysis uses a Cox proportional hazards model (with no violations of the proportional hazards assumption). The model is adjusted for covariates, but uses no time transformations or other advanced features.
While I am aware of the possibility of using parametric survival models, this is not what I am interested in here. I am aware that the Cox PH model estimates relative differences (using partial maximum likelihood and ignoring the baseline hazard); however, the frequentist survival R package includes the capability of predicting absolute risks for new patients at specific time points. I’ve looked at its code, but have not been able to disentangle the relevant parts to do this manually with, e.g., brms, which supports the Cox model but does not include functionality to generate posterior predictions.
I’ve searched the forums and found no previous answer that matches this question, so I hope that somebody on here can help.

Here’s a simple reproducible example showing how it’s done using the survival package (and how the model is fitted using brms):

library(survival)

# Simple randomly generated dataset - in the real use-case, adjustments would be needed, but no time-varying effects, etc.
set.seed(12345)
dt <- data.frame(trt = 1:200 > 100,
                 time = sample(1:90, size = 200, replace = TRUE),
                 event = c(1:100 < 35, 1:100 < 25))

fit <- coxph(Surv(time, event) ~ trt, data = dt)
summary(fit)
# Predictions for new patients at the maximum follow-up time - works
predict(fit, type = "survival", newdata = data.frame(trt = c(F, T), time = c(90, 90), event = c(T, T)))


# Using brms
library(brms)
bfit <- brm(time | cens(1-event) ~ trt, data = dt, family = cox(), cores = 4, backend = "cmdstanr")
summary(bfit)
# Model works, but predictions are not supported, i.e., the following leads to an error:
#posterior_epred(bfit, newdata = data.frame(trt = c(F, T), time = c(90, 90), event = c(T, T)))

If anybody has figured out a way to similarly generate predictions from the Bayesian model, it’d be greatly appreciated, as would any suggestions for how this could possibly be approached.

So far, my best solution has been to fit the model using both brms and survival::coxph with identical data/formulas, and then, for each posterior draw, overwrite the coefficients of the frequentist model and use that for predictions. However, in examples similar to the one above, all patients in the reference group get essentially identical values, as the intercept from the brms model cannot be used to overwrite any parameter in the frequentist model. So this doesn’t solve the issue of obtaining full posteriors for each group, and if I use an informative prior (e.g., a neutral but relatively narrow one for the treatment effect), it will only pull the non-reference group towards the reference group, instead of pulling the two groups closer to each other, as would normally happen.
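In code, the idea is roughly this (a sketch only, reusing the fit, bfit and dt objects from the example above; note that the baseline hazard remains entirely frequentist):

# Sketch of the coefficient-overwriting workaround described above (not a real solution)
draws <- as_draws_df(bfit)$b_trtTRUE # Posterior draws of the log hazard ratio
nd <- data.frame(trt = c(FALSE, TRUE), time = c(90, 90), event = c(TRUE, TRUE))
preds <- sapply(draws, function(b) {
  f <- fit
  f$coefficients[] <- b # Overwrite the frequentist coefficient with one draw
  predict(f, type = "survival", newdata = nd)
})
# preds is a 2 x n_draws matrix: row 1 = control group, row 2 = treatment group
apply(preds, 1, quantile, probs = c(0.025, 0.5, 0.975))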

  • Operating System: Windows 10 x64
  • brms Version: 2.17.0
  • R version: 4.2.1
  • cmdstan version: 2.29.2

Thanks in advance and best regards,
Anders

Bumping this in case anybody sees it and has a suggestion.

I’m not a survival analysis specialist, but I remember from my MSc courses (looong ago) that there is a nonparametric estimator for the baseline hazard function. A quick Google search landed me on Stack Exchange, where the so-called Nelson-Aalen estimator is mentioned. Maybe you can consider using that?
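For instance (a minimal sketch using the dt from your example; ctype = 1 requests the Nelson-Aalen form of the cumulative hazard):

library(survival)
# Nelson-Aalen cumulative hazard for the pooled sample, ignoring treatment
sf <- survfit(Surv(time, event) ~ 1, data = dt, ctype = 1)
sf$cumhaz[sf$time == 90] # Cumulative hazard at the last follow-up day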

Thanks for the suggestion. I’m no survival expert either, and unfortunately I cannot work out a way to use the Nelson-Aalen estimator that allows a prior on, e.g., an experimental treatment effect to affect the estimates in both groups.

Could you clarify what you intend when you say you want to include a prior - a prior on what, other than the model parameters? The Nelson-Aalen estimator provides a means to calculate the baseline hazard, given the data and (a posterior draw of) the model parameters. Repeat this draw by draw and you get a posterior distribution for the Nelson-Aalen estimator, with which you can do as you please. Maybe I’m missing something…

I primarily want to include informative priors on the treatment effect (it is a two-group RCT); for all other parameters, vague, neutral priors would be used. If I could get the simple example above (no additional adjustments) to work, I would be able to extend it. Unfortunately, I seem unable to: my attempts to understand the manual calculation of the Nelson-Aalen estimator have so far been unsuccessful. Any help with that (if possible, based on the above example) would be greatly appreciated.
Best regards,
Anders

Sorry, it’s still not clear to me. These priors you mention, do they pertain to the model you fit to the observed data? If so, just fit that model to your data first. Or are you first fitting a model to some other dataset and do you then want to simulate some sort of RCT? Or is the problem making sense of the math notation? Either way, maybe it’s good to involve a statistician at your institute who can sit with you, unless someone else here has some example code.

After a quick Google search I found this blog (top hit on a Google search for “Stan cox regression prediction”), which seems to take a parametric approach to the baseline hazard - not what you were looking for. However, the author acknowledged @avehtari , so maybe he can chip in?


Thanks and sorry if previous posts were unclear. I’ll try to summarise and explain again:

I want to replicate a Cox proportional hazards analysis, originally conducted using conventional frequentist methods, using Bayesian methods. For simplicity, assume that the situation corresponds to a randomised trial with two groups, that I only want to model the difference between treatment groups with no additional variables in the model, and that the PH assumption holds. I want to repeat the analysis using various priors, from very vague to neutral (i.e., centred on a hazard ratio of 1) and somewhat more sceptical of large treatment effects.
This, in itself, is easy using brms. However, I want to use the model to not only estimate relative differences (hazard ratios), but also absolute differences, i.e. estimated survival (or mortality) rates at a certain time point in both treatment groups.

Using frequentist methods and the survival R package, this is relatively easy:

library(survival)

# Simple randomly generated dataset
set.seed(12345)
dt <- data.frame(trt = 1:200 > 100,
                 time = sample(1:90, size = 200, replace = TRUE),
                 event = c(1:100 < 35, 1:100 < 25))
# Fit model
fit <- coxph(Surv(time, event) ~ trt, data = dt)
summary(fit)
# Predictions for new patients at the maximum follow-up time - works
predict(fit, type = "survival", newdata = data.frame(trt = c(F, T), time = c(90, 90), event = c(T, T)))

This predicts 0.2441168 and 0.3743333.

If I want to replicate this using brms, I can do it using the following code:

library(brms)
bfit <- brm(time | cens(1-event) ~ trt, data = dt, family = cox(), cores = 4, backend = "cmdstanr")
summary(bfit)

In the above example, no priors are set, but different priors would be used in the actual use case. Using different priors obviously leads to different HRs (i.e., estimates of the treatment effect), so this part works - but brms provides no posterior_epred method for the Cox family.

I can obtain the baseline hazards from the conventionally fitted Cox model using:

basehaz(fit)

Or, alternatively, the Nelson-Aalen estimator based on the full dataset (i.e., both groups):

basehaz(coxph(Surv(time, event) ~ 1, data = dt))

No problems so far. I can then combine the baseline hazard from the conventionally fitted Cox model with the posterior draws to get similar predictions from the Bayesian model:

bh <- basehaz(coxph(Surv(time, event) ~ trt, data = dt))
# Predictions for patient in the control group at day 90 (reduces to exp(- baseline hazard))
exp(-bh$hazard[bh$time == 90] * exp(FALSE * as_draws_df(bfit)$b_trtTRUE)) |> 
  quantile(probs = c(0.025, 0.5, 0.975))
#      2.5%       50%     97.5% 
# 0.2441168 0.2441168 0.2441168

# Predictions for patient in the treatment group at day 90
exp(-bh$hazard[bh$time == 90] * exp(TRUE * as_draws_df(bfit)$b_trtTRUE)) |> 
  quantile(probs = c(0.025, 0.5, 0.975))
#      2.5%       50%     97.5% 
# 0.1938197 0.3789486 0.5599472

This works, and replicates the predictions from the conventional Cox model in this example using the default, flat priors in brms. The problem appears when I repeat this using an informative prior, as in the example below (the prior used is quite strong, but illustrates the point):

# Fit model using normal(0, 0.2) prior for the treatment effect
bfit2 <- brm(time | cens(1-event) ~ trt, data = dt, family = cox(), prior = prior(normal(0, 0.2), coef = "trtTRUE"), cores = 4, backend = "cmdstanr")

# Baseline hazard
bh <- basehaz(coxph(Surv(time, event) ~ trt, data = dt))

# Predictions for patient in the control group at day 90 (reduces to exp(- baseline hazard))
exp(-bh$hazard[bh$time == 90] * exp(FALSE * as_draws_df(bfit2)$b_trtTRUE)) |> 
  quantile(probs = c(0.025, 0.5, 0.975))

#      2.5%       50%     97.5% 
# 0.2441168 0.2441168 0.2441168

# Predictions for patient in the treatment group at day 90
exp(-bh$hazard[bh$time == 90] * exp(TRUE * as_draws_df(bfit2)$b_trtTRUE)) |> 
  quantile(probs = c(0.025, 0.5, 0.975))

#     2.5%       50%     97.5% 
# 0.1882971 0.2913698 0.4072011 

So, the problem here is as follows: the above predictions from the Bayesian model using an informative prior are not as expected. The control group predictions essentially use only the baseline hazard, and are thus identical to the predictions from the model using flat priors. The treatment group predictions, however, are massively influenced by the prior.

This is the problem: normally, using an informative prior like this in, e.g., a logistic regression in brms would pull the predictions in both groups closer to their common average. Consequently, if predictions were made for all patients in both groups, the mean of all predictions would replicate the raw overall survival estimate from the entire trial population. Here, that would not be the case with an informative prior, as only the predictions in one group are affected.

So essentially, what I would ideally want is for the informative prior to affect both groups. Not sure if this is possible, but that’s what I’m looking for.

Given the way you code the model, it is not surprising that the prior only influences the treated group - that’s how it’s written. I’d suggest trying the sum-to-zero contrast parametrisation for the trt factor. Then the two groups are parametrised as mu + Delta and mu - Delta, so a prior on Delta obviously affects both groups. This is done with the contrasts facility in R; see the sketch below. I hope this puts you in the right direction.
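In code, something along these lines (a sketch only; instead of the contrasts() facility I simply centre the 0/1 dummy so the two arms sit symmetrically around the intercept, and the column name trt_c is my own choice):

# Centred (sum-to-zero style) coding of the treatment variable
dt$trt_c <- as.numeric(dt$trt) - 0.5 # -0.5 = control, +0.5 = treatment
bfit_c <- brm(time | cens(1 - event) ~ trt_c, data = dt, family = cox(),
              prior = prior(normal(0, 0.2), coef = "trt_c"),
              cores = 4, backend = "cmdstanr")

A prior on trt_c now shrinks both arms towards their common mean rather than pulling the treatment arm towards the control arm.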


Perfect - of course, that makes sense and worked as intended, and it even seems obvious now: centering the treatment variable does the trick. Thanks to you both, really appreciate the help!

Would a model along the lines of

brm(time | cens(1-event) ~ 0 + trt ...

which removes the intercept, also work?
Or is there something special about Cox regressions that invalidates such an approach?


Thanks for explaining again. I would argue that if you use more sceptical priors, you would expect both the hazard ratios and the baseline hazard function to change. After all, you are forcing them to be more similar with a sceptical prior. The “problem” or discrepancy in the calculation of the absolute differences is due to always using the baseline hazard function from a frequentist model:

If you used a baseline hazard function calculated from the Bayesian model (on a draw-by-draw basis), you wouldn’t run into the discrepancies you are reporting.

Now, the contrast trick is clever because it removes the dependency of the baseline hazard function on the prior choices. However, because you are still using the frequentist baseline hazard, you are actually ignoring posterior uncertainty in the baseline hazard function. To propagate all uncertainty properly, you’d still have to calculate the Nelson-Aalen estimator on a draw-by-draw basis.

Thanks. Including the uncertainty in the baseline hazard would be ideal; however, how to do that using the posterior draws from the brms fit is unclear to me - the Nelson-Aalen estimator seems to just use the raw event/time data, and adapting it to the parameterisation of the baseline hazard used in brms does not seem straightforward. Out of interest, I would be happy to see a solution using the baseline hazard from the brms model, if somebody here is able to provide one. However, I believe the approach above (frequentist baseline hazard + contrast-coded treatment effect) is acceptable for this case if thoroughly explained.
Thanks again for the help, it’s much appreciated!

Agreed, there is no way that brms can calculate the N-A estimator for you, but you could do it yourself by manually extracting the posterior draws from the brms object and calculating the N-A estimator and absolute differences after fitting your model.

The posterior draws are located within the slot “fit” of your brmsfit object (here called my_brms_fit). This slot is of class “stanfit”, so you can use rstan::extract(my_brms_fit$fit) to extract the posterior draws.
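For example (a sketch; my_brms_fit is the placeholder name from above, and rstan must be installed):

draws_list <- rstan::extract(my_brms_fit$fit) # Via the underlying stanfit
draws_df <- brms::as_draws_df(my_brms_fit) # The more idiomatic brms route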

With those posterior draws for the parameters (\beta) and the covariates (x, in your case: trial arm membership), the baseline hazard is:

h_0(t_i) = \frac{d_i}{\sum_{j : t_j \geq t_i} \exp(x_j' \beta)}

(as per survival - Prediction in Cox regression - Cross Validated)

To get your draw-specific estimate of the baseline hazard, just plug in a draw of your model coefficients \beta.
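Putting this together, a minimal sketch of the draw-by-draw calculation (assuming the dt and bfit objects from the example earlier in this thread):

# Draw-by-draw Nelson-Aalen (Breslow) baseline hazard and arm-specific survival
b_draws <- as_draws_df(bfit)$b_trtTRUE # Posterior draws of the log hazard ratio
t_u <- sort(unique(dt$time)) # Unique times in increasing order
d <- sapply(t_u, function(s) sum(dt$event & dt$time == s)) # Events at each time
H0 <- sapply(b_draws, function(b) { # Cumulative baseline hazard at day 90, per draw
  haz <- d / sapply(t_u, function(s) sum(exp(dt$trt[dt$time >= s] * b)))
  cumsum(haz)[t_u == 90]
})
quantile(exp(-H0), probs = c(0.025, 0.5, 0.975)) # Control-arm survival
quantile(exp(-H0 * exp(b_draws)), probs = c(0.025, 0.5, 0.975)) # Treatment-arm survival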


It’s probably the notation that I get lost in. Below, I try to replicate just the frequentist estimate of the Nelson-Aalen estimator manually (afterwards, it is easy to plug in the full posterior for the treatment effect).

Full reproducible example:

# Create dataset
set.seed(12345)
dt <- data.frame(trt = 1:220 > 100, # Slight imbalance, more in trt = TRUE group
                 time = sample(1:90, size = 220, replace = TRUE),
                 event = c(1:100 < 35, 1:120 < 25))

# Calculate using the survival package
library(survival)
bh <- basehaz(coxph(Surv(time, event) ~ trt, data = dt))
bh$hazard[bh$time == 90]
# 1.433977 cumulative baseline hazard at time = 90

# Calculate manually
trt <- coxph(Surv(time, event) ~ trt, data = dt)$coef # Coefficient, can be replaced with full posterior later
t <- sort(unique(dt$time)) # Unique times in increasing order
d <- sapply(t, function(t) sum(dt$event & dt$time == t)) # Number of events at exactly each time point
ar <- sapply(t, function(t) sum(dt$time >= t))  # Number at risk at each time point
xj <- sapply(t, function(t) mean(dt$trt[dt$time >= t])) # See comment below
haz <- d/(ar * exp(xj * trt)) # Baseline hazard at individual times
cumsum(haz)[t == 90] # Cumulative baseline hazard at time = 90
# 1.472424

Results do not match. I suspect that I somehow get the math notation for x_j' wrong - I read this as the values of the trt variable for the patients still at risk at each time point, but I am not sure and suppose that the error is here. Can you spot where I’ve gotten this wrong?

I think your code is bugged from the line xj <- sapply(t, ... onwards.
You are averaging the predictor variable, ignoring that the product of the predictor and the coefficient has to be exponentiated first, and only then summed up in the denominator (a sum over all individuals still at risk at time point t_i).

I would code this as (see new lines of code at the bottom):

# Create dataset
set.seed(12345)
dt <- data.frame(trt = 1:220 > 100, # Slight imbalance, more in trt = TRUE group
                 time = sample(1:90, size = 220, replace = TRUE),
                 event = c(1:100 < 35, 1:120 < 25))

# Calculate using the survival package
library(survival)
fit <- coxph(Surv(time, event) ~ trt, data = dt)
bh <- basehaz(fit)
bh$hazard[bh$time == 90]
# 1.433977 cumulative baseline hazard at time = 90

# Calculate manually
trt <- fit$coef # Coefficient, can be replaced with full posterior later
t <- sort(unique(dt$time)) # Unique times in increasing order
d <- sapply(t, function(t) sum(dt$event & dt$time == t)) # Number of events at exactly each time point
ar <- sapply(t, function(t) sum(dt$time >= t))  # Number at risk at each time point
xj <- sapply(t, function(t) mean(dt$trt[dt$time >= t])) # See comment below
haz_AG <- d/(ar * exp(xj * trt)) # Baseline hazard at individual times
cumsum(haz_AG)[t == 90] # Cumulative baseline hazard at time = 90
# 1.472424

haz_LC <- d / sapply(t, function(t) sum(exp(dt$trt[dt$time >= t] * trt))) # Baseline hazard at individual times
cumsum(haz_LC)[t == 90] # Cumulative baseline hazard at time = 90
# 1.429792

plot(stepfun(x = t, y = c(0, bh$hazard)))
lines(stepfun(x = t, y = c(0, cumsum(haz_AG))), col = "red")
lines(stepfun(x = t, y = c(0, cumsum(haz_LC))), col = "blue")

My estimate (blue) is still a bit off, but is closer to the estimate from the survival package (black) than yours (red). I haven’t looked into the details of why this difference remains, but I imagine it has to do with floating-point precision and numerical error associated with taking sums over exponents. Or maybe I’m missing something. Best if you test this code a bit more across different data simulation exercises!


Dear Luc
Thanks once more for your time and patience. Your code appears right, and I see where I misunderstood the notation. I am unable to find further errors; however, when running this on various simulated datasets, the differences between the manual approach and the estimates from the survival package functions are sometimes quite large, i.e., absolute differences in survival of ~8 percentage points at time = 90. This does not seem to be a code error (at least not one I have been able to spot), and the differences seem too large to be caused by rounding error or floating-point precision. See example code with results from multiple simulations below.

###### Load survival
library(survival)

##### Function to randomly generate data
get_dta <- function(seed = NULL, size = 1000, balance = 0.5, max_t = 90, event_probs = NULL) {
  set.seed(seed)
  nc <- round(size * balance)
  event_probs <- if (is.null(event_probs)) runif(2, 0.1, 0.5) else event_probs
  data.frame(trt = 1:size > nc,
             time = sample(1:max_t, size = size, replace = TRUE),
             event = c(rbinom(nc, 1, event_probs[1]) == 1, rbinom(nc, 1, event_probs[2]) == 1))
}

##### Functions to calculate from Cox models
calc_cox <- function(fit, time = 90) {
  1 - predict(fit, type = "survival", newdata = data.frame(time = time, event = FALSE, trt = FALSE))
  # Alternatively the code below, same results
  # bh <- basehaz(fit)
  # 1 - exp(-bh$hazard[bh$time == time]) # Risk of event at time
}

calc_nae <- function(fit, dta, time = 90) {
  form <- fit$formula
  event <- dta[[as.character(form[[2]][3])]]
  times <- dta[[as.character(form[[2]][2])]]
  trt <- dta[[as.character(form[[3]])]]
  unique_times <- sort(unique(times))
  d <- sapply(unique_times, function(t) sum(event & times == t))
  haz <- d / sapply(unique_times, function(t) sum(exp(trt[times >= t] * coef(fit)[1])))
  1 - exp(-cumsum(haz)[unique_times == time])
}

##### Simulations with calculations using the survival package and manually
sim_test <- function(seed = NULL) {
  dta <- get_dta(seed)
  fit <- coxph(Surv(time, event) ~ trt, data = dta)
  data.frame(seed = seed,
             survival = calc_cox(fit),
             nelson_aalen = calc_nae(fit, dta))
}

do.call(rbind, lapply(1:50, sim_test)) |> 
  transform(diff = survival - nelson_aalen)

##### Output:
#     seed  survival    nelson_aalen    diff
# 1     1 0.5729654    0.5679050 0.0050603233
# 2     2 0.7324079    0.7005255 0.0318824367
# 3     3 0.6880817    0.6547165 0.0333651917
# 4     4 0.7819967    0.7785881 0.0034086340
# 5     5 0.5984666    0.5811715 0.0172950885
# 6     6 0.8995700    0.8634437 0.0361262547
# 7     7 0.8822016    0.8761644 0.0060372060
# 8     8 0.8268986    0.8208787 0.0060199192
# 9     9 0.5041118    0.5032322 0.0008795653
# 10   10 0.7744118    0.7634039 0.0110079107
# 11   11 0.7181825    0.7091562 0.0090262770
# 12   12 0.4059241    0.4003875 0.0055366545
# 13   13 0.9122029    0.9034929 0.0087100201
# 14   14 0.6102733    0.5998099 0.0104634040
# 15   15 0.8160141    0.8092760 0.0067381273
# 16   16 0.8628279    0.8490485 0.0137794136
# 17   17 0.4867286    0.4744240 0.0123045692
# 18   18 0.9435792    0.9254033 0.0181758572
# 19   19 0.7048238    0.6203411 0.0844827029
# 20   20 0.9257964    0.9119220 0.0138743198
# 21   21 0.9264407    0.9097801 0.0166606652
# 22   22 0.6192829    0.6084873 0.0107956620
# 23   23 0.9147363    0.8846803 0.0300559431
# 24   24 0.7806142    0.7605782 0.0200360095
# 25   25 0.7681215    0.7538061 0.0143154541
# 26   26 0.5080235    0.4996319 0.0083915890
# 27   27 0.9523529    0.9379575 0.0143953618
# 28   28 0.4417029    0.4313329 0.0103699590
# 29   29 0.6923493    0.6619929 0.0303564784
# 30   30 0.5349814    0.5243542 0.0106272230
# 31   31 0.7377403    0.7204950 0.0172453298
# 32   32 0.7651011    0.7525915 0.0125095748
# 33   33 0.7899948    0.7755813 0.0144135700
# 34   34 0.7382634    0.7155380 0.0227254230
# 35   35 0.9426385    0.9203588 0.0222796726
# 36   36 0.8518752    0.8287083 0.0231668754
# 37   37 0.8582087    0.8458571 0.0123516075
# 38   38 0.7555741    0.7420642 0.0135098466
# 39   39 0.6639134    0.6582877 0.0056256864
# 40   40 0.8412042    0.8196126 0.0215916099
# 41   41 0.5975342    0.5767426 0.0207915485
# 42   42 0.9821084    0.9463167 0.0357917736
# 43   43 0.7658123    0.7348417 0.0309705919
# 44   44 0.8835535    0.8742500 0.0093034983
# 45   45 0.8585326    0.8483687 0.0101638555
# 46   46 0.6270914    0.6207103 0.0063811309
# 47   47 0.9810101    0.9558311 0.0251789462
# 48   48 0.8261984    0.7975613 0.0286371584
# 49   49 0.7334295    0.7169254 0.0165040955
# 50   50 0.8746381    0.8501026 0.0245355485

# Largest difference in these simulations:
.Last.value[which.max(.Last.value$diff), ]
#     seed  survival nelson_aalen      diff
# 19   19 0.7048238    0.6203411 0.0844827

I quickly investigated your freak simulation #19:

dta <- get_dta(seed = 19)
fit <- coxph(Surv(time, event) ~ trt, data = dta)
bh <- c(0, basehaz(fit, centered = FALSE)$hazard)

unique_times <- sort(unique(dta$time))
d <- sapply(unique_times, function(t) sum(dta$event & dta$time == t))
haz <- d / sapply(unique_times, function(t) sum(exp(dta$trt[dta$time >= t] * coef(fit)[1])))
bh_na <- c(0, cumsum(haz))

plot(stepfun(x = unique_times, y = bh))
lines(stepfun(x = unique_times, y = bh_na), col = "blue")

Turns out the problem happens right at the end, at the very last time point. That’s why I don’t like to stare blindly at one number and prefer to look at the whole time series. Then, looking at the documentation of the basehaz function in R, I noticed that the default value for the argument “centered” is TRUE, meaning that basehaz does not even return a baseline hazard but some kind of mean for the entire population. So, I went back to the interwebs to see what people have to say about basehaz: see the third answer to this thread on Stack Exchange here. There, AdamO writes:

“For further silliness, the default setting is centered=TRUE which a) is not a baseline hazard function (as the name would suggest), and b) employs prediction-at-the-means which is wildly discredited as valid in any practical sense”.

This further decreases my confidence that basehaz is doing what we expect it to do. So if I were in your place, I would be happy to go with the manual calculation of the baseline hazard using the N-A estimator.

EDIT: you may want to closely inspect the documentation of survfit.coxph(). basehaz() is simply an alias that calls survfit.coxph(). The documentation explains what type of estimators are used to calculate the survival curves. None of them are the N-A method according to the docs :).

EDIT2: this paper provides a nice overview and comparison of different estimators, including references to the original papers describing the estimators.


Thanks. I’ve looked into it further; the main issue does not seem to be centered = TRUE, but rather that the dataset was not very thoughtfully simulated - it has a very (unrealistically) high proportion of censoring before the event time of interest. This causes discrepancies due to the handling of ties: basehaz/survfit/predict on Cox PH models use the Efron method for handling ties, whereas the Nelson-Aalen estimator corresponds to the Breslow method, which is inferior but, under more normal use cases, very similar to the Efron method. In the previous simulations, the unrealistically high proportion of early-censored individuals causes a discrepancy. The manual calculation could probably be expanded to use the Efron method, but given more sanely simulated data, the differences are too small to matter (example code below).
I think the final solution for me will be to use the Nelson-Aalen estimator as calculated above with the full posterior draws. As an extra check that ties/early censoring and the choice of tie-handling method have limited influence, I’ll compare predictions from the frequentist model with Efron and Breslow handling of ties (a sketch follows below); if the differences are negligible (as they most likely will be), this will be reassurance enough.
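A sketch of that check (reusing the dt from the original example; coxph’s ties argument switches between the two methods):

# Tie-handling sensitivity check: Efron vs Breslow
fit_efron <- coxph(Surv(time, event) ~ trt, data = dt, ties = "efron")
fit_breslow <- coxph(Surv(time, event) ~ trt, data = dt, ties = "breslow")
nd <- data.frame(trt = c(FALSE, TRUE), time = 90, event = TRUE)
# Difference in predicted survival at day 90; should be negligible with sane censoring
predict(fit_efron, type = "survival", newdata = nd) -
  predict(fit_breslow, type = "survival", newdata = nd)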

Thanks again, really appreciate the help here. The solution here works great for my use case.

Code:

###### Load survival
library(survival)

##### Function to randomly generate data
get_dta <- function(seed = NULL, size = 1000, balance = 0.5, max_t = 90, event_probs = NULL, early_cens = 0.30) {
  set.seed(seed)
  nc <- round(size * balance)
  event_probs <- if (is.null(event_probs)) runif(2, 0.1, 0.5) else event_probs
  data.frame(trt = 1:size > nc,
             #time = sample(1:max_t, size = size, replace = TRUE),
             event = c(rbinom(nc, 1, event_probs[1]) == 1, rbinom(nc, 1, event_probs[2]) == 1)) |> 
    transform(time = ifelse(event,
                            sample(1:max_t, size = size, replace = TRUE),
                            ifelse(runif(size) < early_cens,
                                   sample(1:max_t, size = size, replace = TRUE),
                                   max_t)))
}

##### Functions to calculate from Cox models
calc_cox <- function(fit, time = 90) { # Efron method for handling ties
  1 - predict(fit, type = "survival", newdata = data.frame(time = time, event = FALSE, trt = FALSE))
  # Alternatively the code below, same results
  # bh <- basehaz(fit)
  # 1 - exp(-bh$hazard[bh$time == time]) # Risk of event at time
}

calc_notie <- function(fit, time = 90) { # Breslow method for handling ties, corresponds to Nelson-Aalen
  sf <- survfit(fit, se.fit = FALSE, ctype = 1)
  1 - exp(-sf$cumhaz[sf$time == time])
}

calc_nae <- function(fit, dta, time = 90) {
  form <- fit$formula
  event <- dta[[as.character(form[[2]][3])]]
  times <- dta[[as.character(form[[2]][2])]]
  trt <- dta[[as.character(form[[3]])]]
  unique_times <- sort(unique(times))
  d <- sapply(unique_times, function(t) sum(event & times == t))
  haz <- d / sapply(unique_times, function(t) sum(exp(trt[times >= t] * coef(fit)[1])))
  1 - exp(-cumsum(haz)[unique_times == time])
}


##### Simulations with calculations using the survival package and manually
sim_test <- function(seed = NULL) {
  dta <- get_dta(seed)
  fit <- coxph(Surv(time, event) ~ trt, data = dta)
  data.frame(seed = seed,
             survival = calc_cox(fit),
             notie = calc_notie(fit),
             nelson_aalen = calc_nae(fit, dta))
}

do.call(rbind, lapply(1:50, sim_test)) |> 
  transform(diff = survival - nelson_aalen,
            diff_nt = notie - nelson_aalen)

##### Output:
#     seed  survival     notie nelson_aalen         diff       diff_nt
# 1     1 0.2272067 0.2268746    0.2268746 0.0003320670  6.661338e-16
# 2     2 0.2175603 0.2170762    0.2170762 0.0004840492 -5.551115e-16
# 3     3 0.1895550 0.1890357    0.1890357 0.0005192467 -6.661338e-16
# 4     4 0.3849542 0.3842591    0.3842591 0.0006950608 -1.776357e-15
# 5     5 0.2352376 0.2345973    0.2345973 0.0006402320  3.330669e-16
# 6     6 0.4200722 0.4188762    0.4188762 0.0011959891 -2.220446e-16
# 7     7 0.5446433 0.5432957    0.5432957 0.0013476732  0.000000e+00
# 8     8 0.3392467 0.3387477    0.3387477 0.0004990064  1.887379e-15
# 9     9 0.2274650 0.2271854    0.2271854 0.0002795840 -1.554312e-15
# 10   10 0.3505938 0.3499729    0.3499729 0.0006208717 -2.442491e-15
# 11   11 0.2104352 0.2101886    0.2101886 0.0002466026  9.992007e-16
# 12   12 0.1419023 0.1415093    0.1415093 0.0003930113  3.330669e-16
# 13   13 0.4199726 0.4191796    0.4191796 0.0007929868  2.331468e-15
# 14   14 0.2399319 0.2394671    0.2394671 0.0004647941 -5.551115e-16
# 15   15 0.4176189 0.4168578    0.4168578 0.0007611433 -2.886580e-15
# 16   16 0.4448734 0.4439910    0.4439910 0.0008823909 -1.665335e-15
# 17   17 0.1854611 0.1848599    0.1848599 0.0006011900  3.330669e-16
# 18   18 0.4852144 0.4840048    0.4840048 0.0012096037  9.992007e-16
# 19   19 0.1849566 0.1846082    0.1846082 0.0003484550  6.661338e-16
# 20   20 0.5061114 0.5046419    0.5046419 0.0014695038  1.887379e-15
# 21   21 0.4714157 0.4703959    0.4703959 0.0010198016 -2.775558e-15
# 22   22 0.2103897 0.2100139    0.2100139 0.0003757436 -5.551115e-16
# 23   23 0.3938060 0.3930882    0.3930882 0.0007177359 -2.220446e-16
# 24   24 0.2315272 0.2312066    0.2312066 0.0003205806  0.000000e+00
# 25   25 0.3067449 0.3060553    0.3060553 0.0006895437 -1.110223e-15
# 26   26 0.1519517 0.1517902    0.1517902 0.0001615212  1.110223e-15
# 27   27 0.5763238 0.5748188    0.5748188 0.0015049784  1.221245e-15
# 28   28 0.1337175 0.1336208    0.1336208 0.0000966612  9.992007e-16
# 29   29 0.1820044 0.1817667    0.1817667 0.0002376622  9.992007e-16
# 30   30 0.1747095 0.1743440    0.1743440 0.0003655876  6.661338e-16
# 31   31 0.3539410 0.3529207    0.3529207 0.0010203116 -1.110223e-15
# 32   32 0.3387115 0.3380319    0.3380319 0.0006796018  1.332268e-15
# 33   33 0.3074352 0.3069118    0.3069118 0.0005233613  4.440892e-16
# 34   34 0.3926664 0.3916131    0.3916131 0.0010533157 -1.998401e-15
# 35   35 0.4977650 0.4965184    0.4965184 0.0012465475 -1.110223e-15
# 36   36 0.3994452 0.3984969    0.3984969 0.0009482335 -2.109424e-15
# 37   37 0.3309733 0.3305066    0.3305066 0.0004667473 -3.774758e-15
# 38   38 0.3092816 0.3088037    0.3088037 0.0004778650  2.331468e-15
# 39   39 0.3064260 0.3059316    0.3059316 0.0004943968  2.553513e-15
# 40   40 0.4355615 0.4343174    0.4343174 0.0012441114 -2.220446e-16
# 41   41 0.2529690 0.2521956    0.2521956 0.0007734606  8.881784e-16
# 42   42 0.4811521 0.4797763    0.4797763 0.0013757764  2.331468e-15
# 43   43 0.3549747 0.3539785    0.3539785 0.0009961989  1.110223e-16
# 44   44 0.3992406 0.3985025    0.3985025 0.0007380988  3.330669e-15
# 45   45 0.3691174 0.3685292    0.3685292 0.0005882463  2.442491e-15
# 46   46 0.2003780 0.2001619    0.2001619 0.0002161166  2.220446e-16
# 47   47 0.5787639 0.5771760    0.5771760 0.0015879684 -2.664535e-15
# 48   48 0.3818801 0.3812023    0.3812023 0.0006777908  1.554312e-15
# 49   49 0.2835038 0.2830311    0.2830311 0.0004727184 -2.220446e-16
# 50   50 0.4164943 0.4156953    0.4156953 0.0007990132 -2.553513e-15

# Largest difference in these simulations:
.Last.value[which.max(.Last.value$diff), ]
#     seed  survival    notie   nelson_aalen      diff       diff_nt
# 47   47 0.5787639 0.577176     0.577176 0.001587968 -2.664535e-15