I am dealing with a 1000-patient clinical trial with daily measurements of an ordinal outcome (between 4 and 100 levels) on days 1, 2, …, 28 after randomization. I've spent a lot of time trying to understand how to model an autocorrelation structure on the linear-predictor scale, using an AR(1) process for regular time spacing and an Ornstein-Uhlenbeck process for general continuous time. That approach would allow me to directly obtain a marginally interpretable treatment effect (e.g., an odds ratio in a proportional odds model). But the data exhibit extremely strong autocorrelation that is difficult to capture this way, which has led me to consider direct modeling using a first-order Markov process:
\Pr(Y_t \geq y | X, Y_{t-1}) = \text{expit}(\alpha_y + X \beta + g(Y_{t-1}))
where g(Y) is a general function that may have multiple parameters, e.g., to allow the state transition probabilities to be customized for every level of Y_{t-1} by including indicator variables for all but the first possible level, with a regression coefficient for each. One of the baseline covariates X is treatment, and I need to get marginal treatment effects. For example, I would want to compute the posterior distribution of the absolute risk difference for \Pr(Y_{28} \geq y | X) for treatment B vs. treatment A, for given covariate settings.
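For concreteness, here is one way the cell probabilities implied by this conditional model could be computed (a Python sketch; the `cell_probs` helper and all numeric values are hypothetical, and the intercepts \alpha_y must be decreasing in y so the exceedance probabilities are monotone):

```python
import numpy as np

def expit(z):
    """Inverse logit."""
    return 1.0 / (1.0 + np.exp(-z))

def cell_probs(alpha, xbeta, g_prev):
    """Pr(Y_t = y | X, Y_{t-1}) for y = 1..K under the cumulative model
    Pr(Y_t >= y | X, Y_{t-1}) = expit(alpha_y + X*beta + g(Y_{t-1})), y = 2..K.
    alpha: the K-1 intercepts (decreasing in y); xbeta, g_prev: scalars."""
    # Exceedance probabilities, prepending Pr(Y >= 1) = 1
    exceed = np.concatenate(([1.0], expit(alpha + xbeta + g_prev)))
    # Differencing adjacent exceedance probabilities gives the K cell probabilities
    return exceed - np.append(exceed[1:], 0.0)

# Hypothetical values: K = 4 levels, no covariate or previous-state effect
alpha = np.array([2.0, 0.0, -2.0])
p = cell_probs(alpha, 0.0, 0.0)
```

The same function handles any g(Y_{t-1}); with the indicator parameterization above, `g_prev` is simply the coefficient attached to the observed previous state (0 for the reference level).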
Because of the large number of distinct Y values along the way through t = 1, 2, \dots, 27, computation of the unconditional distribution is messy. It is probably easier to obtain it through a posterior predictive distribution in which a very large number of patients (say 40,000) are simulated once, say, 4000 posterior draws are available for the autoregressive model parameters. I'd appreciate feedback on the proposed workflow:
- Take one of the posterior draws
- Conditional on X and the initial state Y_0, simulate from this posterior draw a 1000 \times 28 matrix of Y values (see below for details) under treatment A
- For the same posterior draw, the same X (except for treatment), and the same Y_0, simulate a matrix under treatment B
- Compute any derived quantity from these two matrices, e.g., the absolute reduction in \Pr(Y_{28} > 4) or the difference in the number of days t it takes to satisfy Y(t) \leq 6 or some other cutoff
- Repeat this process with the same posterior draw 4 more times to get 5 \times 2 = 10 datasets (2 treatments)
- Move on to the next posterior draw
To simulate each dataset, take a single initial state Y_0 and covariate settings X, compute the cell probabilities for all possible values of Y, and simulate a single Y(1) from that multinomial (categorical) distribution. Then, with this Y(1) as the new state, get the conditional probability distribution of Y(2) and simulate a single Y(2). Use that to simulate a single value of Y(3), and so on.
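The sequential simulation just described might be sketched as follows (Python; `simulate_paths` and all parameter values are hypothetical stand-ins for one posterior draw, with the single treatment coefficient folded into `xbeta`):

```python
import numpy as np

def expit(z):
    return 1.0 / (1.0 + np.exp(-z))

def cond_probs(alpha, xbeta, tau, prev):
    """Pr(Y_t = y | X, Y_{t-1} = prev) under the cumulative-logit Markov model;
    tau[j] is the effect of previous state j + 2 (state 1 is the reference)."""
    g = 0.0 if prev == 1 else tau[prev - 2]
    exceed = np.concatenate(([1.0], expit(alpha + xbeta + g)))
    return exceed - np.append(exceed[1:], 0.0)

def simulate_paths(alpha, xbeta, tau, y0, n=1000, T=28, rng=None):
    """Simulate an n x T matrix of ordinal states, one row per patient:
    draw Y(1) given Y_0, then Y(2) given Y(1), and so on."""
    rng = np.random.default_rng() if rng is None else rng
    K = len(alpha) + 1
    levels = np.arange(1, K + 1)
    Y = np.empty((n, T), dtype=int)
    for i in range(n):
        prev = y0
        for t in range(T):
            prev = rng.choice(levels, p=cond_probs(alpha, xbeta, tau, prev))
            Y[i, t] = prev
    return Y

# Hypothetical parameter values standing in for one posterior draw
rng   = np.random.default_rng(1)
alpha = np.array([3.0, 1.0, -1.0])       # K = 4 levels
tau   = np.array([1.5, 3.0, 4.5])        # previous-state effects
YA = simulate_paths(alpha, 0.0,  tau, y0=2, rng=rng)   # treatment A
YB = simulate_paths(alpha, -0.8, tau, y0=2, rng=rng)   # treatment B
# Example derived quantity: absolute reduction in Pr(Y_28 >= 4)
ard = (YA[:, -1] >= 4).mean() - (YB[:, -1] >= 4).mean()
```

Any derived quantity in the workflow (days until Y(t) \leq a cutoff, etc.) is then just a function of the two matrices, computed once per posterior draw.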
Is this a valid procedure for computing unconditional (on previous Y) quantities? Is there a more efficient way? Thanks for any thoughts.
Addendum: I figured out how to get exact unconditional probabilities fairly easily when computing them for a single covariate combination, using simple matrix operations applied sequentially through time. I have R code for that if anyone needs it. But the simulation approach is still a good exercise and will allow one to make inferences about more complex quantities.
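A sketch of that exact calculation (a Python analogue of the matrix-operation idea, since the actual code mentioned above is in R; all parameter values are hypothetical): build the K \times K one-step transition matrix for a fixed X and propagate the state distribution through time.

```python
import numpy as np

def expit(z):
    return 1.0 / (1.0 + np.exp(-z))

def transition_matrix(alpha, xbeta, tau):
    """Row j-1 holds Pr(Y_t = . | X, Y_{t-1} = j) under the cumulative-logit
    Markov model; with X fixed, the matrix is the same at every t."""
    K = len(alpha) + 1
    P = np.empty((K, K))
    for j in range(1, K + 1):
        g = 0.0 if j == 1 else tau[j - 2]
        exceed = np.concatenate(([1.0], expit(alpha + xbeta + g)))
        P[j - 1] = exceed - np.append(exceed[1:], 0.0)
    return P

def state_occupancy(alpha, xbeta, tau, y0, T=28):
    """Exact Pr(Y_t = y | X, Y_0 = y0) for t = 1..T, obtained by
    propagating the state distribution through the transition matrix."""
    K = len(alpha) + 1
    P = transition_matrix(alpha, xbeta, tau)
    p = np.zeros(K)
    p[y0 - 1] = 1.0                     # degenerate distribution at Y_0
    occ = np.empty((T, K))
    for t in range(T):
        p = p @ P                       # one step of the Markov recursion
        occ[t] = p
    return occ

# Hypothetical parameter values; exact Pr(Y_28 >= 4) for one covariate setting
alpha = np.array([3.0, 1.0, -1.0])
tau   = np.array([1.5, 3.0, 4.5])
occ = state_occupancy(alpha, 0.0, tau, y0=2)
p28_ge4 = occ[-1, 3:].sum()
```

This gives the unconditional (on previous Y) state-occupancy probabilities exactly, so it also serves as a check on the simulation: with enough simulated patients per draw, the empirical frequencies should converge to these values.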