@nhuurre Thank you! Makes complete sense.

My last question, related to marginalization, adds some complexity. In a traditional MVN model with missing data we can do what Section 3.5 of the manual does and model the marginal distribution of the observed data when some values are missing from the response vector. This can become quite tedious when the response vector is high-dimensional and, further, inefficient because it requires the full covariance matrix instead of the Cholesky factor (I am not aware of any way to reduce the Cholesky factor to accommodate missing data?).
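To make that last point concrete, here is a small numpy sketch (made-up dimensions and indices, not Stan) of what I mean: the MVN marginal over the observed components just subsets the mean and covariance, but the corresponding submatrix of the Cholesky factor is *not* the Cholesky factor of the sub-covariance, so it has to be re-factored:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
J = 4
A = rng.normal(size=(J, J))
Sigma = A @ A.T + J * np.eye(J)   # some valid covariance matrix
mu = rng.normal(size=J)

obs = [0, 2, 3]                   # indices of the observed components
# Marginal of an MVN: subset the mean and covariance directly
mu_o = mu[obs]
Sigma_o = Sigma[np.ix_(obs, obs)]
y_o = rng.normal(size=len(obs))
lp = multivariate_normal(mu_o, Sigma_o).logpdf(y_o)

# The Cholesky factor does NOT subset the same way:
L = np.linalg.cholesky(Sigma)
L_sub = L[np.ix_(obs, obs)]       # L_sub @ L_sub.T != Sigma_o in general
L_o = np.linalg.cholesky(Sigma_o) # must re-factor the sub-covariance
```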

What I am trying to do is model a Gaussian copula with normal marginals, to start. You'll see in the code I attach that I use the method recommended by the manual and what I initially posted about: filling in the response vector with a combination of known data and `ymiss` parameters. This works, fits, and is okay. But I was curious whether I could marginalize out the missing values here, much like in your example and the marginalization in an MVN.

I know there are quite a few complexities here, but most of all: 1) the use of the Cholesky factor instead of the full covariance, and 2) the input to the lpdf (`centered_gaussian_copula_cholesky_`) requires a matrix of marginal responses plus the Cholesky matrix, and because the missing values are not distributed equally across responses, that matrix would be ragged (e.g., some responses may have 10 complete observations while others have 20). Thus, I am unsure how I could get a 'complete' matrix of marginals to pass into the lpdf function if I marginalize out the missing data as in your example.

I hope all this makes sense. I also fully recognize that it may not be analytically possible to integrate out the missing data here, but I figured I would ask! In theory, the copula construction should make this easier because it deals with the marginals one response at a time. On the other hand, the nature of the lpdf function may make this more difficult than coding it from scratch in R? I draw my copula example from here.

```
functions {
  real multi_normal_cholesky_copula_lpdf(matrix U, matrix L) {
    int N = rows(U);
    int J = cols(U);
    matrix[J, N] G = mdivide_left_tri_low(L, U');
    // Note: ^ is not an elementwise matrix operator in Stan; use .* to square
    return -N * sum(log(diagonal(L))) - 0.5 * (sum(G .* G) - sum(U .* U));
  }
  real centered_gaussian_copula_cholesky_(matrix[,] marginals, matrix L, int[] order) {
    // Extract dimensions of final outcome matrix
    int N = rows(marginals[1][1]);
    int J = rows(L);
    matrix[N, J] U;
    // Iterate through marginal arrays, concatenating the outcome matrices by column
    // and aggregating the log-likelihoods (from continuous marginals) and jacobian
    // adjustments (from discrete marginals)
    real adj = 0;
    int pos = 1;
    for (m in 1 : size(marginals)) {
      int curr_cols = cols(marginals[m][1]);
      U[ : , pos : (pos + curr_cols - 1)] = marginals[m][1];
      adj += sum(marginals[m][2]);
      pos += curr_cols;
    }
    // Return the sum of the log-probability for copula outcomes and jacobian adjustments
    return multi_normal_cholesky_copula_lpdf(U[ : , order] | L) + adj;
  }
  matrix[] normal_marginal(matrix y, matrix mu_glm, matrix sigma) {
    int N = rows(mu_glm);
    int J = cols(mu_glm);
    matrix[N, J] rtn[2];
    // Initialise the jacobian adjustments to zero, as vectorised lpdf will be used
    rtn[2] = rep_matrix(0, N, J);
    for (j in 1 : J) {
      rtn[1][ : , j] = (y[ : , j] - mu_glm[ : , j]) ./ sigma[ : , j];
      rtn[2][1, j] = normal_lpdf(y[ : , j] | mu_glm[ : , j], sigma[ : , j]);
    }
    return rtn;
  }
}
data {
  int J;                      // # of outcomes
  int<lower=0> Nrow;
  int<lower=0> Ncol;
  int<lower=0> Ncomp;         // number of non-missing values
  int<lower=0> Nmiss;         // number of missing values
  real dat_complete[Ncomp];   // vector of non-missing values
  int ind_pres[Ncomp, 2];     // matrix (row, col) of non-missing value indices
  int ind_miss[Nmiss, 2];     // matrix (row, col) of missing value indices
  real X[Nrow];               // predictor (age)
}
transformed data {
  int order[J]; // order of responses for copula -> ordinal (binary) first
  for (i in 1 : J) {
    order[i] = i;
  }
}
parameters {
  real ymiss[Nmiss];          // missing data parameters - UNSURE
  vector<lower=0>[J] c1;      // first parameter of continuous mean function
  vector<lower=0>[J] c2;      // second parameter of continuous mean function
  vector[J] c3;               // third parameter of continuous mean function
  vector<lower=0>[J] k1;      // first parameter of continuous noise function
  vector<lower=0>[J] k2;      // second parameter of continuous noise function
  cholesky_factor_corr[J] L;  // Cholesky factor of correlation matrix
}
transformed parameters {
  matrix[Nrow, J] mu_J;     // continuous mean function
  matrix[Nrow, J] sd_J;     // continuous noise function
  matrix[Nrow, Ncol] y;     // the "data" with interpolated missing values
  // Fill y with non-missing values
  for (n in 1 : Ncomp) {
    y[ind_pres[n, 1], ind_pres[n, 2]] = dat_complete[n];
  }
  // Fill the rest of y with missing-value "parameters"
  for (n in 1 : Nmiss) {
    y[ind_miss[n, 1], ind_miss[n, 2]] = ymiss[n];
  }
  for (i in 1 : Nrow) {
    for (j in 1 : J) {
      mu_J[i, j] = c2[j] * X[i]^c1[j] + c3[j];
      sd_J[i, j] = k1[j] * (1 + k2[j] * X[i]);
    }
  }
}
model {
  // Priors
  c1 ~ normal(0, 1);
  c2 ~ normal(0, 1);
  c3 ~ normal(0, 1);
  k1 ~ cauchy(0, 1);
  k2 ~ normal(0, 1);
  L ~ lkj_corr_cholesky(1);
  // Likelihood: no separate ymiss ~ normal(...) statement is needed, since
  // the filled-in y already receives its marginal normal density through
  // normal_marginal below; a separate statement would double-count it
  target += centered_gaussian_copula_cholesky_({normal_marginal(y, mu_J, sd_J)}, L, order);
}
generated quantities {
  corr_matrix[J] Omega = multiply_lower_tri_self_transpose(L);
  vector[Nrow] log_lik;
  for (n in 1 : Nrow) {
    log_lik[n] = centered_gaussian_copula_cholesky_(
      {normal_marginal(to_matrix(y[n, : ], 1, J),
                       to_matrix(mu_J[n, : ], 1, J),
                       to_matrix(sd_J[n, : ], 1, J))},
      L, order);
  }
}
```
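For reference, here is the identity I am leaning on, checked in numpy rather than Stan (made-up parameter values): with normal marginals, the copula construction above collapses to an ordinary MVN with covariance `diag(s) @ R @ diag(s)`, which is what makes me hope the missing values could, in principle, be integrated out analytically.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

rng = np.random.default_rng(1)
J = 3
mu = rng.normal(size=J)
s = rng.uniform(0.5, 2.0, size=J)
A = rng.normal(size=(J, J))
R_raw = A @ A.T + J * np.eye(J)
d = np.sqrt(np.diag(R_raw))
R = R_raw / np.outer(d, d)         # a valid correlation matrix
L = np.linalg.cholesky(R)

y = rng.normal(mu, s)

# Copula form: sum of marginal log-densities plus the Gaussian copula
# term on the standardized scores u_j = (y_j - mu_j) / s_j
u = (y - mu) / s
marg = norm(mu, s).logpdf(y).sum()
G = np.linalg.solve(L, u)          # same role as mdivide_left_tri_low
copula = -np.log(np.diag(L)).sum() - 0.5 * (G @ G - u @ u)

# Direct MVN form: covariance diag(s) @ R @ diag(s)
Sigma = np.outer(s, s) * R
direct = multivariate_normal(mu, Sigma).logpdf(y)
# marg + copula and direct agree to floating-point precision
```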