# Multivariate, non-normal outcome regression

I have a mess on my hands. The bottom line is that I have a multivariate outcome that consists of a couple of count variables and a few positive-continuous variables; in other words, multivariate but not multivariate normal. My basic plan of attack is to tie the means of these outcomes together in a multivariate normal. This is proving more difficult than I first thought: 1) since the mean of the gamma is alpha/beta, I’m not sure how to combine this with the other means in a MVN; 2) I can’t work out the logistics of how to nest defined quantities such that the means end up tied together as MVN; in the code below, I have used mu_ = exp(a_) > a_ ~ MVN( nu_ ,…) > nu_ = XB.

There is an additional difficulty. Each animal was treated with both a placebo (saline) and the actual treatment (LPS), so each animal can serve as its own control. However, half of the animals previously received a complementary treatment (CCL) that is hypothesized to alter the effect of LPS. Thus, unlike for saline/LPS, where each subject is tested under both conditions, a CCL parameter is aliased if animal is included in the model. To overcome this, my plan is to estimate unique LPS parameters for each animal, then directly compare the distributions of the LPS coefficients between CCL and non-CCL animals in a generated quantities block.
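For that comparison step, here is a sketch of what I have in mind for the generated quantities block (all names hypothetical: `b_LPS[i]` is animal i’s LPS effect, `CCL[i]` flags CCL animals, `n_CCL` counts them):

```stan
generated quantities {
    real mean_LPS_CCL = 0;      // average LPS effect among CCL-treated animals
    real mean_LPS_noCCL = 0;    // ...and among non-CCL animals
    real CCL_contrast;          // the quantity of interest: the group difference
    for (i in 1:n) {
        if (CCL[i] == 1)
            mean_LPS_CCL += b_LPS[i] / n_CCL;
        else
            mean_LPS_noCCL += b_LPS[i] / (n - n_CCL);
    }
    CCL_contrast = mean_LPS_CCL - mean_LPS_noCCL;
}
```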

Any help greatly appreciated!

``````
data {
    int n;             // number of subjects = 18
    int conditions;    // w/ LPS and w/o LPS = 2
    int m;             // number of outcomes in Dark (or Light) phase = 5
    int Animal[n];
    int LPS[conditions];

    int<lower=0> nNREMS_Dark[n * conditions];
    real<lower=0, upper=43200> dNREMS_Dark[n * conditions];
    int<lower=0> nREMS_Dark[n * conditions];
    real<lower=0, upper=43200> dREMS_Dark[n * conditions];
    real<lower=0> SWA_Dark[n * conditions];
}
parameters {
    matrix[n, conditions] a_LPS_nNREMS_Dark;
    matrix[n, conditions] a_LPS_dNREMS_Dark;
    matrix[n, conditions] a_LPS_nREMS_Dark;
    matrix[n, conditions] a_LPS_dREMS_Dark;
    matrix[n, conditions] a_LPS_SWA_Dark;

    corr_matrix[m] Rho_Sleep_Dark;
    real<lower=0> Sigma_Sleep_Dark;

    real<lower=0> scale_nNREMS_Dark;
    real<lower=0> scale_dNREMS_Dark;
    real<lower=0> scale_nREMS_Dark;
    real<lower=0> scale_dREMS_Dark;
    real<lower=0> scale_SWA_Dark;
}
model {
    to_vector(a_LPS_nNREMS_Dark) ~ normal(log(48), 0.5);    // based on idea that 1/3 of time is spent in NREMS
    to_vector(a_LPS_dNREMS_Dark) ~ normal(log(300), 0.5);   // w/ avg dNREMS ~ 5 min
    to_vector(a_LPS_nREMS_Dark) ~ normal(log(48), 0.5);     // based on idea that 1/10 of time is spent in REMS
    to_vector(a_LPS_dREMS_Dark) ~ normal(log(90), 0.5);     // w/ avg dREMS ~ 1.5 min
    to_vector(a_LPS_SWA_Dark) ~ normal(log(100000), 0.5);

    matrix[n, conditions] nu_nNREMS_Dark;
    matrix[n, conditions] nu_dNREMS_Dark;
    matrix[n, conditions] nu_nREMS_Dark;
    matrix[n, conditions] nu_dREMS_Dark;
    matrix[n, conditions] nu_SWA_Dark;
    nu_nNREMS_Dark = a_LPS_nNREMS_Dark[Animal, LPS];
    nu_dNREMS_Dark = a_LPS_dNREMS_Dark[Animal, LPS];
    nu_nREMS_Dark = a_LPS_nREMS_Dark[Animal, LPS];
    nu_dREMS_Dark = a_LPS_dREMS_Dark[Animal, LPS];
    nu_SWA_Dark = a_LPS_SWA_Dark[Animal, LPS];

    // This is the part I can't work out: stacking the nu_ means so the five
    // outcomes are tied together as MVN (not valid Stan as written)
    nu_Dark = [nu_nNREMS_Dark, nu_dNREMS_Dark, nu_nREMS_Dark, nu_dREMS_Dark, nu_SWA_Dark];
    Rho_Sleep_Dark ~ lkj_corr(1);
    Sigma_Sleep_Dark ~ exponential(1);

    matrix[n, conditions] a_nNREMS_Dark;
    matrix[n, conditions] a_dNREMS_Dark;
    matrix[n, conditions] a_nREMS_Dark;
    matrix[n, conditions] a_dREMS_Dark;
    matrix[n, conditions] a_SWA_Dark;
    // intended: a_Sleep_Dark ~ multi_normal(nu_Dark, ...) -- also not valid as written
    a_Sleep_Dark = [a_nNREMS_Dark, a_dNREMS_Dark, a_nREMS_Dark, a_dREMS_Dark, a_SWA_Dark];

    matrix[n, conditions] mu_nNREMS_Dark;
    matrix[n, conditions] mu_dNREMS_Dark;
    matrix[n, conditions] mu_nREMS_Dark;
    matrix[n, conditions] mu_dREMS_Dark;
    matrix[n, conditions] mu_SWA_Dark;
    mu_nNREMS_Dark = exp(a_nNREMS_Dark);
    mu_dNREMS_Dark = exp(a_dNREMS_Dark);
    mu_nREMS_Dark = exp(a_nREMS_Dark);
    mu_dREMS_Dark = exp(a_dREMS_Dark);
    mu_SWA_Dark = exp(a_SWA_Dark);

    scale_nNREMS_Dark ~ exponential(1);
    scale_dNREMS_Dark ~ exponential(1);
    scale_nREMS_Dark ~ exponential(1);
    scale_dREMS_Dark ~ exponential(1);
    scale_SWA_Dark ~ exponential(1);

    nNREMS_Dark ~ neg_binomial_2(mu_nNREMS_Dark, scale_nNREMS_Dark);
    dNREMS_Dark ~ gamma(mu_dNREMS_Dark / scale_dNREMS_Dark, 1 / scale_dNREMS_Dark);
    nREMS_Dark ~ neg_binomial_2(mu_nREMS_Dark, scale_nREMS_Dark);
    dREMS_Dark ~ gamma(mu_dREMS_Dark / scale_dREMS_Dark, 1 / scale_dREMS_Dark);
    SWA_Dark ~ gamma(mu_SWA_Dark / scale_SWA_Dark, 1 / scale_SWA_Dark);
}
``````

Sorry if this doesn’t address everything - just had a quick read and spotted a solution for at least one thing:

For your gamma-distributed outcome, given the mean from your MVN distribution, define the gamma’s rate parameter as a local or transformed parameter and assign it the shape divided by the mean (from the MVN): since the mean of the gamma is alpha/beta, setting beta = alpha/mean gives the mean you want. I assume that you are or will be exponentiating the MVN-distributed parameter to ensure that the mean of the gamma distribution is >0.
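In Stan terms that could be as small as this (hypothetical names; `nu_gamma` is the MVN-distributed element, `alpha` the gamma shape):

```stan
transformed parameters {
    real<lower=0> mu_gamma = exp(nu_gamma);        // exponentiate so the mean is > 0
    real<lower=0> beta_gamma = alpha / mu_gamma;   // rate = shape / mean, so E[y] = alpha / beta_gamma = mu_gamma
}
```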

Thank you! I think I understand; so it might look like:

``````
parameters{
    mu_gamma_var
    ...
}
transformed parameters{
    mean_gamma_var = gamma_var_alpha / gamma_var_beta
    ...
}
model{
    ..., mu_gamma_var, ... ~ multi_normal(..., mean_gamma_var, ...)
}
``````

yes?

This is exactly the use case for copulas, which generalize multivariate distributions to different univariate marginal distributions.

There is code in Gaussian Copulas · Issue #1317 · paul-buerkner/brms · GitHub that handles mixed discrete and continuous marginals for Poisson and binomial discrete outcomes. The discrete case needs to be handled by introducing a latent continuous parameter and the CDF of the discrete distribution. You can follow the logic in the linked issue to write it for the negative binomial, since its CDF is available in Stan.

Note that this is a Gaussian copula, so there is zero dependency in the tails of the distribution. Other copulas, such as the Student-t, will allow you to capture different dependency types.


Hi, I think you need it the other way around:

1. in the parameter block you define a parameter vector that represents the mean (or a function thereof) for each of your outcomes that you want to model
2. in the model block, you define a MVN distribution for said vector
3. in the transformed parameters block, you define gamma_var_beta as a function of the mean for the gamma (which I imagine will be the exponential of one of the MVN-distributed elements of the vector under point 1) and gamma_var_alpha (the shape of the gamma distribution).
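A minimal sketch of those three steps, with hypothetical names (`theta` and `Sigma` stand in for whatever location and covariance you build, e.g. nu = XB; the gamma outcome is taken to be element 1 of the vector):

```stan
data {
    int<lower=1> K;                  // number of outcomes tied together
    int<lower=1> N;                  // observations of the gamma outcome
    vector<lower=0>[N] y_gamma;      // the gamma-distributed outcome
    vector[K] theta;                 // MVN location (could itself be X * B)
    cov_matrix[K] Sigma;             // MVN covariance
}
parameters {
    vector[K] nu;                    // step 1: one (log-scale) mean per outcome
    real<lower=0> alpha;             // gamma shape
}
transformed parameters {
    real<lower=0> mu_gamma = exp(nu[1]);      // mean of the gamma outcome
    real<lower=0> beta = alpha / mu_gamma;    // step 3: rate = shape / mean
}
model {
    nu ~ multi_normal(theta, Sigma);          // step 2: MVN ties the means together
    alpha ~ exponential(1);
    y_gamma ~ gamma(alpha, beta);             // implies E[y_gamma] = alpha / beta = mu_gamma
}
```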

This is a different approach from a copula, which is what @spinkney is suggesting. But maybe I’m misunderstanding what you mean when you say that one outcome is gamma-distributed. I assumed that you meant that the measurement error around that observation is gamma-distributed.

It turns out that the supposedly gamma-distributed outcomes are not actually gamma. My data is a bit wonky, but it turns out to be close enough to Gaussian. So that problem is avoided.

I haven’t ventured into copulas yet, as I’m hoping to get something I’m more familiar with to work. I’ve got the following model to run:

``````
data{
    vector[36] power_Light;
    vector[36] REMS_Light;
    vector[36] NREMS_Light;
    vector[36] NREMS_Dark;
    int LPSxAnimal[36];
    int CCL[36];
    int LPS[36];
    int Animal[36];
    int nNREMS_Dark[36];
    vector[36] dNREMS_Dark;
    int nREMS_Dark[36];
    vector[36] dREMS_Dark;
    vector[36] SWA_Dark;
    int nNREMS_Light[36];
    vector[36] dNREMS_Light;
    int nREMS_Light[36];
    vector[36] dREMS_Light;
    vector[36] SWA_Light;
    vector[36] REMS_Dark;
    vector[36] power_Dark;
    int LPSxCCL[36];
}
parameters{
    vector<lower=0>[4] a_SWA_Dark;
    vector<lower=0>[4] a_dREMS_Dark;
    vector<lower=0>[4] a_nREMS_Dark;
    vector<lower=0>[4] a_dNREMS_Dark;
    vector<lower=0>[4] a_nNREMS_Dark;
    corr_matrix[5] Rho_Dark;
    vector<lower=0>[5] Sigma_Dark;
    real<lower=0> scale_nNREMS_Dark;
    real<lower=0> SE_dNREMS_Dark;
    real<lower=0> scale_nREMS_Dark;
    real<lower=0> SE_dREMS_Dark;
    real<lower=0> SE_SWA_Dark;
    vector<lower=0>[4] a_SWA_Light;
    vector<lower=0>[4] a_dREMS_Light;
    vector<lower=0>[4] a_nREMS_Light;
    vector<lower=0>[4] a_dNREMS_Light;
    vector<lower=0>[4] a_nNREMS_Light;
    corr_matrix[5] Rho_Light;
    vector<lower=0>[5] Sigma_Light;
    real b_DarkPower_nNREMS;
    real<lower=0> sigma_DPnN;
    real b_DarkREMS_nNREMS;
    real<lower=0> sigma_DRnN;
    real b_DarkPower_dNREMS;
    real<lower=0> sigma_DPdN;
    real b_DarkREMS_dNREMS;
    real<lower=0> sigma_DRdN;
    real b_DarkPower_nREMS;
    real<lower=0> sigma_DPnR;
    real b_DarkREMS_nREMS;
    real<lower=0> sigma_DRnR;
    real b_DarkPower_dREMS;
    real<lower=0> sigma_DPdR;
    real b_DarkREMS_dREMS;
    real<lower=0> sigma_DRdR;
    real b_DarkPower_SWA;
    real<lower=0> sigma_DPS;
    real b_DarkREMS_SWA;
    real<lower=0> sigma_DRS;
    real<lower=0> scale_nNREMS_Light;
    real<lower=0> SE_dNREMS_Light;
    real<lower=0> scale_nREMS_Light;
    real<lower=0> SE_dREMS_Light;
    real<lower=0> SE_SWA_Light;
}
model{
    vector[36] lambda_nNREMS_Dark;
    vector[36] mu_dNREMS_Dark;
    vector[36] lambda_nREMS_Dark;
    vector[36] mu_dREMS_Dark;
    vector[36] mu_SWA_Dark;
    vector[36] lambda_nNREMS_Light;
    vector[36] mu_dNREMS_Light;
    vector[36] lambda_nREMS_Light;
    vector[36] mu_dREMS_Light;
    vector[36] mu_SWA_Light;
    SE_SWA_Light ~ exponential( 1 );
    SE_dREMS_Light ~ exponential( 1 );
    scale_nREMS_Light ~ exponential( 1 );
    SE_dNREMS_Light ~ exponential( 1 );
    scale_nNREMS_Light ~ exponential( 1 );
    sigma_DRS ~ exponential( 1 );
    b_DarkREMS_SWA ~ normal( 0 , sigma_DRS );
    sigma_DPS ~ exponential( 1 );
    b_DarkPower_SWA ~ normal( 0 , sigma_DPS );
    sigma_DRdR ~ exponential( 1 );
    b_DarkREMS_dREMS ~ normal( 0 , sigma_DRdR );
    sigma_DPdR ~ exponential( 1 );
    b_DarkPower_dREMS ~ normal( 0 , sigma_DPdR );
    sigma_DRnR ~ exponential( 1 );
    b_DarkREMS_nREMS ~ normal( 0 , sigma_DRnR );
    sigma_DPnR ~ exponential( 1 );
    b_DarkPower_nREMS ~ normal( 0 , sigma_DPnR );
    sigma_DRdN ~ exponential( 1 );
    b_DarkREMS_dNREMS ~ normal( 0 , sigma_DRdN );
    sigma_DPdN ~ exponential( 1 );
    b_DarkPower_dNREMS ~ normal( 0 , sigma_DPdN );
    sigma_DRnN ~ exponential( 1 );
    b_DarkREMS_nNREMS ~ normal( 0 , sigma_DRnN );
    sigma_DPnN ~ exponential( 1 );
    b_DarkPower_nNREMS ~ normal( 0 , sigma_DPnN );
    Sigma_Light ~ exponential( 1 );
    Rho_Light ~ lkj_corr( 1 );
    {
        vector[5] YY[4];
        vector[5] MU;
        MU = [ log(48) , 600 , log(48) , 180 , 125000 ]';
        for ( j in 1:4 ) YY[j] = [ a_nNREMS_Light[j] , a_dNREMS_Light[j] , a_nREMS_Light[j] , a_dREMS_Light[j] , a_SWA_Light[j] ]';
        YY ~ multi_normal( MU , quad_form_diag(Rho_Light , Sigma_Light) );
    }
    for ( i in 1:36 ) {
        mu_SWA_Light[i] = a_SWA_Light[LPSxCCL[i]] + b_DarkPower_SWA * power_Dark[i] + b_DarkREMS_SWA * REMS_Dark[i];
    }
    for ( i in 1:36 ) {
        mu_dREMS_Light[i] = a_dREMS_Light[LPSxCCL[i]] + b_DarkPower_dREMS * power_Dark[i] + b_DarkREMS_dREMS * REMS_Dark[i];
    }
    for ( i in 1:36 ) {
        lambda_nREMS_Light[i] = a_nREMS_Light[LPSxCCL[i]] + b_DarkPower_nREMS * power_Dark[i] + b_DarkREMS_nREMS * REMS_Dark[i];
        lambda_nREMS_Light[i] = exp(lambda_nREMS_Light[i]);
    }
    for ( i in 1:36 ) {
        mu_dNREMS_Light[i] = a_dNREMS_Light[LPSxCCL[i]] + b_DarkPower_dNREMS * power_Dark[i] + b_DarkREMS_dNREMS * REMS_Dark[i];
    }
    for ( i in 1:36 ) {
        lambda_nNREMS_Light[i] = a_nNREMS_Light[LPSxCCL[i]] + b_DarkPower_nNREMS * power_Dark[i] + b_DarkREMS_nNREMS * REMS_Dark[i];
        lambda_nNREMS_Light[i] = exp(lambda_nNREMS_Light[i]);
    }
    SWA_Light ~ normal( mu_SWA_Light , SE_SWA_Light );
    dREMS_Light ~ normal( mu_dREMS_Light , SE_dREMS_Light );
    nREMS_Light ~ neg_binomial_2( lambda_nREMS_Light , scale_nREMS_Light );
    dNREMS_Light ~ normal( mu_dNREMS_Light , SE_dNREMS_Light );
    nNREMS_Light ~ neg_binomial_2( lambda_nNREMS_Light , scale_nNREMS_Light );
    SE_SWA_Dark ~ exponential( 1 );
    SE_dREMS_Dark ~ exponential( 1 );
    scale_nREMS_Dark ~ exponential( 1 );
    SE_dNREMS_Dark ~ exponential( 1 );
    scale_nNREMS_Dark ~ exponential( 1 );
    Sigma_Dark ~ exponential( 1 );
    Rho_Dark ~ lkj_corr( 1 );
    {
        vector[5] YY[4];
        vector[5] MU;
        MU = [ log(48) , 300 , log(48) , 90 , 1e+05 ]';
        for ( j in 1:4 ) YY[j] = [ a_nNREMS_Dark[j] , a_dNREMS_Dark[j] , a_nREMS_Dark[j] , a_dREMS_Dark[j] , a_SWA_Dark[j] ]';
        YY ~ multi_normal( MU , quad_form_diag(Rho_Dark , Sigma_Dark) );
    }
    for ( i in 1:36 ) {
        mu_SWA_Dark[i] = a_SWA_Dark[LPSxCCL[i]];
    }
    for ( i in 1:36 ) {
        mu_dREMS_Dark[i] = a_dREMS_Dark[LPSxCCL[i]];
    }
    for ( i in 1:36 ) {
        lambda_nREMS_Dark[i] = a_nREMS_Dark[LPSxCCL[i]];
        lambda_nREMS_Dark[i] = exp(lambda_nREMS_Dark[i]);
    }
    for ( i in 1:36 ) {
        mu_dNREMS_Dark[i] = a_dNREMS_Dark[LPSxCCL[i]];
    }
    for ( i in 1:36 ) {
        lambda_nNREMS_Dark[i] = a_nNREMS_Dark[LPSxCCL[i]];
        lambda_nNREMS_Dark[i] = exp(lambda_nNREMS_Dark[i]);
    }
    SWA_Dark ~ normal( mu_SWA_Dark , SE_SWA_Dark );
    dREMS_Dark ~ normal( mu_dREMS_Dark , SE_dREMS_Dark );
    nREMS_Dark ~ neg_binomial_2( lambda_nREMS_Dark , scale_nREMS_Dark );
    dNREMS_Dark ~ normal( mu_dNREMS_Dark , SE_dNREMS_Dark );
    nNREMS_Dark ~ neg_binomial_2( lambda_nNREMS_Dark , scale_nNREMS_Dark );
}
``````

but at up to 10,000 iterations, Rhat is >> 1.1, n_eff is pretty low, max treedepth is exceeded on ~100% of transitions, and 2 of 4 chains have E-BFMI < 0.2 (though few divergent transitions). I think the sampling problems come from the multivariate a_ parameters (tying together the multiple outcomes), since the problem persists if I run only a partial model with just the Dark outcomes and without the b_ parameters. Do I need to reparameterize the MVN a_ parameters (but I don’t know how)? Run 10x more iterations (I haven’t yet, since that ties up my laptop and keeps me from other projects)? Or something else?
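In case it helps later readers: a common remedy for a poorly mixing multi_normal prior like the YY block above is a non-centered parameterization with a Cholesky-factored correlation matrix. A hedged sketch for the Dark block only (names like `z` and `L_Rho_Dark` are new; columns of YY would then replace the `a_*_Dark[j]` intercepts):

```stan
parameters {
    matrix[5, 4] z;                          // standard-normal innovations
    cholesky_factor_corr[5] L_Rho_Dark;      // Cholesky factor replaces corr_matrix
    vector<lower=0>[5] Sigma_Dark;           // per-outcome scales
}
transformed parameters {
    // non-centered: YY = MU + diag(Sigma) * L * z; column j is one LPSxCCL level
    matrix[5, 4] YY = rep_matrix([log(48), 300, log(48), 90, 1e5]', 4)
                      + diag_pre_multiply(Sigma_Dark, L_Rho_Dark) * z;
}
model {
    to_vector(z) ~ std_normal();             // implies YY[ ,j] ~ multi_normal(MU, Sigma)
    L_Rho_Dark ~ lkj_corr_cholesky(2);
    Sigma_Dark ~ exponential(1);
}
```

This typically helps when the group-level scales and the group means are weakly identified by only 4 levels, which matches the low E-BFMI and treedepth symptoms you describe.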