# Subset of Covariates for Group Level Parameters

Hello,

I am running a hierarchical model with covariates at the individual level as well as the group level, somewhat similar to Gelman's Red State, Blue State example. The difference is that I only have group-level covariates for a few parameters. Is it possible to let some group-level parameters be constant while others depend on covariates? Only two should be modeled with covariates, e.g., \mu_{\gamma_{1,j}} = \alpha_1 + \gamma_1 u_j, whereas the others are just a group average, e.g., \mu_{\gamma_2} = \alpha_2. The errors I am getting are about indexing mu_gamma. Specifically, the line

```stan
Beta[j] = mu_gamma[j] + diag_pre_multiply(tau, Lcorr_b) * Delta[j];
```


I realize j is indexing the row, but I am getting an error that the rows of mu_gamma and Beta do not match. Any insight would be much appreciated. I suspect I am getting something wrong with the indexing and the variable type when subsetting mu_gamma, but I am not sure what exactly. Below is my Stan model code:

```stan
data {
  int<lower=1> N;              // number of observations
  int<lower=1> J;              // number of groups
  int<lower=1> K;              // number of columns in the model matrix
  int<lower=1,upper=J> id[N];  // vector of group indices
  matrix[N, K] X;              // model matrix for individual-level covariates
  vector[J] u;                 // group-level covariate
  vector[N] y;                 // response variable
}

parameters {
  cholesky_factor_corr[K] Lcorr_b;
  vector<lower=0>[K] tau;
  real<lower=0> sigma;
  vector[K] Delta[J];
}

transformed parameters {
  vector[K] Beta[J];      // individual parameters
  vector[K] mu_gamma[J];  // group-level parameters
  vector[2] alpha;
  vector[2] gamma;
  u_gamma[2] = alpha[1] + gamma[1] * u;  // two group-level parameters with covariates
  u_gamma[3] = alpha[2] + gamma[2] * u;

  for (j in 1:J) {
    Beta[j] = mu_gamma[j] + diag_pre_multiply(tau, Lcorr_b) * Delta[j];
  }
}

model {
  vector[N] mu;
  for (n in 1:N) {
    mu[n] = X[n] * Beta[id[n]];
  }
  y ~ normal(mu, sigma);

  // priors
  Lcorr_b ~ lkj_corr_cholesky(1);
  tau ~ student_t(4, 0, 2.5);
  sigma ~ student_t(4, 0, 2.5);
  alpha ~ normal(0, 5);
  gamma ~ normal(0, 5);

  for (j in 1:J) {
    Delta[j] ~ std_normal();
  }
}
```



Thanks,
Colin

As far as I can see, that particular line shouldn't be an issue, but there are some parts of the transformed parameters block that would be causing errors:

```stan
vector[K] mu_gamma[J];  // group level parameters
vector[2] alpha;
vector[2] gamma;
u_gamma[2] = alpha[1] + gamma[1] * u;  // two group-level parameters with covariates
u_gamma[3] = alpha[2] + gamma[2] * u;
```


Here you have u_gamma, but I’m guessing that should be mu_gamma? You also don’t assign any values to the first vector of mu_gamma, only the second and third, so the first vector would be initialised to NaN values.

Additionally, u is a vector of size J (i.e., vector[J] u), but you're assigning its transformation to a vector of size K:

```stan
u_gamma[2] = alpha[1] + gamma[1] * u;
```

Because mu_gamma[j] gives a vector of size K, the sizes won't match unless J happens to equal K.

Additionally, you’ve declared alpha and gamma as transformed parameters, but they’re being sampled in the model block, so they should instead be declared in parameters.
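For example, the declarations might look something like this (just a sketch keeping your original sizes; you'll likely also want an intercept for each of the K group-level parameters, i.e. vector[K] alpha):

```stan
parameters {
  cholesky_factor_corr[K] Lcorr_b;
  vector<lower=0>[K] tau;
  real<lower=0> sigma;
  vector[2] alpha;  // moved here from transformed parameters, since they get priors
  vector[2] gamma;  // likewise
  vector[K] Delta[J];
}
```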

Thanks @andrjohns for your help. Yes, I think I have fixed the indexing issue (see below for the working code). There might be a more efficient way to write the loop for the \mu_\gamma parameters, but the model seems to run quite fast.

One thing I am wondering, however, is whether putting the same covariate in the group-level means is generating a linear relationship by construction. When I plot the posterior median of \mu_{\gamma_7} or \mu_{\gamma_8} versus the group covariate u[j], the relationship is almost perfectly linear in both cases. However, if I run a model without group-level covariates and plot the individual-level parameters \beta_j versus u[j], the correlation is quite low, e.g., r = 0.10. I realize this is a broader question than Stan code, but does this make sense to you?

~Colin

```stan
data {
  int<lower=1> N;              // number of observations
  int<lower=1> J;              // number of groups
  int<lower=1> K;              // number of columns in the model matrix
  int<lower=1,upper=J> id[N];  // vector of group indices
  matrix[N, K] X;              // model matrix
  vector[J] u;                 // group predictor, town republican
  vector[N] y;                 // response variable
}

parameters {
  cholesky_factor_corr[K] Lcorr_b;
  vector<lower=0>[K] tau;
  real<lower=0> sigma;
  vector[K] alpha;
  vector[3] gamma;
  vector[K] Delta[J];
}

transformed parameters {
  vector[K] Beta[J];      // individual parameters
  vector[K] mu_gamma[J];  // group parameters
  for (j in 1:J) {
    mu_gamma[j, 1] = alpha[1] + u[j] * gamma[1];  // group covariate
    mu_gamma[j, 2] = alpha[2];
    mu_gamma[j, 3] = alpha[3];
    mu_gamma[j, 4] = alpha[4];
    mu_gamma[j, 5] = alpha[5];
    mu_gamma[j, 6] = alpha[6];
    mu_gamma[j, 7] = alpha[7] + u[j] * gamma[2];  // group covariate
    mu_gamma[j, 8] = alpha[8] + u[j] * gamma[3];  // group covariate
  }

  for (j in 1:J) {
    Beta[j] = mu_gamma[j] + diag_pre_multiply(tau, Lcorr_b) * Delta[j];
  }
}

model {
  vector[N] mu;
  // beta_grp ~ multi_normal_cholesky(mu_beta, diag_pre_multiply(tau, Lcorr_b)); // top level

  for (n in 1:N) {
    mu[n] = X[n] * Beta[id[n]];
  }
  y ~ normal(mu, sigma);

  // priors
  Lcorr_b ~ lkj_corr_cholesky(1);
  tau ~ student_t(4, 0, 2.5);
  sigma ~ student_t(4, 0, 2.5);
  alpha ~ normal(0, 5);
  gamma ~ normal(0, 5);
  for (j in 1:J) {
    Delta[j] ~ std_normal();
  }
}
```
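On the efficiency point: one way to shorten the element-by-element assignments is to start each row at alpha and add the covariate terms only where they apply, merging the two loops. A sketch of the same model with a more compact transformed parameters block:

```stan
transformed parameters {
  vector[K] Beta[J];      // individual parameters
  vector[K] mu_gamma[J];  // group parameters
  for (j in 1:J) {
    mu_gamma[j] = alpha;                // constant group means by default
    mu_gamma[j, 1] += u[j] * gamma[1];  // group covariate
    mu_gamma[j, 7] += u[j] * gamma[2];  // group covariate
    mu_gamma[j, 8] += u[j] * gamma[3];  // group covariate
    Beta[j] = mu_gamma[j] + diag_pre_multiply(tau, Lcorr_b) * Delta[j];
  }
}
```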