Hello,
I would like to ask whether the following is a correct way to perform a reparameterization in a hierarchical model.
I want to code a hierarchical model of reinforcement learning as follows (the actual model is more complex, so I am only leaving in the parts relevant to my question). Here w is a weight in (0,1); the betas should be positive, but I also put an upper bound of 10 on them because they typically do not go higher than that.
parameters{
//hyperparameters
real mu_beta;
real mu_w;
real<lower = 0> sigma_beta;
real<lower = 0> sigma_w;
// Subject level raw parameters
vector[N] beta_raw;
vector[N] w_raw;
}
transformed parameters{
vector<lower = 0, upper = 10>[N] beta;
vector<lower = 0, upper = 1>[N] w;
for (i in 1:N){
beta[i] = Phi_approx(mu_beta + sigma_beta * beta_raw[i])* 10;
w[i] = Phi_approx(mu_w + sigma_w * w_raw[i]);
}
}
model{
//Hyper parameters
mu_beta ~ normal(0,10);
mu_w ~ normal(0,10);
sigma_beta ~ cauchy(0,2.5);
sigma_w ~ cauchy(0,2.5);
// Individual parameters (raw)
beta_raw ~ normal(0,1);
w_raw ~ normal(0,1);
// For each subject
for (i in 1:N){
real Qsum;
for (t in 1:numtrials){
Qsum = beta[i]*w[i]*Q1[i,t] + beta[i]*(1-w[i])*Q2[i,t];
choice[i,t] ~ categorical_logit(Qsum);
// code to update Q1 and Q2
}
}
}
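In case it is relevant to my question below about the group-level parameters: in this version I get group-level beta and w on their natural scales by transforming the group-level location parameters in generated quantities, roughly like this (mu_beta_nat and mu_w_nat are just names I made up for the transformed values):
generated quantities{
// group-level parameters mapped to their natural scales by applying
// the same Phi_approx transform used for the subject-level parameters
real<lower = 0, upper = 10> mu_beta_nat;
real<lower = 0, upper = 1> mu_w_nat;
mu_beta_nat = Phi_approx(mu_beta) * 10;
mu_w_nat = Phi_approx(mu_w);
}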
Now I would like to try a reparameterization such that theta1 = beta*w and theta2 = beta*(1-w). The individual beta and w would be recovered as beta = theta1 + theta2 and w = theta1/(theta1 + theta2), since theta1 + theta2 = beta*w + beta*(1-w) = beta and theta1/beta = w. I don't know whether I can also recover the group-level mu_beta and mu_w.
The main parameters I am interested in are the individual-level beta and w, and also the group-level mu_beta and mu_w. I am wondering whether it is possible to reparameterize and still obtain these quantities.
The following is the code I have come up with (with my guess at a generated quantities block for the group-level parameters after it), and I would appreciate it if someone could point out whether I am doing anything wrong or whether there are better ways to do it. Thanks!
parameters{
//hyperparameters
// (beta and w are now derived from theta1 and theta2, so mu_beta, mu_w,
// sigma_beta, sigma_w, beta_raw and w_raw are no longer declared here)
real mu_theta1;
real mu_theta2;
real<lower = 0> sigma_theta1;
real<lower = 0> sigma_theta2;
// Subject level raw parameters
vector[N] theta1_raw;
vector[N] theta2_raw;
}
transformed parameters{
vector<lower = 0>[N] theta1;
vector<lower = 0>[N] theta2;
vector<lower = 0, upper = 20>[N] beta; // theta1 + theta2 can reach 20, so the bound has to be 20 here (or scale the thetas by 5 instead of 10 above to keep beta below 10)
vector<lower = 0, upper = 1>[N] w;
for (i in 1:N){
theta1[i] = Phi_approx(mu_theta1 + sigma_theta1 * theta1_raw[i])* 10;
theta2[i] = Phi_approx(mu_theta2 + sigma_theta2 * theta2_raw[i])* 10;
beta[i] = theta1[i] + theta2[i];
w[i] = theta1[i]/(theta1[i] + theta2[i]);
}
}
model{
//Hyper parameters
mu_theta1 ~ normal(0,10);
mu_theta2 ~ normal(0,10);
sigma_theta1 ~ cauchy(0,2.5);
sigma_theta2 ~ cauchy(0,2.5);
// Individual parameters (raw)
theta1_raw ~ normal(0,1);
theta2_raw ~ normal(0,1);
// For each subject
for (i in 1:N){
real Qsum;
for (t in 1:numtrials){
Qsum = beta[i]*w[i]*Q1[i,t] + beta[i]*(1-w[i])*Q2[i,t];
choice[i,t] ~ categorical_logit(Qsum);
// code to update Q1 and Q2
}
}
}
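And this is the only way I could think of to get group-level beta and w out of the reparameterized version: transform the theta location parameters to their natural scales and combine them the same way as the subject-level parameters (mu_beta_grp and mu_w_grp are just my names for these; I am not sure this is the right way to define the group-level quantities):
generated quantities{
// one possible definition of group-level beta and w: transform the theta
// location parameters and combine them like the subject-level parameters
real<lower = 0, upper = 10> mu_theta1_nat;
real<lower = 0, upper = 10> mu_theta2_nat;
real<lower = 0, upper = 20> mu_beta_grp;
real<lower = 0, upper = 1> mu_w_grp;
mu_theta1_nat = Phi_approx(mu_theta1) * 10;
mu_theta2_nat = Phi_approx(mu_theta2) * 10;
mu_beta_grp = mu_theta1_nat + mu_theta2_nat;
mu_w_grp = mu_theta1_nat / (mu_theta1_nat + mu_theta2_nat);
}
I am not sure whether these would correspond to the mu_beta and mu_w from the first model, so I would appreciate comments on that as well.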