Difficulties in Capturing Individual Differences with a Hierarchical Model and Balancing Priors in Stan

Hi everyone,

I’m currently working on a hierarchical reinforcement learning model (specifically using the Pearce-Hall learning rule) in Stan. While the model fits my data reasonably well, I’ve encountered two main issues:

1. Lack of Individual Differences:

In a non-hierarchical version of my model, I can easily distinguish individual differences in parameters (like alpha and gamma). However, after introducing the hierarchical structure, these individual differences nearly disappear and all participants' parameters end up very similar. I suspect this is due to the shrinkage inherent in hierarchical modeling.

Here is my model:

data {
  int<lower=1> nSubjects;
  int<lower=1> nTrials;
  int<lower=1,upper=2> choice[nSubjects, nTrials];     
  real<lower=0, upper=10> reward[nSubjects, nTrials]; 
}

transformed data {
  vector[2] initV;  // initial values for V
  initV = rep_vector(0.5, 2);
}

parameters {
  // Hyper(group)-parameters
  vector[4] mu_pr;
  vector<lower=0>[4] sigma;

  // Subject-level raw parameters (non-centered / "Matt trick")
  vector[nSubjects] alpha_pr;    // learning rate (initial associability)
  vector[nSubjects] gamma_pr;    // Pearce-Hall associability update weight
  vector[nSubjects] c_pr;        // scaling of the value update
  vector[nSubjects] tau_pr;      // inverse temperature
}

transformed parameters {
  // subject-level parameters
  vector<lower=0, upper=1>[nSubjects] alpha;
  vector<lower=0, upper=1>[nSubjects] gamma;
  vector<lower=0, upper=1>[nSubjects] c;
  vector<lower=0, upper=20>[nSubjects] tau;


  for (i in 1:nSubjects) {
    alpha[i] = Phi_approx(mu_pr[1] + sigma[1] * alpha_pr[i]);
    gamma[i] = Phi_approx(mu_pr[2] + sigma[2] * gamma_pr[i]);
    c[i]     = Phi_approx(mu_pr[3] + sigma[3] * c_pr[i]);
    tau[i]   = Phi_approx(mu_pr[4] + sigma[4] * tau_pr[i]) * 20;
  }
}

model {
  // Hyperparameters
  mu_pr ~ normal(0, 1.0);
  sigma ~ normal(0, 0.2);

  // Individual-level raw parameters
  alpha_pr ~ normal(0, 1.0);
  gamma_pr ~ normal(0, 1.0);
  c_pr ~ normal(0, 1.0);
  tau_pr ~ normal(0, 1.0);

  for (s in 1:nSubjects) {
    vector[2] v;    // value estimates
    real pe;        // prediction error
    real k;         // dynamic learning rate (associability)
    v = initV;      // initialize value estimates
    k = alpha[s];   // associability starts at the subject's learning rate

    for (t in 1:nTrials) {
      choice[s,t] ~ categorical_logit(tau[s] * v);  // softmax choice rule

      // Compute prediction error
      pe = reward[s,t] - v[choice[s,t]];

      // Update learning rate using Pearce-Hall rule
      k = gamma[s] * fabs(pe) + (1 - gamma[s]) * k;

      // Update value estimates
      v[choice[s,t]] = v[choice[s,t]] + k * pe * c[s];
    }
  }
}
  
generated quantities {

  real log_lik[nSubjects];
  int y_pred[nSubjects, nTrials];
  
  y_pred = rep_array(-999, nSubjects, nTrials);  // Initialize prediction array

  { // local section, this saves time and space
    for (s in 1:nSubjects) {
      vector[2] v;    // value estimates
      real pe;        // prediction error
      real k;     // dynamic learning rate
      v = initV;
      k = alpha[s];
      log_lik[s] = 0;

      for (t in 1:nTrials) {
        log_lik[s] += categorical_logit_lpmf(choice[s, t] | tau[s] * v);
        y_pred[s, t] = categorical_rng(softmax(tau[s] * v));
        
        // Compute prediction error
        pe = reward[s,t] - v[choice[s,t]];

        // Update learning rate using Pearce-Hall rule
        k = gamma[s] * fabs(pe) + (1 - gamma[s]) * k;
        
        // Update value estimates
        v[choice[s,t]] = v[choice[s,t]] +  k * pe * c[s]; 
      }
    }    
  }
}

2. Challenges with Prior Selection:

To increase individual variability, I tried more diffuse priors, for example a Cauchy(0, 2) on sigma:

  // Hyperparameters
  mu_pr  ~ normal(0, 1);
  sigma ~ cauchy(0,2);

However, this led to poor convergence: R-hat values rose above 1.1 and the sampler mixed poorly. Conversely, tighter priors forced the subject-level parameters to cluster around the group mean, making it difficult to recover distinct individual differences.
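As an intermediate option I also considered something like a half-Student-t on sigma (just a sketch; the degrees of freedom and scale here are guesses I have not validated):

  model {
    mu_pr ~ normal(0, 1);
    // heavier tails than normal(0, 0.2), lighter than cauchy(0, 2);
    // sigma is declared <lower=0>, so this acts as a half-Student-t
    sigma ~ student_t(3, 0, 0.5);
    // ... rest of the model unchanged ...
  }

but I am unsure how to choose these scales in a principled way.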

I’m caught in a dilemma:

  • If I use priors that are too tight, parameters shrink too much, masking individual differences.
  • If I loosen the priors, the model becomes hard to fit, with poor convergence indicators.

I’d greatly appreciate any suggestions or insights on how to address these challenges. For instance, are there alternative parameterizations, prior choices, or modeling strategies that can help my hierarchical model better reflect individual differences while maintaining good convergence properties?

Thank you very much for your help!

This kind of shrinkage can happen if there’s not enough information from each participant to inform their estimates, such that most of the information is coming from the group.
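To make the intuition concrete, in the simplest conjugate (normal-normal) case the shrinkage is explicit: the posterior mean for subject $s$ is a precision-weighted average of that subject's own data mean and the group mean,

$$\hat{\theta}_s \approx \frac{(n_s/\sigma^2)\,\bar{y}_s + (1/\tau^2)\,\mu}{n_s/\sigma^2 + 1/\tau^2},$$

so with few trials per subject (small $n_s$) or a small group-level $\tau$, every $\hat{\theta}_s$ gets pulled toward $\mu$. Your model is not conjugate, but the same mechanism is at work.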

How many trials per person and how many participants do you have?

Do you get good parameter recovery from the non-hierarchical model? I would be suspicious of those results given the amount of shrinkage in the hierarchical model.

One way to get some useful individual variation, and a better approach in this situation anyway, is to estimate the individual differences you care about directly within the model rather than correlating shrunken point estimates afterwards. See "Improving the Reliability of Computational Analyses: Model-Based Planning and Its Relationship With Compulsivity" and "Sufficient reliability of the behavioral and computational readouts of a probabilistic reversal learning task" (Behavior Research Methods).
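As a sketch of what that could look like (the covariate `x` and weight `beta_alpha` are hypothetical names, not part of your model), you could let a per-subject covariate shift the probit-scale learning rate:

  data {
    // ... existing data block, plus:
    vector[nSubjects] x;  // hypothetical per-subject covariate (e.g., a questionnaire score)
  }
  parameters {
    // ... existing parameters, plus:
    real beta_alpha;  // effect of x on the probit-scale learning rate
  }
  transformed parameters {
    for (i in 1:nSubjects) {
      alpha[i] = Phi_approx(mu_pr[1] + beta_alpha * x[i] + sigma[1] * alpha_pr[i]);
    }
    // ... other parameters as before ...
  }
  model {
    beta_alpha ~ normal(0, 1);
    // ... rest unchanged ...
  }

That way the covariate effect is estimated jointly with everything else, with uncertainty propagated, instead of being computed from shrunken per-subject estimates after the fact.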

Vanessa