Multiple likelihoods

Hi Everyone,

I am trying to understand whether it is possible and reasonable to fit a model with two dependent variables using two different likelihoods. Would something like this make sense to you?

model {
  target += categorical_logit_lpmf(y1 | eta);
  target += exp_mod_normal_lpdf(y2 | mu, sigma, lambda);
}

Thanks,
Nitzan.

This is certainly possible and makes sense, but as written there’s not enough information to say whether it’s a good idea. Nothing in the snippet indicates that this model doesn’t decompose into two entirely different models that can be fit independently. If that’s the case, you’re probably better off fitting them independently; that way, if there are any issues with sampling (e.g. due to nasty posterior geometry) in just one of the two models, you’ll immediately know which model is the problem, and you’ll still get good inference from the unproblematic one. (One caveat would be if you have some super slick trick that lets you vectorize part of the computation across both models together, yielding a performance gain, but nothing in your question suggests that’s the case.)

On the other hand, it’s possible that there’s more model structure that you aren’t showing us, such that parameters in one model depend on parameters from the other, either deterministically or probabilistically. In that case, fitting the two models jointly as you outline above is definitely the way to go.
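To make the shared-parameter case concrete, here is a minimal sketch of a joint model in which the same choice utilities eta enter both likelihoods. Apart from your y1, y2, and eta, the names here (N, K, alpha_rt, b) are placeholders I made up, not a suggestion for your actual data structure:

data {
  int<lower=1> N;                          // number of trials
  int<lower=2> K;                          // number of choice options
  array[N] int<lower=1, upper=K> y1;       // observed choices
  vector[N] y2;                            // observed reaction times
}
parameters {
  vector[K] eta;                           // option utilities, shared by both likelihoods
  real alpha_rt;                           // baseline reaction time
  real b;                                  // effect of the utility difference on mean RT
  real<lower=0> sigma;                     // ex-Gaussian scale
  real<lower=0> lambda;                    // ex-Gaussian rate
}
model {
  eta ~ normal(0, 1);
  alpha_rt ~ normal(0, 1);
  b ~ normal(0, 1);
  sigma ~ normal(0, 0.5);
  lambda ~ normal(0, 0.5);
  // choice likelihood: all trials share the same utilities
  y1 ~ categorical_logit(eta);
  // RT likelihood: the mean shifts with the utility difference between
  // options 1 and 2, so the RT data also inform eta
  y2 ~ exp_mod_normal(alpha_rt + b * (eta[1] - eta[2]), sigma, lambda);
}

Once a parameter appears in both likelihoods like this, the two submodels no longer decompose, and fitting them jointly is the right approach.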


That's great - thank you so much. Yes, the two models are dependent (I was trying to make my question shorter - sorry for that). What I’m trying to do is fit choices and reaction times simultaneously using RL Q-learning.

transformed parameters {
  // subject-level parameters, built from the auxiliary (non-centered) parameters
  vector<lower=0, upper=1>[Nsubjects]  alpha;   // learning rate
  vector<lower=0, upper=10>[Nsubjects] beta;    // softmax inverse temperature
  vector<lower=0, upper=1>[Nsubjects]  mu;      // ex-Gaussian location for RTs
  vector<lower=0, upper=1>[Nsubjects]  sigma;   // ex-Gaussian scale for RTs
  vector<lower=0, upper=1>[Nsubjects]  lambda;  // ex-Gaussian rate for RTs

  for (subject in 1:Nsubjects) {
    alpha[subject]  = Phi_approx(mu_aux[1] + sigma_aux[1] * alpha_individual_aux[subject]);
    beta[subject]   = Phi_approx(mu_aux[2] + sigma_aux[2] * beta_individual_aux[subject]) * 10;
    mu[subject]     = Phi_approx(mu_aux[3] + sigma_aux[3] * mu_individual_aux[subject]);
    sigma[subject]  = Phi_approx(mu_aux[4] + sigma_aux[4] * sigma_individual_aux[subject]);
    lambda[subject] = Phi_approx(mu_aux[5] + sigma_aux[5] * lambda_individual_aux[subject]);
  }
}

model { 

  // population level priors (hyper-parameters)
  mu_aux  ~ normal(0, 1);            
  sigma_aux ~ normal(0, 0.2);        


  // individual level priors (subjects' parameters)
  alpha_individual_aux ~ normal(0, 1);
  beta_individual_aux ~ normal(0, 1);
  mu_individual_aux ~ normal(0, 1);
  sigma_individual_aux ~ normal(0, 1);
  lambda_individual_aux ~ normal(0, 1);

  //Likelihood function per subject per trial

  for (subject in 1:Nsubjects){
    vector[Narms] Qcard;  
    Qcard=Qvalue_initial;

      for (trial in 1:Ntrials_per_subject[subject]){

        //likelihood: increment the joint log density with both the choice and RT terms
        target += categorical_logit_lpmf(action[subject, trial] | beta[subject] * Qcard);
        target += exp_mod_normal_lpdf(rt[subject, trial] | mu[subject] + beta2 * (Qcard[1] - Qcard[2]),
                                      sigma[subject], lambda[subject]);

        //Qvalues update
        Qcard[action[subject,trial]] += alpha[subject] * (reward[subject,trial] - Qcard[action[subject,trial]]);
      } 
  }
}
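One note: the excerpt references several objects that must be declared in blocks not shown here (Nsubjects, Narms, Ntrials_per_subject, action, reward, rt, Qvalue_initial, the *_aux vectors, and beta2, which also still needs a prior, e.g. beta2 ~ normal(0, 1), or it gets an implicit flat prior). Purely as a guess at those missing pieces, data and parameters blocks along these lines would make the excerpt compile:

data {
  int<lower=1> Nsubjects;
  int<lower=2> Narms;
  int<lower=1> Ntrials;                                      // maximum trials per subject
  array[Nsubjects] int<lower=1, upper=Ntrials> Ntrials_per_subject;
  array[Nsubjects, Ntrials] int<lower=1, upper=Narms> action; // chosen option per trial
  array[Nsubjects, Ntrials] real reward;                      // observed rewards
  array[Nsubjects, Ntrials] real<lower=0> rt;                 // observed reaction times
  vector[Narms] Qvalue_initial;                               // starting Q-values
}
parameters {
  vector[5] mu_aux;                   // population means: alpha, beta, mu, sigma, lambda
  vector<lower=0>[5] sigma_aux;       // population scales
  vector[Nsubjects] alpha_individual_aux;
  vector[Nsubjects] beta_individual_aux;
  vector[Nsubjects] mu_individual_aux;
  vector[Nsubjects] sigma_individual_aux;
  vector[Nsubjects] lambda_individual_aux;
  real beta2;                         // weight of the Q-value difference in the RT mean
}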

Yes, very much so. For some detailed discussion see (What's the Probabilistic Story) Modeling Glory?.
