# Drift diffusion model (DDM) fitting issues with rstan

Dear Stan experts,

I’m trying to apply the classical drift diffusion model (DDM) to my data (a binary-choice task: accept or reject), using hierarchical Bayesian estimation via rstan.

My model is based on existing DDM examples (e.g., the hBayesDM toolbox, and the wiener distribution docs; see: https://github.com/stan-dev/stan/issues/1576). The only modification I made is that the drift rate (v) is modulated by trial-wise values (i.e., v = alpha*P_S + beta*P_O; see the model below for details).
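Written out with the same symbols as the model code, the trial-wise drift rate I intend is simply the utility difference between the two options:

$$
v_{i,t} \;=\; U_{\text{accept}} - U_{\text{reject}} \;=\; \alpha_i \, P_S[i,t] + \beta_i \, P_O[i,t], \qquad U_{\text{reject}} = 0,
$$

so on each trial the evidence drifts toward the "accept" boundary at a rate set by the weighted payoffs.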

Unfortunately, the estimation is quite poor (the Rhat values of the estimated parameters are much larger than 1, with n_sample = 3000, n_warmup = 2000, n_chain = 4). There are also always warnings about rejected initial values at the beginning of sampling.

Please find my Stan model code below:

```stan
data {
int <lower=1> S; // number of subjects
int <lower=1> T; // maximum number of trials across subjects
int <lower=1, upper=T> Tsbj[S]; // numbers of valid trials per sbj
int <lower=0> P_S[S,T]; // trial-wise payoff for S
int <lower=0> P_O[S,T]; // trial-wise payoff for O
int <lower=0,upper=1> IsAccept[S,T]; // 1 for accept, 0 for reject
real <lower=0> DT[S,T]; // trial-wise decision time (in seconds)
real minDT[S];       // minimum RT for each subject of the observed data
}

parameters {
// Declare all parameters as vectors for vectorizing
// Hyper(group)-parameters
vector[5] mu_p; // group-level means, one per subject-level parameter
vector<lower=0>[5] sigma; // group-level scales, one per subject-level parameter

// Subject-level raw parameters (for the non-centered parameterization, i.e., the "Matt trick")
vector[S] alpha_pr;
vector[S] beta_pr;
vector[S] bound_pr; // boundary for acceptance
vector[S] nonDT_pr; // non-decision time (in s)
vector[S] SP_pr; // starting point or initial bias

}

transformed parameters {
// Transform subject-level raw parameters
vector<lower=-20,upper=20>[S] alpha; // weights on the payoff for S [-20,20]
vector<lower=-20,upper=20>[S] beta; // weights on the payoff for O [-20,20]
vector<lower=0,upper=100>[S] bound; // boundary separation [0,100]
vector<lower=0.1,upper=max(minDT)>[S] nonDT; // non-decision time [0.1, minDT[i]]; 0.1 s as the biological lower bound
vector<lower=0,upper=1>[S] SP; // starting point or initial bias [0,1]

for (i in 1:S) {
alpha[i] = -20 + Phi_approx( mu_p[1] + sigma[1] * alpha_pr[i] )*40;
beta[i] = -20 + Phi_approx( mu_p[2] + sigma[2] * beta_pr[i] )*40;
bound[i] = Phi_approx( mu_p[3] + sigma[3] * bound_pr[i] )*100;
nonDT[i]  = Phi_approx( mu_p[4] + sigma[4] * nonDT_pr[i] )*(minDT[i]-0.1) + 0.1;
SP[i]  = Phi_approx( mu_p[5] + sigma[5] * SP_pr[i] );
}
}

model {
// Hyperparameters
mu_p  ~ normal(0, 1);
sigma ~ cauchy(0, 5);

// individual parameters
alpha_pr ~ normal(0, 1);
beta_pr ~ normal(0, 1);
bound_pr ~ normal(0, 1);
nonDT_pr ~ normal(0, 1);
SP_pr ~ normal(0, 1);

for (i in 1:S) {
// Define values
real U_Accept; // U_Accept is the utility for the accept option
real U_Reject; // U_Reject is the utility for the reject option

for (t in 1:(Tsbj[i])) {

// Calculating utility (served as drift rate)
U_Accept = alpha[i]*P_S[i,t] + beta[i]*P_O[i,t];
U_Reject = 0;
if (IsAccept[i,t] == 1) {
DT[i,t] ~ wiener(bound[i], nonDT[i], SP[i], (U_Accept - U_Reject));
} else {
DT[i,t] ~ wiener(bound[i], nonDT[i], 1 - SP[i], -1*(U_Accept - U_Reject));
}

} // end of trial
} //end of subjects
} // end of model section

``````
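As a sanity check of the accept/reject coding outside of Stan, here is a small Python sketch (function names and example numbers are mine, purely illustrative) of the mapping the likelihood uses: a "reject" trial is treated as a hit of the opposite boundary by mirroring the starting point and negating the drift.

```python
def drift_rate(alpha, beta, p_self, p_other):
    """Trial-wise drift: utility of accepting minus utility of rejecting (0)."""
    return alpha * p_self + beta * p_other

def wiener_args(bound, nonDT, sp, drift, is_accept):
    """Return the (boundary, non-decision time, bias, drift) tuple passed to
    wiener(): reject responses are re-coded as hits of the opposite boundary
    by flipping the bias and the sign of the drift."""
    if is_accept:
        return (bound, nonDT, sp, drift)
    return (bound, nonDT, 1.0 - sp, -drift)

v = drift_rate(alpha=0.5, beta=-0.2, p_self=8, p_other=10)  # 0.5*8 - 0.2*10 = 2.0
print(wiener_args(1.5, 0.3, 0.6, v, is_accept=True))
print(wiener_args(1.5, 0.3, 0.6, v, is_accept=False))  # bias -> 0.4, drift -> -2.0
```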

I would like to make sure that my code is correct (especially the new drift rate). Thanks a lot for your help!

Best,
Yang

Dear Yang,

I hope I am not too late :D. The sampling issues you encountered were addressed by:

Ahn, Woo-Young, Nathaniel Haines, and Lei Zhang. 2016. “Revealing Neuro-Computational Mechanisms of Reinforcement Learning and Decision-Making with the hBayesDM Package.” bioRxiv . Cold Spring Harbor Laboratory. doi:10.1101/064287.

Check out their package and how they solved it.

I extended their framework to include trial-level estimates for inter-trial variabilities. You can find my package here: https://github.com/Seneketh/StanDDM . You can also find my master's thesis here: http://fse.studenttheses.ub.rug.nl/19786/ , where I explain things in more detail.

Let me know if you have any questions.

All the best,

Marco


This looks like awesome work @seneketh, thanks for your contributions!

Hey @mike-lawrence !

Many thanks for the kind words. At the time, I was pressed for results and also quite lost, as I was teaching myself the techniques. Nowadays I would include the following improvements:

I was really unhappy with the priors, but, well, you learn today so that you can model a hundred models better in the future.


Have you ever tried this? I would be very interested to hear about it.

Thanks for the work you have put into implementing these models in Stan.

Maybe this warning is caused by outliers? The log likelihood is extremely small (the density is nearly zero) for the outlier trials.
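Both effects can be seen directly from the Wiener first-passage density. Below is a rough Python sketch (my own simplified implementation of the small-time series from Navarro & Fuss, 2009, truncated at K terms; it is not Stan's actual `wiener_lpdf`, and a full implementation would switch to the large-time series for long RTs): the density is exactly zero whenever the RT is at or below the non-decision time (which is what rejects initial values when nonDT starts too high), and it becomes vanishingly small for outlier RTs.

```python
import math

def wiener_lpdf_lower(t, bound, ndt, bias, drift, K=10):
    """Approximate log density of hitting the LOWER boundary at time t,
    via the small-time series of Navarro & Fuss (2009), truncated at +/-K.
    Returns -inf when t <= ndt: a non-positive decision time has zero density."""
    dt = t - ndt
    if dt <= 0:
        return float("-inf")
    u = dt / bound ** 2  # time normalized by the squared boundary separation
    s = sum((bias + 2 * k) * math.exp(-((bias + 2 * k) ** 2) / (2 * u))
            for k in range(-K, K + 1))
    f = s / math.sqrt(2 * math.pi * u ** 3)  # standardized (zero-drift) density
    # rescale and reintroduce the drift term
    dens = f / bound ** 2 * math.exp(-drift * bound * bias - drift ** 2 * dt / 2)
    return math.log(dens) if dens > 0 else float("-inf")

print(wiener_lpdf_lower(0.2, 1.5, 0.3, 0.5, 2.0))  # RT below nonDT: -inf
print(wiener_lpdf_lower(0.6, 1.5, 0.3, 0.5, 2.0))  # typical RT: finite log density
print(wiener_lpdf_lower(5.0, 1.5, 0.3, 0.5, 2.0))  # outlier RT: hugely negative
```

So a single outlier RT (or an RT shorter than the current non-decision time) can push the total log likelihood toward -inf, which shows up as rejected proposals.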

Just in case: Be aware that the original post is two years old, and the last one, one year old. I don’t think @tiagocc is still working on this…