Need help with evaluating the log probability at the initial value for drift diffusion model

Good afternoon! I’m a complete newbie to Stan, and I’m trying to run a drift diffusion model whose log probability is computed with wiener_lpdf. Despite adjusting parameters and initialization, I keep getting this error:

Chain 1: Rejecting initial value:
Chain 1:   Error evaluating the log probability at the initial value.
Chain 1: Exception: wiener_lpdf: Random variable  = 0.241989, but must be greater than nondecision time = 0.509386 (in 'string', line 85, column 6 to column 82).

Eventually the error ended with:

Chain 1: 
Chain 1: Initialization between (-3, 3) failed after 100 attempts. 
Chain 1:  Try specifying initial values, reducing ranges of constrained values, or reparameterizing the model.
[1] "Error : Initialization failed."
[1] "error occurred during calling the sampler; sampling not done"

Line 85 in my Stan file is the following:

alpha_pr ~ normal(0, 1);

Underneath is my code:

data {
  int<lower=1> N;      // Number of subjects
  int<lower=0> Nr_max; // Max (across subjects) number of upper boundary responses
  int<lower=0> Ns_max; // Max (across subjects) number of lower boundary responses
  int<lower=0> risk[N];  // Number of upper boundary responses for each subj
  int<lower=0> safe[N];  // Number of lower boundary responses for each subj
  real<lower=0> RTr[N, Nr_max];  // Risky choice response times (NAs counted as 0)
  real<lower=0> RTs[N, Ns_max];  // Safe choice response times (NAs counted as 0)
  real minRT[N];       // minimum RT for each subject of the observed data
  real RTbound;        // lower bound of RT across all subjects (e.g., 0.1 second)
  int<lower=0> RTrn_missing; // number of instances of missing data in RTr
  int<lower=0> RTsn_missing; // number of instances of missing data in RTs
  int RTrindices_of_missing[RTrn_missing,2] ; // matrix of row and column of missing data in RTr
  int RTsindices_of_missing[RTsn_missing,2]; // matrix of row and column of missing data in RTs
}

parameters {
  // Declare all parameters as vectors for vectorizing
  // Hyper(group)-parameters
  vector[4] mu_pr;
  vector<lower=0>[4] sigma;

  // Subject-level raw parameters (for Matt trick)
  vector[N] alpha_pr; // alpha (a): Boundary separation
  vector[N] beta_pr; // beta (b): Initial Bias
  vector[N] delta_pr; // delta (v): Drift Rate
  vector[N] tau_pr; // Nondecision time (Can't be faster than 0.1 seconds)
  //imputed: vector of imputed values for missing values
  vector<lower=0>[RTrn_missing] RTr_imputed;
  vector<lower=0>[RTsn_missing] RTs_imputed;
}

transformed parameters {
  //RTr_with_imputed: matrix of RTr observations with imputed values
  real<lower=0> RTr_with_imputed[N, Nr_max];
  //add imputed values to non-missing
  RTr_with_imputed = RTr;
  for (i in 1:RTrn_missing){
    RTr_with_imputed[
      RTrindices_of_missing[i,1]
      , RTrindices_of_missing[i,2]
    ] = RTr_imputed[i];
  }
  //RTs_with_imputed: matrix of RTs observations with imputed values
  real<lower=0> RTs_with_imputed[N, Ns_max];
  //add imputed values to non-missing
  RTs_with_imputed = RTs;
  for (i in 1:RTsn_missing){
    RTs_with_imputed[
      RTsindices_of_missing[i,1]
      , RTsindices_of_missing[i,2]
    ] = RTs_imputed[i];
  }
  
  // Transform subject-level raw parameters
  vector<lower=0>[N]                         alpha; // boundary separation
  vector<lower=0, upper=1>[N]                beta;  // initial bias
  vector[N]                                  delta; // drift rate
  vector<lower=RTbound, upper=max(minRT)>[N] tau; // nondecision time

  for (i in 1:N) {
    beta[i] = Phi_approx(mu_pr[2] + sigma[2] * beta_pr[i]);
    tau[i]  = Phi_approx(mu_pr[4] + sigma[4] * tau_pr[i]) * (minRT[i] - RTbound) + RTbound;
    alpha[i] = exp(mu_pr[1] + sigma[1] * alpha_pr[i]);
    delta[i] = mu_pr[3] + sigma[3] * delta_pr[i];
    }
}

model {
  // Hyperparameters
  mu_pr  ~ normal(0, 1);
  sigma ~ normal(0, 0.2);

  // Individual parameters for non-centered parameterization
  alpha_pr ~ normal(0, 1);
  beta_pr  ~ normal(0, 1);
  delta_pr ~ normal(0, 1);
  tau_pr   ~ normal(0, 1);

  // Begin subject loop
  for (i in 1:N) {
      // Response time distributed along wiener first passage time distribution
      RTr_with_imputed[i, :risk[i]] ~ wiener(alpha[i], tau[i], beta[i], delta[i]);
      // Response time distributed along wiener first passage time distribution
      RTs_with_imputed[i, :safe[i]] ~ wiener(alpha[i], tau[i], 1-beta[i], -delta[i]);
  }
}

generated quantities {
  // For group level parameters
  real<lower=0>                         mu_alpha; // boundary separation
  real<lower=0, upper=1>                mu_beta;  // initial bias
  real                                  mu_delta; // drift rate
  real<lower=RTbound, upper=max(minRT)> mu_tau; // nondecision time

  // For log likelihood calculation
  real log_lik[N];
  
  // Assign group level parameter values
  mu_alpha = exp(mu_pr[1]);
  mu_beta  = Phi_approx(mu_pr[2]);
  mu_delta = mu_pr[3];
  mu_tau   = Phi_approx(mu_pr[4]) * (mean(minRT)-RTbound) + RTbound;

  { // local section, this saves time and space
    // Begin subject loop
    for (i in 1:N) {
      log_lik[i] = wiener_lpdf(RTr_with_imputed[i, :risk[i]] | alpha[i], tau[i], beta[i], delta[i]);
      log_lik[i] += wiener_lpdf(RTs_with_imputed[i, :safe[i]] | alpha[i], tau[i], 1-beta[i], -delta[i]);
    }
  }
}

The model may look familiar to many of you because it’s an edit of the choiceRT_ddm.stan code in the hBayesDM package. I have added the “imputed” parts because my data contain missing (NA) values, which I imputed following the guide from the YouTube link below:

Thank you for the help in advance! Sorry if the question was unclear!

Welcome to the forums! Numerical issues at initialization happen sometimes and can be quite annoying to troubleshoot. Often the solution is to supply plausible initial values yourself, but these errors can also occur due to bugs in your code. If you’re confident that the problem appeared after including the imputed values, it might be worthwhile to page @mike-lawrence himself and ask if he has any tips specific to this situation.
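To make the “supply initial values yourself” route concrete, here is a minimal sketch in Python (the dimensions are placeholders, and the cmdstanpy/rstan calls in the comments are assumptions about how you’re running the model). The key point is to start every imputed RT above `max(minRT)`, which is the upper bound on `tau`, so `wiener_lpdf` never sees an RT below the nondecision time at initialization:

```python
import json

# Hypothetical dimensions -- substitute the values from your own data list
N = 20
RTrn_missing = 5
RTsn_missing = 3
minRT = [0.45] * N  # per-subject minimum RT
RTbound = 0.1

# Start every imputed RT above max(minRT), the upper bound on tau, so the
# constraint "random variable > nondecision time" holds at the first evaluation.
safe_rt = max(minRT) + 0.2

inits = {
    "mu_pr": [0.0, 0.0, 0.0, 0.0],
    "sigma": [0.1, 0.1, 0.1, 0.1],
    "alpha_pr": [0.0] * N,
    "beta_pr": [0.0] * N,
    "delta_pr": [0.0] * N,
    "tau_pr": [0.0] * N,
    "RTr_imputed": [safe_rt] * RTrn_missing,
    "RTs_imputed": [safe_rt] * RTsn_missing,
}

with open("inits.json", "w") as f:
    json.dump(inits, f)

# Then, e.g., cmdstanpy: CmdStanModel(...).sample(data=..., inits="inits.json")
# or rstan: stan(..., init = function() list(...))
```

Zeros for the `_pr` parameters put every subject at the group mean on the unconstrained scale, which is usually a safe, central starting point for a non-centered parameterization.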

Hm, I don’t think you need to be doing missing value imputation at all; possibly you’ve added that because you have ragged data and aren’t familiar with how to convert hBayesDM’s wide-format data design to long format? Missing value imputation is only really necessary when the likelihood is multivariate.
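For illustration, here is a sketch of that wide-to-long conversion in Python (variable names are made up; the analogous reshape is straightforward in R). Each subject’s observed RTs are concatenated into one flat vector plus a per-subject count, so the Stan model can loop over segments of real data and no placeholder values ever enter the likelihood:

```python
import math

# Hypothetical wide-format data: rows = subjects, NaN-padded to the max
# number of responses per subject (mirrors RTr in the Stan data block).
RTr_wide = [
    [0.52, 0.61, 0.48],
    [0.55, math.nan, math.nan],
    [0.47, 0.59, math.nan],
]

# Long format: one flat RT vector plus per-subject counts. Stan can then
# slice rt_flat with segment() or cumulative counts -- no imputation needed.
rt_flat = []
n_obs = []
for row in RTr_wide:
    obs = [rt for rt in row if not math.isnan(rt)]
    n_obs.append(len(obs))
    rt_flat.extend(obs)

print(n_obs)    # -> [3, 1, 2]   number of valid responses per subject
print(rt_flat)  # all observed RTs, subjects concatenated
```

On the Stan side, `n_obs` plus running offsets (or `segment()`) replaces the `[i, :risk[i]]` slicing over the NA-padded matrix.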