Error in object@stan_args[[1]] : subscript out of bounds

Hi there,
I am trying to fit a reference-point model inspired by prospect theory to agents performing a risky-choice task (e.g. choosing between a lottery of 100 with p = 0.5 and a guaranteed 10). The Stan model appears to be syntactically correct, but fitting it keeps failing with Error in object@stan_args[[1]] : subscript out of bounds, and I can't figure out why. My model is below. Any help is appreciated, many thanks!

// simple model fitting the individual as prospect-theory agent with a reference point
functions {
  real subj_utility(real kappa, real alpha, real ref, real x){
    real u; // subjective utility
    if (ref == 0) {
      u = x ^ alpha;
    } else if (x > ref) {
      u = (x - ref) ^ alpha;
    } else if (x < ref) {
      u = -kappa * (ref - x) ^ alpha;
    }
    return u;
  }
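  // returns the logit of P(choose lottery): beta times the difference in
  // subjective value between the lottery and the surebet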
  real choiceprob(real kappa, real alpha, real beta, real ref, real lott_value, real lott_prob, real sb_value, real rew_multi) {
    real y;
    real v_sb; // subjective value of sb 
    real v_lott; // subjective value of lottery
    
    v_sb = subj_utility(kappa, alpha, ref, sb_value*rew_multi);
    v_lott = subj_utility(kappa, alpha, ref, lott_value*rew_multi) * lott_prob + subj_utility(kappa, alpha, ref, 0) * (1 - lott_prob);
    y = beta * (v_lott - v_sb);
    return y;
  }
}
data {
  int<lower=0> N; // Number of trials we have
  vector[N] lott_value; // Lottery value for that choice
  vector[N] lott_prob; // Lottery probabilities for that choice
  vector[N] sb_value; // Surebet values for that choice
  vector[N] rew_multi; // rew_multi values
  vector[N] prev_reward; // previous reward values
  int<lower=0,upper=1> y[N]; // Choices we observe (1 if they pick lottery)
}
parameters {
  real<lower=0> kappa; // loss aversion
  real<lower=0> alpha; // exponent of utility function
  real<lower=0> beta; // perceptual sensitivity, the same beta as in beta-rho
  real<lower=0> ref_0; // reference point value if reward(t-1) == 0
  real<lower=0> ref_1; // reference point value if reward(t-1) != 0
}
model {
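  // thetas[n] is the logit of P(choose lottery) on trial n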
  vector[N] thetas;
  // kappa ~ normal(1,2);
  // alpha ~ normal(0.5,1);
  // beta ~ normal(0.8,1);
  // ref_0 ~ normal(15,10);
  // ref_1 ~ normal(15,10);
  for(n in 1:N){
    if (prev_reward[n] == 0) {
      thetas[n] = choiceprob(kappa, alpha, beta, ref_0, lott_value[n], lott_prob[n], sb_value[n], rew_multi[n]);
    } else if (prev_reward[n] > 0) {
      thetas[n] = choiceprob(kappa, alpha, beta, ref_1, lott_value[n], lott_prob[n], sb_value[n], rew_multi[n]);
    }
  }
  y ~ bernoulli_logit(thetas);
}
generated quantities {
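  // pointwise log-likelihood, e.g. for model comparison with loo or WAIC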
  vector[N] log_lik;
  for(n in 1:N) {
    if (prev_reward[n] == 0) {
      log_lik[n] = bernoulli_logit_lpmf(y[n] | choiceprob(kappa, alpha, beta, ref_0, lott_value[n], lott_prob[n], sb_value[n], rew_multi[n]));
    } else if (prev_reward[n] > 0) {
      log_lik[n] = bernoulli_logit_lpmf( y[n] | choiceprob(kappa, alpha, beta, ref_1, lott_value[n], lott_prob[n], sb_value[n], rew_multi[n]));
    }
  }
}

That usually means there is some underlying error that prevents the model from sampling. Call it with chains = 1 to see all the error messages.
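
Looking at the posted model, one plausible underlying error (a guess from reading the code, not a confirmed diagnosis): the if/else-if chains leave variables unassigned in their uncovered cases, and in Stan a declared but never-assigned variable is NaN. If subj_utility is ever called with x exactly equal to ref (while ref != 0), u is returned as NaN; likewise, if any prev_reward[n] is negative, thetas[n] and log_lik[n] are never assigned. A NaN anywhere makes the log density NaN, every chain fails, and accessing the resulting empty fit object can then surface as exactly this subscript-out-of-bounds error. A minimal defensive sketch, assuming x == ref should contribute zero utility:

  real subj_utility(real kappa, real alpha, real ref, real x){
    real u;
    if (ref == 0) {
      u = x ^ alpha;
    } else if (x > ref) {
      u = (x - ref) ^ alpha; // gain relative to the reference point
    } else if (x < ref) {
      u = -kappa * (ref - x) ^ alpha; // loss, scaled by loss aversion kappa
    } else {
      u = 0; // x == ref: previously left unassigned, i.e. NaN
    }
    return u;
  }

In the model block (and the matching loop in generated quantities), a plain else guarantees thetas[n] is always set:

  for(n in 1:N){
    if (prev_reward[n] == 0) {
      thetas[n] = choiceprob(kappa, alpha, beta, ref_0, lott_value[n], lott_prob[n], sb_value[n], rew_multi[n]);
    } else { // covers prev_reward[n] > 0 and any other value
      thetas[n] = choiceprob(kappa, alpha, beta, ref_1, lott_value[n], lott_prob[n], sb_value[n], rew_multi[n]);
    }
  }

If that is not it, running with chains = 1 as suggested above should print the rejection messages that point at the offending statement.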

I'm encountering the same issue. Has this been solved, or are there any ideas about what the cause might be?