Absolutely, and thanks a lot for the input! I am modelling data from an experiment in which participants saw two consecutive stimuli, made a decision about each, and then rated their confidence. The second decision and the confidence rating both incorporate information from both stimuli, and the weighting of the information from stimulus 1 is the parameter I care about (m). This weighting is estimated separately for the second decision (m1) and for confidence (m2). Each stimulus produces an internal signal that we cannot measure, so I essentially have two latent variables on every trial (r1 and r2), and the second decision (probability of choosing right, "probR") as well as confidence depend on these in complex ways. So, unfortunately, the whole thing has gotten quite messy and slow. I compute the probability of choosing "right" in the second decision with the integration below.
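
As a rough sanity check outside Stan, the R-choice integral in the model below can be evaluated numerically, e.g. with scipy; the values of `m`, `s1`, and `s2` here are arbitrary placeholders, not estimates:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf
from scipy.stats import norm

def prob_r_after_right(m, s1, s2):
    """Numerical version of the integrals_R expression in the Stan model."""
    # integrand mirrors probR_density_R: erf((x/m + s2)/sqrt(2)) * exp(-0.5*(x - s1)^2)
    integrand = lambda x: erf((x / m + s2) / np.sqrt(2)) * np.exp(-0.5 * (x - s1) ** 2)
    val, _ = quad(integrand, 0, np.inf)
    return 0.5 + val / (2 * norm.cdf(s1) * np.sqrt(2 * np.pi))

prob = prob_r_after_right(m=1.0, s1=0.5, s2=0.5)  # placeholder values
```

This kind of standalone check is handy for confirming the normalization keeps the result in (0, 1) before debugging the Stan version.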

Sorry if this is too much detail, I find it tricky to know what is relevant to include.

```
functions {
  // Integrand for P(choice2 = "right" | choice1 = "right").
  // integrate_1d only passes real-valued parameters, so the integer index
  // is passed in theta[2] and recovered below by binary search.
  real probR_density_R(real x,        // function argument
                       real xc,       // complement of function argument
                                      // on the domain (defined later)
                       real[] theta,  // parameters
                       real[] x_r,    // data (real)
                       int[] x_i) {   // data (integer)
    real m = theta[1];
    int L = x_i[1];
    int max_val = 24 * L;
    int min_val = 1;
    real temp = theta[2];
    int range = (max_val - min_val + 1) / 2; // add 1 so truncation doesn't exclude a number
    int mid_pt = min_val + range;
    int out;
    real s1;
    real s2;
    while (range > 0) {
      if (temp == mid_pt) {
        out = mid_pt;
        range = 0;
      } else {
        // halve the search range (rounding up so range == 1 still converges)
        range = (range + 1) / 2;
        mid_pt = temp > mid_pt ? mid_pt + range : mid_pt - range;
      }
    }
    s1 = x_r[out];
    s2 = x_r[(24 * L) + out];
    return erf((x / m + s2) / sqrt(2)) * exp(-0.5 * (x - s1)^2);
  }
  // Integrand for P(choice2 = "right" | choice1 = "left"); same index
  // recovery as above, with the sign of x/m flipped.
  real probR_density_L(real x,        // function argument
                       real xc,       // complement of function argument
                                      // on the domain (defined later)
                       real[] theta,  // parameters
                       real[] x_r,    // data (real)
                       int[] x_i) {   // data (integer)
    real m = theta[1];
    int L = x_i[1];
    int max_val = 24 * L;
    int min_val = 1;
    real temp = theta[2];
    int range = (max_val - min_val + 1) / 2; // add 1 so truncation doesn't exclude a number
    int mid_pt = min_val + range;
    int out;
    real s1;
    real s2;
    while (range > 0) {
      if (temp == mid_pt) {
        out = mid_pt;
        range = 0;
      } else {
        // halve the search range (rounding up so range == 1 still converges)
        range = (range + 1) / 2;
        mid_pt = temp > mid_pt ? mid_pt + range : mid_pt - range;
      }
    }
    s1 = x_r[out];
    s2 = x_r[(24 * L) + out];
    return erf((s2 - x / m) / sqrt(2)) * exp(-0.5 * (x - s1)^2);
  }
  real partial_sum(int[] trials, int start, int end, int[] choice,
                   real[] conf, vector probR, vector model_conf) {
    return bernoulli_lpmf(choice[start:end] | probR[start:end])
           + normal_lpdf(conf[start:end] | model_conf[start:end], 0.025);
  }
}
data {
  int<lower=0> N;                    // total number of trials
  int<lower=1> L;                    // number of participants
  int<lower=0,upper=L> ll[N];        // participant index for each trial
  int<lower=0,upper=L> intMap[24*L]; // participant index for each integral
  real stim1[N];                     // coherence of stimulus 1
  real stim2[N];                     // coherence of stimulus 2
  int<lower=0,upper=1> choice1[N];   // participants' responses to decision 1
  int<lower=0,upper=1> choice[N];    // participants' responses to decision 2
  real<lower=0.5,upper=1> conf[N];   // participants' confidence ratings
  int trials[N];
  real stim1Int[24*L];
  real stim2Int[24*L];
  int levels[N];
  int<lower=1> grainsize;
}
transformed data {
  int x_i[1] = {L};
  real x_r[2*24*L] = append_array(stim1Int, stim2Int);
}
parameters {
  // group-level parameters
  real<lower=0> m1_mu; // weighting of stimulus 1 in decision
  real<lower=0> m2_mu; // weighting of stimulus 1 in confidence
  real<lower=0> b_mu;  // confidence bias
  real<lower=0> m1_sd;
  real<lower=0> m2_sd;
  real<lower=0> b_sd;
  // subject-level parameters
  real<lower=0> m1[L];
  real<lower=0> m2[L];
  real<lower=0> b[L];
  vector[N] r1; // internal representation for stimulus 1
  vector[N] r2; // internal representation for stimulus 2
}
transformed parameters {
  vector[N] probR;
  vector[N] model_conf;
  vector[24*L] integrals_R;
  vector[24*L] integrals_L;
  real<lower=0,upper=1> confPrior;
  real<lower=0,upper=1> confR;
  real<lower=0,upper=1> confL;
  real<lower=0> m1_subj;
  real ind;
  real temp;
  for (i in 1:(24*L)) {
    m1_subj = m1[intMap[i]];
    ind = i; // pass the index as a real; the integrand recovers it by binary search
    integrals_R[i] = 0.5 + (1 / (2 * Phi(stim1Int[i]))) * (1 / sqrt(2 * pi()))
                     * integrate_1d(probR_density_R, 0, positive_infinity(),
                                    {m1_subj, ind}, x_r, x_i, 1e-8);
    integrals_L[i] = 0.5 + (1 / (2 * (1 - Phi(stim1Int[i])))) * (1 / sqrt(2 * pi()))
                     * integrate_1d(probR_density_L, negative_infinity(), 0,
                                    {m1_subj, ind}, x_r, x_i, 1e-8);
  }
  for (i in 1:N) {
    if (choice1[i] == 1) {
      probR[i] = integrals_R[levels[i]];
    } else {
      probR[i] = integrals_L[levels[i]];
    }
    confPrior = Phi_approx(fabs(r1[i]) / (b[ll[i]] * m2[ll[i]]));
    temp = r2[i] / (b[ll[i]] * sqrt(2));
    if (temp <= -4) {
      temp = -4; // clip extreme negative values for numerical stability
    }
    confR = (confPrior * (1 + erf(temp))) / (1 + (2 * confPrior - 1) * erf(temp));
    confL = 1 - confR;
    if (choice[i] == 1) {
      model_conf[i] = confR;
    } else {
      model_conf[i] = confL;
    }
  }
}
model {
  // group-level priors
  m1_mu ~ lognormal(1, 1);
  m2_mu ~ lognormal(1, 1);
  b_mu ~ lognormal(1, 1);
  m1_sd ~ lognormal(1, 1);
  m2_sd ~ lognormal(1, 1);
  b_sd ~ lognormal(1, 1);
  // subject-level priors
  m1 ~ normal(m1_mu, m1_sd);
  m2 ~ normal(m2_mu, m2_sd);
  b ~ normal(b_mu, b_sd);
  // latent internal signals
  r1 ~ normal(stim1, 1);
  r2 ~ normal(stim2, 1);
  target += reduce_sum(partial_sum, trials, grainsize, choice, conf, probR, model_conf);
}
```
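
One note on the confidence part, in case it helps: the confR expression is algebraically a Bayesian update of confPrior with evidence probabilities (1 ± erf(temp))/2, so it stays in [0, 1] and confR + confL = 1 by construction. A quick check of that identity in Python (the grid of values is arbitrary):

```python
import numpy as np
from scipy.special import erf

def conf_r(conf_prior, temp):
    # mirrors the model's confidence mapping, including the clipping at -4
    e = erf(max(temp, -4.0))
    return conf_prior * (1 + e) / (1 + (2 * conf_prior - 1) * e)

for p in np.linspace(0.01, 0.99, 9):
    for t in np.linspace(-6.0, 6.0, 13):
        cr = conf_r(p, t)
        assert 0.0 <= cr <= 1.0
        # identical to the Bayes update p(1+e) / (p(1+e) + (1-p)(1-e))
        e = erf(max(t, -4.0))
        assert np.isclose(cr, p * (1 + e) / (p * (1 + e) + (1 - p) * (1 - e)))
```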

Ok, great, thanks. I made a first attempt at K-fold CV leaving one trial out, which led to very high variance. I haven't tried LOGO-CV yet, so I will look into it more and give that a try.
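
For the LOGO-CV attempt, a minimal sketch of how the folds could be assigned, holding out whole participants rather than single trials (all names and sizes here are placeholders):

```python
import numpy as np

# Hypothetical fold assignment for leave-one-group-out CV: every trial of a
# participant goes into the same fold, so whole participants are held out.
rng = np.random.default_rng(0)
L = 12                                   # number of participants (placeholder)
K = 4                                    # number of folds (L divisible by K here)
ll = rng.integers(1, L + 1, size=200)    # participant index per trial, like ll[N]
fold_of_subject = rng.permutation(np.repeat(np.arange(1, K + 1), L // K))
trial_fold = fold_of_subject[ll - 1]     # fold of each trial = fold of its participant
# check: trials from one participant never end up in different folds
assert all(len(set(trial_fold[ll == s])) <= 1 for s in range(1, L + 1))
```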

Great, thanks, that makes sense, I will do that.