Hi,
I’m working on extending a Linear Ballistic Accumulator (LBA) model to accommodate trial-level regressors. I’m currently running into a strange issue: two models that should (in my view) be completely equivalent produce different outputs. I have no idea why this is the case and feel like I might be missing something obvious. Hopefully someone can help clarify this issue.
Starting with the original, fully functional model, and focusing on one particular parameter, tau:
parameters {
...
real<lower=0> mu_tau[N_cond]; // group-level parameter
real<lower=0> sigma_tau[N_cond];
real<lower=0> tau[N, N_cond]; //individual parameter
...
}
model {
...
for (j in 1:N_cond) {
mu_tau[j] ~ normal(1, .5)T[0,];
sigma_tau[j] ~ gamma(1, 1);
}
for (i in 1:N) {
for (j in 1:N_cond) {
int n_trials;
n_trials = N_tr_cond[i, j]; // trials in this participant/condition cell
tau[i,j] ~ normal(mu_tau[j], sigma_tau[j])T[0,];
// Likelihood
RT[i, j, , 1:n_trials] ~ lba(B, A[i], v[i, j,], s[i], tau[i,j]);
}
}
}
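(As an aside on the notation: the T[0,] suffix truncates the prior at zero, i.e. the normal density is renormalized by the mass above zero. A rough plain-Python sketch of what one such statement contributes to the log density, assuming lower truncation at 0 only; this is an illustration, not Stan internals:)

```python
import math

def _Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def truncnorm_lpdf(x, mu, sigma):
    """Log density of normal(mu, sigma) truncated to (0, inf),
    i.e. what `x ~ normal(mu, sigma)T[0,]` contributes."""
    if x <= 0:
        return float("-inf")  # outside the support
    z = (x - mu) / sigma
    log_normal = -0.5 * z * z - math.log(sigma) - 0.5 * math.log(2.0 * math.pi)
    # renormalize by P(X > 0) = Phi(mu / sigma)
    return log_normal - math.log(_Phi(mu / sigma))
```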
The likelihood function looks like this:
real lba_lpdf(matrix RT, real B, real A, vector v, real s, **real tau**) {
real t;
real b;
real cdf;
real pdf;
vector[cols(RT)] prob;
real out;
real prob_neg;
b = B;
for (i in 1:cols(RT)) {
**t = RT[1, i] - tau;** **//only relevant part for our issue**
if (t > 0) {
cdf = 1;
for (j in 1:num_elements(v)) {
if (RT[2, i] == j) {
pdf = lba_pdf(t, b, A, v[j], s);
} else {
cdf = lba_cdf(t, b, A, v[j], s) * cdf;
}
}
prob_neg = 1;
for (j in 1:num_elements(v)) {
prob_neg = Phi(-v[j]/s) * prob_neg;
}
prob[i] = pdf * (1-cdf);
prob[i] = prob[i]/(1-prob_neg);
if (prob[i] < 1e-10) {
prob[i] = 1e-10;
}
} else {
prob[i] = 1e-10;
}
}
out = sum(log(prob));
return out;
}
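(For completeness, the lba_pdf and lba_cdf node functions aren’t shown above; they follow the standard single-accumulator LBA formulas — uniform start point on [0, A], drift rate drawn from N(v, s), threshold b. A plain-Python sketch of the same computation, not the exact Stan code:)

```python
import math

def _phi(x):
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def _Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def lba_pdf(t, b, A, v, s):
    """Density of one LBA accumulator finishing at decision time t."""
    ts = t * s
    z1 = (b - A - t * v) / ts
    z2 = (b - t * v) / ts
    return (1.0 / A) * (-v * _Phi(z1) + s * _phi(z1)
                        + v * _Phi(z2) - s * _phi(z2))

def lba_cdf(t, b, A, v, s):
    """Distribution function of one accumulator's finishing time at t."""
    ts = t * s
    z1 = (b - A - t * v) / ts
    z2 = (b - t * v) / ts
    return (1.0 + (b - A - t * v) / A * _Phi(z1)
                - (b - t * v) / A * _Phi(z2)
                + ts / A * _phi(z1)
                - ts / A * _phi(z2))
```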
Now, say I want the value of tau to vary per trial, as a linear function of another variable. As a first step, I’ve set up a model with an additional parameter taut, which I want to substitute for tau. For testing purposes, taut simply takes on the individual tau values. It is defined as follows:
transformed parameters {
real taut [N, N_cond, Max_tr];
for (i in 1:N) {
for (j in 1:N_cond) {
int n_trials;
n_trials = N_tr_cond[i, j];
for (t in 1:n_trials) {
**taut[i,j,t] = tau[i,j];**
}
for (tX in n_trials:Max_tr) {
taut[i,j,tX] = -2; // a bit ugly, but since the number of trials can vary per condition, this just serves as a meaningless placeholder so the output contains no NaNs
}
}
}
}
All I’ve done here is substitute the single real value of tau with an array of that value repeated once per trial.
The likelihood now changes to (changes bolded):
for (i in 1:N) {
for (j in 1:N_cond) {
int n_trials;
n_trials = N_tr_cond[i, j];
RT[i, j, , 1:n_trials] ~ lba(B, A[i], v[i, j,], s[i], **taut[i,j,1:n_trials]**);
}
}
And the likelihood function, correspondingly, to:
real lba_lpdf(matrix RT, real B, real A, vector v, real s, **real[] taut**) {
real t;
real b;
real cdf;
real pdf;
vector[cols(RT)] prob;
real out;
real prob_neg;
b = B;
for (i in 1:cols(RT)) {
**t = RT[1, i] - taut[i];**
if (t > 0) {
cdf = 1;
for (j in 1:num_elements(v)) {
if (RT[2, i] == j) {
pdf = lba_pdf(t, b, A, v[j], s);
} else {
cdf = lba_cdf(t, b, A, v[j], s) * cdf;
}
}
prob_neg = 1;
for (j in 1:num_elements(v)) {
prob_neg = Phi(-v[j]/s) * prob_neg;
}
prob[i] = pdf * (1-cdf);
prob[i] = prob[i]/(1-prob_neg);
if (prob[i] < 1e-10) {
prob[i] = 1e-10;
}
} else {
prob[i] = 1e-10;
}
}
out = sum(log(prob));
return out;
}
In case 1, I pass a single real value of tau to the likelihood function, which is then reused on every iteration. In case 2, I pass an array that should consist of the same value repeated once per trial, so each iteration should still see the same value.
From my understanding, these models should behave essentially identically. However, when I run them, the first does well, while the second never converges and usually struggles to find more than one effective sample per parameter. Can you help me understand what I am missing?
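As a quick toy check I ran outside Stan: the equivalence of the two argument forms holds in plain Python with a stand-in density (toy_dens and the RT values below are made up for illustration, not the real LBA terms):

```python
import math

def toy_dens(t):
    """Placeholder density standing in for the pdf * (1 - cdf) term."""
    return math.exp(-t) if t > 0 else 1e-10

def loglik_scalar(rts, tau):
    """Case 1: a single real tau reused on every trial."""
    return sum(math.log(max(toy_dens(rt - tau), 1e-10)) for rt in rts)

def loglik_array(rts, taut):
    """Case 2: an array of per-trial tau values."""
    return sum(math.log(max(toy_dens(rt - taut[i]), 1e-10))
               for i, rt in enumerate(rts))

rts = [0.62, 0.58, 0.71, 0.55]
tau = 0.25
# passing the scalar or the scalar repeated per trial gives the same result
print(loglik_scalar(rts, tau) == loglik_array(rts, [tau] * len(rts)))
```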
Best,
Wojciech