# Aggregated and conditional log likelihood

Hello forum. I have a question about explicit implementation of MLE in Stan.
I have a simple detection task where subjects either detect or do not detect a change. Since there is no generic likelihood function that I use here, I calculate the probability of (detection | change) in every trial:

``````
model {
  for (t in 1:Tr) {
    real p;

    if (choice[t] == 1 && change[t] > 0) {
      p = 1 - theta;
    }
    if (choice[t] == 0 && change[t] > 0) {
      p = theta;
    }
    if (choice[t] == 1 && change[t] == 0) {
      p = 1 - pow(theta, 2);
    }
    if (choice[t] == 0 && change[t] == 0) {
      p = pow(theta, 2);
    }
    target += p;
  }
}
``````

Is this the right way of thinking about it in Stan? I get a completely different `theta` from MATLAB than from Stan, so obviously something in my model is wrong.

`target` is the log probability, so you need

``````
target += log(p);
``````

Other than that, your code looks fine.
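To see why the `log` is needed: Stan's `target` accumulates the log density, so adding per-trial log probabilities is the same as taking the log of the product of the trial probabilities. A minimal Python sketch of the same bookkeeping (the `trials` data and `theta` value here are made up purely for illustration):

```python
import math

# Hypothetical (choice, change) pairs -- illustrative only, not from the thread.
trials = [(1, 1), (0, 1), (1, 0), (0, 0), (1, 1)]
theta = 0.3

def trial_prob(choice, change, theta):
    # Mirrors the per-trial probabilities in the Stan model above.
    if choice == 1 and change > 0:
        return 1 - theta
    if choice == 0 and change > 0:
        return theta
    if choice == 1 and change == 0:
        return 1 - theta ** 2
    return theta ** 2

# `target += log(p)` accumulates the log likelihood, which equals
# the log of the product of the individual trial probabilities.
log_lik = sum(math.log(trial_prob(c, ch, theta)) for c, ch in trials)
likelihood = math.prod(trial_prob(c, ch, theta) for c, ch in trials)
assert math.isclose(log_lik, math.log(likelihood))
```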


Thanks! I thought `target +=` already increments the log density.

I marked @nhuurre 's answer as solution.

OK, I guess my previous message was wrong. Thanks so much!

Oops, I think I misread your post! I thought you'd also considered this as the solution. Sorry about that. To be clear, I think Niko was pointing out the missing `log` (i.e. `target += log(p)` instead of `target += p`) and not the `+=` notation (which you used correctly), so I guess this is still the correct solution (please confirm if everything's running as expected).

Sorry for the confusion!

I think @nerpa thought `target +=` takes the logarithm automatically. (It does not.)


Ok, this makes sense, thanks! When I add `log(p)`, I get the following:

`Log probability evaluates to log(0), i.e. negative infinity.`

So I am trying to solve this now.
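For reference, `log(0)` shows up whenever a trial probability underflows to exactly zero before the log is taken, e.g. when the sampler pushes `theta` against 0 or 1. A small Python illustration (the `theta` value is invented to force the underflow):

```python
import math

theta = 1e-200        # an extreme value a sampler can wander into
p = theta ** 2        # underflows to exactly 0.0 in double precision
assert p == 0.0

try:                  # log(0): Python raises here; Stan reports -inf and rejects
    math.log(p)
    underflowed = False
except ValueError:
    underflowed = True
assert underflowed

# Working directly on the log scale keeps the value finite:
assert math.isfinite(2 * math.log(theta))
```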

Sounds like `theta` is either 0 or 1, or very close?
Might not help, but I'd write the loop like this to avoid rounding:

``````
for (t in 1:Tr) {
  if (choice[t] == 1 && change[t] > 0) {
    target += log1m(theta);
  }
  if (choice[t] == 0 && change[t] > 0) {
    target += log(theta);
  }
  if (choice[t] == 1 && change[t] == 0) {
    target += log1m(square(theta));
  }
  if (choice[t] == 0 && change[t] == 0) {
    target += 2 * log(theta);
  }
}
``````
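The point of `log1m` here: for `x` close to 0, computing `1 - x` first rounds the tiny term away, while `log1m(x)` (Python's `math.log1p(-x)`) retains it. A quick numerical check:

```python
import math

x = 1e-17                         # a tiny probability mass
assert (1.0 - x) == 1.0           # 1 - x rounds to exactly 1.0...
assert math.log(1.0 - x) == 0.0   # ...so the naive log loses the term entirely
# log1p(-x) (Stan's log1m(x)) keeps it:
assert math.isclose(math.log1p(-x), -x)
```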

That's a good suggestion, but why do you calculate `target` differently on every line?

It's just `log(p)`. Maybe it's more explicit if I write it like this:

``````
for (t in 1:Tr) {
  real log_p;
  if (choice[t] == 1 && change[t] > 0) {
    log_p = log1m(theta); // log(1 - theta)
  }
  if (choice[t] == 0 && change[t] > 0) {
    log_p = log(theta); // log(theta)
  }
  if (choice[t] == 1 && change[t] == 0) {
    log_p = log1m(square(theta)); // log(1 - theta^2)
  }
  if (choice[t] == 0 && change[t] == 0) {
    log_p = 2 * log(theta); // log(theta^2)
  }
  target += log_p;
}
``````
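As a sanity check against the MATLAB fit, the same conditional log likelihood can be evaluated outside Stan. Here is a rough Python sketch with simulated data and a crude grid-search MLE (the data generator and all names are hypothetical, not from the thread):

```python
import math
import random

random.seed(0)
true_theta = 0.4

# Simulate detection trials under the model's probabilities (hypothetical generator):
# P(detect | change) = 1 - theta, P(detect | no change) = 1 - theta^2.
trials = []
for _ in range(2000):
    change = random.random() < 0.5
    if change:
        choice = 1 if random.random() < 1 - true_theta else 0
    else:
        choice = 1 if random.random() < 1 - true_theta ** 2 else 0
    trials.append((choice, 1 if change else 0))

def log_lik(theta):
    # Same four cases as the Stan model, on the log scale.
    total = 0.0
    for choice, change in trials:
        if choice == 1 and change > 0:
            total += math.log1p(-theta)        # log(1 - theta)
        elif choice == 0 and change > 0:
            total += math.log(theta)
        elif choice == 1 and change == 0:
            total += math.log1p(-theta ** 2)   # log(1 - theta^2)
        else:
            total += 2 * math.log(theta)       # log(theta^2)
    return total

# Crude grid-search MLE over (0, 1); should land near true_theta.
grid = [i / 1000 for i in range(1, 1000)]
theta_hat = max(grid, key=log_lik)
assert abs(theta_hat - true_theta) < 0.05
```

If Stan and MATLAB still disagree after the `log(p)` fix, a cross-check like this can tell you which of the two is off.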

Oh sorry, I didn't see the link to my original code! I'll try this.

Wow, now my pystan crashes after finishing sampling with this error:
`Process finished with exit code 139 (interrupted by signal 11: SIGSEGV)`

and other models that were running fine before are crashing too!

Oh woah, a segfault? That's… unexpected. You might want to start a new thread with more detail.

Yeah - I just had a dialogue with myself in a separate thread :) It was solved by increasing the number of chains and iterations. Thanks!