Glad it helped! Per your model, I don’t intuitively get the distinction between lambda, theta, and delta, nor why you want 62:N to be constant. If you could provide some explanation of what you’re trying to do with the model then that might help.
Either way, let me make sure I understand what’s happening…
- You have a vector of probabilities `lambda`, one for each of the `N` observations.
- On one end, you transform the [0, 1] probabilities `lambda` into (-Inf, Inf) real numbers `logitlambda`, and you put normal priors on `logitlambda`.
- On the other end, you take a cumulative sum of the probabilities `lambda` and store it as `theta`.
- Finally, you exponentiate the negative cumulative sum (`exp(-theta)`) and store the result as `delta`. These are the probabilities you then use to model your data.
So there is something about the cumulative probabilities that relates to the outcome.
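To make that chain of transforms concrete, here is a minimal NumPy sketch of the pipeline the bullets describe, using made-up values for `lambda`:

```python
import numpy as np

# Made-up example probabilities (stand-ins for lambda).
lam = np.array([0.1, 0.2, 0.3])

# logit transform: (0, 1) -> (-Inf, Inf), the scale the normal priors live on.
logitlam = np.log(lam / (1 - lam))

# theta: cumulative sum of the probabilities.
theta = np.cumsum(lam)  # [0.1, 0.3, 0.6]

# delta: exponentiated negative cumulative sum; these model the data.
delta = np.exp(-theta)
```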
Based on what I’m seeing, I think I have a few reasonable suggestions on how to get what you’re looking for.
First, you should put `logitlambda` in the parameters block and then transform it to `lambda` (and so on). In Stan, you need to add a Jacobian adjustment if you make distributional assumptions about non-linear transforms of parameters (see here). I am not an expert on this, but I think it applies to your current setup, where `lambda` is the parameter and `logitlambda` is the non-linearly transformed parameter. Note that your model might run without this change and appear to work, but it could give you the wrong answer because it is sampling from the wrong posterior.
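As a rough numerical illustration of where the Jacobian term comes from (a sketch, not Stan code): if `lambda` is the declared parameter but the prior is stated on `logit(lambda)`, the log density needs the extra term `log|d logit(lambda)/d lambda| = -log(lambda) - log(1 - lambda)`. A quick check of that formula against a finite-difference derivative:

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def log_jacobian(p):
    # log |d logit(p) / dp| = log(1 / (p * (1 - p)))
    return -math.log(p) - math.log(1 - p)

p = 0.3
eps = 1e-6
# central finite-difference estimate of log |d logit(p) / dp|
numeric = math.log(abs((logit(p + eps) - logit(p - eps)) / (2 * eps)))
```

Moving `logitlambda` into the parameters block sidesteps all of this, because the prior then sits directly on the sampled parameter.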
Second, on further inspection, `temp` isn't doing anything in your model. Put another way, it never feeds back into the data model, so the data place no constraint on it. That means it isn't achieving what you want, namely a constant probability from observation 62 onwards.
Finally, I would suggest specifying only 62 elements of `lambda`/`logitlambda` rather than `N`.
So, putting it all together, here are some of the pieces you’ll want to consider…
```stan
parameters {
  // 1. specify logitlambda as a parameter
  // 2. only specify 62 elements of logitlambda
  vector[62] logitlambda;
}
transformed parameters {
  // 3. treat lambda as a transformed parameter
  vector[N] lambda;
  lambda[1:61] = inv_logit(logitlambda[1:61]);
  // 4. assign the last element of logitlambda to elements 62:N of lambda
  //    (rep_vector is needed to assign a scalar to a vector slice)
  lambda[62:N] = rep_vector(inv_logit(logitlambda[62]), N - 61);
}
model {
  // 5. put the prior directly on logitlambda; no need for temp
  logitlambda ~ normal(0, 10);
}
```
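To see what the transformed parameters block computes, here is a small NumPy sketch outside Stan, with an arbitrary `N` and stand-in draws for the 62 parameters:

```python
import numpy as np

def inv_logit(x):
    return 1 / (1 + np.exp(-x))

N = 70  # arbitrary example size, N > 62
rng = np.random.default_rng(0)
logitlambda = rng.normal(0, 1, size=62)  # stand-ins for the 62 parameters

lam = np.empty(N)
lam[:61] = inv_logit(logitlambda[:61])
lam[61:] = inv_logit(logitlambda[61])  # constant from observation 62 onward

theta = np.cumsum(lam)   # cumulative probabilities
delta = np.exp(-theta)   # probabilities used to model the data
```

Because every `lam` element is positive, `theta` is strictly increasing and `delta` strictly decreasing, which is the behavior the cumulative construction is after.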
Hopefully this provides some helpful hints for future models as well. The Jacobian/transform issue comes up regularly. Best of luck as you dive deeper into Stan.