I admit I am having trouble parsing all the details, but it appears that you assume the noise term is the difference of a normally distributed variable (v_{it}) and an exponentially distributed variable (u_{it}), right? If that’s right, then I think u_{it} - v_{it} should follow the exponentially modified normal distribution, and you should be able to avoid estimating u and v directly.
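Stan has this distribution built in as `exp_mod_normal`. A minimal sketch of how the marginalized likelihood could look, assuming the composed error enters as y = X * beta + (u - v) with a simple linear predictor (the names `X`, `beta`, `sigma`, `lambda` are placeholders for your setup):

```stan
data {
  int<lower=1> N;
  int<lower=1> K;
  matrix[N, K] X;
  vector[N] y;
}
parameters {
  vector[K] beta;
  real<lower=0> sigma;   // sd of the normal component v_it
  real<lower=0> lambda;  // rate of the exponential component u_it
}
model {
  // If y = X * beta + (u - v), the composed residual u - v is
  // exponential(lambda) + normal(0, sigma), i.e. exp-mod-normal,
  // so u and v never need to be estimated directly.
  y ~ exp_mod_normal(X * beta, sigma, lambda);
}
```

If your error is instead v - u, you'd flip the sign (e.g. model `X * beta - y` with location 0); the marginalization idea is the same.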
Parameter restrictions are a much harder problem. Generally, Gibbs/Metropolis-Hastings tricks don’t interact well with Stan’s sampler and cannot be easily implemented. You basically have two ways to approach the problem in a Stan-friendly way:
- Find a sufficiently well-behaved parametrization of your model that satisfies the inequality by construction. E.g., to satisfy the constraint \beta_1 + 2\beta_2 \leq 0, you could have:

```stan
parameters {
  real<upper=0> beta_sum; // equals beta_1 + 2 * beta_2
  real beta_1;
}
transformed parameters {
  real beta_2 = (beta_sum - beta_1) / 2;
}
```
This is a bit tedious to generalize to your case (but could definitely be done). The bigger problem is that there are many ways to build such parametrizations, and many of them could sample pretty badly with Stan. On the other hand, it allows you to enforce the constraint directly and exactly.
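One way this could generalize to a generic linear constraint a' * beta <= 0 is to sample the constrained sum and solve for one coefficient (a sketch, assuming the last entry of `a` is nonzero; `a` and `K` are placeholders for your setup):

```stan
data {
  int<lower=2> K;
  vector[K] a;  // known constraint weights; a[K] assumed nonzero
}
parameters {
  real<upper=0> constraint_value;  // equals a' * beta, forced <= 0
  vector[K - 1] beta_free;         // beta[1:(K-1)], unconstrained
}
transformed parameters {
  vector[K] beta;
  beta[1:(K - 1)] = beta_free;
  // Solve a' * beta = constraint_value for the last coefficient:
  beta[K] = (constraint_value - dot_product(head(a, K - 1), beta_free)) / a[K];
}
```

Note that whatever priors you put on `constraint_value` and `beta_free` imply a (possibly odd) prior on `beta`, which is one reason different parametrizations of the same constraint can sample very differently.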
- Use a soft constraint/penalization. In this setting you give up on satisfying the inequality exactly and instead do something like:

```stan
real constraint_lhs = ... // your complex formula here
if (constraint_lhs > 0) {
  target += normal_lpdf(constraint_lhs | 0, penalization_scale);
}
```

(the code above is for a \leq 0 constraint). The lower the `penalization_scale`, the more closely you mimic a hard constraint, but the worse your model will sample. In some cases soft constraints can perform reasonably well (e.g. Test: Soft vs Hard sum-to-zero constrain + choosing the right prior for soft constrain)
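If you go the soft route, it may also be worth monitoring how badly the constraint is violated in the posterior, e.g. (assuming `constraint_lhs` is declared in `transformed parameters` so it is in scope):

```stan
generated quantities {
  // Posterior summaries of this quantity show how often and how far
  // the soft constraint is violated; values well above 0 suggest
  // penalization_scale is too loose.
  real constraint_violation = fmax(constraint_lhs, 0);
}
```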
The `brms`-generated code is very hard to read; could you share the `brms` formula as well if you want some feedback on that?
Hope that helps a bit.