Just somewhat related, but I’m working with a similar (though simpler) model, and I found that identification/reliability improved considerably once I had covariates that could predict the probabilities. It might be worth adding that to the simulation to see if it makes a difference.
I think everything @rtrangucci posts is correct. And yes, as far as I can see the problem I mentioned is addressed in his approach (in a quite elegant way).
Good luck with your paper!
Two years later and a search of the forum turned up this post. I’d be curious to know if there are further insights at this point, as the use of the inverse Mills ratio seems highly variable across fields and applications.
Not that I know of.
Just adding a bump here. I am excited to see the extension of these econometric models into Stan.
Hi everyone! I found a small bug in the model posted by @rtrangucci . By checking the likelihood against both the Stata documentation here (https://www.stata.com/manuals15/rheckman.pdf) and the R sampleSelection package documentation here (https://cran.r-project.org/web/packages/sampleSelection/vignettes/selection.pdf), I found that there was an unnecessary second division by (sqrt(1-rho)) in the model code above.
Of course, please do reply if you disagree or if you think I’ve misunderstood!
It took me a while to find the bug because it somehow does not hurt the model’s performance too badly – in fact, the calibration is still good in the model above! I am not sure why. But I do know that if you try to generalize the model to one in which you can observe the unselected units (but the betas differ across the two types of units), you start getting bad behaviour.
Below is the fixed model and the generalization (which is sometimes called a tobit-5, but sometimes not, alas) plus R scripts for some simulation-based calibration tests that show they do well :)
fake_data_generalized_heck_montecarlo_calibration.R (4.0 KB)
fake_data_heck_montecarlo_calibration_check.R (2.8 KB)
generalized_heck.stan (1.8 KB)
heck.stan (1.1 KB)
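For anyone who wants to cross-check against the Stata manual linked above without reading Stan code, here is a minimal Python sketch of the tobit-2 log-likelihood with the single division by sqrt(1 - rho^2). This is my own illustration (the function and variable names are not from the attached files):

```python
import numpy as np
from scipy.stats import norm

def heckman_loglik(y, selected, X, Z, beta, gamma, sigma, rho):
    """Log-likelihood of the classic Heckman selection (tobit-2) model,
    following the formulas in the Stata [R] heckman manual."""
    # Selection index z_i' gamma for every unit.
    zg = Z @ gamma
    # Standardized outcome residuals; y may be anything (even NaN)
    # where selected == False, so zero it out there.
    r = np.where(selected, y - X @ beta, 0.0) / sigma
    # Selected units: normal density of y times Phi of the conditional
    # selection index -- note the single division by sqrt(1 - rho^2).
    ll_sel = (norm.logpdf(r) - np.log(sigma)
              + norm.logcdf((zg + rho * r) / np.sqrt(1.0 - rho**2)))
    # Unselected units contribute only Phi(-z_i' gamma).
    ll_unsel = norm.logcdf(-zg)
    return np.where(selected, ll_sel, ll_unsel).sum()
```

A quick sanity check: with rho = 0 this factorizes into an independent probit likelihood for selection plus a normal likelihood for the selected outcomes.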
You’re absolutely right @RachaelMeager! So sorry about that! That’s a nasty bug, and it sounds like a pain to track down. @martinmodrak, is there a way to edit very old posts? It’d be nice to correct that bug and/or to point people to the right code in @RachaelMeager’s post.
Thanks @RachaelMeager ! I recall @edjee showing me some code he’d written for dynamic panels also. Are you able to post it here Ed?
I wasn’t aware this is not possible! (As an admin, the system lets me do anything.) It turns out there is a default setting to prevent that for regular users, but I think I trust our user base enough to let anybody at trust level 2 or above edit their posts at any time. So you should be able to edit now. If not, let me know (possibly in a private message to avoid derailing the thread).
My code just uses MVN sufficient stats to speed up likelihood evaluation a lot; it’s agnostic about covariate choice, so setting lagged Y as a control gives the dynamic panel model.
Unfortunately, it’s still a mess but I’m cleaning/working on it this summer with a view to sharing.
@RachaelMeager taught me the trick so she gets double discourse brownie points.
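For anyone curious what the sufficient-stats trick buys you, here is a minimal numpy illustration of the idea (not Ed’s actual code): the iid MVN log-likelihood depends on the data only through the sample mean and covariance, so those can be computed once up front, and every subsequent likelihood evaluation is O(k^3) in the dimension rather than O(N) in the sample size.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
N, k = 500, 3
mu = np.array([1.0, -0.5, 2.0])
A = rng.normal(size=(k, k))
Sigma = A @ A.T + k * np.eye(k)  # a well-conditioned covariance
X = rng.multivariate_normal(mu, Sigma, size=N)

# Naive evaluation: one density term per observation, O(N) work
# every time the likelihood is called.
ll_naive = multivariate_normal(mu, Sigma).logpdf(X).sum()

# Sufficient-statistic evaluation: xbar and S are computed once;
# the likelihood is then a function of just these two summaries.
xbar = X.mean(axis=0)
S = (X - xbar).T @ (X - xbar) / N
Sinv = np.linalg.inv(Sigma)
_, logdet = np.linalg.slogdet(2.0 * np.pi * Sigma)
d = xbar - mu
ll_suff = -0.5 * N * (logdet + np.trace(Sinv @ S) + d @ Sinv @ d)
```

The two quantities agree up to floating-point error, which is why the trick is a pure speed-up rather than an approximation.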
In any case, I tagged you in completely the wrong old thread – the two clearly projected to the same node in my brain thanks to the presence of @rtrangucci.
No apologies needed – your code saved me SO much time over the life of this project that I am and always will be filled with gratitude for you!! We all have bugs, and your code is so beautiful and clean that it was easy to find once I went back to the algebra (which of course I consider a last resort lol), so all’s well that ends well. :)
You’re too kind :) I’m so glad to hear that it’s been useful to your project despite the bug!!
Apologies for reviving an old thread.
Rachael’s simulated DGP in fake_data_heck_montecarlo_calibration_check.R doesn’t actually introduce sample selection bias, since the X_out are independent of each other and there’s no intercept (this blog post has more details: The Heckman Sample Selection Model | Rob Hicks).
Just thought it worth flagging in case anyone else, like me, came across this thread and couldn’t figure out why OLS was doing so well. The model is still well calibrated when we introduce correlation across the Xs or an intercept.
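To see the point concretely, here is a toy DGP of my own (not Rachael’s script) where the selection and outcome equations share a covariate and have correlated errors, so naive OLS on the selected subsample is visibly biased:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
rho = 0.8
x = rng.normal(size=n)

# Correlated errors across the selection and outcome equations.
u = rng.normal(size=n)
e = rho * u + np.sqrt(1.0 - rho**2) * rng.normal(size=n)

selected = (x + u) > 0      # selection equation: s* = x + u
y = 1.0 + 1.0 * x + e       # outcome equation: true slope = 1

# Naive OLS on the selected subsample only.
Xs = np.column_stack([np.ones(selected.sum()), x[selected]])
b = np.linalg.lstsq(Xs, y[selected], rcond=None)[0]
print(b)  # slope noticeably below the true value of 1
```

With rho = 0 the conditional mean of e given selection is zero and the bias disappears, which is consistent with OLS looking fine under a DGP in which the selection mechanism is uninformative about the outcome errors.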
Say we use two Heckman selection models (four equations) at the same time. Can we (and if so, how do we) correlate these four equations with each other?