LBA sampling: Stan vs particle MCMC vs annealed importance sampling


Thought this newly-posted pre-print might interest some here.

The “LBA” (linear ballistic accumulator) model is very useful in the niche context of cognitive-science experiments using speeded decision tasks. Like the diffusion (a.k.a. Wiener) model, it lets you observe response times and response accuracies and draw inferences about latent quantities like information-processing efficiency and speed-vs-accuracy bias. Unlike the diffusion model, the LBA can be extended to more than two decision options and (if I understand correctly) has a more tractable likelihood. Annis et al. (2017) showed how to implement a couple of simple variants of the LBA in Stan, including a hierarchical version, but the pre-print linked above claims easier parallelism. I wonder how that claim will fare once the Stan MPI functionality is ready?
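To illustrate what “more tractable likelihood” means here: the LBA race density has a closed form built from normal pdfs/cdfs (Brown & Heathcote, 2008). Below is a minimal plain-Python sketch of that likelihood for a single shared threshold and start-point range — an illustration only, not the preprint’s sampler, and the parameter names (`b`, `A`, `drifts`, `s`) are my own labels:

```python
# Sketch of the closed-form LBA likelihood (Brown & Heathcote, 2008).
# Illustrative only: not taken from the preprint, parameter names are mine.
import math

def _phi(x):
    """Standard normal pdf."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def _Phi(x):
    """Standard normal cdf."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def lba_pdf(t, b, A, v, s):
    """Density of one accumulator (threshold b, start point ~ U[0, A],
    drift ~ N(v, s)) reaching threshold b exactly at time t."""
    z1 = (b - A - t * v) / (t * s)
    z2 = (b - t * v) / (t * s)
    return (1.0 / A) * (-v * _Phi(z1) + s * _phi(z1)
                        + v * _Phi(z2) - s * _phi(z2))

def lba_cdf(t, b, A, v, s):
    """Probability that one accumulator has reached b by time t
    (defective: can be < 1 in the limit if the drift may be negative)."""
    z1 = (b - A - t * v) / (t * s)
    z2 = (b - t * v) / (t * s)
    return (1.0
            + (b - A - t * v) / A * _Phi(z1)
            - (b - t * v) / A * _Phi(z2)
            + (t * s / A) * _phi(z1)
            - (t * s / A) * _phi(z2))

def lba_likelihood(t, choice, b, A, drifts, s):
    """Defective density that accumulator `choice` wins the race at time t:
    its pdf times the survival probability of every other accumulator."""
    dens = lba_pdf(t, b, A, drifts[choice], s)
    for j, v in enumerate(drifts):
        if j != choice:
            dens *= 1.0 - lba_cdf(t, b, A, v, s)
    return dens
```

Because everything is normal pdfs and cdfs, the log of this density is cheap to evaluate inside a sampler — unlike the diffusion model’s infinite-series first-passage density — which is presumably what makes both the Stan implementation and the parallelism claims feasible.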


Yeah, that’s really interesting. I was just working through this recent paper, which references Stan quite a lot (but not the LBA, so it may be tangential to your interest). The researcher has created an R program to extract posterior draws from rstan and compute the DIC (deviance information criterion).
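For what it’s worth, once you have per-draw log-likelihoods out of rstan (e.g. via a generated quantity and `extract()`), the DIC computation itself is a few lines. A sketch in Python, using the variance-based effective-parameter count p_V = var(D)/2 from Gelman et al. — I don’t know whether the paper’s R program uses this variant or Spiegelhalter’s original p_D = D̄ − D(θ̂), which additionally needs the deviance at a point estimate:

```python
# Hedged sketch: DIC from posterior log-likelihood draws, using the
# variance-based effective-parameter count p_V = var(D) / 2 (Gelman et al.).
# The paper's R program may instead use Spiegelhalter's p_D variant.

def dic(log_lik_draws):
    """log_lik_draws: one total log-likelihood per posterior draw.
    Returns (DIC, p_V), where DIC = mean deviance + p_V."""
    deviances = [-2.0 * ll for ll in log_lik_draws]
    n = len(deviances)
    d_bar = sum(deviances) / n                      # mean deviance
    var_d = sum((d - d_bar) ** 2 for d in deviances) / (n - 1)
    p_v = var_d / 2.0                               # effective parameters
    return d_bar + p_v, p_v
```

The nice part is that this variant needs nothing beyond the log-likelihood draws themselves, so it drops straight onto whatever `extract()` returns.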