UK Postdoctoral Research Fellow (Wellcome Trust) in Bayesian modelling of decision-making

Dear all,

There is an opportunity for a postdoc research fellow/research scientist on a Wellcome Trust-funded project led by Joseph Barnby, Alex Pike, Lei Zhang, and Catia Oliveira. As a postdoc, you will lead the development of a new browser-based platform: Hypatia Health.
Over 12 months, you will lead the first-stage development of the platform, which will allow teachers, clinicians, scientists, and industry users to simulate and fit computational models. You will also have plenty of opportunities to model new and existing data across the labs of the PI and Co-Is and to contribute to novel theoretical models.

The primary role will require that you liaise with front-end developers and designers to bring the platform to life.

The role is perfect for those with training in Computer Science, Psychology, Statistics, or Cognitive Neuroscience, with strong skills in at least one open-source programming language and experience with theory-driven computational modelling (e.g., reinforcement learning or drift-diffusion models).

If this sounds like you and you want more info, get in touch with Joe Barnby (joseph.barnby@rhul.ac.uk).

Apply now


Hi @Cmfo, and thanks for posting this interesting-looking job. UX and UI work is a blast, especially if you have an experienced front-end web dev to help.

May I ask how you're fitting Bayesian drift-diffusion models? I visited @bnicenboim a few years ago in Potsdam (he's in Tilburg now), and at the time Stan was taking a day or two to fit these models. How do people fit them these days?

Thank you for your encouragement!

In our experience, DDMs don't take that long to fit; see 17.2 Wiener First Passage Time Distribution | Stan Functions Reference.
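
For reference, here is a minimal sketch of a DDM fit with Stan's built-in wiener distribution (the variable names are my own; resp is an assumed 0/1 boundary indicator, and the flipped-bias handling of lower-boundary responses is the usual convention rather than anything specific to our project):

```stan
data {
  int<lower=1> N;
  vector<lower=0>[N] rt;                 // response times in seconds
  array[N] int<lower=0, upper=1> resp;   // 1 = upper boundary, 0 = lower
}
parameters {
  real<lower=0> alpha;                   // boundary separation
  real<lower=0, upper=1> beta;           // a-priori bias
  real delta;                            // drift rate
  real<lower=0, upper=min(rt)> tau;      // non-decision time
}
model {
  for (n in 1:N) {
    if (resp[n] == 1) {
      rt[n] ~ wiener(alpha, tau, beta, delta);
    } else {
      // Lower-boundary responses: flip the bias and the sign of the drift.
      rt[n] ~ wiener(alpha, tau, 1 - beta, -delta);
    }
  }
}
```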

There are a few tricks for parameterizing the model that I’ve found very helpful.

In my projects, I've often found that the non-decision time parameter (i.e., the lower bound of the predicted response time) is the root cause of a lot of sampling problems. Instead of directly estimating the non-decision time (on the scale of the RT data), I've found it helpful to estimate it as a proportion ndt_prop of a pre-defined range (ndt_lower to ndt_upper). The actual non-decision time is then computed in the transformed parameters block along the lines of ndt = ndt_lower + ndt_prop * (ndt_upper - ndt_lower).

The lower bound ndt_lower could be based on theoretical assumptions: for example, we know that the onset latency of early sensory processing is at least 50 ms (e.g., Schmolesky et al., 1998). The upper bound ndt_upper could be set to the fastest observed response time, which satisfies the condition that every response time passed to the wiener function is greater than the non-decision time.
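
For concreteness, a sketch of that reparameterization, assuming ndt_lower and ndt_upper are passed in as data (names are my own; priors omitted for brevity):

```stan
data {
  int<lower=1> N;
  vector<lower=0>[N] rt;             // upper-boundary response times
  real<lower=0> ndt_lower;           // e.g., 0.05 s (early sensory latency)
  real<lower=ndt_lower> ndt_upper;   // e.g., the fastest observed RT
}
parameters {
  real<lower=0> alpha;               // boundary separation
  real<lower=0, upper=1> beta;       // a-priori bias
  real delta;                        // drift rate
  real<lower=0, upper=1> ndt_prop;   // non-decision time as a proportion
}
transformed parameters {
  // Map the proportion into the pre-defined range; with ndt_upper set to
  // the fastest observed RT, rt > ndt holds by construction.
  real ndt = ndt_lower + ndt_prop * (ndt_upper - ndt_lower);
}
model {
  rt ~ wiener(alpha, ndt, beta, delta);
}
```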

Another trick is to model the observed response time data as a mixture of the DDM (representing an “evidence accumulation” process) and a uniform distribution (representing a “contamination” process), as discussed in Ratcliff & Tuerlinckx (2002). The uniform distribution is used to explain response times that are very unlikely to have been generated by an evidence accumulation process in the context of a typical speeded decision-making task (e.g., extremely fast responses that reflect fast guesses or accidental button presses, or extremely slow responses that reflect lapses of attention).
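
A sketch of that mixture likelihood, with illustrative names of my own: lambda is the probability that a trial comes from the evidence-accumulation process, and rt_min/rt_max bound the contaminant distribution (they should bracket all observed RTs):

```stan
data {
  int<lower=1> N;
  vector<lower=0>[N] rt;             // upper-boundary response times
  real<lower=0> rt_min;              // lower bound of the contaminant range
  real<lower=rt_min> rt_max;         // upper bound of the contaminant range
}
parameters {
  real<lower=0> alpha;               // boundary separation
  real<lower=0, upper=1> beta;       // a-priori bias
  real delta;                        // drift rate
  real<lower=0> ndt;                 // non-decision time
  real<lower=0, upper=1> lambda;     // P(evidence-accumulation process)
}
model {
  for (n in 1:N) {
    if (rt[n] > ndt) {
      target += log_mix(lambda,
                        wiener_lpdf(rt[n] | alpha, ndt, beta, delta),
                        uniform_lpdf(rt[n] | rt_min, rt_max));
    } else {
      // RTs at or below the non-decision time can only be contaminants.
      target += log1m(lambda) + uniform_lpdf(rt[n] | rt_min, rt_max);
    }
  }
}
```

Note that the mixture also removes the need to cap ndt at the fastest observed RT, since responses faster than the non-decision time are absorbed by the contamination component.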

@martinmodrak elaborates on both of these ideas in his excellent blog post in the context of a different model (lognormal distribution).

For parameters with hard constraints, I've also found it helpful to define the parameters without constraints and then transform them as appropriate using, e.g., the inv_logit or exp functions. Specifically, the boundary separation parameter is non-negative, so it can be transformed with exp; the a-priori bias and non-decision time proportion parameters are on the unit interval, so they can be transformed with inv_logit.
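
A sketch of that pattern (the _raw suffix is just my naming convention, and the priors are illustrative):

```stan
parameters {
  real alpha_raw;                          // unconstrained boundary separation
  real beta_raw;                           // unconstrained a-priori bias
  real ndt_prop_raw;                       // unconstrained NDT proportion
}
transformed parameters {
  real alpha = exp(alpha_raw);             // maps to (0, Inf)
  real beta = inv_logit(beta_raw);         // maps to (0, 1)
  real ndt_prop = inv_logit(ndt_prop_raw); // maps to (0, 1)
}
model {
  // Priors are placed directly on the unconstrained scale.
  alpha_raw ~ normal(0, 1);
  beta_raw ~ normal(0, 1);
  ndt_prop_raw ~ normal(0, 1);
}
```

Stan's built-in <lower=0> and <lower=0, upper=1> constraints apply essentially the same log and logit transforms under the hood; writing them out explicitly just makes the choice visible and lets you put priors on the unconstrained scale.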

And of course, it’s useful to think carefully about priors (see e.g. this paper).
