'Observing the observer' models



I’d like to use an ‘observe the observer’ model, in which a subject has an internal model of how stimuli might be generated (the perceptual model), which they invert; then, based on the resulting posterior p(latent variables | perceptual data), they respond according to a response model.
From the experimenter’s perspective, given the stimuli, the subject’s responses, and candidate perceptual and response models that the subject might be using (with their perceptual and response parameters), I want to obtain the posterior over the subject’s perceptual and response parameters.

Does Stan have a way of specifying this nested inversion, or do I need to provide an analytical formula for the subject’s posterior p(latent variables | perceived stimuli)?

Thank you!


I am not familiar with this kind of model, but it should IMHO be possible in Stan. In general, as long as you can write a simulator that generates synthetic data according to your model, you should be able to write the model down in Stan. The only exception is discrete parameters (unobserved discrete variables), which Stan cannot handle directly (though there are tricks to get around this, such as marginalizing them out).
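To make the "write a simulator first" advice concrete, here is a minimal sketch of a forward simulator for one possible observing-the-observer setup. All distributional choices (Gaussian percept, conjugate Gaussian inner inference, Gaussian response noise) are illustrative assumptions, not the original poster's actual model:

```python
import random

def simulate_trial(stimulus, sigma_percept, sigma_response, prior_mean, prior_sd):
    """Simulate one trial of a toy 'observe the observer' model.

    The subject receives a noisy percept of the stimulus, inverts a
    Gaussian perceptual model (conjugate update, so the inner posterior
    is analytic), and responds with noise around the posterior mean.
    """
    # 1. Perception: the subject observes a noisy version of the stimulus.
    percept = random.gauss(stimulus, sigma_percept)

    # 2. Perceptual inference: conjugate Gaussian update gives the
    #    subject's posterior mean over the latent stimulus value.
    prior_prec = 1.0 / prior_sd ** 2
    like_prec = 1.0 / sigma_percept ** 2
    post_mean = (prior_prec * prior_mean + like_prec * percept) / (prior_prec + like_prec)

    # 3. Response: a noisy report of the subject's posterior mean.
    return random.gauss(post_mean, sigma_response)

random.seed(1)
responses = [simulate_trial(2.0, 0.5, 0.1, 0.0, 2.0) for _ in range(1000)]
```

If you can write each of these three steps as a probability statement, the same structure translates line by line into a Stan `model` block.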


Not sure if I understood completely, but I think it is not possible in Stan unless you have an explicit formula for the posterior distribution of the inner model, since the inner model’s normalizing constant would in general depend on the parameters.

I understood the problem to be something like this (since this is probably wrong, you might get better answers if you write your model out in equations):

Inner model:
$$
p(\text{latent variables} \mid \text{perceptual data}, \text{parameters}) \\ \propto p(\text{perceptual data} \mid \text{latent variables}, \text{parameters}) \, p(\text{latent variables})
$$

Outer model (contains the inner model as nested):
$$
p(\text{parameters}, \text{latent variables} \mid \text{response}, \text{perceptual data}) \\ \propto p(\text{parameters}) \, p(\text{response} \mid \text{latent variables}, \text{parameters}) \\ \;\;\times p(\text{latent variables} \mid \text{perceptual data}, \text{parameters})
$$

Stan requires you to be able to write down the unnormalized log probability of the outer model explicitly (where the unknown normalizing constant cannot depend on any of the parameters to be inferred).
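To illustrate why an explicit inner posterior makes this tractable, here is a sketch of the outer model's unnormalized log probability for a toy case where the subject's inner inference is a conjugate Gaussian update (so its normalizing constant is known in closed form and no nested inference is needed). The specific distributions and the lognormal prior on the response parameter are assumptions for the example only:

```python
import math

def log_normal_pdf(x, mu, sd):
    """Log density of a Normal(mu, sd) distribution at x."""
    return -0.5 * math.log(2 * math.pi) - math.log(sd) - 0.5 * ((x - mu) / sd) ** 2

def outer_log_prob(sigma_response, percepts, responses,
                   sigma_percept=0.5, prior_mean=0.0, prior_sd=2.0):
    """Unnormalized log posterior of the outer model over one response
    parameter (sigma_response), given the subject's percepts and responses.

    The inner posterior mean is available analytically, so the whole
    expression is an explicit function of the parameter, which is exactly
    what Stan's target density requires.
    """
    if sigma_response <= 0:
        return float("-inf")
    prior_prec = 1.0 / prior_sd ** 2
    like_prec = 1.0 / sigma_percept ** 2

    # Prior on the response parameter (lognormal, chosen for illustration).
    lp = log_normal_pdf(math.log(sigma_response), 0.0, 1.0)

    for x, r in zip(percepts, responses):
        # Subject's analytic inner posterior mean (conjugate Gaussian update).
        post_mean = (prior_prec * prior_mean + like_prec * x) / (prior_prec + like_prec)
        # Response model: response ~ Normal(subject's posterior mean, sigma_response).
        lp += log_normal_pdf(r, post_mean, sigma_response)
    return lp
```

If the inner posterior were only available up to a parameter-dependent normalizing constant, the loop above could not be written down, and that is precisely the obstacle described here.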

I’m not sure if there is any robust probabilistic programming language that can handle this well (Anglican and Church might be able to do some of it… I haven’t tried). This might be a useful reference:
Tom Rainforth, Nesting Probabilistic Programs, UAI 2018, http://auai.org/uai2018/proceedings/papers/92.pdf