Behavior of HMC in continuous inference

It is widely believed that at least part of what brains do is some sort of approximate Bayesian inference. One big difference between what the brain does and what Stan does is that the world is constantly changing, while, as far as I know, Stan does not support changing the data (and thus the posterior) in the middle of a run.

Brains show some strange properties (transients and oscillations) when the world changes unexpectedly and a recent paper argues that those changes are the consequence of efficient sampling - they help us quickly update our posterior to match the new state of the world and reduce autocorrelations in the samples.

I’m curious how hard it would be to implement something like changing the data (and thus the posterior) mid-run, to observe whether HMC would also show the “brain-like” phenomena described in the linked paper.
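To make the question concrete, here is roughly the kind of toy experiment I have in mind, written as plain-numpy HMC rather than Stan (the model, the tuning values, and the point where the data switch are all made up for illustration):

```python
# Toy experiment: plain HMC on a 1-D normal-mean model where the data
# (and hence the posterior) are swapped halfway through the run.
import numpy as np

rng = np.random.default_rng(0)

def make_logp(y, prior_sd=10.0, obs_sd=1.0):
    """Log posterior (up to a constant) and its gradient for mu given data y."""
    def logp(mu):
        return -0.5 * (mu / prior_sd) ** 2 - 0.5 * np.sum((y - mu) ** 2) / obs_sd ** 2
    def grad(mu):
        return -mu / prior_sd ** 2 + np.sum(y - mu) / obs_sd ** 2
    return logp, grad

def hmc_step(mu, logp, grad, eps=0.02, n_leap=30):
    """One HMC transition with a unit mass matrix (compact leapfrog form)."""
    p = rng.normal()
    mu_new = mu
    p_new = p + 0.5 * eps * grad(mu_new)
    for _ in range(n_leap):
        mu_new = mu_new + eps * p_new
        p_new = p_new + eps * grad(mu_new)
    p_new = p_new - 0.5 * eps * grad(mu_new)   # undo the extra half momentum step
    log_accept = (logp(mu_new) - 0.5 * p_new ** 2) - (logp(mu) - 0.5 * p ** 2)
    return mu_new if np.log(rng.uniform()) < log_accept else mu

# Two "worlds": the true mean of the data jumps from 0 to 5 halfway through.
y_before = rng.normal(0.0, 1.0, size=50)
y_after = rng.normal(5.0, 1.0, size=50)

logp, grad = make_logp(y_before)
mu, trace = 0.0, []
for t in range(2000):
    if t == 1000:                      # the world changes mid-run
        logp, grad = make_logp(y_after)
    mu = hmc_step(mu, logp, grad)
    trace.append(mu)

# Averages before and after the switch; the interesting part is the shape of
# the trace just after t = 1000, before the chain settles into the new posterior.
print(np.mean(trace[900:1000]), np.mean(trace[1900:]))
```

The part I’d want to look at is the trace right after the switch: whether it shows the kind of transient/oscillatory behaviour the paper describes before relaxing to the new posterior.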


I don’t know much about brains besides having my own (arguably), but I’d say:

  • brains don’t work with data they consider “true” and update based on “truth”; Bayesian inference typically does.

  • brains don’t much care for logical entailment and propagating updates; Bayes does.

  • brains don’t care for identifiability or consilience; MCMC does.

… and so on. I can think of about 6 more. Given this, does the comparison make sense?

I’m not sure I understand your claims, but your first point seems to be challenging the notion that brains do Bayesian inference (even approximately). Here are some references to support the notion that brains indeed do so.

And the article linked in the original post.

To your second and third points: no one thinks that the brain can do exact Bayesian inference (which is what I think you are getting at when you say that brains don’t care for logical entailment). But in certain domains we can get close. For example, over evolutionary time brains learned the prior distribution of natural sounds, in order to predict, from a few samples at a sensor (the ear), what sounds are out in the world.

As far as “propagating updates” I think that is what learning is. We have some prior (e.g. memory/beliefs) which we combine with sensory evidence to make some inference. We can then compare our prediction with the world and when we are wrong, we can update our prior. Of course, some people reject the evidence instead of updating the prior. Which may be the correct thing to do sometimes. Nate Silver’s book is all about that.
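To make that loop concrete with a textbook example (a Beta-Bernoulli model, nothing brain-specific; the evidence stream is made up):

```python
# Toy version of the predict / compare / update loop: the posterior after each
# observation becomes the prior for the next one.
a, b = 1.0, 1.0                              # Beta(1, 1) prior: weak initial belief
for outcome in [1, 1, 0, 1, 1, 1, 0, 1]:     # made-up stream of binary evidence
    prediction = a / (a + b)                 # prior predictive probability of a 1
    surprise = abs(outcome - prediction)     # how wrong the prediction was
    a, b = a + outcome, b + (1 - outcome)    # conjugate update: posterior -> new prior
    print(f"predicted {prediction:.2f}, saw {outcome}, surprise {surprise:.2f}")
```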

Do you mean “identifiability” of model parameters? This is an interesting point. It seems some brains favor parsimony. Other brains favor models that confirm existing beliefs.

Brains definitely care about consilience! We use each sense to calibrate the other senses. When we “hear” speech, we are actually using vision to improve our inference about what is being heard. Just watch this YouTube clip. There is lots of data about optimal integration of evidence from different sources.
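For instance, the textbook “optimal integration” result for two independent Gaussian cues about the same quantity is just precision weighting. A quick sketch (the cue reliabilities below are made up):

```python
# Standard optimal cue combination: two noisy estimates of the same quantity,
# fused by weighting each one by its precision (1 / variance).
import numpy as np

def combine(mu_a, sd_a, mu_b, sd_b):
    """Precision-weighted fusion of two independent Gaussian cues."""
    w_a = 1 / sd_a ** 2
    w_b = 1 / sd_b ** 2
    mu = (w_a * mu_a + w_b * mu_b) / (w_a + w_b)
    sd = np.sqrt(1 / (w_a + w_b))
    return mu, sd

# e.g. an imprecise auditory cue and a precise visual cue to the same location:
print(combine(mu_a=2.0, sd_a=4.0, mu_b=0.0, sd_b=1.0))  # pulled mostly toward vision
```

The combined estimate sits between the two cues, closer to the more reliable one, and is less uncertain than either cue alone.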

Apologies if I misunderstood your points!

Just thoughts :-) I think we happily retain and work with entirely contradictory beliefs about the same thing, from the same or different perspectives, whilst possibly even knowing them to be contradictory … which is what I mean by the brain not caring about consilience. Further, if you adopt a paraconsistent logic (one that doesn’t support F\to T, the explosion principle, i.e. allows for contradictions), then I’m not sure interesting Bayesian results like the Markov property are even possible, and arguably paraconsistent is what we are.

I’m also not sure approximate entailment makes much sense. A person can believe A only because of B, stop believing B, but continue to believe A just as strongly … there’s nothing approximate about that; it’s just illogical. By “propagating updates” I just mean that you can perfectly well know that your belief in X has entailments for all sorts of other beliefs you hold, yet the brain does nothing about it.

A person can also “update” their beliefs based on some evidence E, then get some other evidence that makes them reconsider E, and then update their beliefs based on E again … I’m not sure there’s an analog to “revising” evidence in classical Bayes.

Yes, re identifiability: my brain (for all my trying :-) ) doesn’t seem to care about whether its modelling is identifiable.

What else … you can Dutch-book a brain in a myriad of ways, it adopts obviously flawed principles (like the gambler’s fallacy), etc.

Worst of all we do all this every hour of every day :-) !

There is a field in computational psychiatry called “active inference” which uses multiple layers of approximate Bayes to model a person’s confidence in their strategy for coping with a changing world, about which they hold beliefs of how it came to be. It’s been used to model changing perception of reality in PTSD and psychosis, and it explicitly allows for erroneous beliefs and inference. It operates using the concept of free energy. Friston (Karl, I believe) is the name to look for.
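For reference (and with the caveat that Friston’s full formulation has more moving parts), the free energy in question is the usual variational free energy over beliefs $q(s)$ about hidden states $s$ given observations $o$:

$$ F = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right] = \mathrm{KL}\!\left[q(s)\,\|\,p(s \mid o)\right] - \ln p(o), $$

so minimising $F$ over $q$ is approximate Bayesian inference, and $F$ upper-bounds the “surprise” $-\ln p(o)$.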

The processes you are referring to are mainly language-based or symbolic processes. These modules are crappy and subject to many biases and logical fallacies. However, our non-symbolic systems (perceptual and motor systems) are nearly Bayes-optimal under naturalistic conditions.