I am completely new to MCMC and implemented an idea for an algorithm in JAX, and it seems to perform quite well. So I am looking for feedback, in case you want to try it for your posterior sampling.
I had to ask an LLM for a ballpark number on existing samplers, and after I insisted that they are not uncountable, it came up with several dozen to low hundreds. So there are a ton of "new" samplers, usually variations on the main, established, and widely used ones (Gibbs, Metropolis-Hastings, HMC, etc.).
That said, and without getting too technical and/or philosophical: the basic idea of MCMC is quite simple, but in practice it becomes very complex, and its theory gets real deep real fast. So proposing and implementing a new sampler, or suggesting an improvement to an existing one, is relatively easy; actually evaluating its performance broadly and understanding its implications and limitations is a herculean job (to this day I'm convinced that there are problems better solved with plain MH, but I wouldn't dare to pin down which ones).
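To make the "simple idea" part concrete, the whole accept/reject mechanism fits in a few lines. Here is a minimal random-walk Metropolis-Hastings sketch in JAX (the target, step size, and chain length are just illustrative placeholders, not a tuned implementation); everything that makes MCMC hard lives outside this snippet, in tuning, diagnostics, and convergence theory:

```python
import jax
import jax.numpy as jnp

def rw_metropolis_step(key, x, logpdf, step_size=0.5):
    """One random-walk Metropolis-Hastings step for a target log-density."""
    key_prop, key_accept = jax.random.split(key)
    # Symmetric Gaussian proposal around the current state.
    proposal = x + step_size * jax.random.normal(key_prop, shape=x.shape)
    # Accept with probability min(1, p(proposal) / p(x)), done in log space.
    log_accept_ratio = logpdf(proposal) - logpdf(x)
    accept = jnp.log(jax.random.uniform(key_accept)) < log_accept_ratio
    return jnp.where(accept, proposal, x)

# Toy target: standard 2D Gaussian.
logpdf = lambda x: -0.5 * jnp.sum(x ** 2)

def run_chain(key, n_steps=1000, dim=2):
    keys = jax.random.split(key, n_steps)

    def step(x, k):
        x_new = rw_metropolis_step(k, x, logpdf)
        return x_new, x_new

    _, samples = jax.lax.scan(step, jnp.zeros(dim), keys)
    return samples

samples = jax.jit(run_chain)(jax.random.PRNGKey(0))
```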
So if you learned about Langevin diffusion last week, you probably still have a long way to go before grasping the deeper underpinnings of MALA in particular and MCMC samplers in general. If you can demonstrate a performance improvement and justify the use of your sampler, great (though expect it to raise a lot of questions from peers regarding its validity and applicability). More commonly, thinking about variations and improvements to existing samplers is a healthy exercise for challenging the established methods, but in practice it is safer to stick to well-understood samplers.
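For reference, MALA itself is not much code once you view it as discretized overdamped Langevin dynamics plus an MH correction; the hard part is precisely the theory (step-size scaling, ergodicity, when the correction matters). A rough sketch, assuming a differentiable log-density and an illustrative step size:

```python
import jax
import jax.numpy as jnp

def mala_step(key, x, logpdf, step_size=0.1):
    """One MALA step: Langevin proposal plus Metropolis-Hastings correction."""
    grad_logpdf = jax.grad(logpdf)
    key_prop, key_accept = jax.random.split(key)

    # Discretized overdamped Langevin dynamics as the proposal.
    noise = jax.random.normal(key_prop, shape=x.shape)
    proposal = x + step_size * grad_logpdf(x) + jnp.sqrt(2.0 * step_size) * noise

    # The proposal is not symmetric, so the acceptance ratio needs the
    # forward and reverse Gaussian transition densities q(. | .).
    def log_q(x_to, x_from):
        mean = x_from + step_size * grad_logpdf(x_from)
        return -jnp.sum((x_to - mean) ** 2) / (4.0 * step_size)

    log_accept_ratio = (logpdf(proposal) - logpdf(x)
                        + log_q(x, proposal) - log_q(proposal, x))
    accept = jnp.log(jax.random.uniform(key_accept)) < log_accept_ratio
    return jnp.where(accept, proposal, x)
```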