Thanks very much for the reply.
I got the idea that the likelihood function is usually created using maximum likelihood estimation from here: http://m-clark.github.io/docs/IntroBayes.html
> For this depiction let us consider a standard regression coefficient b. Here we have a prior belief about b expressed as a probability distribution. As a preliminary example we will assume perhaps that the distribution is normal, and is centered on some value μ_b and with some variance σ²_b. The likelihood here is the exact same one used in classical statistics: if y is our variable of interest, then the likelihood is p(y|b) as in the standard regression approach using maximum likelihood estimation.
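As I read that passage, the setup is the usual Bayes' rule decomposition (my own restatement, using the quote's notation):

```math
p(b \mid y) \;\propto\; \underbrace{p(y \mid b)}_{\text{likelihood}} \; \underbrace{p(b)}_{\text{prior}},
\qquad p(b) = \mathcal{N}(\mu_b,\, \sigma^2_b)
```

So the likelihood p(y|b) is the same function that maximum likelihood estimation would maximize, but here it is combined with the prior rather than maximized on its own. This is where I got confused about MLE "creating" the likelihood.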
I’d appreciate it if you could tell me where my misunderstanding comes from.
In addition, would you be able to provide a bit more information on how Stan constructs the likelihood function (assuming one doesn’t write their own)?
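For concreteness, the kind of model I have in mind is a minimal linear regression like this (my own sketch; the prior choices are just placeholders):

```stan
data {
  int<lower=0> N;
  vector[N] x;
  vector[N] y;
}
parameters {
  real b;                      // regression coefficient
  real<lower=0> sigma;         // residual standard deviation
}
model {
  b ~ normal(0, 1);            // prior p(b)
  y ~ normal(b * x, sigma);    // likelihood p(y | b, sigma)
}
```

Is it right that the likelihood here is simply whatever the sampling statements in the `model` block imply for the log density, rather than anything estimated via maximum likelihood?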
Thanks so much!