Hi,

I’m trying to get a better understanding of the difference between likelihood and probability in the Bayesian framework. Since I’m weak in maths, I’m looking for an intuitive approach.

McElreath, in this video, makes a distinction between the likelihood and the prior probability.

In Stan, an ANOVA model fits the same regardless of whether the likelihood is written as,

`data~N(mu, sigma)`

i.e. p(data|mu,sigma), which looks like the frequentist likelihood, right? Or, `mu~N(data, sigma)`

i.e. p(mu|data,sigma). Now, what’s that? Is it still a likelihood? (Since it’s conditioned on the data and a parameter (sigma).)
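One way I’ve tried to convince myself in plain R (no Stan, just `dnorm`, so this is only a sketch of my own reasoning) is that the likelihood is the same density as the probability, just viewed as a function of the parameter with the data held fixed, and that for the normal the two notations above give the exact same number because the density only depends on (y - mu)^2:

```r
# Probability (density): fix the parameter mu, ask about data y.
# Likelihood: fix the observed data y, vary the parameter mu.
y_obs <- 42                                   # a made-up observation
lik <- function(mu) dnorm(y_obs, mean = mu, sd = 3)
lik(40)                                       # likelihood of mu = 40 given y_obs
lik(42)                                       # likelihood of mu = 42 given y_obs

# Symmetry of the normal: swapping the data and the mean
# arguments leaves the density unchanged, so
# data ~ N(mu, sigma) and mu ~ N(data, sigma) contribute
# the same value to the log posterior.
all.equal(dnorm(y_obs, mean = 40, sd = 3),
          dnorm(40, mean = y_obs, sd = 3))    # TRUE
```

If I understand it right, this symmetry is special to location families like the normal, which would explain why the two Stan models below end up equivalent.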

I’ve used this code to play with,

```
library(rstan) # needed for stan() below
set.seed(123)
ngroups <- 5 #number of populations
nsample <- 10 #number of reps in each
pop.means <- c(40, 45, 55, 40, 30) #population mean length
sigma <- 3 #residual standard deviation
n <- ngroups * nsample #total sample size
eps <- rnorm(n, 0, sigma) #residuals
x <- gl(ngroups, nsample, n, lab = LETTERS[1:5]) #factor
means <- rep(pop.means, rep(nsample, ngroups))
X <- model.matrix(~x - 1) #create a design matrix
y <- as.numeric(X %*% pop.means + eps)
data <- data.frame(y, x)
Xmat <- model.matrix(~x, data)
data.list <- with(data, list(y = y, X = Xmat, nX = ncol(Xmat), n = nrow(data)))
# Model with data~N(mu, sigma)
modelString_A = "
data {
int<lower=1> n;
int<lower=1> nX;
vector [n] y;
matrix [n,nX] X;
}
parameters {
vector[nX] beta;
real<lower=0> sigma;
}
transformed parameters {
vector[n] mu;
mu = X*beta;
}
model {
//Likelihood
y~normal(mu,sigma);
//Priors
beta ~ normal(0,1000);
sigma~cauchy(0,5);
}
"
fitA <- stan(data = data.list, model_code = modelString_A , chains = 4, cores = 4, seed = 1)
# Model with mu~N(data, sigma)
modelString_B = "
data {
int<lower=1> n;
int<lower=1> nX;
vector [n] y;
matrix [n,nX] X;
}
parameters {
vector[nX] beta;
real<lower=0> sigma;
}
transformed parameters {
vector[n] mu;
mu = X*beta;
}
model {
//Likelihood
mu~normal(y,sigma);
//Priors
beta ~ normal(0,1000);
sigma~cauchy(0,5);
}
"
fitB <- stan(data = data.list, model_code = modelString_B , chains = 4, cores = 4, seed = 1)
```