Variance not taken into account by models

Hi all,

I am running brms models with flat priors on several different types of data, just for practice (I am still learning ^^). Each time I run the simplest possible model, meaning no random effects and only a factor as fixed effect, the model does not seem to account for the variance in the data.

This is what is happening when I plot pp_check intervals:
[pp_check intervals plot]

The model is simply set up as `brm(response ~ factor, …)`.

Is it because brms doesn’t work without a random effect in the formula?


Hi Theo!

That’s great! :)

I’m no brms expert, but I know that it will run without random effects (unlike lme4, I think, right?).

Honestly, it’s a bit hard for me to tell what’s going on here. Can you post the data you used and the exact brms model you were running?

Cheers!
Max


Hi @Max_Mantei,

Thank you for your fast answer!

It doesn’t seem to come from the data, as I have tried different datasets and the same thing happens.

Here is an example:

Response <- sample(100, size = 31)  # percentages
Predictor <- sample(LETTERS[1:4], size = 31, replace = TRUE)
data <- data.frame(Response, Predictor)

library(brms)
model <- brm(Response | trials(100) ~ Predictor, data, family = binomial(),
             iter = 5000, warmup = 500, chains = 4, cores = 4,
             thin = 10)

pp_check(model, type = "intervals_grouped", group = "Predictor")

Cheers : )
Theo

I am sorry but I cannot be of much help without a reproducible example.


Hi Paul,

Sorry, I might not understand exactly what is needed.

I put an example above with a model and data.

Do you need the exact data I am using?

Theo

Ideally something simple with simulated data. Also, the code you use is quite important.

I wrote this in a previous message, not sure you saw it:

Response <- sample(100, size = 31)  # percentages
Predictor <- sample(LETTERS[1:4], size = 31, replace = TRUE)
data <- data.frame(Response, Predictor)

library(brms)
model <- brm(Response | trials(100) ~ Predictor, data, family = binomial(),
             iter = 5000, warmup = 500, chains = 4, cores = 4,
             thin = 10)

pp_check(model, type = "intervals_grouped", group = "Predictor")

I didn’t see it, sorry. Will take a look later on.


No problem, thank you for your help!

The plot looks reasonable to me. Without modeling overdispersion, you will not see correct calibration in the plot with your data. To add overdispersion, use, for example:

data$obs <- 1:nrow(data)
Response | trials(100) ~ Predictor + (1 | obs)
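
For completeness, a minimal end-to-end sketch of that suggestion, reusing the simulated data from the earlier post (the column name `obs` and the model name `model_od` are just arbitrary labels for illustration):

```r
# Simulated data as in the earlier post
Response <- sample(100, size = 31)  # percentages
Predictor <- sample(LETTERS[1:4], size = 31, replace = TRUE)
data <- data.frame(Response, Predictor)

# One grouping level per row: an observation-level random intercept
# that absorbs extra-binomial (overdispersion) variance
data$obs <- 1:nrow(data)

library(brms)
model_od <- brm(Response | trials(100) ~ Predictor + (1 | obs),
                data = data, family = binomial(),
                iter = 5000, warmup = 500, chains = 4, cores = 4)

pp_check(model_od, type = "intervals_grouped", group = "Predictor")
```

Because there is one level of `obs` per row, the group-level standard deviation soaks up the variance that the plain binomial likelihood cannot, which should widen the posterior predictive intervals in the pp_check plot.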

Yes, I thought about this too, but then I ran normally distributed data with the gaussian family and the same problem happened. I might be wrong, but we should not have an overdispersion problem with normal data.

Here is a random example:

RespNorm <- rnorm(100)
Predictor <- sample(LETTERS[1:4], size = 100, replace = TRUE)
data <- data.frame(RespNorm, Predictor)

library(brms)
model <- brm(RespNorm ~ Predictor, data, family = gaussian(),
             iter = 20000, warmup = 10000, chains = 4, cores = 4,
             thin = 10)

pp_check(model, type = "intervals_grouped", group = "Predictor")