# Prior predictive check - asymmetry in estimation of uncertainty?

I am using brms (2.1.2, Windows, RStudio) and am running a prior predictive check for data from an experiment. I was playing around with the parameters, specifically the slope, and I don't understand the behaviour:

```r
modelTEST1 <- brm(outcome ~ 1 + predictor,
                  data = dat, warmup = 1000,
                  prior = c(prior(normal(0, 1000), class = "Intercept"),
                            prior(cauchy(0, 100), class = "sigma"),
                            prior(normal(0, 10000000), class = "b")),
                  iter = 3000, chains = 2,
                  sample_prior = "only",
                  seed = 221, control = list(adapt_delta = 0.97),
                  cores = 2)
```

Then `plot(conditional_effects(modelTEST1))` yields a graph (see attached).

What I don't understand is the asymmetry in the graph: why is there more uncertainty in the estimate at predictor = 0 than at predictor = 1? It appears that the larger the slope, the larger the asymmetry, but why?

Sorry if I am missing something really basic, but am totally stumped!

Thanks

I'm not an expert with `brms`, but looking at your graph, it seems like the model is interpreting your `ConditionASINT` variable as a continuous predictor, when in fact it looks more like a factor with two levels (0, 1). Is that correct? If so, you need to specify `ConditionASINT` as a factor before running the model; otherwise you will get weird output.
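If that's the case, a minimal sketch of the conversion (using a made-up `dat` standing in for your real data frame, since only the column names here come from your post) would be:

```r
# Hypothetical data standing in for the real `dat`
dat <- data.frame(ConditionASINT = c(0, 1, 0, 1, 1),
                  outcome = c(2.1, 3.4, 1.9, 3.0, 3.6))

# Recode the 0/1 column as a factor so brm() treats it as categorical
dat$ConditionASINT <- factor(dat$ConditionASINT, levels = c(0, 1))

is.factor(dat$ConditionASINT)  # TRUE
```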

Hope this helps! Also, posting your code might help others diagnose any potential problems.

Thanks so much for answering. Actually, I recoded it as a factor and the same thing happens: the interval on the left is greater than the interval on the right.

I would have thought that both intervals should be identical, since I am calling brms with `sample_prior = "only"`.

I'm not sure what you mean by "posting your code"; the code was in the post:

```r
modelTEST1 <- brm(outcome ~ 1 + predictor,
                  data = dat, warmup = 1000,
                  prior = c(prior(normal(0, 1000), class = "Intercept"),
                            prior(cauchy(0, 100), class = "sigma"),
                            prior(normal(0, 10000000), class = "b")),
                  iter = 3000, chains = 2,
                  sample_prior = "only",
                  seed = 221, control = list(adapt_delta = 0.97),
                  cores = 2)
```

(Sorry if I am making some mistake with the formatting or etiquette of asking questions; this is my first time posting on here.)

Thank you

I just realized the problem: because there is an unequal number of data points in the two conditions, the model expects different amounts of variation in each condition.
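For anyone hitting the same thing, a quick way to spot the imbalance is to tabulate the condition column; the data here are made up purely to illustrate an unbalanced design like the one described:

```r
# Illustrative, unbalanced two-condition design (not the real data)
dat_demo <- data.frame(predictor = factor(c(rep(0, 30), rep(1, 10))))

# Count observations per condition
table(dat_demo$predictor)  # 30 in condition 0, 10 in condition 1
```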


Yeah, sounds like you got your solution! Also, what's the range of your data? The `normal(0, 10000000)` prior may be unnecessarily vague.
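To see just how vague, here is a quick base-R comparison of slope draws under the original prior and under an illustrative, much tighter one (the scale of 10 is a placeholder; you'd pick it from the actual range of your outcome):

```r
set.seed(1)
n <- 4000

# Slope draws under the original, very wide prior
b_vague <- rnorm(n, mean = 0, sd = 1e7)

# Slope draws under an illustrative tighter prior
b_tight <- rnorm(n, mean = 0, sd = 10)

# The vague prior routinely proposes slopes in the millions,
# far outside any plausible effect size for most experiments
quantile(abs(b_vague), 0.9)
quantile(abs(b_tight), 0.9)
```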