# Help with varying intercept and slope for Poisson model

Hello, Stan community. I am very new to this program and I am trying to build a multilevel Poisson model for crashes in each county. I need to add a varying intercept and slope to this model based on the severity of the crash (= 1, 2, and 3). My data look like this:

Each county has three rows, one giving the number of crashes at each severity level.
I apologize if I am being stupid anywhere. Thanks in advance!

```stan
data {
  int<lower=0> N;                  // number of observations
  int<lower=0> J;                  // number of crash severity levels (3)
  int<lower=1, upper=3> S[N];      // severity
  int<lower=0> y[N];               // count of crashes
  int<lower=1> K;                  // number of covariates
  matrix[N, K] x;                  // design matrix
  vector<lower=0>[N] V;            // exposure (needed for log(V) below)
}
transformed data {
  vector[N] log_V = log(V);
}
parameters {
  real beta0;                      // intercept
  vector[K] betas;                 // slopes
}
model {
  beta0 ~ normal(0, 1);
  betas ~ normal(0, 1);
  y ~ poisson_log(log_V + beta0 + x * betas);
}
generated quantities {
  vector[N] eta = log_V + beta0 + x * betas;
  vector[N] mu = exp(eta);
}
```

What happens? (Edit: doesn’t compile, or runs slow, or produces weird estimates, etc. etc.)

Also use a negative binomial (this one: https://mc-stan.org/docs/2_25/functions-reference/nbalt.html#nbalt) instead of a Poisson.

The dispersion parameter is worth having in a lot of cases (whenever you think Poisson, just do negative binomial).
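For intuition, here is a sketch (not from the thread) using NumPy's gamma–Poisson mixture, which is one way to generate draws matching Stan's `neg_binomial_2` parameterization: a Poisson forces Var[y] = E[y], while neg_binomial_2(mu, phi) has Var[y] = mu + mu²/phi, so it can absorb extra-Poisson variation in crash counts.

```python
# Compare Poisson vs negative binomial (NB2) dispersion by simulation.
import numpy as np

rng = np.random.default_rng(0)
mu, phi = 50.0, 5.0

# Poisson: variance equals the mean.
pois = rng.poisson(mu, size=200_000)

# neg_binomial_2(mu, phi) as a gamma-Poisson mixture:
# rate ~ Gamma(shape=phi, scale=mu/phi), then y ~ Poisson(rate).
rates = rng.gamma(shape=phi, scale=mu / phi, size=200_000)
nb = rng.poisson(rates)

print(round(pois.var(), 1))   # ≈ mu = 50
print(round(nb.var(), 1))     # ≈ mu + mu^2/phi = 550
```

The extra `phi` parameter lets the variance float above the mean instead of being pinned to it.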

1 Like

@bbbales2 I tried the following model, which has a varying intercept and slope. However, it throws the following warnings even when I use 13000 iterations with 10000 samples, adapt_delta = 0.99, and a non-centered parameterization. I went through the 'brief guide to Stan's warnings', but could not fix the problem, or maybe I didn't understand it.
I would be grateful if you could point out what's going wrong here.

```stan
data {
  int<lower=0> N;                  // number of observations
  int<lower=0> J;                  // number of crash severity levels (3)
  int<lower=1, upper=3> S[N];      // severity
  int<lower=0> y[N];               // count of crashes
  int<lower=1> K;                  // number of covariates
  matrix[N, K] X;                  // design matrix
  vector<lower=0>[N] V;            // exposure (needed for log(V) below)
}
transformed data {
  vector[N] log_V = log(V);
}
parameters {
  vector[J] beta0;                 // per-severity intercepts
  vector[K] gamma;                 // population-level slope means
  vector<lower=0>[K] tau;          // between-severity slope scales
  vector[K] betas_raw[J];          // standardized slopes (non-centered)
}
transformed parameters {
  vector[K] betas[J];
  for (j in 1:J) {
    betas[j] = gamma + tau .* betas_raw[j];
  }
}
model {
  vector[N] mu;
  beta0 ~ normal(0, 1);
  gamma ~ normal(0, 5);
  tau ~ cauchy(0, 2.5);
  for (j in 1:J) {
    betas_raw[j] ~ normal(0, 1);
  }
  for (n in 1:N) {
    mu[n] = beta0[S[n]] + X[n] * betas[S[n]];
  }
  y ~ poisson_log(log_V + mu);
}
```

Error:
1: There were 2 divergent transitions after warmup. See http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup to find out why this is a problem and how to eliminate them.
2: There were 3 chains where the estimated Bayesian Fraction of Missing Information was low. See http://mc-stan.org/misc/warnings.html#bfmi-low
3: Examine the pairs() plot to diagnose sampling problems
4: The largest R-hat is 3.83, indicating chains have not mixed. Running the chains for more iterations may help. See http://mc-stan.org/misc/warnings.html#r-hat
5: Bulk Effective Samples Size (ESS) is too low, indicating posterior means and medians may be unreliable. Running the chains for more iterations may help. See http://mc-stan.org/misc/warnings.html#bulk-ess
6: Tail Effective Samples Size (ESS) is too low, indicating posterior variances and tail quantiles may be unreliable. Running the chains for more iterations may help. See http://mc-stan.org/misc/warnings.html#tail-ess

The problem probably is that your priors allow for obscenely large values, given that `mu` is on the log scale. How large are typical values of `V`?

A prior predictive check would be helpful to see what values you are implying for `y`, or maybe just throw normal(0, 1) priors on everything first and see if that improves sampling.
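For example, a quick prior predictive sketch in Python (the exposure and covariate values here are hypothetical, not from the thread) shows how fast even modest prior scales blow up the implied counts when everything sits on the log scale:

```python
# Prior predictive sketch: with a log link,
# log E[y] = log(V) + beta0 + x * betas,
# so wide priors on beta0/betas imply astronomically large crash counts.
import numpy as np

rng = np.random.default_rng(1)
V = 500.0               # hypothetical exposure, order of magnitude only
K = 4                   # hypothetical number of covariates
x = rng.normal(size=K)  # one standardized covariate row

p99 = {}
for sd in (1.0, 5.0):   # prior scales to compare
    beta0 = rng.normal(0.0, sd, size=10_000)
    betas = rng.normal(0.0, sd, size=(10_000, K))
    eta = np.log(V) + beta0 + betas @ x
    # 99th percentile of the implied Poisson mean under this prior
    p99[sd] = float(np.percentile(np.exp(eta), 99))
    print(sd, p99[sd])
```

Even normal(0, 1) priors put real mass on Poisson means orders of magnitude above the exposure; normal(0, 5) is far worse.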

The values of V range from 55 to 5589, with a mean of 644 and a median of 329.
I tried using a normal(0,1) prior (and others close to it) on everything; I still get the same warnings.

I’ve been taught the hard way that N(0,1) can be very broad when you use a log link. You need to do prior predictive checks.

Cool beans. What @daniel_h and @torkar say makes sense.

The biggest warning in the output there is:

The largest R-hat is 3.83, indicating chains have not mixed

So you’ll want to find out in what way things did not mix.

1. Check if the chains ended up around different solutions. With cmdstanr + bayesplot that is: `bayesplot::mcmc_trace(fit$draws(), "lp__")`

2. If they’re ending up in the same place, you probably want to check which parameters have the lowest ESS/highest R-hat. With cmdstanr that is `fit$summary() %>% arrange(ess_bulk)`, and then the parameters of concern will be at the top

If it’s the first thing, use predictions to figure out what the different solutions mean (chapter 11 here is an example). If it’s the second thing, then priors can sometimes help.
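For intuition about what an R-hat of 3.83 means, here is a toy basic (non-split) R-hat computed from scratch with NumPy. This is a simplification: Stan's current diagnostic is the rank-normalized split version, but the idea is the same, comparing between-chain to within-chain variance:

```python
import numpy as np

def basic_rhat(chains):
    """Basic (non-split) R-hat; chains has shape (n_chains, n_draws)."""
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)           # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()     # mean within-chain variance
    var_plus = (n - 1) / n * W + B / n        # pooled variance estimate
    return float(np.sqrt(var_plus / W))

rng = np.random.default_rng(0)
mixed = rng.normal(0, 1, size=(4, 1000))                  # all chains agree
stuck = mixed + np.array([[0.0], [0.0], [5.0], [5.0]])    # two chains elsewhere

print(basic_rhat(mixed))   # near 1
print(basic_rhat(stuck))   # far above 1.01
```

An R-hat near 3.8 is the second pattern: the chains are sitting in very different places, so more iterations alone rarely fix it.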

Also still worth switching to a negative binomial.

3 Likes