# Rat growth example 5.6 from Carlin and Louis second edition

Hi Gang:

So, I switched to Stan recently and I have been pleasantly surprised.
Most things just work, and the things you wouldn't expect to work show
really poor mixing: Rhat's of 1.5 and higher with 99% divergent
transitions, and effective sample sizes of 10-20%. Looking at the
trace plots, the movement is minimal and almost flat. I'm
probably doing something silly, but I don't see what. Any ideas
what is going wrong here? See below.

Thanks,

Rodney

$$
\begin{aligned}
y \mid \beta, \tau &\sim \mathrm{No}_n(X\beta, \tau I_n),
  \quad \text{where } \mathrm{No}_n(X\beta, \tau I_n) \equiv N_n(X\beta, \tau^{-1} I_n),\\
\tau &\sim \mathrm{Ga}(\alpha_0, \lambda_0),\\
\beta_i \mid \mu_\beta, T_\beta &\stackrel{iid}{\sim} \mathrm{No}_p(\mu_\beta, T_\beta),
  \quad i = 1, \dots, k,\\
\text{or}\quad \beta \mid \mu_\beta, T_\beta &\sim \mathrm{No}_{pk}(1_k \otimes \mu_\beta, I_k \otimes T_\beta),\\
T_\beta &\sim W_p(\Omega_0, \nu_0),\\
\mu_\beta &\sim \mathrm{No}_p(\beta_0, T_{\mu_\beta}).
\end{aligned}
$$

```stan
data {
  int<lower=1> K;          // number of rats
  int<lower=1> P;          // number of parameters
  int<lower=1> M;          // number of measurements
  int<lower=1> B;          // length of beta_long: K*P
  int<lower=1> N;          // length of y: M*K
  matrix[N, B] X;
  matrix[N, N] I;
  vector[N] y;
  real<lower=1> nu0;
  matrix[P, P] Omega0;
  real<lower=0> alpha0;
  real<lower=0> lambda0;
  vector[P] beta0;
  matrix[P, P] T_mu_beta;
}
parameters {
  vector[P] mu_beta;
  vector[P] beta[K];
  real<lower=0> tau;
  matrix[P, P] T_beta;
}
transformed parameters {
  vector[B] beta_long;
  for (i in 1:K)
    for (j in 1:P) {
      beta_long[(i - 1) * P + j] = beta[i][j];
    }
}
model {
  mu_beta ~ multi_normal_prec(beta0, T_mu_beta);
  T_beta ~ wishart(nu0, Omega0);
  for (i in 1:K)
    beta[i] ~ multi_normal_prec(mu_beta, T_beta);
  tau ~ gamma(alpha0, lambda0);
  y ~ multi_normal_prec(X * beta_long, tau * I);
}
```

```r
ratgrowth. = read.table('ratgrowth.dat', header=TRUE,
                        colClasses=c('integer', 'integer', 'numeric',
                                     'numeric', 'numeric', 'numeric',
                                     'numeric', 'integer'))
days = c(8, 15, 22, 29, 36)
(days. = days - mean(days))

weeks = 1:5
(weeks. = weeks - mean(weeks))

y = t(as.matrix(ratgrowth.[ , paste0('y', 1:5)]))
y = c(y)
y. = y - mean(y)

K = 30  ## number of rats
P = 2   ## number of regression parameters
M = 5   ## number of measurements
B = K*P ## length of beta.
N = M*K ## length of y
X = matrix(0, nrow=N, ncol=B)
Z = matrix(0, nrow=B, ncol=P)
I = diag(nrow=N, ncol=N)
for (i in 1:K) {
  h = (i-1)*M
  k = 2*i - 1
  Z[k, 1] = 1
  Z[k+1, 2] = 1
  for (j in 1:M) {
    h = h + 1
    X[h, k] = 1
    X[h, k+1] = weeks.[j]
  }
}

ratgrowth = list(
  K = 30,  ## number of rats
  P = 2,   ## number of regression parameters
  M = 5,   ## number of measurements
  B = K*P, ## length of beta.
  N = M*K, ## length of y
  X = X,
  I = I,
  nu0 = P + 1,
  Omega0 = diag(nrow=P, ncol=P) * 0.001,
  alpha0 = 0.001,
  lambda0 = 0.001,
  beta0 = c(mean(ratgrowth.$y3), 43.4),
  T_mu_beta = diag(nrow=P, ncol=P) * 0.001,
  y = y)

init = as.list(1:8)
for (i in 1:8) {
  init[[i]] = list(T_beta = diag(nrow=P, ncol=P) * 0.001,
                   mu_beta = c(mean(ratgrowth.$y3), 43.4),
                   tau = 0.001,
                   beta = matrix(c(mean(ratgrowth.$y3), 43.4),
                                 nrow=K, ncol=P, byrow=TRUE))
}
```

Contents of `ratgrowth.dat`:

```text
strain diet y1 y2 y3 y4 y5 id
2 2 151 199 246 283 320  1
1 2 145 199 249 293 354  2
1 2 147 214 263 312 328  3
2 1 155 200 237 272 297  4
1 2 135 188 230 280 323  5
3 1 159 210 252 298 331  6
1 1 141 189 231 275 305  7
3 2 159 201 248 297 338  8
2 2 177 236 285 350 376  9
1 1 134 182 220 260 296 10
3 2 160 208 261 313 352 11
3 1 143 188 220 273 314 12
2 2 154 200 244 289 325 13
3 2 171 221 270 326 358 14
2 1 163 216 242 281 312 15
2 1 160 207 248 288 324 16
1 2 142 187 234 280 316 17
2 1 156 203 243 283 317 18
3 2 157 212 259 307 336 19
3 1 152 203 246 286 321 20
2 2 154 205 253 298 334 21
1 1 139 190 225 267 302 22
1 1 146 191 229 272 302 23
3 1 157 211 250 285 323 24
1 2 132 185 237 286 331 25
3 2 160 207 257 303 345 26
2 2 169 216 261 295 333 27
2 1 157 205 248 289 316 28
1 1 137 180 219 258 291 29
3 1 153 200 244 286 324 30
```


I don't see anything obviously wrong with the model, but there are some things that are strange and might help you debug the problem.

- You don't actually seem to need `multi_normal_prec`, as all your precision matrices are diagonal (unless I am mistaken). So you should be able to replace statements like `mu_beta ~ multi_normal_prec(beta0, T_mu_beta);` with `mu_beta ~ normal(beta0, sqrt(1 / T_mu_beta));`
- You can use a `transformed data` block to compute `B`, `N`, and possibly others, and avoid having to pass them as data.
- The gamma prior on the precision is probably too broad, putting large prior mass at very small variances.
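To make the first two suggestions concrete, here is a minimal sketch of what the simplified program could look like. This is an assumption-laden rewrite, not your model verbatim: it hard-codes the diagonal hyperprior precision of 0.001 that your R script passes in, and it declares `T_beta` as `cov_matrix[P]` (an unconstrained `matrix[P, P]` is not guaranteed to stay inside the support of the Wishart, which by itself can produce divergences).

```stan
data {
  int<lower=1> K;          // number of rats
  int<lower=1> P;          // number of parameters
  int<lower=1> M;          // number of measurements
  matrix[M * K, K * P] X;
  vector[M * K] y;
  real<lower=1> nu0;
  matrix[P, P] Omega0;
  real<lower=0> alpha0;
  real<lower=0> lambda0;
  vector[P] beta0;
}
transformed data {
  int B = K * P;           // length of beta_long, computed here instead of passed in
  int N = M * K;           // length of y
}
parameters {
  vector[P] mu_beta;
  vector[P] beta[K];
  real<lower=0> tau;
  cov_matrix[P] T_beta;    // constrained positive definite, inside Wishart support
}
transformed parameters {
  vector[B] beta_long;
  for (i in 1:K)
    beta_long[((i - 1) * P + 1):(i * P)] = beta[i];
}
model {
  // diagonal precision 0.001 => scalar normal with sd = sqrt(1 / 0.001)
  mu_beta ~ normal(beta0, sqrt(1 / 0.001));
  T_beta ~ wishart(nu0, Omega0);
  for (i in 1:K)
    beta[i] ~ multi_normal_prec(mu_beta, T_beta);
  tau ~ gamma(alpha0, lambda0);
  // tau * I_n is diagonal, so the N-dimensional multi_normal_prec
  // collapses to a vectorized univariate normal with sd = 1 / sqrt(tau)
  y ~ normal(X * beta_long, inv_sqrt(tau));
}
```

The `beta[i] ~ multi_normal_prec(mu_beta, T_beta);` statement is kept multivariate because `T_beta` is a free parameter and genuinely non-diagonal; only the likelihoods with fixed diagonal precision matrices are replaced by scalar normals, which also removes the need to pass the `N x N` identity matrix `I` as data.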

Hope that helps!