Hey Carina!
I tried something that did not work super well, unfortunately. The idea was to do something similar to the model discussed in this thread: it is also a non-linear model, with added hierarchical structure for the parameters. The approach models steepness and inflection on the log scale (to ensure that they are positive) and then applies a multivariate normal prior for partial pooling.
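In math, for participant $n$ the intended structure is

$$
\begin{pmatrix} \log \text{steepness}_n \\ \log \text{inflection}_n \end{pmatrix}
\sim \mathcal{N}(\mu,\ \Sigma),
\qquad
\Sigma = \operatorname{diag}(\sigma)\, \Omega\, \operatorname{diag}(\sigma),
$$

which the code below implements in non-centered form through the Cholesky factor of $\Omega$.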
This is my attempt:
data{
  int<lower=1> N;                      // number of participants
  int<lower=1> T;                      // number of time points
  int<lower=0, upper=8> y[N, T];       // successes out of 8 trials per cell
}
transformed data{
  int<lower=1, upper=8> trials[N, T];  // every cell has 8 trials
  for(t in 1:T)
    for(n in 1:N)
      trials[n, t] = 8;
}
parameters{
  real<lower=0, upper=1> minval[N];  // lower asymptote per participant
  real<lower=0, upper=1> maxval[N];  // upper asymptote per participant
  vector[2] mus;                     // population means of log steepness / log inflection
  vector<lower=0>[2] sigmas;         // population SDs on the log scale
  cholesky_factor_corr[2] L_Omega;   // Cholesky factor of the correlation matrix
  vector[2] z[N];                    // standardized offsets (non-centered parameterization)
}
transformed parameters{
  real<lower=0> steepness[N];
  real<lower=0> inflection[N];
  real<lower=0, upper=1> performance[N, T];
  for(n in 1:N){
    // non-centered draw from the multivariate normal on the log scale
    vector[2] params = mus + diag_pre_multiply(sigmas, L_Omega) * z[n];
    steepness[n] = exp(params[1]);
    inflection[n] = exp(params[2]);
    for(t in 1:T)
      // logistic learning curve, rescaled to run from minval to maxval
      performance[n, t] = (maxval[n] - minval[n]) * inv_logit(steepness[n] * (t - inflection[n])) + minval[n];
  }
}
model{
  for(n in 1:N)
    for(t in 1:T)
      y[n, t] ~ binomial(trials[n, t], performance[n, t]);
  maxval ~ beta(9, 1);       // upper asymptote concentrated near 1
  minval ~ beta(1, 9);       // lower asymptote concentrated near 0
  mus[1] ~ normal(1, 1);     // mean log steepness
  mus[2] ~ normal(2.5, 1);   // mean log inflection
  sigmas ~ exponential(10);
  for(n in 1:N)
    z[n] ~ std_normal();
  L_Omega ~ lkj_corr_cholesky(4);
}
Even with a high number of iterations and adapt_delta = 0.99, this results in a few divergent transitions on the data set that you provided earlier. I think better priors could solve the problem… Maybe someone else has some thoughts as well…
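In case anyone wants to poke at it, here is a minimal sketch of a sampler call with the settings above. I'm showing it with cmdstanpy; the file name logistic_learning.stan and the dummy data are placeholders I made up, not the real set:

import numpy as np
from cmdstanpy import CmdStanModel

# Dummy data with the same shape as the real set: N participants, T time
# points, y[n, t] = successes out of 8 trials. These values are made up.
N, T = 20, 12
rng = np.random.default_rng(1)
y = rng.binomial(8, 0.7, size=(N, T))

model = CmdStanModel(stan_file="logistic_learning.stan")  # assumed file name
fit = model.sample(
    data={"N": N, "T": T, "y": y},
    chains=4,
    iter_warmup=2000,
    iter_sampling=2000,   # "high number of iterations"
    adapt_delta=0.99,
    seed=1,
)
print(fit.diagnose())  # reports divergent transitions, tree depth, E-BFMI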