Help Vectorizing a Cross-Classified Model?

Hello, everyone.

I would really like some help in reparameterizing or vectorizing the attached model.

Following suggestions in this forum I have built the model up to a three-level hierarchical IRT model (beginning with a random-person, random-item model) that successfully recovers my parameters from a simulated data file, but it currently takes two days to run with 3 chains of 2,000 iterations each. A simulated data file with responses and all of the necessary indices is also attached.

Any suggestions would be greatly appreciated.

Cross-Classified Rasch Model.R (6.6 KB)

Apologies.

The simulated response data file “…test.csv” is attached.

empiricaldataforestimation_model1_v6_test.csv (787.8 KB)

This line

//distribution of deviations, common estimate across families
   for (p in 1:P){
      fam_diff[p] ~ normal(family_mu[p],sigma_famresid[mm[p]]);
   }

should not even work because fam_diff is declared but not defined when the line executes. But in general, you will probably have to move to non-centered parameterizations wherever possible in a model like this.

Thank you for the input. I thought that I had defined it as a temporary variable in the code within the model block just above that line. Is that not accurate? The model did run successfully.

I am working to reparameterize now and will return to the thread with updated code. That said, any suggestions for how to improve this through vectorizing are also welcome - if those can be made before the other changes :)

It’s much easier for everyone if you can just include the code in the post. Here it is with auto-indent from emacs:

 data { 
  int<lower=1> N;            // number of observations 

  int<lower=1> J;            // number of students 
  int<lower=1> K;            // number of items 
  int<lower=1> P;            // number of families
  int<lower=1> M;      	     // number of item models
  int<lower=1> Q;		         // number of templates

  int peeps[N];                  // student giving response n 
  int<lower=1,upper=K> item[N];  // item for response n 
  int<lower=0,upper=1> score[N];   // correctness for response n 

  int<lower=0,upper=J> pid[J];   // Person ID number
  int<lower=0,upper=K> iid[K];   // Item ID number 
  int<lower=0,upper=P> fid[P];   // family ID number   
  int<lower=0,upper=M> mid[M];   // item model ID number
  int<lower=0,upper=Q> tid[Q];   // template ID number

  int<lower=1,upper=P> parent[K]; 		//indexes items to families
  int<lower=1,upper=M> mm[P];     		//indexes families to item models
  int<lower=1,upper=Q> tt[M];	        //indexes item models to templates

  int<lower=0,upper=1> multten[P];                 // Content code - numbers are some multiple of ten
  int<lower=0,upper=1> threedig[P];                // Content code - numbers are maximum of three digits

  int<lower=0,upper=1> vv[M];	//dummies for display format as verbal
  int<lower=0,upper=1> hh[M];	//dummies for display format as horizontal
}

transformed data {
  real noval;
  noval = 0;
}

parameters {

  //people parameters
  vector[J] uj;              		


  //mean structure of items
  vector[M] model_mu;
  real char_m10;                    //fixed effects of content characteristics 
  real char_d3;                     //fixed effects of content characteristics

  //random effects
  vector[K] item_dev;
  real<lower=0> sigma_item [P];
  real<lower=0> sigma_famresid [M];
}

transformed parameters{
  vector[N] eta;
  vector[K] betai;
  vector[P] family_mu;

  // varying item family difficulty across item families within models
  // decomposition of family means into a model mean and fixed effects
  for (p in 1:P){
    family_mu[p] = model_mu[mm[p]] + (multten[p] ? char_m10 : noval) + (threedig[p] ? char_d3 : noval);
  }

  // item difficulties parameterized as random, with parent-specific means
  for(k in 1:K){
    betai[k] = family_mu[parent[k]]+item_dev[k]; 
  }

  //log odds of a correct response
  for(n in 1:N){
    eta[n] = uj[peeps[n]]-betai[item[n]];
  }

}

model { 

  vector[P] fam_diff;

  //prior on random variable theta to scale the item difficulty parameters
  uj ~ normal(0,1);

  //weakly informative prior on item variance
  sigma_item ~ cauchy(0,2.5);
  sigma_famresid ~ cauchy(0,2.5);

  //distribution of deviations, common estimate across families
  for (p in 1:P){
    fam_diff[p] ~ normal(family_mu[p],sigma_famresid[mm[p]]);
  }

  //distribution of deviations, common estimate across families
  for (k in 1:K){
    item_dev[k] ~ normal(0,sigma_item[parent[k]]);
  }

  //likelihood function                                         
  score ~ bernoulli_logit(eta);

}

Ben’s right: that fam_diff variable isn’t defined. You declare it as a temporary variable, but it is never assigned a value before it is used. The sampling statement ~ doesn’t literally sample; it just increments the log density accumulator, so the variable on its left needs a defined value first.
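For illustration, a minimal sketch of one way to make that statement well defined, assuming fam_diff is actually meant to be estimated: declare it in the parameters block so it has a value when the sampling statement executes.

parameters {
  vector[P] fam_diff;   // family-level deviations, now a parameter
}
model {
  // vectorized: sigma_famresid[mm] selects one scale per family
  fam_diff ~ normal(family_mu, sigma_famresid[mm]);
}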

You can vectorize almost all those operations if you declare everything as vectors. For example,

item_dev ~ normal(0, sigma_item[parent]);   // one statement replaces the loop over k

But the bigger issue is that you’ll probably need a non-centered parameterization here (see the manual).
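As a rough sketch of what the non-centered version of the item level could look like (assuming sigma_item is re-declared as a vector so the indexed scale is a vector; this is not the final model):

parameters {
  vector[K] item_dev_raw;           // standardized item deviations
  vector<lower=0>[P] sigma_item;    // re-declared as a vector (was real[P])
}
transformed parameters {
  vector[K] item_dev;
  item_dev = item_dev_raw .* sigma_item[parent];   // one scale per item, elementwise
}
model {
  item_dev_raw ~ normal(0, 1);   // implies item_dev[k] ~ normal(0, sigma_item[parent[k]])
  sigma_item ~ cauchy(0, 2.5);   // same weakly informative prior as before
}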

This one’s trickier:

 for (p in 1:P){
    family_mu[p] = model_mu[mm[p]] + (multten[p] ? char_m10 : noval) + (threedig[p] ? char_d3 : noval);
  }

which can be coded as this:

family_mu = model_mu[mm];
family_mu[multten] += char_m10;
family_mu[threedig] += char_d3;

if you change multten to be the array of the indices p for which the original multten[p] == 1, and similarly for threedig.
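In data-block terms, that means replacing the 0/1 dummies with index arrays and passing in their lengths; a sketch (the names X and Y are illustrative, and match the revised model later in this thread):

data {
  int<lower=1> X;                     // number of families with the multiple-of-ten property
  int<lower=1> Y;                     // number of families with the three-digit property
  int<lower=1,upper=P> multten[X];    // indices of those families
  int<lower=1,upper=P> threedig[Y];   // indices of those families
}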

Thank you for all of the insights. I am working on implementing, having
read a number of articles on the importance of non-centered
parameterizations.

I will update with my new code inside the post as requested as well.

Hello, @bgoodri and @Bob_Carpenter,

Again, I truly appreciate your insights. I reviewed several articles on non-centered parameterizations, and also the relevant Stan documentation, and made several changes to the code.

I was able, with a two-level hierarchical model, to successfully recover the mean and variance parameters of simulated data after vectorizing my code and using the non-centered parameterization as follows:

parameters {
  vector[J] uj;              		
  real<lower=0> sigma_item;
  vector[P] family_mu;
  vector[K] betai_offset;
}
transformed parameters{
   vector[N] eta;
   vector[K] betai;

    betai = family_mu[parent] + betai_offset*sigma_item; 
    eta = uj[peeps]-betai[item];
}
model { 
  betai_offset ~ normal(0,1);
  uj ~ normal(0,1);
  sigma_item ~ cauchy(0,2.5);                                     
  score ~ bernoulli_logit(eta);
}

I am still having difficulty with the decomposed means in the more complex model, however. I did define the arrays of indices as suggested in the previous post:

mydat<-mydat[with(mydat,order(family)),]

multten<-unique(mydat[mydat$m10==1,c("family")])
threedig<-unique(mydat[mydat$d3==1,c("family")])

However, when I run the code to estimate the model with decomposed means at that second level, I get the following error with the arithmetic operations:

SYNTAX ERROR, MESSAGE(S) FROM PARSER:
error in 'model3070257e244c_rpri' at line 67, column 24

65: 
66:    family_mu = model_mu[mm] + fammu_offset*fam_resid; 
67:    family_mu[multten] += char_m10*ident_x;
                           ^
68:    family_mu[threedig] += char_d3*ident_y; 

PARSER EXPECTED: <expression>
Error in stanc(file = file, model_code = model_code, model_name = model_name,  : 
failed to parse Stan model 'rpri' due to the above error.

The code that I am using to run the model is here (and I think, @bgoodri, that I have successfully used the non-centered parameterizations and also defined everything I am looking to estimate?) – any additional suggestions would be greatly appreciated. I believe I have coded the model correctly, based on my reading of the documentation, but I am sure there is something I am overlooking.

Thank you again!

mymodel<- "

data { 
int<lower=1> N;            // number of observations 

int<lower=1> J;            // number of students 
int<lower=1> K;            // number of items 
int<lower=1> P;            // number of families
int<lower=1> M;      	     // number of item models
int<lower=1> Q;		         // number of templates

int<lower=1> X;            // length of multten array
int<lower=1> Y;            // length of threedig array

int<lower=1> A;            // length of vv array
int<lower=1> B;            // length of hh array

int peeps[N];                  // student giving response n 
int<lower=1,upper=K> item[N];  // item for response n 
int<lower=0,upper=1> score[N];   // correctness for response n 

int<lower=0,upper=J> pid[J];   // Person ID number
int<lower=0,upper=K> iid[K];   // Item ID number 
int<lower=0,upper=P> fid[P];   // family ID number   
int<lower=0,upper=M> mid[M];   // item model ID number
int<lower=0,upper=Q> tid[Q];   // template ID number

int<lower=1,upper=P> parent[K]; 		//indexes items to families
int<lower=1,upper=M> mm[P];     		//indexes families to item models
int<lower=1,upper=Q> tt[M];	        //indexes item models to templates

int multten[X];                 // Array of indices for families - numbers are some multiple of ten
int threedig[Y];                // Array of indices for families - numbers are maximum of three digits

int vv[A];	//Array of indices for imodels - display format is verbal
int hh[B];	//Array of indices for imodels - display format is horizontal

}

parameters {

  vector[J] uj;              		
  
  real<lower=0> sigma_item;
  real<lower=0> fam_resid;

  vector[K] betai_offset;
  vector[P] fammu_offset;

  vector[M] model_mu;
  real char_m10;                    //fixed effects of content characteristics 
  real char_d3;                     //fixed effects of content characteristics

}

transformed parameters{
  vector[N] eta;
  vector[K] betai;
  vector[P] family_mu;

// varying item family difficulty across item families within models
// decomposition of family means into a model mean and fixed effects

   family_mu = model_mu[mm] + fammu_offset*fam_resid;
   family_mu[multten] += char_m10;
   family_mu[threedig] += char_d3; 

// item difficulties parameterized as random, with parent-specific means
    betai = family_mu[parent] + betai_offset*sigma_item; 

//log odds of a correct response
    eta = uj[peeps]-betai[item];

}

model { 

//hyperprior
  betai_offset ~ normal(0,1);
  fammu_offset ~ normal(0,1);

//prior on random variable theta to scale the item difficulty parameters
  uj ~ normal(0,1);

//weakly informative prior on variances
  sigma_item ~ cauchy(0,2.5);
  fam_resid ~ Cauchy(0,2.5);

//likelihood function                                         
  score ~ bernoulli_logit(eta);

}

"

mymodel<-stan(model_code = mymodel, model_name = "rpri", 
              data = c("N","J","K","P","M","Q",
                       "A","B","X","Y",
                       "ident_x","ident_y",
                       "peeps","item","pid",
                       "iid","fid","mid","tid",
                       "parent","score","mm","tt",
                       "multten","threedig",
                       "vv","hh"), 
              control = list(stepsize=0.01, adapt_delta=0.99, max_treedepth=15),
              iter = 2000, chains = 3, thin=1, save_warmup=TRUE,
              verbose = T) 

##### Check recovery

mydat<-read.csv("empiricaldataforestimation_model1_v3_params.csv",header=TRUE)

fammu<-cbind(unique(mydat[order(mydat$fid),c("fid","family_mu")]),summary(mymodel,pars="family_mu"))

fammu[,c(2,3,6,10)]


sigma_item
summary(mymodel,pars="sigma_item")

Which version of Rstan are you using? The compound assignment += is relatively new. You might have to update Rstan or write the sum out in full.

family_mu[multten] = family_mu[multten] + char_m10*ident_x;

Hello @stijn,

Thank you - I wish it were just a version issue with Stan, but maybe it’s something else? I am in the process now of updating RStudio to the latest version.

Currently I am running:
R version 3.4.3 (2017-11-30) – “Kite-Eating Tree” in RStudio 1.1.383
Platform: x86_64-w64-mingw32/x64 (64-bit)
rstan (Version 2.17.3, GitRev: 2e1f913d3ca3)

data { 
  int<lower=1> N;            // number of observations 
  
  int<lower=1> J;            // number of students 
  int<lower=1> K;            // number of items 
  int<lower=1> P;            // number of families
  int<lower=1> M;      	     // number of item models
  int<lower=1> Q;		         // number of templates
  
  int<lower=1> X;            // length of multten array
  int<lower=1> Y;            // length of threedig array
  
  int<lower=1> A;            // length of vv array
  int<lower=1> B;            // length of hh array
  
  int peeps[N];                  // student giving response n 
  int<lower=1,upper=K> item[N];  // item for response n 
  int<lower=0,upper=1> score[N];   // correctness for response n 
  
  int<lower=0,upper=J> pid[J];   // Person ID number
  int<lower=0,upper=K> iid[K];   // Item ID number 
  int<lower=0,upper=P> fid[P];   // family ID number   
  int<lower=0,upper=M> mid[M];   // item model ID number
  int<lower=0,upper=Q> tid[Q];   // template ID number
  
  int<lower=1,upper=P> parent[K]; 		//indexes items to families
  int<lower=1,upper=M> mm[P];     		//indexes families to item models
  int<lower=1,upper=Q> tt[M];	        //indexes item models to templates
  
  int multten[X];                 // Array of indices for families - numbers are some multiple of ten
  int threedig[Y];                // Array of indices for families - numbers are maximum of three digits
  
  int vv[A];	//Array of indices for imodels - display format is verbal
  int hh[B];	//Array of indices for imodels - display format is horizontal
  
}

parameters {
  
  vector[J] uj;              		
  
  real<lower=0> sigma_item;
  real<lower=0> fam_resid;
  
  vector[K] betai_offset;
  vector[P] fammu_offset;
  
  vector[M] model_mu;
  real char_m10;                    //fixed effects of content characteristics 
  real char_d3;                     //fixed effects of content characteristics
  
}

transformed parameters{
  vector[N] eta;
  vector[K] betai;
  vector[P] family_mu;
  
  // varying item family difficulty across item families within models
  // decomposition of family means into a model mean and fixed effects
  
  family_mu = model_mu[mm] + fammu_offset*fam_resid;
  family_mu[multten] = family_mu[multten] + char_m10;
  family_mu[threedig] = family_mu[threedig] + char_d3; 
  
  // item difficulties parameterized as random, with parent-specific means
  betai = family_mu[parent] + betai_offset*sigma_item; 
  
  //log odds of a correct response
  eta = uj[peeps]-betai[item];
  
}

model { 
  
  //hyperprior
  betai_offset ~ normal(0,1);
  fammu_offset ~ normal(0,1);
  
  //prior on random variable theta to scale the item difficulty parameters
  uj ~ normal(0,1);
  
  //weakly informative prior on variances
  sigma_item ~ cauchy(0,2.5);
  fam_resid ~ cauchy(0,2.5);
  
  //likelihood function                                         
  score ~ bernoulli_logit(eta);
  
}

This passes the parser with rstan 2.17.3. I wrote the summation out instead of using the compound assignment and changed the last ‘Cauchy’ to ‘cauchy’. This version does get warnings about inefficient deep copies, I guess because I am not using the compound assignment. I hope Bob can help with the compound += statement.

@stijn, thank you again.

Just as you were writing - I was authoring this post!

I did take your suggestion and wrote out the sums in full, decomposing the item family means into an item model mean and a combination of fixed effects:

   family_mu = model_mu[mm] + fammu_offset*fam_resid;
   family_mu[multten] = family_mu[multten] + char_m10*ident_x;
   family_mu[threedig] = family_mu[threedig] + char_d3*ident_y;

To accomplish this, as you can see above, I multiplied the scalar values by vectors of 1s as needed so that I could execute the code without loops. This also passed the parser - but perhaps my approach (adding ident_x and ident_y) was unnecessary?

Looking at the code, there is one final modification I would like to make after improving efficiency: I would like to have family-specific random effects, as in the original code. With the modifications here, fam_resid is estimated to be the same across all families. What if I want to estimate vector[M] fam_resid because I think there is variability across item models in how consistently items were generated? Or if I wanted to estimate vector[P] sigma_item?

I am currently running three chains, which with the previous version of this same code took… days, with 2,000 iterations per chain. It appears already to be running MUCH faster, even with the deep copies. I had already noticed, when testing versions of this code on simpler models, that the non-centered parameterization led to considerable increases in speed, and I no longer get warnings about low information.

Copies are relatively cheap—they are usually memory local and don’t put anything on the autodiff stack.

If they are equal to 1, then they’re unnecessary.
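In other words, a scalar added to an indexed vector broadcasts elementwise, so the vectors of 1s can simply be dropped; a sketch of the simplified line:

family_mu[multten] = family_mu[multten] + char_m10;   // no ident_x needed; char_m10 broadcasts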

Just make a hierarchical model for each. The scales can be tricky if lognormal isn’t appropriate (it avoids zero, so if scales are very close to zero it won’t work).
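A hedged sketch of one way to do that for the item-level scales, using a lognormal-style hierarchy on the log scale (the names mu_log_sigma and tau_log_sigma are made up for illustration):

parameters {
  vector[P] log_sigma_item_raw;    // standardized log-scales
  real mu_log_sigma;               // population mean of the log-scales
  real<lower=0> tau_log_sigma;     // spread of the log-scales
}
transformed parameters {
  vector<lower=0>[P] sigma_item;
  sigma_item = exp(mu_log_sigma + tau_log_sigma * log_sigma_item_raw);   // non-centered on the log scale
}
model {
  log_sigma_item_raw ~ normal(0, 1);
  mu_log_sigma ~ normal(0, 1);
  tau_log_sigma ~ normal(0, 1);
}

As noted above, this kind of prior keeps the scales away from zero, so it can misbehave if some of the true scales are essentially zero.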


@Bob_Carpenter - I was going back through my old posts and I wanted to close the loop on this post and clarify how to adjust the code when a vector of effects is being estimated. You had written that I should just make a hierarchical model for each.

When writing out the non-centered parameterization when there are, say, different residuals across groups (and it seems entirely reasonable that the fixed effects won’t explain the variation equally across groups), would it look something like this:

family_mu = model_mu[mm] + fammu_offset * fam_resid[mm];
family_mu[multten] = family_mu[multten] + char_m10;
family_mu[threedig] = family_mu[threedig] + char_d3;

where we have defined fammu_offset as a real number with a normal(0,1) prior, or would we want to define the standardized variable as a vector?

I am asking because I have defined fammu_offset as a vector, and also as a vector indexed with [mm], meaning that although there are twenty-seven item families in my data there are only nine unique values being drawn, and I am not able to effectively recover the variance parameters used when simulating the data. I am able to recover the mean structure, but the variances are… elusive. I have been playing with different priors (Cross-classified Hierarchical Model, Heterogenous Variances - #18 by bbbales2), and also exploring what happens when I change adapt_delta and run different numbers of chains as well as longer chains. I’m wondering if the issue is simply that I am incorrectly implementing the non-centered parameterization?

@bbbales2 - I am editing this post to include your name since you were extremely helpful with my recent (related!) post, to see if you had any suggestions. I would love to say that I had everything figured out - but it seems that I’m not able to recover variance parameters successfully, and I think it is because I’m not specifying the non-centered parameterization correctly… My code works and values are successfully recovered when I’m estimating a single variance parameter rather than a vector of values.

Thanks so much,
Shauna

Could you post the variable declarations for family_mu, model_mu, fammu_offset, fam_resid… Basically all the variable declarations? Or just the model itself?

I wanna answer your question specifically and it’ll be easiest if I know the shapes of things exactly. I just don’t wanna say the wrong thing :P.

Hi @bbbales2 - sorry to leave the declarations out of the thread… The structure of the model at each level is similar, where I am trying to define a vector of residuals at each level, and in my current model, i have also defined the offsets as vectors…

parameters {

vector[J] uj;              		

vector<lower=0>[P] sigma_item;
vector<lower=0>[M] fam_resid;
vector<lower=0>[Q] mod_resid;

vector[K] betai_offset;
vector[P] fammu_offset;
vector[M] modmu_offset;

vector[Q] template_mu;
real disp_horiz;
real disp_verb;
real char_m10;                   
real char_d3;                    

}

At each level of the model there is a regression, and you’ll see here that I have used the .* operator to multiply the offsets by the vector of residuals; this is where I think my model is having difficulty.

transformed parameters{
vector[N] eta;
vector[K] betai;
vector[P] family_mu;
vector[M] model_mu;

// varying item model difficulty across item models within templates
// Item model difficulty is decomposed into a linear combination of the template difficulty
// and the effect(s) of display characteristics
// variances differ across item models

model_mu = template_mu[tt] + modmu_offset .* mod_resid[tt]; 
model_mu[vv] = model_mu[vv] + disp_verb;
model_mu[hh] = model_mu[hh] + disp_horiz;

// varying item family difficulty across item families within models
// decomposition of family means into a model mean and fixed effects

family_mu = model_mu[mm] + fammu_offset .* fam_resid[mm];
family_mu[multten] = family_mu[multten] + char_m10;
family_mu[threedig] = family_mu[threedig] + char_d3;

// item difficulties parameterized as random, with parent-specific means
betai = family_mu[parent] + betai_offset .* sigma_item[parent]; 

//log odds of a correct response
eta = uj[peeps]-betai[item];

}

The model statement is here, and you will see that the priors for the random effects are now normal rather than cauchy, per another thread (Cross-classified Hierarchical Model, Heterogenous Variances - #18 by bbbales2). Thank you!

model { 
//hyperprior
betai_offset ~ normal(0,1);
fammu_offset ~ normal(0,1);
modmu_offset ~ normal(0,1);

//prior on random variable theta to scale the item difficulty parameters
uj ~ normal(0,1);

//weakly informative prior on item variance
sigma_item ~ normal(0,1);
fam_resid ~ normal(0,1);
mod_resid ~ normal(0,1);

//likelihood function                                         
score ~ bernoulli_logit(eta);

}

I have done due diligence, I think, in exploring different priors, chain lengths, starting values, and estimation parameters (i.e., adapt_delta, etc.) in an effort to recover the variance parameters, so I think there is an issue with how the residuals and/or offsets (often called “tau”) are being coded.

As I am writing this, I’m wondering if the issue may be with how I have defined/indexed the residuals… I feel as though, in attempting to translate my models into code, I have gotten a bit tangled, even doing the step-through process of building from simple to more complex that folks have advocated on this forum!

Thanks again, and in advance!

Nah, no apologies necessary. Usually what you did is better (try to cherry pick the important parts and describe the problem).

family_mu = model_mu[mm] + fammu_offset .* fam_resid[mm];

I think this is right, with fammu_offset declared as a vector so that the product is elementwise (.*). It has the same meaning as:

family_mu ~ normal(model_mu[mm], fam_resid[mm]);

Except for these multten/threedig elements which will be:

family_mu[multten] ~ normal(char_m10 + model_mu[mm[multten]], fam_resid[mm[multten]]);
family_mu[threedig] ~ normal(char_d3 + model_mu[mm[threedig]], fam_resid[mm[threedig]]);

I’m assuming mm is a 1d array of integers?

Yeah, I don’t see obvious stuff. Checking indexing sounds like a good bet.
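One cheap way to check the indexing is a print statement in transformed data, which runs once before sampling; a sketch (adapt the names to whatever you want to inspect):

transformed data {
  // quick sanity check of the index maps before sampling starts
  print("mm = ", mm);
  print("tt = ", tt);
  print("parent = ", parent);
}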

In what way are the posteriors wrong? If the marginals of each of the variances just look way too wide, you might check whether there are non-identifiabilities that are allowing them to wiggle around a lot.

[mm] is a 1d array of integers that indexes item families to item models; [tt] indexes models to templates, and [parent] indexes items to families. This has proven a great way to vectorize the code and successfully estimate the mean structure(s).

There seem to be a number of issues with the posteriors. In my most recent effort to troubleshoot, I ran the code with normal(0,5) priors on the residuals, which really only had the impact I expected: longer run time for the model, higher numbers of divergent transitions compared to narrower priors, and, probably related to the divergences, upwardly biased estimates of the fixed effects. As for the residuals: the means were off, they were nearly identical across groups, which is NOT what the code should be estimating, and the n_eff estimates look extremely strange. Something is not right. I should note: I ran four chains with 5,000 iterations each and 2,500 burn-in.

                 mean se_mean    sd   2.5%  97.5% n_eff  Rhat
sigma_item[1]   0.557   0.002 0.145  0.307  0.873  6496 1.000
sigma_item[2]   0.661   0.002 0.142  0.421  0.971  6527 1.000
sigma_item[3]   0.581   0.002 0.148  0.316  0.899  6109 1.000
sigma_item[4]   0.659   0.002 0.138  0.422  0.957  7060 1.000
sigma_item[5]   0.790   0.002 0.139  0.549  1.088  6860 1.000
sigma_item[6]   0.688   0.002 0.138  0.449  0.989  7260 1.000
sigma_item[7]   0.782   0.001 0.141  0.539  1.089 10000 1.001
sigma_item[8]   0.789   0.002 0.139  0.548  1.086  6912 1.001
sigma_item[9]   0.681   0.002 0.141  0.437  0.988  6809 1.001
sigma_item[10]  1.017   0.001 0.142  0.760  1.313 10000 1.000
sigma_item[11]  0.757   0.001 0.144  0.505  1.066 10000 1.000
sigma_item[12]  1.151   0.001 0.144  0.896  1.450 10000 1.001
sigma_item[13]  0.868   0.001 0.144  0.616  1.174 10000 1.000
sigma_item[14]  0.677   0.002 0.156  0.401  1.012 10000 1.000
sigma_item[15]  0.857   0.001 0.142  0.609  1.167 10000 1.000
sigma_item[16]  0.796   0.002 0.139  0.552  1.091  6606 1.000
sigma_item[17]  0.987   0.001 0.144  0.728  1.293 10000 1.000
sigma_item[18]  0.833   0.001 0.140  0.587  1.128 10000 1.000
sigma_item[19]  0.687   0.002 0.141  0.441  0.990  6487 1.000
sigma_item[20]  0.751   0.002 0.142  0.499  1.048  6379 1.000
sigma_item[21]  0.826   0.002 0.140  0.578  1.131  6961 1.000
sigma_item[22]  0.723   0.002 0.137  0.485  1.019  6645 1.001
sigma_item[23]  0.819   0.001 0.141  0.574  1.127 10000 1.000
sigma_item[24]  1.141   0.001 0.143  0.880  1.439 10000 1.000
sigma_item[25]  0.915   0.002 0.140  0.671  1.215  6692 1.000
sigma_item[26]  0.824   0.002 0.138  0.575  1.119  6647 1.000
sigma_item[27]  0.736   0.001 0.138  0.498  1.037 10000 1.000
fam_resid[1]    0.406   0.003 0.207  0.055  0.850  6531 1.001
fam_resid[2]    0.413   0.003 0.214  0.050  0.878  5119 1.000
fam_resid[3]    0.368   0.003 0.210  0.032  0.824  5944 1.001
fam_resid[4]    0.402   0.003 0.214  0.038  0.860  4976 1.000
fam_resid[5]    0.404   0.003 0.213  0.048  0.859  6344 1.000
fam_resid[6]    0.420   0.003 0.213  0.052  0.865  5639 1.000
fam_resid[7]    0.446   0.003 0.207  0.083  0.894  5783 1.001
fam_resid[8]    0.389   0.003 0.213  0.042  0.852  5968 1.000
fam_resid[9]    0.419   0.003 0.211  0.056  0.863  5555 1.000
mod_resid[1]    0.445   0.003 0.220  0.057  0.903  7055 1.000
mod_resid[2]    0.432   0.002 0.218  0.049  0.885 10000 1.000
mod_resid[3]    0.418   0.002 0.219  0.046  0.880 10000 1.000
template_mu[1] -1.256   0.006 0.431 -2.098 -0.342  5133 1.000
template_mu[2]  0.569   0.006 0.439 -0.319  1.439  4611 1.000
template_mu[3] -0.698   0.006 0.422 -1.554  0.134  5080 1.001
char_m10        0.402   0.004 0.227 -0.047  0.847  3708 1.000
char_d3        -0.066   0.003 0.213 -0.487  0.355  4407 1.000
disp_verb       0.987   0.006 0.420  0.168  1.825  4401 1.001
disp_horiz      0.124   0.006 0.415 -0.688  0.950  4903 1.001

n_eff can be higher than the number of samples. I don’t understand the math, but that’s okay; Aki posted some stuff on it. That doesn’t help with your modeling, though. I’ll have to think on this a little, but it’s not obvious to me what’s happening.

For lack of anything better to do I’d just center stuff lol, but you said you originally had the model written in centered form…
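For reference, a hedged sketch of what the centered form of the family level might look like in this model, with family_mu sampled directly rather than built from offsets:

parameters {
  vector[P] family_mu;     // sampled directly (centered)
}
model {
  vector[P] mu;            // local variable for the family-level mean
  mu = model_mu[mm];
  mu[multten] = mu[multten] + char_m10;
  mu[threedig] = mu[threedig] + char_d3;
  family_mu ~ normal(mu, fam_resid[mm]);
}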

Ha! I did attempt a centered version… I will attempt another version here - and as I did in the last post, once I get some results that are promising I will post a graph here :)
