How to model within- and between-subject factors in a cognitive computational model

Dear community,

I’m struggling with comparisons of parameter estimates of a cognitive computational model fitted with RStan. Let me give you a quick example: I want to analyze data from an experiment with a between-subjects factor (clinical sample vs. healthy controls) and a within-subjects factor (first vs. second session). The model summarizes multi-trial data from the Iowa Gambling Task by, among other parameters, a learning rate (for details, see https://link.springer.com/article/10.3758/s13423-017-1331-7). The standard analysis would be to calculate individual means for each experimental cell and compare them with, for example, a mixed ANOVA. However, I want to use the full potential Stan offers for my analysis. So, how do I analyze my experimental factors?

A quick look at the literature was inconclusive. There seem to be different solutions, such as estimating the model parameters separately for each experimental cell (e.g., first session of healthy controls and second session of healthy controls) and subtracting the posterior distributions of the parameter estimates (and then checking whether the HDI contains 0 or overlaps a ROPE). However, if I do this, I neglect the within-subjects information.
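
For concreteness, here is a minimal R sketch of that "subtract the posteriors" approach, assuming two separately fitted stanfit objects (fit_cell1 and fit_cell2 are hypothetical names) that each contain a group-level learning-rate parameter called mu_alpha, and an arbitrarily chosen ROPE of ±0.05:

```r
library(rstan)

# Posterior draws of the group-level learning rate in each cell
alpha_cell1 <- rstan::extract(fit_cell1, pars = "mu_alpha")$mu_alpha
alpha_cell2 <- rstan::extract(fit_cell2, pars = "mu_alpha")$mu_alpha

# Posterior of the difference (draws come from two independent fits)
diff_alpha <- alpha_cell2 - alpha_cell1

quantile(diff_alpha, probs = c(0.025, 0.975))  # central 95% interval of the difference
mean(abs(diff_alpha) < 0.05)                   # posterior mass inside the ROPE
```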

I came up with the following idea, which might be quite analogous to a mixed ANOVA (for simplicity, let’s stick with a single learning rate parameter of the model). I can think of three effects in my data:

a) a main effect of the group (m_group)
b) a main effect of the session (m_session)
c) the interaction effect of group and session (m_session*m_group)

This results in four possible linear models of the learning rate alpha:
null-model: alpha = m + m_individual (since I have a hierarchical implementation in mind, the lowest-level model has an overall mean m and, for each individual, a unique deviation m_individual; the individual deviations are scaled by a variance parameter, which I leave out here for brevity)
group-model: alpha = m + m_group + m_individual
session-model: alpha = m + m_session + m_individual
interaction-model: alpha = m + m_group + m_session + m_group*m_session + m_individual
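
One way to set this up without writing four separate Stan programs would be to build the design matrix in R and pass it to a single hierarchical Stan program; inside Stan, the subject-level learning rate could then be something like inv_logit(X * beta + individual deviation), so that m, m_group, m_session and the interaction live in beta while m_individual is the per-subject deviation. This is only a sketch under those assumptions (variable names are made up):

```r
# Cell-level factors for the 2 (group) x 2 (session) design
cells <- expand.grid(
  session = factor(c("first", "second")),
  group   = factor(c("clinical", "control"))
)

# The four linear models correspond to four design matrices
X_null        <- model.matrix(~ 1,               cells)  # m
X_group       <- model.matrix(~ group,           cells)  # m + m_group
X_session     <- model.matrix(~ session,         cells)  # m + m_session
X_interaction <- model.matrix(~ group * session, cells)  # m + m_group + m_session + m_group:m_session

# Each matrix would be passed to the same Stan program as data, together with
# ncol(X), so only the linear predictor for alpha changes between the four fits.
```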

To analyze my data, I would implement the computational model four times, each time with a different linear model for the learning rate from the list above. Finally, I can compare model performance by, e.g., well-known information criteria (BIC, AIC, WAIC, and so on).
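
For the comparison step, WAIC and PSIS-LOO can be computed from pointwise log-likelihoods with the loo package. A minimal sketch, assuming each fitted model (fit_null, fit_group, fit_session, fit_interaction are hypothetical names) stores per-trial log-likelihoods in a generated quantities vector called log_lik:

```r
library(loo)

# Helper: extract pointwise log-likelihoods and compute PSIS-LOO for one fit
loo_for_fit <- function(fit) {
  ll    <- loo::extract_log_lik(fit, parameter_name = "log_lik", merge_chains = FALSE)
  r_eff <- loo::relative_eff(exp(ll))
  loo::loo(ll, r_eff = r_eff)
}

loo_null        <- loo_for_fit(fit_null)
loo_group       <- loo_for_fit(fit_group)
loo_session     <- loo_for_fit(fit_session)
loo_interaction <- loo_for_fit(fit_interaction)

loo::loo_compare(loo_null, loo_group, loo_session, loo_interaction)
```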

What do you think about the proposed approach? Is there anything I’m missing? Do you have other solutions? I very much appreciate your comments and help.

Alex

BIC is not really an information criterion, but an estimate of the marginal likelihood under stronger assumptions than are necessary. AIC is a well-known information criterion, but it isn’t a very good estimator of expected deviance. WAIC is a good information criterion, but not as good as LOOIC, which comes with diagnostics that tell you when its assumptions are not satisfied.
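
A hedged sketch of those diagnostics, assuming a loo object computed as in the sketch above (loo_interaction is a hypothetical name): the Pareto k estimates flag observations for which the PSIS-LOO approximation may be unreliable.

```r
print(loo_interaction)                               # elpd_loo, p_loo, looic and a Pareto k summary
loo::pareto_k_table(loo_interaction)                 # counts of k values by diagnostic category
which(loo::pareto_k_values(loo_interaction) > 0.7)   # observations with problematic k
```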