Random values in model block - alternative?

I want to fit behavioural data from a sequential decision-making task. The data have this form: each 90-second segment contains a sequence of actions A or B, and I have features of the segment that drive those actions (e.g. the action is more likely to be 'look for rewards' when rewards are available). I can then compute summary measures for each segment (most simply the average number of actions A or B, but also others, e.g. the time until you start doing anything in a segment).

I am thinking of modelling it this way (see below for a more standard way I have tried to model it and the problems I encountered). [pseudo code]

Is there any way to do this in Stan?

while (time < 90) {
  // when you can, make a choice: a weighted sum of features
  // (including e.g. what your previous choice was), like a regression
  choice = rand_binomial(inv_logit(features * free_parameters))

  // as a function of the choice, time moves forward
  if (choice == A) {
    time += duration_A
  } else if (choice == B) {
    time += duration_B
  }
}

avg_choiceA = avg(choices == A)
error_term = true_avg_choiceA - avg_choiceA
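To make the pseudocode above concrete, here is a minimal Python sketch of the generative loop. All specifics are placeholders: the feature set is reduced to an intercept plus a previous-choice feature, the durations dur_A/dur_B are illustrative, and a logistic link maps the weighted sum to a choice probability.

```python
import math
import random

def simulate_segment(base_util, prev_A_bonus, dur_A=3.0, dur_B=1.0,
                     horizon=90.0, seed=None):
    """Simulate one 90 s segment: at each step, a logistic choice between
    A and B driven by a weighted sum of features (here just an intercept
    plus a previous-choice feature), with each action advancing time by
    its own duration."""
    rng = random.Random(seed)
    t, choices = 0.0, []
    while t < horizon:
        # weighted sum of features, like a regression
        util = base_util + (prev_A_bonus if choices and choices[-1] == 'A' else 0.0)
        p_A = 1.0 / (1.0 + math.exp(-util))  # inverse-logit link
        choices.append('A' if rng.random() < p_A else 'B')
        # as a function of the choice, time moves forward
        t += dur_A if choices[-1] == 'A' else dur_B
    return choices

choices = simulate_segment(base_util=0.2, prev_A_bonus=0.5, seed=1)
avg_choiceA = sum(c == 'A' for c in choices) / len(choices)
```

From a simulator like this you can compute any of the summary measures per segment and compare them to the observed ones, which is the error_term idea in the pseudocode.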

What I tried before was a very standard regression:

choice_AorB ~ feature1 + feature2 +...

This fits, BUT, across two large samples of participants it was much less sensitive to individual differences in clinical scores on standard questionnaires than the summary measures were (like avg_choiceA, or how much avg_choiceA is influenced by environment feature1). Part of the issue, I think, is that the design matrix (how many choices there are, not just the identity of what was chosen) differs between people, because choosing A takes longer than choosing B.
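To make the design-matrix point concrete, here is a rough sketch with illustrative (not actual) durations, say 3 s for A and 1 s for B: the number of regression rows a 90 s segment contributes depends on the participant's own behaviour.

```python
def n_trials(p_A, dur_A=3.0, dur_B=1.0, horizon=90.0):
    """Expected number of choices in one segment when a fraction p_A of
    choices are A: the horizon divided by the mean choice duration.
    (dur_A and dur_B are illustrative, not from the real task.)"""
    mean_duration = p_A * dur_A + (1 - p_A) * dur_B
    return horizon / mean_duration

# a participant who never chooses A contributes 90 rows per segment,
# one who always chooses A contributes only 30
row_counts = [n_trials(p) for p in (0.0, 0.5, 1.0)]
```

So two participants with identical features face design matrices of very different sizes, which the per-trial regression conflates with the choice process itself.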

However, I have ~40 summary measures overall (ballooning to ~200 once you examine the effect of each feature on each summary measure), and I think that, given how correlated they are, they are probably driven by a much simpler underlying model (say 10 parameters per person rather than 200). So I'm now looking for a way to model the summary measures with a simpler underlying model.