Specifying family for discrete count data


I’m modeling how scores change over trials of a game; the scores can take discrete values in [-6, 6]. I used a Gaussian family because I thought I could treat the data as continuous, and I’m getting the following fit. It doesn’t look right.

I’m thinking about using family = cumulative() and recoding my score as an ordered factor. Does anyone have any insights on this?


I think you should structure your model to predict the actual score, not the change, using a proper count distribution (Poisson, zero-inflated Poisson, negative binomial, etc.). You can compute the score change in the generated quantities.
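To illustrate the "compute the change in generated quantities" idea, here is a minimal sketch in Python. The draws here are random stand-ins; in a real analysis they would be posterior-predictive draws of per-trial scores from the fitted model (e.g. via brms::posterior_predict), so the array shape and values are assumptions for illustration only.

```python
import random

random.seed(0)

# Hypothetical posterior-predictive draws of per-trial team scores:
# draws[d][t] is the score in trial t for posterior draw d. In practice
# these would come from the fitted model, not from random numbers.
n_draws, n_trials = 4000, 10
draws = [[random.randint(-6, 6) for _ in range(n_trials)]
         for _ in range(n_draws)]

# The "generated quantities" step: score change between consecutive
# trials, computed per draw so posterior uncertainty carries over.
changes = [[d[t + 1] - d[t] for t in range(n_trials - 1)] for d in draws]

# Posterior mean of the change at each trial transition.
mean_change = [sum(c[t] for c in changes) / n_draws
               for t in range(n_trials - 1)]
```

The point is that any derived quantity (here, the trial-to-trial change) is computed per posterior draw, so its uncertainty is inherited from the model for the scores themselves.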


@mike-lawrence the actual scores include negative values. Do you mean that I should shift the values to a positive scale, starting at 0 and going up to 12?

Following up on what @mike-lawrence wrote: I think he is suggesting that if the values -6 to 6 are differences of scores, it is better to model the scores from which the differences are calculated.

  • If the underlying scores are counts, then the count distributions @mike-lawrence suggested are natural.
  • If the underlying scores are restricted to a range, then the binomial or beta-binomial distributions are natural.
  • If it makes sense that the scores result from an underlying latent trait, you could also use an IRT model.

Evidently, the type of model one uses depends on the details of the data-generating process. So if you could describe in a bit more detail how the change scores you are analysing are generated, that would likely increase the chance of getting more specific feedback.


The dependent variable here is the team performance score that two players get per trial playing game ‘A’. The maximum team score per trial is +6 and the minimum is -6 (the median team score is 2).
So the team score per trial can take the values [-6, -5, -4, -3, …, +4, +5, +6]. It’s a between-subjects design with two conditions: a control and a treatment.

What I’m trying to model here is: a) How does the team score change across trials? b) How does the team score differ between the two conditions?

My base model is this:

model_tp = brm(
  # model formula: linear trend over scaled trial number,
  # with random intercepts and slopes per subject
  pairtotal ~ Trials_scaled + (Trials_scaled | Subnum),
  # data
  data = tp,
  # family defaults to gaussian() when not specified
  iter = 4000,
  seed = 111
)
I tried the distributions @mike-lawrence mentioned before and I get errors such as: Error: Family ‘negbinomial’ requires response greater than or equal to 0.

I also tried specifying a mixture model with skew-normal and Gaussian components (just because the plot I showed earlier looks like a mixture…).

As I understand it, the binomial is for 0/1 outcomes? But here each player makes up to 6 selections from a set of 18 per trial… depending on how many of those 6 are correct or wrong, the score is calculated. @Guido_Biele

Could you elaborate on this please? I suspect modelling this process directly, rather than the -6:+6 summary you create from it, might be key.

The game is Multiple Object Tracking: there are 18 bubbles, of which 6 turn red for only a couple of seconds; then all 18 bubbles move around the screen for another few seconds. The players have to select the ones that turned red at the beginning. Each player can decide individually. So if player ‘A’ selects, say, 4 and gets 3 right, and player B selects 6 and gets 6 right, their score will be 2 + 3 = 5 for that trial. [edited]
(Selecting the same correct/incorrect object leads to +1/-1.) @mike-lawrence
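One plausible reading of these rules (my interpretation, not confirmed by the thread) is that the team score counts each distinct correctly selected bubble as +1 and each distinct incorrectly selected bubble as -1, with a bubble picked by both players counted only once. A sketch of that scoring rule, with invented bubble IDs:

```python
def team_score(targets, picks_a, picks_b):
    """Team score under one plausible reading of the rules: each
    distinct correct selection is +1, each distinct wrong selection
    is -1, and a bubble picked by both players counts only once."""
    selected = set(picks_a) | set(picks_b)   # union: duplicates count once
    correct = selected & set(targets)
    wrong = selected - set(targets)
    return len(correct) - len(wrong)

# Targets are bubbles 1-6; A picks 4 bubbles (3 correct), B picks
# 6 bubbles (5 correct); bubble 7 is a wrong pick shared by both.
score = team_score(targets={1, 2, 3, 4, 5, 6},
                   picks_a={1, 2, 3, 7},
                   picks_b={1, 2, 4, 5, 6, 7})
print(score)  # 5
```

If this reading is right, the team score is a deterministic function of the two players' selections, which is why modelling the selections directly is attractive.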

Some follow up questions:

Is it theoretically possible that each player selects 4 and all 4 are wrong?

Is there a reason that you do not want to use player level data (instead of team level)?

Yes. It’s also theoretically possible to have -8 as a score (though in this experiment we only have values within [-6, 6]).
Yeah, because my hypothesis is about team performance. @Guido_Biele

@martinmodrak Hi:) would you have some insight on this?

Are there one or more typos in your description of player B’s choices and performance? Did you mean they chose 6 and got 3 right?

Also, is there any penalty in the game for guessing? If not, why would player A only choose 4?

Thank you for pointing that out… for A it’s 2 instead of 3.
If the correct selections are the same for both players, i.e. they choose the same ones, it is counted only once as +1; and if they choose the same bubble which is actually wrong, then again it’s -1 instead of -2.
So, in total it should be 3 - 1 + 1 - 1 = 2 for player A, and now B will only get three for the ones that didn’t overlap with A’s selection. The total score would be 3 + 2 = 5. @mike-lawrence

Here is one way to approach this:

One can imagine the situation as one in which there are two players in a team, each with a latent ability to solve the task. In addition, there are some “rules of the game” according to which a team score is calculated from the individual players’ responses.

I would use an IRT model (for which one would define the scores as ordinal values, or integers 1 to …) to model individual players’ responses* and calculate the team score in the generated quantities, according to the rules of the game. Then one can do any desired comparisons of the team score, if this is the main DV of interest.
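As a toy illustration of "model individual ability, derive the team score in generated quantities", here is a sketch in Python. The abilities, the inverse-logit link, and the additive team score are all simplifying assumptions (it deliberately ignores overlap and wrong picks); in a real analysis the abilities would be posterior draws from the IRT model, not fixed numbers.

```python
import math
import random

random.seed(1)

def inv_logit(x):
    """Map a latent ability on the logit scale to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical latent abilities for the two team members; in the
# real model these would be posterior draws, one pair per draw.
ability_a, ability_b = 0.8, 0.2
n_targets = 6

def n_correct(p):
    """Number of the 6 targets a player identifies, each with prob p."""
    return sum(random.random() < p for _ in range(n_targets))

# "Generated quantities": combine simulated individual results into
# a team score according to (a much simplified version of) the rules.
team_scores = [n_correct(inv_logit(ability_a)) + n_correct(inv_logit(ability_b))
               for _ in range(4000)]
```

The real generated-quantities step would apply the actual game rules (overlap handling, -1 for wrong picks) to each draw of the two players' responses.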

Modeling players’ individual responses is harder if they interfere with each other (i.e. each bubble can only be clicked by at most one player). In this case I’d start with the team score, again using an IRT model.

Finally, I can also imagine that some people would simply use the team score with a normal model and not worry so much that the posterior predictions look different from the data, as long as the means and standard deviations of the posterior predictions are consistent with those of the observed data. Whether this is a viable approach probably depends also on the research question.

*(This would also be a starting point for modelling the dependency between team members’ responses - I am not sure, for instance, whether the players observe each other’s responses.)


Thanks a lot for all this explanation.
The posterior predictions are consistent with the data, so it’s good to know that I can fall back on that if the IRT model doesn’t work out.
The players do see each other’s selections after each trial, and they can also see the team score after each trial. They just don’t know who scored what.

I have a few doubts:
a) Are you suggesting a separate model per individual? How do I go from there to generating a team score for every trial?
b) At the risk of sounding silly, is it possible to just transform this team score into something between [1…13] and then use something like a Poisson distribution? Is that better than using the normal distribution? @Guido_Biele

You would analyze both players in the same model. If one is not comfortable working directly with Stan code, one can also use posterior_predict to get posterior predictions, then calculate team scores and analyze them in R.

The thing with the Poisson is that it is the least flexible count model (it also assumes that any positive count is theoretically possible). The negative binomial has an overdispersion parameter and is thus more flexible.
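The flexibility difference comes down to the mean-variance relationship: the Poisson forces Var = mu, while the negative binomial adds an overdispersion (shape) parameter phi so that Var = mu + mu^2/phi. A small numeric sketch (the values of mu and phi are arbitrary):

```python
# Mean-variance relationships of the two count families.
# Poisson:            Var = mu
# Negative binomial:  Var = mu + mu^2 / phi  (brms parameterisation,
#                     with phi the "shape" parameter)
mu = 4.0
phi = 2.0  # hypothetical overdispersion parameter, chosen for illustration

var_poisson = mu
var_negbinom = mu + mu**2 / phi

print(var_poisson, var_negbinom)  # 4.0 12.0
```

As phi grows, the extra term mu^2/phi shrinks and the negative binomial approaches the Poisson; small phi gives much more dispersion than a Poisson with the same mean.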

The binomial and beta-binomial models additionally account for the fact that you also have an upper bound (the brms vignette on custom families shows how to implement the beta-binomial, which you can think of as a binomial model with an overdispersion parameter).
(For count and binomial models you need to rescale the scores to 0-12.)
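The rescaling itself is just a shift. A sketch, assuming the straightforward coding where the rescaled score is treated as successes out of 12 trials in a binomial model:

```python
def rescale(score):
    """Shift a team score from [-6, 6] to [0, 12], so it can be used
    with count families or as successes out of 12 "trials" in a
    binomial model (an assumption about how one would code it)."""
    assert -6 <= score <= 6, "team score outside the stated range"
    return score + 6

print(rescale(-6), rescale(2), rescale(6))  # 0 8 12
```

Note that the shift changes the interpretation of the parameters (e.g. a Poisson mean now refers to the shifted scale), so effects should be translated back when reporting.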

One disadvantage of count and binomial models is that you have a link function, which makes priors much harder to reason about (with a log link the predictors are effectively combined multiplicatively, not additively as in a Gaussian model).
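The multiplicative behaviour follows directly from exp(a + b) = exp(a) * exp(b). A tiny numeric sketch (the intercept and coefficient values are made up):

```python
import math

# With a log link, additive effects on the linear predictor become
# multiplicative on the outcome scale.
intercept = math.log(4.0)  # baseline expected count of 4 (assumed)
beta = 0.3                 # hypothetical regression coefficient

baseline = math.exp(intercept)            # expected count without the effect
with_effect = math.exp(intercept + beta)  # expected count with the effect

# The effect multiplies the baseline by exp(beta); it does not add beta.
ratio = with_effect / baseline
```

So a prior on beta is really a prior on a rate ratio exp(beta), which is why priors that feel natural on an additive scale can be surprisingly informative here.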


@Guido_Biele thank you so much for this explanation. The IRT model is looking good… I’m not sure whether what I’m inferring from it is right, but for now it’s good :) Thank you again!

If I remember correctly from our previous discussion, a good strategy is for the players to somehow distribute the objects to track between themselves, so that each one tracks only half of them (which should be easier). The question then is whether the teams converge on some sort of division of labour when they are unable to communicate directly (but can see which objects the other selected).

I don’t think the team score is a very good outcome measure here, as it is a complex function of the things you actually care about. I think you might want to model:

  • Each player’s ability to select objects correctly (accuracy)
  • The overlap in the objects selected. I think it makes sense to restrict this to overlap in correct answers, but I am not 100% sure about this. Once again, we can model this for each player separately, i.e. we could have a single trial where p1 selects 5, p2 selects 4, and there is an overlap of 2, so p1 has 5 trials with 2 successes, while p2 has 4 trials with 2 successes.
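Turning raw selections into these per-player "trials and successes" records could look like the sketch below. The function name and the choice to count only shared correct selections are my assumptions, following the restriction suggested above:

```python
def overlap_record(picks_p1, picks_p2, targets):
    """Per-player (trials, successes) for the overlap outcome:
    trials = number of objects the player selected,
    successes = correct selections shared with the other player
    (counting only correct overlap is an assumption)."""
    shared_correct = set(picks_p1) & set(picks_p2) & set(targets)
    return ((len(picks_p1), len(shared_correct)),
            (len(picks_p2), len(shared_correct)))

# p1 selects 5 objects, p2 selects 4, and two correct picks overlap:
rec = overlap_record({1, 2, 3, 4, 7}, {1, 2, 5, 8},
                     targets={1, 2, 3, 4, 5, 6})
print(rec)  # ((5, 2), (4, 2))
```

Each trial then contributes one such record per player, which slots directly into a binomial model of successes out of trials.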

For simplicity, these might be treated in separate models. If the overlap decreases, this is some evidence that division of labour is taking place. If accuracy increases at the same time, this division is useful.

Both of these should be reasonably well modelled by a binomial (or possibly beta-binomial).

I don’t have experience with IRT so I can’t comment to what extent is using IRT for any of the models sensible.

Does that make sense?


The IRT model is a good model to estimate individuals’ abilities, where individual-level abilities are estimated as random effects. It is also possible to model this “ability” as time-dependent.

Your formulation of s successes out of T trials makes it even clearer that a binomial or beta-binomial model is a good candidate for the data. A difference between the IRT and the binomial/beta-binomial approach is that the IRT model imposes less structure on the relationship between ability and “number of points” (in a nutshell: IRT = ordinal model + random effects for individual ability and item difficulty**) than the binomial or beta-binomial distribution, which uses a link function.

** I don’t know if all trials are equally hard; if there are a priori differences in trial difficulty, an IRT model could account for this with a random-effects term.