Let me expand on the problem a bit. Assume we have M items a person could select from, where each item m_i belongs to exactly one group M_j, with \cup_{j=1}^K M_j = M; that is, M is partitioned into K disjoint subsets. We aggregate over many people selecting items and count how many selections fall in each group. With N total items selected, we have a multinomial model over the K groups, *but* if an item m_i isn't available for selection (i.e., we ran out of it), then the person picks another item instead. This non-availability of an item in group M_j biases the total count in M_j low.
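To make the data-generating process concrete, here is a minimal simulation sketch. Everything in it is an assumption for illustration: availability is tracked at the group level via a hypothetical `stock` array, and a person who can't get their first choice re-draws among the still-available groups with renormalized probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: K = 3 groups with true selection probabilities pi.
pi = np.array([0.5, 0.3, 0.2])
N = 1000                              # total selections in one day
stock = np.array([400, 1000, 1000])   # assumed stock on hand per group

counts = np.zeros(3, dtype=int)
for _ in range(N):
    # Substitution rule (an assumption): re-draw among groups with
    # remaining stock, renormalizing the probabilities.
    avail = counts < stock
    p = np.where(avail, pi, 0.0)
    p = p / p.sum()
    g = rng.choice(3, p=p)
    counts[g] += 1

# Group 0's expected demand (~500) exceeds its stock (400), so its
# observed count is censored low and the excess spills into groups 1, 2.
```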

If I do the above for every day, then I get a time series of multinomials. My assumption is that the way the probabilities \pi_1,\ldots,\pi_K change over time is stationary (say, modeled by a parametric function).
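As one possible example of such a parametric function (an assumption, not the model I've committed to), the group probabilities could follow a smooth latent trend pushed through a softmax, so they stay positive and sum to one every day:

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

# Hypothetical parameters: per-group intercepts and daily drifts.
alpha = np.array([0.0, 0.5, -0.5])
beta = np.array([0.01, -0.02, 0.0])

def pi_at(t):
    """Group probabilities pi_1..pi_K on day t (illustrative form)."""
    return softmax(alpha + beta * t)
```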

If I simply dropped the days where one of the groups had censored data, I would let my model interpolate over those dates (in `generated_quantities`), but the data are such that pretty much every day has a group with censored data.

Now, I would like the model not only to infer what is happening with the current data, but also to predict what happens as we run out of items in each of the subsequent groups going forward. So, say we run out of items in group q; then we typically see an increase in group p, and so on.
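Under the simplest substitution assumption (the one I'd like the model to capture, though the true redistribution rule is what I want to infer), running out of group q redistributes its probability mass proportionally over the remaining groups:

```python
import numpy as np

pi = np.array([0.5, 0.3, 0.2])  # hypothetical baseline probabilities

# If group q runs out, renormalize the remaining groups' probabilities.
q = 0
pi_after = pi.copy()
pi_after[q] = 0.0
pi_after = pi_after / pi_after.sum()
# pi_after -> [0.0, 0.6, 0.4]
```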

Does this help clarify the issue?