Hello Stanimals! I have a generic question I am trying to solve.
I have a retail dataset that consists of client-item pairs and whether or not the client purchased the item (Bernoulli). Furthermore, I have k types of feedback for each client-item pair:
- price loved, price okay, price too high
- fit too small, fit okay, fit perfect
- not my style, style okay, loved style
- etc.
The typical feedback scale is: poor, okay, great.
So my question/ask is: I would like to build an “explainable” model that not only gives me the client-item Bernoulli probability of a sale, but also why the client might or might not have purchased the item.
I was thinking something like:
target += bernoulli(sold | theta[client_item_s])
target += bernoulli(not_sold | 1 - theta[client_item_ns])
target += dirichlet(theta[client_item_s] | prob_price_s, prob_fit_s, prob_style_s, ...)
target += dirichlet(1 - theta[client_item_ns] | prob_price_ns, prob_fit_ns, prob_style_ns, ...)
target += bernoulli(feedback_price[client_item_s] | prob_price_s)
target += bernoulli(feedback_price[client_item_ns] | prob_price_ns)
...
where the _s and _ns suffixes refer to “sold” and “not sold”, respectively. Here, I decomposed the data into sold and not-sold events and partitioned the client-item pairs accordingly.
I am modeling the feedback for the two outcomes separately: that is, for a non-purchase we expect the price feedback to be either “poor” or “good”, and we would bucket the observations accordingly. Similarly for the other feedback types.
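To make sure I have the generative story straight, here is a minimal simulation sketch in Python (all names and probability values are placeholders I made up; in the real model theta and the feedback probabilities would be learned, not fixed):

```python
import numpy as np

rng = np.random.default_rng(0)

N = 1000  # client-item pairs
K = 3     # feedback types: price, fit, style

# Per-pair purchase probability. In the full model this would come from
# client/item embeddings; a Beta draw is just a stand-in here.
theta = rng.beta(2.0, 2.0, size=N)
sold = rng.binomial(1, theta)  # Bernoulli sale outcome per pair

# Outcome-specific feedback probabilities over the three ordered buckets
# (poor, okay, great). Values are assumptions for illustration only.
prob_s = np.array([[0.10, 0.30, 0.60],   # price | sold
                   [0.10, 0.20, 0.70],   # fit   | sold
                   [0.05, 0.25, 0.70]])  # style | sold
prob_ns = np.array([[0.60, 0.30, 0.10],  # price | not sold
                    [0.50, 0.40, 0.10],  # fit   | not sold
                    [0.70, 0.20, 0.10]]) # style | not sold

# Draw one categorical feedback per type, conditioned on the sale outcome.
feedback = np.empty((N, K), dtype=int)
for i in range(N):
    probs = prob_s if sold[i] == 1 else prob_ns
    for k in range(K):
        feedback[i, k] = rng.choice(3, p=probs[k])
```

Written this way, the feedback for each type is a categorical over the three buckets rather than a Bernoulli, which is part of what I am unsure about in the notation above.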
The full model will also learn embeddings for the clients and items, so as to generalize to unseen items and clients – this will help estimate whether a client would buy an item in the future.
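For the embedding part, what I have in mind is roughly an inner-product link from client and item vectors to theta; a toy sketch (dimensions, priors, and the intercept are all assumptions, and in practice these would be parameters with priors, not fixed draws):

```python
import numpy as np

rng = np.random.default_rng(1)

n_clients, n_items, d = 50, 40, 8  # toy sizes; d = embedding dimension

# Latent embeddings; in the real model these get priors and are inferred.
client_emb = rng.normal(0.0, 0.5, size=(n_clients, d))
item_emb = rng.normal(0.0, 0.5, size=(n_items, d))
intercept = -0.5  # global baseline propensity (assumed)

def purchase_prob(c, i):
    """theta for any client-item pair via a logistic inner-product link."""
    logit = intercept + client_emb[c] @ item_emb[i]
    return 1.0 / (1.0 + np.exp(-logit))

theta = purchase_prob(3, 7)  # works even for pairs never observed together
```

The point is that any client-item pair gets a theta, even ones with no observed interactions, which is what I need for future-purchase estimates.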
My question is: does this look right? I can’t remember ever seeing a Dirichlet used like this before, but it looks a bit like a multivariate Beta-Binomial that is trying to “explain” things.
Of course, there will be priors on prob_price, prob_fit, and so on.
Any ideas would be super helpful. This will be a HUGE model to fit, so getting the generative part right will go a long way toward implementing it.
Thanks!