Mixed Logit Model

Just came across a really cool paper by Alex Peysakhovich and John Ugander https://dl.acm.org/citation.cfm?doid=3106723.3106731 that attempts to resolve the problem you mention (having individual preferences vary over time).

They do it using neural networks to learn a representation of the relevant feature matrix and “context”. But neural networks are just function approximators, so you could do something similar by specifying

beta_it = beta_i .* exp(Delta * context_it)   (elementwise product)

with beta_i ~ multivariate normal(beta, Sigma)

You’d center the context matrix around 0 and put sparsity-inducing priors on Delta, so that the model reduces to a standard mixed logit when context doesn’t matter. That way you’d retain interpretability of how context affects preferences for the various attributes.
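To make the parameterization concrete, here’s a minimal numpy sketch of how the context-dependent coefficients would be generated. All dimensions, parameter values, and the zero rows in Delta are hypothetical, just for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: N individuals, T choice occasions,
# K attributes, C context features.
N, T, K, C = 50, 10, 3, 2

# Hypothetical population-level parameters.
beta = np.array([1.0, -0.5, 0.25])   # mean preferences over attributes
Sigma = 0.2 * np.eye(K)              # preference heterogeneity across people
Delta = np.array([[0.3, 0.0],        # K x C: how each context feature
                  [0.0, -0.4],       # shifts each attribute's weight;
                  [0.0, 0.0]])       # a zero row = context-irrelevant attribute

# Individual-level preferences: beta_i ~ multivariate normal(beta, Sigma)
beta_i = rng.multivariate_normal(beta, Sigma, size=N)   # N x K

# Context features, centered around 0 as suggested above.
context = rng.normal(size=(N, T, C))                    # N x T x C

# Context-dependent preferences:
# beta_it = beta_i .* exp(Delta * context_it), elementwise,
# so context scales each attribute's weight multiplicatively.
beta_it = beta_i[:, None, :] * np.exp(context @ Delta.T)   # N x T x K
```

With Delta = 0 the exp(...) term is all ones and beta_it collapses back to beta_i, i.e. the standard mixed logit, which is what the sparsity-inducing priors push toward when context doesn’t matter.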

Hope this helps,
Jim