 # Multinomial model: starting value for time parameter

Hello,

I’m a beginner data science student, and for a course we need to replicate a paper that models multi-party elections in a Bayesian framework. We understand the model itself: poll responses y are modeled with a multinomial distribution, and the linear predictor includes a pollster parameter \beta_{d} and, more importantly here, a time parameter \beta_{t} that follows a random walk.

y_{i} \sim \text{Multinomial}(\pi_{i})
\pi_{i,p} = \frac{\exp(\eta_{i,p})}{1 + \sum_{j=1}^{P-1} \exp(\eta_{i,j})}
\eta_{i,p} = x_{i,d} \, \beta_{d,p} + x_{i,t} \, \beta_{t,p}
\beta_{t,p} \sim N(\beta_{t-1,p}, \sigma_{p}^{2})
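To make sure we read the model right, here is a minimal numeric sketch (in Python, our choice, not from the paper) of the inverse-logit transform in the second equation: P-1 latent propensities \eta, with the reference category's propensity fixed at 0, mapped to a length-P probability vector.

```python
import numpy as np

def softmax_ref(eta):
    """Map P-1 latent propensities (reference category's eta fixed at 0)
    to a length-P probability vector on the simplex."""
    exp_eta = np.exp(eta)
    denom = 1.0 + exp_eta.sum()           # the "1 +" is the reference category's exp(0)
    # Last entry is the reference category's probability, 1 / denom.
    return np.append(exp_eta / denom, 1.0 / denom)

# Example with three non-reference categories (P = 4):
eta = np.array([0.5, -0.2, 0.1])
pi = softmax_ref(eta)                     # pi has 4 entries and sums to 1
```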

What we don’t understand is the following sentence from the paper: “For the starting values \beta_{1,p} we use normal distributions with variance 1 and the maximum likelihood estimated for t = 1 as the mean.” How do we compute the MLE of the time parameter for t = 1 from the data? I hope it’s not too dumb a question, but can anyone help? Thanks in advance!

I’d guess that \beta_{1,p} \sim \text{normal}(z_{p}, 1), where z_{p} is related to the probability of category p observed in the data (computed across timepoints). I say “related” because I’m not familiar with the transform you’re using to go from the latent propensity scale \eta to the probability simplex scale \pi.
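One way to make this concrete (a sketch, not necessarily what the paper's authors did): the MLE of multinomial probabilities is just the observed proportions, and under a reference-category logit parameterization the corresponding latent-scale values are log-ratios against the reference category. The counts below are made up for illustration.

```python
import numpy as np

# Hypothetical category counts pooled over the polls at t = 1;
# the last entry plays the role of the reference category.
counts_t1 = np.array([420, 310, 180, 90])

# MLE of the multinomial probabilities = observed proportions.
pi_hat = counts_t1 / counts_t1.sum()

# On the latent scale (reference propensity fixed at 0), the corresponding
# values are the log-odds of each category relative to the reference.
z = np.log(pi_hat[:-1] / pi_hat[-1])
# These z_p would then serve as the prior means for beta_{1,p}.
```

Plugging z back through the inverse-logit transform recovers pi_hat exactly, which is a quick sanity check that the two scales line up.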


Thanks a lot for your answer! A couple of follow-up questions: can you explain what you mean by “transform”? Something like the log-odds? And when you say “computed across timepoints”, do you mean the maximum likelihood should be estimated across the whole time series? We thought “the MLE for t = 1” meant that only the first time point is used for the starting value. Best regards