I’m working on a model where I expect to be using additive log ratios (ALRs) quite a bit. I’m thinking about functions to build up my likelihoods in the spirit of the `categorical_logit()` family of functions, where the softmax is carried out internally and things are kept on the log scale as much as possible. But rather than including the reference state as a zero within the vector of ALRs, I’d like to treat the reference state as a separate value. So to that end I wrote this function:

```
real log1p_sum_exp(vector alr) {
  return log_sum_exp(append_row(alr, 0));
}
```

Which is all fine and good, but it occurs to me that this might be useful as a basic Stan function (and possibly faster/more stable). What I ultimately want is -\mathrm{log}(1 + \sum_{n=1}^N\mathrm{exp}(v_n)), which is equivalent to the log probability of outcome K if v is the (K - 1)-vector of ALRs. I’m getting that by appending a zero and then calling log_sum_exp.
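As a quick sanity check on that identity (a Python sketch rather than Stan; `log1p_sum_exp` here is just a stand-in for the function above, and the ALR values are made up):

```python
import numpy as np
from scipy.special import logsumexp

def log1p_sum_exp(alr):
    """Mimic the Stan function: append a 0 and take log_sum_exp."""
    return logsumexp(np.append(alr, 0.0))

v = np.array([1.3, -0.7, 2.1])  # hypothetical ALRs, K - 1 = 3

# log(1 + sum(exp(v))) computed directly, via log1p for accuracy
direct = np.log1p(np.sum(np.exp(v)))

# the two agree; the negation is the log probability of outcome K
assert np.isclose(log1p_sum_exp(v), direct)
```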

My thinking is that since `log_sum_exp` is doing \mathrm{max}(v) + \mathrm{log}\sum_{n=1}^N\mathrm{exp}(v_n - \mathrm{max}(v)), the `log1p_sum_exp` above is doing a bit of redundant/unnecessary math. Within `log_sum_exp`, the appended zero at the end of the vector will just become -\mathrm{max}(v), the maximum entry will become zero, and all of the other values will be v_n - \mathrm{max}(v). So you could instead define a base-level Stan function `log1p_sum_exp` that computes:

\mathrm{max}(v) + \mathrm{log}(\sum_{n=1}^N\mathrm{exp}(v_n - \mathrm{max}(v)) + \mathrm{exp}(-\mathrm{max}(v)))
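In Python-as-pseudocode terms (a sketch of the proposed built-in; a real implementation would live in C++ in Stan math), that single-pass version looks like:

```python
import numpy as np
from scipy.special import logsumexp

def log1p_sum_exp_direct(v):
    """Proposed: max(v) + log(sum(exp(v - max(v))) + exp(-max(v))).

    The exp(-max(v)) term plays the role of the appended zero,
    without actually growing the vector.
    """
    m = np.max(v)
    return m + np.log(np.sum(np.exp(v - m)) + np.exp(-m))

v = np.array([3.0, -1.5, 0.2])  # hypothetical ALRs

# agrees with the append-a-zero version
assert np.isclose(log1p_sum_exp_direct(v), logsumexp(np.append(v, 0.0)))
```

One detail worth noting: if every v_n is negative, \mathrm{max}(v) < 0 and the \mathrm{exp}(-\mathrm{max}(v)) term dominates, so an implementation might use \mathrm{max}(\mathrm{max}(v), 0) as the pivot instead, which is effectively what treating the appended zero as a candidate maximum does.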

I don’t know how much faster this would be than the function I wrote, but it avoids the step of appending a zero to the end of the row (or a column of zeros, as advised in the User’s Guide section on multi-logit regression), which might be advantageous. Is this potentially useful enough/easy enough to request as a new function? It seems to me that this is the extension of the function `log1m_inv_logit` to the additive log ratio realm.
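To make that analogy concrete: for K = 2 there is a single ALR v, and -\mathrm{log}(1 + \mathrm{exp}(v)) is exactly what `log1m_inv_logit(v)` computes, i.e. \mathrm{log}(1 - \mathrm{logit}^{-1}(v)). A small numerical check (Python, using scipy's `expit` as the inverse logit; the value of v is arbitrary):

```python
import numpy as np
from scipy.special import expit  # inverse logit

v = 0.8  # a single hypothetical ALR (K = 2 case)

# -log(1 + exp(v)) is the log probability of the reference outcome...
neg_log1p_exp = -np.log1p(np.exp(v))

# ...and matches log(1 - inv_logit(v)), i.e. log1m_inv_logit in Stan
assert np.isclose(neg_log1p_exp, np.log(1.0 - expit(v)))
```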

I took a look at the Stan math library on GitHub, but I don’t know C++ and it’d be a learning curve for me to contribute.