Hi,

I am trying to implement my own prior distribution. It is a Gaussian mixture model with three components.

Unfortunately I am running into trouble when having to evaluate `log(0)`.

I understand that in `normal_lpdf` the logarithm is expanded such that

target += -\frac{1}{2}\log(2\pi\sigma^2) - \frac{1}{2}\frac{(x-\mu)^2}{\sigma^2}
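As a quick sanity check (plain Python, with values I picked for illustration), the expanded form agrees with taking the log of the density directly, but keeps working even where the density itself underflows to zero:

```python
import math

def normal_lpdf(x, mu, sigma):
    # Expanded log density: -1/2*log(2*pi*sigma^2) - 1/2*(x-mu)^2/sigma^2
    return -0.5 * math.log(2 * math.pi * sigma**2) - 0.5 * (x - mu)**2 / sigma**2

def normal_pdf(x, mu, sigma):
    # The density itself; underflows to 0.0 for x far from mu
    return math.exp(-0.5 * (x - mu)**2 / sigma**2) / math.sqrt(2 * math.pi * sigma**2)

# Agreement where the density is still representable
print(abs(normal_lpdf(1.0, 0.0, 1.0) - math.log(normal_pdf(1.0, 0.0, 1.0))))  # ~0
# Far in the tail the density underflows, so log(density) would be log(0)
print(normal_pdf(50.0, 0.0, 1.0))   # 0.0
print(normal_lpdf(50.0, 0.0, 1.0))  # still finite
```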

Is there a way to avoid evaluating `log(0)` here, since I cannot expand the sum below in a helpful manner?

\log\left(\frac{1}{\sqrt{2\pi\sigma_1^2}}e^{-\frac{1}{2}\frac{(x-\mu_1)^2}{\sigma_1^2}}+\frac{1}{\sqrt{2\pi\sigma_2^2}}e^{-\frac{1}{2}\frac{(x-\mu_2)^2}{\sigma_2^2}}+\frac{1}{\sqrt{2\pi\sigma_3^2}}e^{-\frac{1}{2}\frac{(x-\mu_3)^2}{\sigma_3^2}}\right)

Have a look at the `log_mix` function.
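`log_mix` evaluates the log of a weighted sum of densities without ever forming the (possibly underflowing) densities themselves. A minimal Python sketch of the underlying log-sum-exp trick (the function name and test values here are my own, not Stan's API):

```python
import math

def log_mix(weights, lps):
    # log(sum_k w_k * exp(lp_k)) computed stably:
    # shift by the largest log term so at least one exponent is exactly 0
    terms = [math.log(w) + lp for w, lp in zip(weights, lps)]
    m = max(terms)
    return m + math.log(sum(math.exp(t - m) for t in terms))

# Component log densities so small that exp() underflows individually
lps = [-1200.0, -1210.0, -1250.0]
weights = [0.5, 0.3, 0.2]

naive = sum(w * math.exp(lp) for w, lp in zip(weights, lps))
print(naive)                  # 0.0 -> taking its log is the log(0) problem
print(log_mix(weights, lps))  # finite, about -1200.69
```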

Thank you, I now have.

Maybe I can trouble you for an additional question.

The mixture weights have to sum to 1. Now, I have estimated my parameters by fitting histograms (probability density against model parameter) with sklearn, and the weights I got don’t sum to one. Would there be any trouble in simply normalizing them?

I’m afraid you’ll have to wait for someone who is familiar with sklearn to get a definitive answer. If you can afford the time, maybe estimate the mixing probabilities in Stan using `log_mix`. Should be fairly straightforward to do and would remove the dependence on “external” routines, at least for this bit.
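If you do go the normalization route, dividing each weight by the total is all the rescaling amounts to; a sketch with made-up numbers, showing that the ratios between components are preserved:

```python
raw = [0.48, 0.31, 0.19]  # hypothetical sklearn weights; sum is 0.98, not 1
total = sum(raw)
theta = [w / total for w in raw]  # a valid simplex: nonnegative, sums to 1

print(sum(theta))                          # 1.0 up to rounding
print(raw[0] / raw[1], theta[0] / theta[1])  # ratios unchanged by rescaling
```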

Thanks for the suggestion. But there are quite a few of them, so estimating them all in Stan would severely enlarge my model space.

If interested:

I found that the few outlier weights were due to overfitting of the mixture components. Only two were needed, but I used three, leading to one of them being assigned a very large spread and a weight that effectively made it disappear.