Hello everyone, I believe I have found a solution to my very specific problem, and I'm posting it here in the hope that somebody comes along and provides feedback on it, and/or that it can be useful to somebody in the future with a similar problem.
Naturally, I'm far from an expert on the matter at hand, so this should be read with caution.
Before the marginalization, the exponent of the exponential in the Gaussian likelihood (multiplied by -2), which I will refer to as \chi^2, was of the form
\chi^2 = A + 2\mathcal{M}B + C\mathcal{M}^2 \,,
where the minimum is located at
\mathcal{M} = - B/C \,.
Substituting this value of \mathcal{M} back into the previous equation yields the marginalized \chi^2 = A - B^2/C, which intuitively makes sense to me.
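As a numerical sanity check of this minimization, here is a minimal sketch (the values of A, B and C below are made up purely for illustration):

```python
import numpy as np

# Hypothetical coefficients of the quadratic chi^2(M) = A + 2*M*B + C*M**2;
# C > 0 so the parabola opens upward and has a minimum
A, B, C = 120.0, -35.0, 12.0

def chi2(M):
    return A + 2 * M * B + C * M**2

M_min = -B / C            # analytic minimizer of the quadratic
chi2_marg = A - B**2 / C  # analytic minimum value, A - B^2/C

# Compare against a brute-force scan over a grid of M values
grid = np.linspace(M_min - 5, M_min + 5, 100001)
values = chi2(grid)
assert np.isclose(grid[np.argmin(values)], M_min, atol=1e-3)
assert np.isclose(values.min(), chi2_marg, atol=1e-6)
```

The brute-force scan is only there to confirm that the closed-form minimizer and minimum agree with a direct evaluation of the quadratic.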
Now, if I am to consider the \chi^2 for a single event, I index only A, B and C, but not \mathcal{M}, which results in
\chi^2_i = A_i + 2 \mathcal{M} B_i + C_i \mathcal{M}^2 \,,
where the index i denotes the i-th event and the value of \mathcal{M} is the minimizer written above, computed using all of the events.
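In code, this per-event decomposition might look like the sketch below; the per-event coefficients A_i, B_i, C_i are hypothetical random values for illustration, and the shared \mathcal{M} comes from the minimum over the full data set:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40  # number of events

# Hypothetical per-event coefficients; each C_i > 0
A_i = rng.uniform(1.0, 5.0, n)
B_i = rng.uniform(-2.0, 2.0, n)
C_i = rng.uniform(0.5, 1.5, n)

# Global coefficients are sums over events, so M is fit once to all data
A, B, C = A_i.sum(), B_i.sum(), C_i.sum()
M = -B / C  # shared minimizer from the full data set

# Per-event chi^2 evaluated at the shared M
chi2_i = A_i + 2 * M * B_i + C_i * M**2

# Summing the per-event terms recovers the total chi^2 at the minimum
assert np.isclose(chi2_i.sum(), A + 2 * M * B + C * M**2)
```

The final assertion just checks the bookkeeping: because A, B and C are sums of the per-event coefficients, the per-event \chi^2_i terms add up to the total \chi^2 evaluated at the shared \mathcal{M}.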
This works, and the elpd statistics look very good, which matches the fact that the fit also looks very good visually (I won't show the fit here, but trust me, it is always very close to the observations!):
Computed from 10000 posterior samples and 40 observations log-likelihood matrix.

         Estimate       SE
elpd_loo   -25.83     6.22
p_loo        0.93        -
------

Pareto k diagnostic values:
                         Count   Pct.
(-Inf, 0.5]   (good)        40  100.0%
 (0.5, 0.7]   (ok)           0    0.0%
   (0.7, 1]   (bad)          0    0.0%
   (1, Inf)   (very bad)     0    0.0%
Further comments, ideas or feedback are always appreciated.