I had asked about the relationship between the log_lik values computed (for the purpose of LOO computation, for example) and the Stan-generated lp__. Dr. Goodrich pointed out that the lp__ values include the priors (which I should have realized straight off) and the Jacobian adjustments from Stan's transformation of constrained variables to the unconstrained space (which I had not picked up on). Dr. Vehtari added that constant terms are not included with the "~" notation but are included when "_lpdf" is used. I understand that, and having played around with different formulations, I know exactly how it affects log_lik-type computations. Thank you.
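For anyone following along, the constant-term point is easy to check numerically. Here is a minimal Python sketch (the y, mu, and sigma values are made up) comparing the full normal log density, as "_lpdf" would give, against the kernel that "y ~ normal(mu, sigma)" increments the target by when both mu and sigma are parameters:

```python
import numpy as np
from scipy import stats

y, mu, sigma = 2.0, 0.5, 1.3

# full density, like target += normal_lpdf(y | mu, sigma)
full = stats.norm.logpdf(y, mu, sigma)

# kernel only, like y ~ normal(mu, sigma): the -log(sigma) term is kept
# because sigma is a parameter; only the -0.5*log(2*pi) constant is dropped
kernel = -0.5 * ((y - mu) / sigma) ** 2 - np.log(sigma)

print(full - kernel)  # -0.5 * log(2*pi), regardless of y, mu, sigma
```

The difference is a constant, so it cancels in MCMC but shifts the log_lik values you would save for LOO.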
What I am really interested in, and why I was asking the questions in the first place, is figuring out how to deal with PSIS-LOO computations where some of the Pareto k diagnostics are bad.
As a simple example (and I would be happy to send the data or summary outputs if someone wants to pursue this), I have a data set for which I am starting with a simple linear regression, simpler still in that it has a zero intercept, in which I regress summary data (means and variances) for a response on the explanatory variable x. The observations are summaries from different publications, for which we have only the summaries, not the individual values. The slope parameter b has a normal(0, s) prior, and sigma has a half-Cauchy(0, scale) prior. Truly basic. But for one of the 85 observations, the Pareto k diagnostic is in the "very bad" range (k > 1). The only interesting thing I can tell you about that observation is that it has a much larger variance than the others, and the Stan-fit SD estimate for its log_lik contribution is more than an order of magnitude greater than for the other observations.
So I am wondering: is this, in general, something that will be problematic for the PSIS-LOO calculations? What should I be aware of, and perhaps need to address, when computing PSIS-LOO? I was not previously familiar with PSIS smoothing (or with importance sampling in any detail, honestly), so I am trying to understand the basics of the diagnostics and some root causes or known problems.
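For other readers trying to build the same intuition: the Pareto k comes from fitting a generalized Pareto distribution to the upper tail of the raw LOO importance ratios r_s = 1 / p(y_i | theta_s); a heavy, slowly decaying tail gives a large fitted shape (k-hat). A rough Python sketch of the idea, with simulated log-lik draws (this is not the actual loo implementation, which uses the Zhang-Stephens estimator and a tail fraction of roughly min(0.2*S, 3*sqrt(S)) draws):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
S = 4000  # number of posterior draws (simulated for this sketch)

# pointwise log-likelihood draws for one observation, log p(y_i | theta_s)
log_lik_i = rng.normal(loc=-3.0, scale=1.5, size=S)

# raw LOO importance ratios on the log scale: log r_s = -log p(y_i | theta_s)
log_r = -log_lik_i
log_r -= log_r.max()               # subtract the max for numerical stability
r = np.exp(log_r)

# fit a generalized Pareto distribution to the largest ~20% of the ratios;
# the fitted shape parameter plays the role of the k-hat diagnostic
sorted_r = np.sort(r)
M = S // 5
cutoff = sorted_r[-M - 1]
exceedances = sorted_r[-M:] - cutoff   # all strictly positive
khat, _, scale = stats.genpareto.fit(exceedances, floc=0)
print(f"k-hat = {khat:.2f}")
```

When one observation's log_lik varies far more across draws than the others', its ratios have exactly this kind of heavy tail, which is consistent with the single k > 1 you are seeing.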
I have read the papers cited in the loo package documentation, so I have at least that much familiarity. If I recall correctly, one suggestion when the diagnostics are bad is to try a more robust model. It occurs to me that assuming a constant sigma for these data, in the presence of this one observation with a much greater degree of variability, might be exactly such a "not-robust-enough" model. Does that make sense to you? Is that the kind of thinking you were alluding to in the discussion of problematic cases?
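To make the robustness intuition concrete: a heavier-tailed likelihood such as a Student-t penalizes an extreme observation far less than a normal with the same scale, so that observation's log_lik varies less across posterior draws and the importance ratios tend to behave better. A toy Python comparison (the numbers are made up; df = 4 is just an illustrative choice):

```python
import numpy as np
from scipy import stats

mu, sigma = 0.0, 1.0
y_outlier = 8.0  # a hypothetical observation far out in the tail

ll_normal = stats.norm.logpdf(y_outlier, mu, sigma)
ll_student = stats.t.logpdf(y_outlier, df=4, loc=mu, scale=sigma)

print(f"normal log-lik:    {ll_normal:.1f}")   # about -32.9
print(f"student-t log-lik: {ll_student:.1f}")  # about -8.1
```

The same contrast holds draw by draw, which is why swapping in a t likelihood (or modeling observation-specific variances) often tames a single very bad k.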
Ultimately, I want to apply more complex models and do a hierarchical analysis, which is why I am very keen to get LOOIC values for comparison. If you are interested and willing to assist, I can share more details about that plan, too. But for now, any insight you can offer at this basic level would be greatly appreciated.
Thanks in advance,