Say I have a model expressed something like this:
y ~ 1 + me(x_mu, x_se)
Where the predictor X is observed with known measurement error, so the data comprise a point estimate x_mu and a standard error x_se for each observation.
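For context, a minimal sketch of fitting such a model in brms on simulated data (all variable names and simulation settings here are illustrative, not from the original post):

```r
library(brms)

# Simulate a predictor observed with known, heteroscedastic measurement error
set.seed(1)
n <- 100
x_true <- rnorm(n)                 # the noise-free latent predictor
x_se   <- runif(n, 0.1, 0.5)       # known SE for each observation
x_mu   <- rnorm(n, x_true, x_se)   # observed point estimates
y      <- rnorm(n, 1 + 2 * x_true, 1)
d <- data.frame(y, x_mu, x_se)

# me() tells brms that x_mu is measured with standard error x_se
fit <- brm(y ~ 1 + me(x_mu, x_se), data = d)
```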
This model appears to work just fine for my application, and from the post-processing functions I can see that the estimates of the latent X corresponding to the observed data are saved in the model object.
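For anyone looking for those latent estimates: brms stores them as parameters with an `Xme` prefix (the exact suffix may vary by brms version), so something along these lines should pull them out of a fitted model object `fit`:

```r
# List the latent-x parameters and summarize their posteriors;
# "Xme" is the prefix brms uses for me() latent values
latent_pars <- grep("^Xme", variables(fit), value = TRUE)
posterior_summary(fit)[latent_pars, ]
```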
If I then want to use predict(), for example to visualize the effects, I am required to specify new values of x_se; this makes sense for the expected value of a new observation, but I'd like to show the relationship with the 'true' X. Is it possible to pass values of the noise-free latent variable X to these functions instead? Can anyone suggest a preferred workflow for this? I haven't been able to locate an example.
Bumping this, as I have the exact same question. @AWoodward have you found a solution since posting this?
I might well be missing something, but what if you specify an x_se of 0 (or a number very close to 0) for the new values? Then the latent variable is effectively the same as the x_mu value that you pass in.
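In other words (assuming a fitted model object `fit` as sketched above), something like:

```r
# Predict over a grid of x values with an effectively noise-free SE;
# x_se must be strictly positive, since the corresponding Stan
# parameter is constrained to be > 0
nd <- data.frame(x_mu = seq(-2, 2, length.out = 50),
                 x_se = 1e-6)
pred <- predict(fit, newdata = nd)
```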
Hi all, sorry for the delayed reply, I’ve been on leave.
I’m not aware of a solution that does what I was hoping for, which is to make predictions directly from the latent predictor. The solution I settled on was to set the SE to an arbitrarily small value on the relevant scale. It can’t be set to exactly zero, because in the underlying Stan code the parameter is constrained to be positive.
Of course it would be relatively simple to do this manually in the Stan code. In this case I was working with a graduate student, so for the sake of a simpler workflow we decided to stick to operating entirely in R/brms.