Machine noise -- do we have to be fancy?

I see why you say it looks like censoring here, as it does have that feel. I think it’s actually closer to a “noisy measurement model” and you’ll have more luck finding answers in that chapter of the User’s Guide and in the literature.

In your case, you should be able to treat your observed measurement as containing two components: a noisy underlying measurement I’ll call Z_n, and the injected noise I’ll call \epsilon_n (following your notation), so that

Y^{\textrm{measured}}_n = Z_n + \epsilon_n

where \epsilon_n \sim \textrm{normal}(L, \tau) is the injected noise, I’m assuming Z_n \sim \textrm{normal}(Y^{\textrm{true}}_n, \sigma) is the measurement model before the injected noise, and our goal is to do inference for Y^{\textrm{true}}. Normals are very convenient here, in that we can build a straight-up measurement error model: if Z_n and \epsilon_n are independent, their sum is again normal, with the means adding and the variances (not the standard deviations) adding, so the above implies

Y^{\textrm{measured}}_n \sim \textrm{normal}\left(Y^{\textrm{true}}_n + L, \sqrt{\sigma^2 + \tau^2}\right)
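
For reference, the convolution fact being used here, which is where the square root comes from (the standard result for sums of independent normals):

Z \sim \textrm{normal}(\mu_Z, \sigma_Z),\ \epsilon \sim \textrm{normal}(\mu_\epsilon, \sigma_\epsilon) \quad\Rightarrow\quad Z + \epsilon \sim \textrm{normal}\left(\mu_Z + \mu_\epsilon, \sqrt{\sigma_Z^2 + \sigma_\epsilon^2}\right)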

With that, you can proceed the same way as suggested in the measurement-error chapter of the User’s Guide. You may not be able to identify \sigma and \tau separately, though, so it may make more sense to combine them into a single scale parameter. You’ll also need a prior for L, and presumably a lower bound on its value in its declaration.
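
To make that concrete, here’s a minimal Stan sketch of the combined model. The variable names (y_meas, y_true, sigma_total) and all the priors are placeholders I made up, not anything from your setup, and I’ve assumed the lower bound on L is zero; adjust to whatever your machine spec implies.

```stan
data {
  int<lower=0> N;
  vector[N] y_meas;            // observed, noise-injected measurements
}
parameters {
  vector[N] y_true;            // latent true values we want to infer
  real<lower=0> L;             // injected-noise location, bounded below at 0
  real<lower=0> sigma_total;   // combined scale, i.e., sqrt(sigma^2 + tau^2)
}
model {
  // placeholder priors -- substitute ones that match your domain
  L ~ normal(0, 1);
  sigma_total ~ normal(0, 1);
  y_true ~ normal(0, 5);       // or a hierarchical / regression model

  // measurement-error model: injected noise shifts by L and inflates the scale
  y_meas ~ normal(y_true + L, sigma_total);
}
```

Note that sigma_total here is the combined scale directly, which sidesteps the identifiability issue of trying to estimate \sigma and \tau separately.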

The model you describe does not technically enforce positivity, because normals can take on any real value (in theory, anyway; in practice you’re bounded to within a few scales of the mean). If you really know everything is positive and the errors are multiplicative (i.e., they scale with the size of the value) rather than additive, then you can convert the whole thing over to lognormal. I’d think of that as just running the same normal model on the log scale and then exponentiating to hop back to the original scale.
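
A sketch of that lognormal version, under the same made-up names and placeholder priors as above:

```stan
data {
  int<lower=0> N;
  vector<lower=0>[N] y_meas;     // strictly positive measurements
}
parameters {
  vector[N] log_y_true;          // true values on the log scale
  real<lower=0> L;               // injected-noise location on the log scale
  real<lower=0> sigma_total;     // combined noise scale on the log scale
}
model {
  // placeholder priors, as before
  L ~ normal(0, 1);
  sigma_total ~ normal(0, 1);
  log_y_true ~ normal(0, 5);

  // lognormal is just normal on the log scale; errors are now multiplicative
  y_meas ~ lognormal(log_y_true + L, sigma_total);
}
generated quantities {
  vector[N] y_true = exp(log_y_true);   // hop back to the original scale
}
```

The lognormal sampling statement is exactly the normal model applied on the log scale, and the generated quantities block does the exponentiation back to the original scale.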
