Latent parameters behind human data entry

Using stan::math::hessian to compute multi-dimensional Newton search directions, and stan::optimization::WolfeLineSearch to take steps along them, I'm minimizing squared model-fit errors under a Huber-style norm: I take the median of the absolute errors and cap every error at 2x that median, which gives automatic bad-data rejection. This is necessary because the errors aren't normally distributed; they come from engineers making typos, miscalculating, or copying bad data. Each regression has multiple latent variables and multiple output variables per observation.

I'm wondering if anyone knows a more Stan-like way to handle this situation. If not, no big deal. Thanks.