I would like to add skewness to an ad-hoc distribution (defined essentially by an _lp user-defined function).

Is there a simple / “canonical” / generally used transformation

y = f(x; \lambda)

which would make y \in \mathbb{R} “more skewed” than x \in \mathbb{R} in some direction depending on the sign of \lambda? Ideally with a nice clean log derivative (log Jacobian), so it is easy to program in Stan (hey, I can dream ;P).

It does not need to be mean-preserving, i.e. E[x] \ne E[y] is fine; the distribution already has a parameter that controls the mean, which will just adjust accordingly in the posterior.

For example, I was considering something like

y = f(x; m, \lambda, \kappa) = m + (x - m) \cdot \lambda \cdot \tanh(\kappa(x - m))

which would shift things around m. But I imagine that \kappa and \lambda will be heavily correlated in the posterior in a nonlinear way, so perhaps one of them should be fixed or dropped.
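To make the bookkeeping concrete, here is a sketch (plain Python/NumPy, with function names of my own invention) of the proposed transform and the log absolute derivative that a Jacobian adjustment in Stan would need:

```python
import numpy as np

def skew_transform(x, m, lam, kappa):
    """The proposed transform: y = m + (x - m) * lam * tanh(kappa * (x - m))."""
    u = x - m
    return m + lam * u * np.tanh(kappa * u)

def log_abs_jacobian(x, m, lam, kappa):
    """log |dy/dx| for the transform above, with
    dy/dx = lam * tanh(kappa*u) + lam * kappa * u * (1 - tanh(kappa*u)**2).
    Caveat: dy/dx = 0 at x = m, so f is not monotone through m,
    and a change of variables strictly requires a monotone f."""
    u = x - m
    t = np.tanh(kappa * u)
    return np.log(np.abs(lam * t + lam * kappa * u * (1.0 - t * t)))
```

One thing the derivative makes visible: since (x - m)\tanh(\kappa(x - m)) is an even function of x - m, dy/dx vanishes at x = m, so the transform as written folds rather than merely reweights the two sides of m.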

I don’t have experience with this, but if you’re emulating a sign function with that tanh, then I’d imagine you’d probably just fix \kappa going in.
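To illustrate the sign-function reading numerically (illustrative values, not from the thread): for large fixed \kappa, \tanh(\kappa(x - m)) \approx \text{sign}(x - m), so the transform collapses to roughly m + \lambda|x - m|, which folds both tails onto one side of m rather than skewing them:

```python
import numpy as np

m, lam, kappa = 0.0, 1.0, 50.0   # large fixed kappa, illustrative values
u = np.linspace(-3, 3, 121)      # grid of x - m values around m
y = m + lam * u * np.tanh(kappa * u)
limit = m + lam * np.abs(u)      # the large-kappa limit: a fold at m
print(np.max(np.abs(y - limit)))  # small; differs only very near u = 0
```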

Or, if it were a skew normal distribution you were working with, you could scale it with the standard deviation (since you probably want this switch to happen at a smaller scale than the standard deviation). I dunno though. Maybe just set \kappa = 10/\sigma?
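A quick check of that scaling heuristic (my numbers, for illustration): the tanh switch has width on the order of 1/\kappa, so tying it to the distribution's scale means \kappa \propto 1/\sigma; for instance \kappa = 10/\sigma puts the transition at roughly \sigma/10:

```python
import numpy as np

sigma = 2.0           # illustrative scale of the base distribution
kappa = 10.0 / sigma  # transition width 1/kappa = sigma/10
u = np.array([-sigma, -sigma / 2, sigma / 2, sigma])
print(np.tanh(kappa * u))  # already ~ +/-1 well inside one sigma
```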