Thanks for all the support so far, especially @Bob_Carpenter, @avehtari, @martinmodrak. I think I found what the problem was (until I prove myself wrong, as often happens)!
Essentially, the model behaves well with an exp-linear link function; the question was why it does not behave well with a generalized logit link function:
$$\frac{\text{plateau}}{1 + e^{-(\alpha + \beta X)}}$$
While this equation is perfectly suitable for this data, in the case of a quasi-exponential trend the plateau swings to exponentially high values and is no longer a stable anchor for the normalization (unlike the intercept of the exp-linear function). This would not be a problem if I weren't using a multinomial(softmax( REGRESSION )) framework, where I need the normalization to behave (see below). The solution is to reparametrize using the real intercept (the value at x = 0; alpha here is really just the inflection point), which is directly supported by the data even for an exponential-like trend:
$$\frac{\text{intercept} \left( 1 + e^{-\alpha} \right)}{1 + e^{-(\alpha + \beta X)}}$$
Now this function behaves better in all scenarios.
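A minimal numeric sketch of the two parametrizations (function and parameter names are my own, not from the model code): the reparametrized form is algebraically identical to the original, since plateau = intercept · (1 + e^{-alpha}), but the parameter being sampled is the value at x = 0, which stays on the scale of the data even when alpha is very negative and the implied plateau explodes.

```python
import numpy as np

def gen_logit(x, plateau, alpha, beta):
    # original parametrization: plateau / (1 + exp(-(alpha + beta * x)))
    return plateau / (1.0 + np.exp(-(alpha + beta * x)))

def gen_logit_repar(x, intercept, alpha, beta):
    # reparametrized: intercept * (1 + exp(-alpha)) / (1 + exp(-(alpha + beta * x)))
    return intercept * (1.0 + np.exp(-alpha)) / (1.0 + np.exp(-(alpha + beta * x)))

# a quasi-exponential trend: inflection point far to the right of the data
alpha, beta = -8.0, 0.8
x = np.linspace(0.0, 5.0, 6)

intercept = 2.0                                  # data-scale value at x = 0
plateau = intercept * (1.0 + np.exp(-alpha))     # implied plateau, ~ e^8 times larger

# the two forms trace exactly the same curve
assert np.allclose(gen_logit(x, plateau, alpha, beta),
                   gen_logit_repar(x, intercept, alpha, beta))

# and at x = 0 the reparametrized form returns the intercept itself
assert np.isclose(gen_logit_repar(0.0, intercept, alpha, beta), intercept)
```

The point is that `plateau` here is ~3000× larger than any value the data can see, while `intercept` is directly identified, which is what restores a stable anchor for the softmax normalization.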
Yes, I had just been testing a similar reparametrization, but for some reason it was not improving stability (probably because of some messy tests I was running… starting over on Saturday, it did work!) :)
With this, and applying the hierarchical prior linking the inflection parameter alpha and the slope(s), I got the expected results.
The first 50 genes, which have slope = 0, are now centered on 0 even with a softmax wrap (normally these would be read as decreasing, simply because the change is not symmetric). This is what I mean by normalization.
Without the right sparsity assumption (i.e., using a plain normal prior on beta), this data would yield the wrong inference for the 0 slopes (in my multinomial(softmax( REGRESSION )) framework).