The piecewise density effectively creates two truncated distributions. If you tell Stan you have truncated distributions, you can recover the parameters.
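For concreteness, here is a minimal sketch of what such a model could look like. This is not the original poster's code: the knot at zero, the normal pieces, and all variable names are assumptions for illustration.

```stan
data {
  int<lower=1> N;
  vector[N] y;
}
parameters {
  vector[2] mu;             // means of the two pieces
  vector<lower=0>[2] sigma; // scales of the two pieces
}
model {
  for (n in 1:N) {
    if (y[n] < 0)
      y[n] ~ normal(mu[1], sigma[1]) T[, 0];  // piece truncated above at the knot
    else
      y[n] ~ normal(mu[2], sigma[2]) T[0, ];  // piece truncated below at the knot
  }
}
```

The `T[...]` statement makes Stan add the truncation normalization term for each observation, which is what lets the parameters be recovered.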

Thank you very much! How long did it take you to run this code, though? With the truncation present, the sampling seems to have become unbelievably inefficient. Maybe I should first split the vector at the truncating knot and then vectorize?

Warmup took about 13 seconds for 250 iterations, and sampling took 2 seconds for 250 iterations. I think in this example you can get away with 250 warmup iterations and increase the number of sampling iterations depending on how much precision you need (possibly because of the initialisation).

Splitting and vectorizing would be a good idea, but I am not sure that you can vectorize the `T[ , ]` truncation operator.

I think you can use `vector<upper=0>[Nneg] yneg;` and `yneg ~ normal(mu[1], sigma)` as an alternative.
EDIT: There are also no priors. I think this will run much more smoothly with some reasonable priors.
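Putting both suggestions together, a sketch of a split, vectorized model might look like the following. All data names and the prior scales are assumptions, not the original code; note that the bound declarations only validate the data, so the truncation normalization still has to be added by hand (it is a constant multiple of the log CDF once the knot is fixed):

```stan
data {
  int<lower=0> Nneg;
  int<lower=0> Npos;
  vector<upper=0>[Nneg] yneg; // observations below the knot (assumed at 0)
  vector<lower=0>[Npos] ypos; // observations above the knot
}
parameters {
  vector[2] mu;
  vector<lower=0>[2] sigma;
}
model {
  // weakly informative priors, as suggested in the edit
  mu ~ normal(0, 5);
  sigma ~ normal(0, 5);

  // vectorized likelihoods with explicit truncation corrections,
  // replacing a per-observation loop over T[ , ] statements
  yneg ~ normal(mu[1], sigma[1]);
  target += -Nneg * normal_lcdf(0 | mu[1], sigma[1]);
  ypos ~ normal(mu[2], sigma[2]);
  target += -Npos * normal_lccdf(0 | mu[2], sigma[2]);
}
```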

I am experiencing a really strange problem: with the same code and more sampling, I expected a better estimate, but I am getting fairly biased estimates. Did I code anything wrong?

I did some further experimentation, and warmup gets stuck pretty easily. I only realised from your latest response that you are using `algorithm = "HMC"`. I think the sampling problems arise because plain vanilla (static) HMC is simply less robust than NUTS. If you use the default NUTS sampler, you should not experience any problems.