Model fit diagnostics for NLMM with censoring

Dear all,
I have a question about model-fit diagnostics for a nonlinear mixed model (NLMM) with censoring in the response.
I fitted the NLMM in Stan, using an equivalent formulation of Tobit regression to handle the censoring.
Now, when I want to assess the model fit, for example with a plot of predicted versus observed values, I face the difficulty of how to handle the censored data points.
The plot I currently have looks like this:
[Figure Obs_Pred_antiPT: observed vs. predicted values]

Clearly, I am looking for an appropriate way to assess the fit of the censored data points. In the plot these form the vertical line at around 1.6: the data are plotted on the log scale, and the response is left-censored at 5, so values below 5 are only known to lie below that limit.

On this page, http://monolix.lixoft.com/data-and-models/censoreddata/, they propose sampling the censored data points from the distribution
p(y_censored | y_observed, ψ, θ), where ψ and θ are the estimated population and individual parameters: "This is done by adding a residual error on top of the prediction, using a truncated normal distribution to make sure that the simulated y remains within the censored interval."
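For a lower (left) censoring limit $L$, with predicted mean $\mu_i$ and residual SD $\sigma$, a draw from this truncated normal can be written via the standard inverse-CDF construction (notation mine):

$$
u \sim \mathrm{Uniform}\bigl(0,\ \Phi\bigl((L-\mu_i)/\sigma\bigr)\bigr),
\qquad
y_i^{\mathrm{sim}} = \mu_i + \sigma\,\Phi^{-1}(u),
$$

which guarantees $y_i^{\mathrm{sim}} \le L$, i.e. the simulated value stays inside the censored interval.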
Can anyone give me a hint on how to sample this in Stan, given that I have an NLMM in which the response is predicted from the time point at which it was measured?

Thank you.

The Stan code to fit the model is provided below:

data {
int<lower=1> N;                         // Total number of data points
real<lower=0> time[N];                  // Time points
real<lower=0> y[N];                     // Antibody levels on log10 scale
int<lower=1>     J;                     // Number of subjects
int<lower=1, upper=J>    subj[N];       // Subject id
int<lower=0, upper=1>    cens[N];       // Censoring indicator (1 = left-censored)
int<lower=0, upper=1>    C2[N];         // Child indicator
real                     ABMum[N];      // AB of pregnant women at birth
}

parameters {
  real<lower=0> beta;
  real<lower=0> gamma;
  real<lower=4, upper=12> h;
  real<lower=0> alpha;
  real<lower=0, upper=100> A0;
  real<lower=0> sigma;

  vector<lower=0>[J] b_raw;              // to apply non-centered parameterization
  vector<lower=0>[J] bbeta;
  vector<lower=0>[J] bgamma;
  vector<lower=0>[J] balpha;

  real<lower=0, upper=100> sigma_A0;
  real<lower=0, upper=100> sigma_beta;
  real<lower=0, upper=100> sigma_gamma;
  real<lower=0, upper=100> sigma_alpha;

  real v1A0;
  real v2A0;
  real v1beta;
  real v1gamma;
  real v1alpha;
  real v1h;
}

transformed parameters {
  real betapart[N];
  real gammapart[N];
  real alphapart[N];
  real<lower=0> A0part[N];
  real<lower=4> hpart[N];
  real mu[N];
  real diff[N];
  for (i in 1:N) {
    A0part[i]    = (A0 + v1A0 * C2[i] + v2A0 * ABMum[i]) * b_raw[subj[i]];
    betapart[i]  = (beta + v1beta * C2[i]) * bbeta[subj[i]];
    gammapart[i] = (gamma + v1gamma * C2[i]) * bgamma[subj[i]];
    alphapart[i] = (alpha + v1alpha * C2[i]) * balpha[subj[i]];
    hpart[i]     = h + v1h * C2[i];

    // mu depends only on the time segment, so the censored and
    // uncensored branches of the original code collapse into one
    if (time[i] <= 2)
      mu[i] = log(A0part[i]) - betapart[i] * time[i];
    else if (time[i] <= hpart[i])
      mu[i] = log(A0part[i]) - betapart[i] * 2 + gammapart[i] * (time[i] - 2);
    else
      mu[i] = log(A0part[i]) - betapart[i] * 2 + gammapart[i] * (hpart[i] - 2)
              - alphapart[i] * (time[i] - hpart[i]);

    diff[i] = (y[i] - mu[i]) / sigma;
  }
}

model {
// Priors
  b_raw  ~ gamma(sigma_A0, sigma_A0);
  bbeta  ~ gamma(sigma_beta, sigma_beta);
  bgamma ~ gamma(sigma_gamma, sigma_gamma);
  balpha ~ gamma(sigma_alpha, sigma_alpha);
  sigma_A0    ~ gamma(0.01, 0.01);
  sigma_beta  ~ gamma(0.01, 0.01);
  sigma_gamma ~ gamma(0.01, 0.01);
  sigma_alpha ~ gamma(0.01, 0.01);

// Likelihood: for left-censored points, y[i] holds the censoring limit,
// so the contribution is Pr(Y < limit), i.e. normal_lcdf (Tobit likelihood).
// The time segment only affects mu[i], so no further branching is needed.
  for (i in 1:N) {
    if (cens[i] == 0)
      target += normal_lpdf(y[i] | mu[i], sigma);
    else
      target += normal_lcdf(y[i] | mu[i], sigma);
  }
}
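One way to do the truncated-normal sampling in Stan, staying with the model above, is a generated quantities block that draws posterior predictive replicates: ordinary normal_rng draws for observed points, and inverse-CDF draws from the upper-truncated normal for censored points. This is a sketch, not tested against your data; the name y_rep is mine, and the Phi(...) term can underflow to 0 for extreme mu[i], which would need guarding in practice.

```stan
generated quantities {
  real y_rep[N];   // posterior predictive replicates for model checking
  for (i in 1:N) {
    if (cens[i] == 0) {
      // uncensored: ordinary draw from the predictive distribution
      y_rep[i] = normal_rng(mu[i], sigma);
    } else {
      // censored: y[i] holds the censoring limit, so draw from
      // normal(mu[i], sigma) truncated above at y[i] via inverse CDF
      real p_lim = Phi((y[i] - mu[i]) / sigma);
      y_rep[i] = mu[i] + sigma * inv_Phi(uniform_rng(0, p_lim));
    }
  }
}
```

Plotting y_rep against the observed values (or overlaying predictive intervals per time point) then treats censored and uncensored points on the same footing, which is essentially what the Monolix page describes.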

Why do the points quickly drop at the beginning?