Product of normal distributions: implementation of log likelihood and model comparison

Hello,

First of all, sorry if anything in this question is too obvious; I am quite new to this.

I am trying to do a model comparison, and the results I am getting are a bit odd; I am not sure whether my implementation is correct.

I am comparing two models of different complexity: the first has a single normal likelihood, and the second uses the product of two normal densities.

Model 1:
response[n] ~ Normal(predictor1[n], sqrt(parameter1[s]))
Model 2:
response[n] ~ Normal(predictor1[n], sqrt(parameter1[s])) * Normal(predictor2[n], sqrt(parameter2[s]))
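For Model 2, the identity I am relying on is that the log of a product is the sum of the logs, so on the log scale the product becomes two added terms:

```latex
\log\bigl[\mathcal{N}(y \mid \mu_1, \sigma_1)\,\mathcal{N}(y \mid \mu_2, \sigma_2)\bigr]
  = \log \mathcal{N}(y \mid \mu_1, \sigma_1) + \log \mathcal{N}(y \mid \mu_2, \sigma_2)
```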

I have implemented it this way:

Model 1:

model {
  for (n in 1:N) {
    target += normal_lpdf(response[n] | predictor1[n], sqrt(parameter1[s]));
  }
}
generated quantities {
  vector[N] log_lik;
  for (n in 1:N) {
    log_lik[n] = normal_lpdf(response[n] | predictor1[n], sqrt(parameter1[s]));
  }
}

Model 2:

model {
  for (n in 1:N) {
    target += normal_lpdf(response[n] | predictor1[n], sqrt(parameter1[s]));
    target += normal_lpdf(response[n] | predictor2[n], sqrt(parameter2[s]));
  }
}
generated quantities {
  vector[N] log_lik;
  for (n in 1:N) {
    log_lik[n] = normal_lpdf(response[n] | predictor1[n], sqrt(parameter1[s]))
                 + normal_lpdf(response[n] | predictor2[n], sqrt(parameter2[s]));
  }
}

Then I used loo for model comparison. Is this correct? In particular, is it correct to add the two normal_lpdf terms?
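As a quick numerical sanity check on that last point (a sketch in Python with scipy, outside of Stan; all values are arbitrary), the sum of the two log densities does match the log of the product of the densities:

```python
import numpy as np
from scipy.stats import norm

y = 1.3                 # a single observation
mu1, sd1 = 0.5, 1.2     # mean / sd of the first normal (arbitrary)
mu2, sd2 = 2.0, 0.8     # mean / sd of the second normal (arbitrary)

# log of the product of the two densities
log_prod = np.log(norm.pdf(y, mu1, sd1) * norm.pdf(y, mu2, sd2))

# sum of the two log densities (what the generated quantities block computes)
log_sum = norm.logpdf(y, mu1, sd1) + norm.logpdf(y, mu2, sd2)

print(np.isclose(log_prod, log_sum))  # True
```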