In your weightless ;-) model, this line increments the log posterior:
To implement weighting, replace it with:
for (n in 1:N) {
  target += weights[n] * normal_lpdf(y[n] | yHat[n], sigma);
}
You could also first write all the normal_lpdf(y[n] | yHat[n], sigma) terms
to a vector and take the dot product with the weights, but I don't think that would make the model faster, and it would be harder to read.
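To see that the two formulations agree, here is a minimal NumPy sketch (not Stan; all variable names and values are illustrative) checking that the per-observation loop and the dot-product version compute the same weighted log-likelihood:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5
y = rng.normal(size=N)        # observed outcomes
y_hat = rng.normal(size=N)    # model predictions (means)
sigma = 1.3                   # residual standard deviation
weights = rng.uniform(0.5, 2.0, size=N)

def normal_lpdf(y, mu, sigma):
    # Normal log-density, matching Stan's normal_lpdf(y | mu, sigma)
    return -0.5 * np.log(2 * np.pi) - np.log(sigma) \
           - 0.5 * ((y - mu) / sigma) ** 2

# Loop form: target += weights[n] * normal_lpdf(y[n] | yHat[n], sigma)
loop_total = sum(weights[n] * normal_lpdf(y[n], y_hat[n], sigma)
                 for n in range(N))

# Vectorized form: dot product of the lpdf vector with the weights
dot_total = np.dot(normal_lpdf(y, y_hat, sigma), weights)

print(np.isclose(loop_total, dot_total))  # True
```

Both sums are identical up to floating-point error, so the choice between them is purely one of readability.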
[Your proposal added the sum of the weighted outcomes y to the log posterior, which is not what you want to do.]