Chen, constraining Z to be non-negative is not a traditional identifiability constraint, and it also makes your life difficult by leading to a truncated multivariate normal. Instead, one would typically constrain a single loading per column of W to be positive. I suspect that the non-negative Z could cause problems with parameter recovery, because that constraint does not allow any of the posterior distributions to overlap with 0. Some related discussion is here.
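To see why the sign constraint on a loading is enough, here is a minimal numpy sketch (the variable names are mine, not from your model): flipping the signs of a column of W together with the corresponding row of Z leaves the implied means, and hence the likelihood, unchanged, so the posterior for each loading can straddle 0 unless one loading per column is pinned positive.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, n_factors, n_obs = 6, 2, 100

W = rng.normal(size=(n_items, n_factors))   # loadings
Z = rng.normal(size=(n_factors, n_obs))     # factor scores

# Sign flip of factor 1: S @ S = I, so (W S)(S Z) = W Z.
S = np.diag([-1.0, 1.0])
W_flip, Z_flip = W @ S, S @ Z

# The implied means (hence the likelihood) are identical,
# so the sign of each column of W is not identified on its own.
assert np.allclose(W @ Z, W_flip @ Z_flip)

# Fixing the sign of one loading per column removes the flip:
# rescale so the first loading in each column is positive.
signs = np.sign(W[0, :])
W_id, Z_id = W * signs, Z * signs[:, None]
assert np.all(W_id[0, :] > 0)
assert np.allclose(W_id @ Z_id, W @ Z)  # same fit, identified signs
```

In Stan this usually amounts to declaring that one loading per factor with `<lower=0>`, rather than restricting all of Z.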
About the bad Pareto k values and the conditional likelihood, I often see the same thing and do not know of a good solution. I think the problem is that the number of latent variables increases with the sample size, so leaving out one observation also removes that observation's latent variables, and the importance sampling behind loo handles that poorly.
About the speed of the marginal vs conditional likelihood, it partly depends on the specific model, sample size, and number of latent variables, and you might not see a big difference for smaller sample sizes and fewer latent variables. To speed up the marginal likelihood, it helps to write the multivariate normal lpdf in terms of the mean vector and covariance matrix, and to define a custom function using this form of the lpdf. Then you can compute a single marginal density across all the observations, instead of one density per observation. This is what happens in newer versions of blavaan.
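Here is a numpy sketch of that idea (function and variable names are mine, and this is the math rather than the actual blavaan/Stan code): marginalizing the latent variables out of y = mu + W z + e gives y ~ MVN(mu, W Psi W' + Theta), and the log-likelihood of the whole data matrix can then be computed in one shot from the scatter matrix, instead of evaluating one MVN density per row.

```python
import numpy as np
from scipy.stats import multivariate_normal

def mvn_loglik_all(y, mu, Sigma):
    """Joint log-likelihood of all rows of y in a single evaluation,
    using sufficient statistics: -n/2 [p log(2*pi) + log|Sigma|
    + tr(Sigma^{-1} S)], with S the centered scatter matrix / n."""
    n, p = y.shape
    d = y - mu
    S = d.T @ d / n
    _, logdet = np.linalg.slogdet(Sigma)
    quad = np.trace(np.linalg.solve(Sigma, S))  # tr(Sigma^{-1} S)
    return -0.5 * n * (p * np.log(2 * np.pi) + logdet + quad)

rng = np.random.default_rng(1)
n_obs, n_items, n_factors = 500, 6, 2

W = rng.normal(size=(n_items, n_factors))        # loadings
Psi = np.eye(n_factors)                          # factor covariance
Theta = np.diag(rng.uniform(0.5, 1.0, n_items))  # residual variances
mu = np.zeros(n_items)

# Implied marginal covariance after integrating out the factors.
Sigma = W @ Psi @ W.T + Theta
y = rng.multivariate_normal(mu, Sigma, size=n_obs)

ll_once = mvn_loglik_all(y, mu, Sigma)
ll_loop = sum(multivariate_normal.logpdf(y[i], mean=mu, cov=Sigma)
              for i in range(n_obs))
assert np.isclose(ll_once, ll_loop)
```

The same trick in Stan would be a user-defined `_lpdf` that takes the mean vector and covariance matrix, factors the covariance once, and returns the joint density of all observations.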
About the warning message, I would guess that it is related to the initial values and could be fixed by defining your own initial values.