Estimating a Distal Continuous Outcome Based on LPA Using brms

I have carried out a Latent Profile Analysis in tidyLPA:

     CPROB1       CPROB2       CPROB3       CPROB4       CPROB5       Profile  y
1    0.984486416  6.11585E-06  0.012188146  0.003314539  4.78324E-06  1        3
2    0.942039075  0.051160482  0.000192298  0.005817891  0.000790254  1        5
3    0.971518416  0.003151397  0.000643429  0.024041195  0.000645563  1        3
4    0.020140694  0.00216542   0.001397633  0.559996466  0.416299787  4        1
5    0.585624504  5.19584E-06  0.221967448  0.191645379  0.000757472  1        3.5
6    0.058921098  0.938565638  4.66154E-05  0.000202797  0.002263851  2        2.5
7    0.256234354  0.001481276  0.000186002  0.724453177  0.017645191  4        4.5
8    0.994840375  0.000143271  0.000530821  0.004469261  1.62731E-05  1        2
9    0.038113316  0.041504004  0.083662776  0.043465051  0.793254853  5        2.5
10   0.356971355  4.19548E-06  0.036824375  0.60471304   0.001487035  4        4.5

I would now like to regress y ~ Profile in brms while taking into account the uncertainty of each row's profile classification, as given by the class probabilities. What is the best way to do this? I have already looked at the measurement-error term me(), which is soft-deprecated in favour of mi(). I also tried an (inverse) probability-weight approach using y | weights(weight), where the weight is the inverse of the respective class-membership probability, but I am not sure whether this is the right approach.
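For reference, a minimal sketch of the weighting idea (column names are mine; class probabilities are rounded from the table above). One thing I noticed while writing it: weights() multiplies each row's log-likelihood, so weighting by the assigned class's probability itself, rather than its inverse, is what down-weights uncertain rows.

```r
# Class probabilities from the table above (rounded) plus Profile and y.
probs <- rbind(
  c(0.9845, 0.0000, 0.0122, 0.0033, 0.0000),
  c(0.9420, 0.0512, 0.0002, 0.0058, 0.0008),
  c(0.9715, 0.0032, 0.0006, 0.0240, 0.0006),
  c(0.0201, 0.0022, 0.0014, 0.5600, 0.4163),
  c(0.5856, 0.0000, 0.2220, 0.1916, 0.0008),
  c(0.0589, 0.9386, 0.0000, 0.0002, 0.0023),
  c(0.2562, 0.0015, 0.0002, 0.7245, 0.0176),
  c(0.9948, 0.0001, 0.0005, 0.0045, 0.0000),
  c(0.0381, 0.0415, 0.0837, 0.0435, 0.7933),
  c(0.3570, 0.0000, 0.0368, 0.6047, 0.0015)
)
d <- data.frame(
  Profile = c(1, 1, 1, 4, 1, 2, 4, 1, 5, 4),
  y       = c(3, 5, 3, 1, 3.5, 2.5, 4.5, 2, 2.5, 4.5)
)

# Probability of each row's assigned (modal) profile, via matrix indexing.
d$p_assigned <- probs[cbind(seq_len(nrow(d)), d$Profile)]

# weights() scales each row's log-likelihood, so the probability itself
# down-weights uncertain rows; its inverse (what I tried first) would
# do the opposite.
d$w <- d$p_assigned

# Guarded so the sketch runs even where brms is not installed.
if (requireNamespace("brms", quietly = TRUE)) {
  fit <- brms::brm(y | weights(w) ~ factor(Profile),
                   data = d, family = gaussian())
}
```

Note that this still conditions on the modal class assignment; it only down-weights ambiguous rows rather than propagating the full classification uncertainty.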

Thanks in advance!

Perhaps worth considering: in this regression, by assumption, the values of y conditional on class membership are distributed as Gaussian. I think this is precisely the assumption you would need in order to include y as one of the indicator variables when fitting the LPA itself, and doing so would intrinsically produce predictions for y conditional on class membership, right? So is there any need to regress y on class membership in a second step?


I do see your point. However, this approach would not work if I wanted to estimate more complex distal models based on the LPA, for example the treatment effect between a control and an intervention condition based on the profiles, while still taking classification uncertainty into account. See, for example, the (manual) BCH method in Mplus:

  • Asparouhov and Muthén (2014): Auxiliary Variables in Mixture Modeling: Three-Step Approaches Using Mplus
  • Nylund-Gibson et al. (2019): Prediction from Latent Classes: A Demonstration of Different Approaches to Include Distal Outcomes in Mixture Models
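In the meantime, one workaround along the lines of those papers that I am considering is pseudo-class draws: sample each row's Profile from its class-probability vector several times and pool the fits with brms::brm_multiple(). This propagates classification uncertainty into the distal regression, though it is not the BCH correction itself (column names are mine; probabilities rounded from the table above).

```r
set.seed(1)

# Class probabilities from the table above (rounded) and the outcome y.
probs <- rbind(
  c(0.9845, 0.0000, 0.0122, 0.0033, 0.0000),
  c(0.9420, 0.0512, 0.0002, 0.0058, 0.0008),
  c(0.9715, 0.0032, 0.0006, 0.0240, 0.0006),
  c(0.0201, 0.0022, 0.0014, 0.5600, 0.4163),
  c(0.5856, 0.0000, 0.2220, 0.1916, 0.0008),
  c(0.0589, 0.9386, 0.0000, 0.0002, 0.0023),
  c(0.2562, 0.0015, 0.0002, 0.7245, 0.0176),
  c(0.9948, 0.0001, 0.0005, 0.0045, 0.0000),
  c(0.0381, 0.0415, 0.0837, 0.0435, 0.7933),
  c(0.3570, 0.0000, 0.0368, 0.6047, 0.0015)
)
y <- c(3, 5, 3, 1, 3.5, 2.5, 4.5, 2, 2.5, 4.5)

# Draw M pseudo-class datasets: each row's Profile is sampled from its
# own class-probability vector (sample() renormalises prob internally).
M <- 20
datasets <- lapply(seq_len(M), function(i) {
  data.frame(
    y = y,
    Profile = apply(probs, 1, function(p) sample(1:5, 1, prob = p))
  )
})

# Fit the same distal regression on each dataset and pool the posterior
# draws, as with multiply imputed data. With few rows, some draws may
# lack a class entirely, so check the factor levels on real data.
if (requireNamespace("brms", quietly = TRUE)) {
  fit <- brms::brm_multiple(y ~ factor(Profile),
                            data = datasets, family = gaussian())
}
```

The pooled posterior then reflects both the regression uncertainty and the uncertainty in which profile each row belongs to.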