Identification for mixture model or HMM with covariate(s)?

Hi, how does one identify a mixture model or an HMM with covariates?

I understand the identification issue in the non-covariate case can be handled by:

ordered[S] mu; // S is the number of mixture components; with S = 2 this ensures mu[1] < mu[2]

But how do I do this when there is a covariate, x? What if I further believe the covariate only affects one of the two mixture components or states, but its coefficient, b, is not certain to be positive? I can only be sure that mu2 + x*b > mu1 for all x and b values. Is this possible, or is my thinking flawed? Thanks.
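For concreteness, here is a minimal sketch of the kind of model I have in mind (the names, the priors, and the normal likelihood are just placeholders for my actual setup):

// Sketch: two-component normal mixture where the covariate x only
// enters the second component; priors and names are placeholders.
data {
  int<lower=1> N;
  vector[N] y;
  vector[N] x;
}
parameters {
  ordered[2] mu;                  // intercepts, mu[1] < mu[2]
  real b;                         // slope for component 2, sign unrestricted
  vector<lower=0>[2] sigma;
  real<lower=0, upper=1> theta;   // mixing weight of component 1
}
model {
  mu ~ normal(0, 5);
  b ~ normal(0, 2);
  sigma ~ normal(0, 2);
  // Note: ordered mu only guarantees mu[1] < mu[2] at x = 0;
  // it does NOT enforce mu[2] + b * x > mu[1] for all x.
  for (n in 1:N)
    target += log_mix(theta,
                      normal_lpdf(y[n] | mu[1], sigma[1]),
                      normal_lpdf(y[n] | mu[2] + b * x[n], sigma[2]));
}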


Hi, I think I figured this one out.
Like a regression: when there is no additional covariate, it is just the intercept-only model. Therefore, if I need to impose an ordering on the intercepts across the states in the intercept-only model, I also need to impose an ordering across the states on the coefficients of the other covariates.
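In Stan, I mean something like this sketch (again, placeholder names and priors):

// Sketch: order both the intercepts and the slopes across the two states,
// so every regression coefficient is constrained.
data {
  int<lower=1> N;
  vector[N] y;
  vector[N] x;
}
parameters {
  ordered[2] alpha;               // state intercepts, alpha[1] < alpha[2]
  ordered[2] beta;                // state slopes,     beta[1]  < beta[2]
  vector<lower=0>[2] sigma;
  real<lower=0, upper=1> theta;
}
model {
  alpha ~ normal(0, 5);
  beta ~ normal(0, 2);
  sigma ~ normal(0, 2);
  for (n in 1:N)
    target += log_mix(theta,
                      normal_lpdf(y[n] | alpha[1] + beta[1] * x[n], sigma[1]),
                      normal_lpdf(y[n] | alpha[2] + beta[2] * x[n], sigma[2]));
}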

I am happy to hear if anyone thinks I am wrong. Thank you.


Hi, sorry that your question slipped through.

Generally, this is AFAIK a tough problem. What brms does is enforce ordering only for the intercepts (e.g. https://rdrr.io/cran/brms/man/mixture.html). Ordering the intercepts AND the coefficients will likely improve identifiability, but will often be overly restrictive and prevent you from fitting the model you actually want. IMHO the mixture should be well identified by ordering only the intercepts, provided all combinations of the other covariates are well represented in the dataset. If the model would have even minor problems without the mixture (e.g. the design matrix is not full rank), those problems are likely to get worse once you add the mixture.
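In plain Stan terms, the intercept-only ordering would look roughly like the sketch below. This is just an illustration with placeholder names and priors, not the code brms actually generates; it differs from your earlier sketch only in that the slopes are left unconstrained:

// Sketch: order only the intercepts; each component gets its own
// unconstrained slope.
data {
  int<lower=1> N;
  vector[N] y;
  vector[N] x;
}
parameters {
  ordered[2] alpha;               // only the intercepts are ordered
  vector[2] beta;                 // slopes are unconstrained
  vector<lower=0>[2] sigma;
  real<lower=0, upper=1> theta;
}
model {
  alpha ~ normal(0, 5);
  beta ~ normal(0, 2);
  sigma ~ normal(0, 2);
  for (n in 1:N)
    target += log_mix(theta,
                      normal_lpdf(y[n] | alpha[1] + beta[1] * x[n], sigma[1]),
                      normal_lpdf(y[n] | alpha[2] + beta[2] * x[n], sigma[2]));
}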

Best of luck with your model!


Thanks. For my model and my data, the chains only mix well when I impose the ordering on both the intercepts and the coefficients.

But can you explain a bit more about what it means for the design matrix to be (or not to be) full rank? I don’t know how to think about this in a non-experimental setting. Thanks.

This IMHO suggests a potential problem with your data/model, and I would investigate carefully to get to the source of it (it is possible that it is not actually a problem, but it is a warning sign). It is hard to be more specific without knowing more about your full model and data.

Very roughly: if you have two binary predictors but all your data is either (0,0) or (1,1), you cannot distinguish their individual effects. In theory, having even a single (0,1) or (1,0) observation resolves this. In practice (especially in complex models), the model can still run into issues when some (0,1) and (1,0) observations are present but there are only a few of them, because they may fail to constrain the coefficients enough to identify the model (i.e. to distinguish between the components). More broadly, strong correlations among predictors can be problematic.
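To make the rank point a bit more concrete, here is a toy illustration with an intercept and the two binary predictors, where only the (0,0) and (1,1) patterns occur:

$$
X = \begin{pmatrix} 1 & 0 & 0 \\ 1 & 1 & 1 \end{pmatrix}
\quad\Rightarrow\quad
X\beta = \begin{pmatrix} \beta_0 \\ \beta_0 + \beta_1 + \beta_2 \end{pmatrix},
\qquad \operatorname{rank}(X) = 2 < 3,
$$

so the likelihood only ever sees the sum $\beta_1 + \beta_2$, and the two coefficients cannot be identified separately.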
