Hello,
(CmdStanPy) I am fitting a number of intensity traces that evolve over time, using a hidden Markov model whose transition and emission probabilities depend on parameters shared among all traces. Here ntraces is the total number of traces (of different lengths), and ndata is the total number of time points (the sum of the trace lengths). For simplicity, I define log_omega, the matrix of log emission probabilities, as n_states x ndata, where n_states is the number of hidden states. My model block looks as follows:
model {
  for (p in 1 : ntraces) {
    target += hmm_marginal(log_omega[1 : n_states, starts[p] : ends[p]], Gamma, rho);
  }
}
where starts[p] and ends[p] are the start and end indices of trace p. The transition probability matrix Gamma has dimensions n_states x n_states, and rho is a vector of length n_states. I was wondering whether I could use map-reduce to parallelize the sum that the loop accumulates. I am not sure it is possible, because the signature of map_rect, for example, does not allow matrix arguments. What do you think?
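For concreteness, here is a sketch of what I imagine a reduce_sum-based version could look like, since reduce_sum (unlike map_rect) seems to accept matrices as shared arguments. The names partial_trace_sum, trace_ids, and grainsize are just placeholders I made up, and I am not sure this is correct:

```stan
functions {
  // Partial sum over a slice of trace indices; log_omega, Gamma, rho,
  // starts, and ends are passed unchanged as shared arguments.
  real partial_trace_sum(array[] int trace_slice, int start, int end,
                         matrix log_omega, matrix Gamma, vector rho,
                         array[] int starts, array[] int ends) {
    real lp = 0;
    for (p in start : end) {
      lp += hmm_marginal(log_omega[1 : rows(log_omega), starts[p] : ends[p]],
                         Gamma, rho);
    }
    return lp;
  }
}
transformed data {
  array[ntraces] int trace_ids;  // dummy array to slice over
  for (p in 1 : ntraces) {
    trace_ids[p] = p;
  }
  int grainsize = 1;  // let the scheduler choose the slice sizes
}
model {
  target += reduce_sum(partial_trace_sum, trace_ids, grainsize,
                       log_omega, Gamma, rho, starts, ends);
}
```

Would something along these lines work, or is map_rect (with log_omega flattened into real arrays) the only option?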
Thank you in advance!!!