Hi, two quick questions. First, can Stan handle NaNs? If not, is there a preferred way to deal with them? I am having trouble initializing the sampler when I put 0's in place of NaNs, and I am not sure of the correct way to handle them without accidentally messing up the log likelihood.
The relevant portion of my model looks like the following:
model {
  lambda ~ gamma(.001, .001);
  for (t in 1:T_max) {
    P[t, 1:M_max] ~ multi_normal((S[t] * exp(r[t, 1:J_max])')', T);
  }
}
lambda here is used in the construction of T, an M_max by M_max covariance matrix whose only nonzero entries are on the diagonal. Each S[t] is an M_max by J_max matrix, and whenever an entry of P[t, 1:M_max] is NaN, S[t] has NaN entries in the corresponding spots (r is always real-valued).
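Since T is diagonal, I believe the multi_normal should factor into independent normals, so one workaround I have been sketching is to pass index arrays marking the observed entries and loop over only those, skipping the NaN positions entirely. Below is a rough sketch, not my real model: N_obs and obs_idx are made-up data names, and I have guessed the diagonal of T as 1/lambda just to keep the sketch self-contained.

data {
  int<lower=1> T_max;
  int<lower=1> M_max;
  int<lower=1> J_max;
  matrix[T_max, M_max] P;                           // NaN entries are never touched below
  matrix[M_max, J_max] S[T_max];
  int<lower=0, upper=M_max> N_obs[T_max];           // number of non-NaN entries in P[t]
  int<lower=1, upper=M_max> obs_idx[T_max, M_max];  // their positions; unused tail slots padded with 1
}
parameters {
  real<lower=0> lambda;
  matrix[T_max, J_max] r;
}
model {
  lambda ~ gamma(.001, .001);
  for (t in 1:T_max) {
    vector[M_max] mu;
    mu = S[t] * exp(r[t])';                         // NaN rows of S[t] only reach unused entries of mu
    for (n in 1:N_obs[t]) {
      int m;
      m = obs_idx[t, n];
      P[t, m] ~ normal(mu[m], sqrt(1 / lambda));    // diagonal T guessed as (1/lambda) * I
    }
  }
}

If Stan objects to NaNs in the data, padding the unused entries of P and S with 0 should be harmless here, since those positions never enter the likelihood.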
My second question: unfortunately, the matrix S ends up containing 295,350,000 entries, and running the model allocates about 60 GB of RAM. If I try to use multiple cores, I get a pickling error about the 'i' type exceeding some threshold, which I think is due to this array being too large. If I could replace the NaN entries with 0, I could use a sparse matrix, which would cut down on memory usage since a lot of entries are NaN; but barring that solution, is there any way to get around this size constraint?
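To make the sparse idea concrete, what I was imagining is passing only the non-NaN rows of S in long format, with index arrays recording where each row belongs. Again a sketch with made-up names (N_rows, row_t, row_m, S_obs, P_obs), and the same guessed 1/lambda diagonal as above:

data {
  int<lower=1> T_max;
  int<lower=1> M_max;
  int<lower=1> J_max;
  int<lower=1> N_rows;                       // total number of observed (t, m) pairs
  int<lower=1, upper=T_max> row_t[N_rows];   // which t each stored row of S came from
  int<lower=1, upper=M_max> row_m[N_rows];   // which entry of P[t] it predicts (needed if T's diagonal varies by m)
  matrix[N_rows, J_max] S_obs;               // only the non-NaN rows of S, stacked
  vector[N_rows] P_obs;                      // the matching observed entries of P
}
parameters {
  real<lower=0> lambda;
  matrix[T_max, J_max] r;
}
model {
  lambda ~ gamma(.001, .001);
  for (n in 1:N_rows) {
    P_obs[n] ~ normal(S_obs[n] * exp(r[row_t[n]])', sqrt(1 / lambda));
  }
}

Since a lot of S is NaN, this would store only the observed fraction of the 295,350,000 entries, and the smaller arrays might also stay under the pickling limit.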