Thanks a lot for the quick help, Dr. Goodrich!
1) About a month
That's quite depressing...
I did a test run with 1% of the data and with 4 cores. It took about 7 minutes.
2) I would start with stan_lmer(Value ~ 1 + pulse + (1|Sbj)+(1|Path), data=dat). You don't need 32 chains. That will run reasonably quickly.
I did that as a test run. However, I would like to say something about the 'pulse' effect for each path, which was why I added the random slope for 'pulse' in my original model:
fm <- stan_lmer(Value ~ 1 + pulse + (1|Sbj) + (1+pulse|Path), data=dat, chains=32)
In other words, I'd like to have something like the following in the output:
mean sd 2.5% 25% 50% 75% 97.5%
b[pulse Path:Seed_10] .....................................................
Please correct me if I'm wrong.
So, with my original model, is there no hope of getting it done within a realistic time frame?
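To make concrete what I'm hoping to get out of the fit, here is a sketch (assuming the full model above eventually finishes sampling) of how I understand the per-path pulse effects could be pulled out with rstanarm's summary and extraction methods; the regex pattern is just my guess at the parameter naming:

```r
# Sketch: summarize only the path-specific pulse deviations,
# e.g. b[pulse Path:Seed_10], from the fitted model 'fm'
library(rstanarm)

summary(fm,
        regex_pars = "^b\\[pulse Path:",
        probs = c(0.025, 0.25, 0.5, 0.75, 0.975))

# Or work with the posterior draws / intervals directly
draws <- as.matrix(fm, regex_pars = "^b\\[pulse Path:")
posterior_interval(fm, regex_pars = "^b\\[pulse Path:", prob = 0.95)
```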
3) No one besides you knows enough about this data-generating process to say, but the only one that matters is on the standard deviation of the intercept shifts across levels of Path.
Is that specified through "prior_covariance"? What would be a good choice other than the default (normal?)?
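To make the question concrete, is something like the following what you mean? (decov() is, as I understand it, rstanarm's covariance prior for the group-level terms; the argument values below are purely illustrative, not a recommendation.)

```r
# Sketch: passing a prior on the group-level (co)variance structure
# via prior_covariance; values are illustrative only
library(rstanarm)

fm2 <- stan_lmer(Value ~ 1 + pulse + (1|Sbj) + (1|Path),
                 data = dat,
                 prior_covariance = decov(regularization = 2, scale = 0.5))
```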
4) I would not divide variables by their empirical standard deviations. pulse is centered internally and shifted back in the output, so you don't have to worry about that, and the prior on pulse is by default already a function of its standard deviation. Only divide by constants so that the parameters have reasonable units.
Thanks for the clarification!
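Just to confirm my understanding of "divide by constants": something like rescaling pulse by a fixed, meaningful unit rather than its sample SD (the unit conversion below is hypothetical, just for illustration):

```r
# Sketch: rescale by a fixed constant, not the empirical SD,
# so the slope is per chosen unit (hypothetical unit conversion)
dat$pulse_s <- dat$pulse / 1000

fm3 <- stan_lmer(Value ~ 1 + pulse_s + (1|Sbj) + (1+pulse_s|Path),
                 data = dat)
```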