I have a situation where I’m using a LOT of data (tens of thousands of survey responses) and I want to keep track of an intermediate quantity, so I modified my model.
Instead of having something like this in the model block:
for (i in ...)
  data[i] / function(parameters[i]) ~ some_distribution();
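(To make that concrete, the original setup looked roughly like the sketch below. The names N, y, and theta, the exp() standing in for function(), and the normal distribution are just placeholders, not my actual model.)

```stan
functions {
  // stand-in for whatever function() of the parameters really is
  real f(real theta) {
    return exp(theta);
  }
}
data {
  int<lower=1> N;    // tens of thousands of survey responses
  vector[N] y;       // the response data
}
parameters {
  vector[N] theta;   // placeholder parameters feeding f()
}
model {
  theta ~ normal(0, 1);                    // placeholder prior
  for (i in 1:N)
    y[i] / f(theta[i]) ~ normal(0, 1);     // placeholder likelihood
}
```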
I have a transformed parameter:
for (i in ...) {
  intermediateval[i] = data[i] / function(parameters[i]);
}
then in the model block:
intermediateval ~ some_distribution();
The point is that I now have a big vector storing the intermediate values, and the sampling statement is vectorized.
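(Again as a rough sketch with the same placeholder names, the revised model looks something like this:)

```stan
functions {
  // same stand-in function as above
  real f(real theta) {
    return exp(theta);
  }
}
data {
  int<lower=1> N;
  vector[N] y;
}
parameters {
  vector[N] theta;
}
transformed parameters {
  // one entry per response; transformed parameters are saved to the output for every draw
  vector[N] intermediateval;
  for (i in 1:N)
    intermediateval[i] = y[i] / f(theta[i]);
}
model {
  theta ~ normal(0, 1);               // placeholder prior
  intermediateval ~ normal(0, 1);     // vectorized placeholder likelihood
}
```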
After making this change and running Stan, it takes quite a while (minutes?) before the first “Gradient evaluation took … seconds” message appears, and once that happens, sampling proceeds at roughly the same speed as before.
Is there some massive one-time calculation before sampling begins that would get substantially longer when I store this large intermediate vector?