Computation speed and data format (long vs wide)

Hi all,

I have a question that may be silly. I am working on an item response theory (IRT) model. I originally arranged my data in wide format. However, in order to deal with missing data, I rearranged them into long format. Everything else (e.g., priors) remains the same. I have the impression that the long format requires longer computation time than the wide format, even when there is no missing data. Is that to be expected? I did not explicitly estimate missing values; I just made use of all available data (similar to what full information maximum likelihood does).
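To make the reshaping concrete, here is roughly what I did, as a small pandas sketch with made-up data (not my actual dataset or model): melt the wide person-by-item table into one row per observed response and drop the missing entries.

```python
import numpy as np
import pandas as pd

# Made-up wide-format item responses: one row per person, one column per item,
# with NaN marking a missing response.
wide = pd.DataFrame(
    {"item1": [1, 0, 1], "item2": [0, np.nan, 1], "item3": [1, 1, np.nan]},
    index=pd.Index([1, 2, 3], name="person"),
)

# Melt to long format and keep only the observed person-item pairs,
# so the model uses all available data without imputing anything.
long = (
    wide.reset_index()
    .melt(id_vars="person", var_name="item", value_name="response")
    .dropna(subset=["response"])
)
print(long)
```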

Best,
Bobby

Hi, sorry for taking so long to reply.

There might be some slight inefficiencies, since with long format you sometimes cannot do vectorized (“bulk”) operations, which tend to be faster. But I wouldn’t expect that to be a big effect. If the difference is large, I would primarily suspect that something else actually changed between the models. It is also hard to judge without seeing the full model before and after converting to long format.
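To illustrate the kind of overhead I mean, here is a rough NumPy analogy (just the general idea, not your model): in wide form the linear predictor is one bulk matrix operation, while in long form each observation gathers its person and item parameters through index vectors, which is extra work even though the numbers come out the same.

```python
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_items = 1000, 20
theta = rng.normal(size=n_persons)   # person abilities
beta = rng.normal(size=n_items)      # item difficulties

# Wide-style: one bulk operation over the full person-by-item matrix.
eta_wide = theta[:, None] - beta[None, :]          # shape (n_persons, n_items)

# Long-style: per-observation indexing picks out one person and one item each.
person_idx = np.repeat(np.arange(n_persons), n_items)
item_idx = np.tile(np.arange(n_items), n_persons)
eta_long = theta[person_idx] - beta[item_idx]      # shape (n_persons * n_items,)

# Same values, different layout; the long version pays for the extra gathers.
assert np.allclose(eta_wide.ravel(), eta_long)
```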

Best of luck with your model!