I have a question concerning the possibility of conducting item analyses with the brms package as suggested by Sinharay (2003; an early version can be found here: http://www.ets.org/Media/Research/pdf/RR-03-34.pdf). In my work I am not only interested in developing a data-driven test model, but also in developing or optimizing tests by removing items that show substantial model misfit. Given these aims, I wondered whether (and how) it would be possible to conduct such Bayesian item analyses with brms.
I currently can’t look at the suggested approach in detail, but perhaps the discussion of Bayesian IRT models in https://arxiv.org/abs/1905.09501 could be a start.
Thank you for your suggestions. Briefly, the approach suggested by Sinharay (2003) is based on posterior predictive model checking. Item fit is examined by comparing the predicted vs. observed proportion of correct answers for every item; the same procedure can also be applied to the distribution of subjects’ test scores. My aim is to identify items for which the Rasch model does not hold. These items should be removed because their response behavior cannot be explained by the test model. Thus the aim is not to modify the model, but to identify the items that show model misfit given a theoretically motivated test model.
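A minimal sketch of how such a check might look in brms, assuming a long-format data frame `d` with columns `person`, `item`, and a binary `response` (the data frame and variable names are hypothetical, not from the original post). The Rasch model is fit with varying intercepts for persons and items, and the observed proportion correct per item is compared with replicated proportions from `posterior_predict()`:

```r
library(brms)

# Rasch model: varying intercepts for persons (abilities) and items (easiness)
fit <- brm(
  response ~ 1 + (1 | person) + (1 | item),
  family = bernoulli(),
  data = d
)

# Posterior predictive draws: one row per draw, one column per observation
yrep <- posterior_predict(fit)

# Observed proportion correct per item
obs_prop <- tapply(d$response, d$item, mean)

# Predicted proportion correct per item in each replicated data set
# (rows = items, columns = posterior draws)
pred_prop <- apply(yrep, 1, function(draw) tapply(draw, d$item, mean))

# Posterior predictive p-value per item: share of draws in which the
# replicated proportion correct exceeds the observed one; values close
# to 0 or 1 flag items whose responses the Rasch model fails to capture
ppp <- rowMeans(pred_prop > obs_prop)
```

Items with extreme posterior predictive p-values would then be candidates for removal. The proportion correct is only one possible discrepancy measure here; Sinharay (2003) discusses others, and test-score-based checks can be built the same way by aggregating `yrep` per person instead of per item.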