I have a paper under review that uses Stan for inference on a hierarchical model of both response time and accuracy in a speeded two-alternative forced-choice task. It’s been through a couple of rounds, but despite attempts at describing the model and posting both the data and code, the stats reviewer still doesn’t understand the idea of modelling both kinds of performance variables in the same model; they keep thinking we’re doing some sort of covariate adjustment. Does anyone feel sufficiently comfortable with hierarchical models for me to put your name forward as a more expert reviewer?
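For context, here is a toy illustration of what I mean by a joint model (this is *not* the actual hierarchical model in the paper, and the lognormal/Bernoulli choices and the `joint_loglik` name are just for exposition): both observation types contribute terms to one likelihood, so a single posterior is informed by both, rather than accuracy entering as a covariate on RT.

```python
import math

def joint_loglik(mu_rt, sigma_rt, p_correct, rts, accs):
    """Toy joint log-likelihood for one subject: lognormal response
    times plus Bernoulli accuracy. Both data types are modelled in
    the SAME likelihood; accuracy is not a covariate adjusting RT."""
    ll = 0.0
    for rt, acc in zip(rts, accs):
        # lognormal log-density for the response time
        z = (math.log(rt) - mu_rt) / sigma_rt
        ll += -math.log(rt * sigma_rt * math.sqrt(2.0 * math.pi)) - 0.5 * z * z
        # Bernoulli log-mass for the accuracy of the same trial
        ll += math.log(p_correct) if acc else math.log(1.0 - p_correct)
    return ll
```

In a hierarchical version, subject-level parameters (here `mu_rt`, `sigma_rt`, `p_correct`) would themselves get group-level priors, and in the real model they would typically be linked, e.g. through a shared process parameter, so each data type constrains the other.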
Since nobody has responded yet, I’ll give it a try, despite this not really being my area of expertise.
I understand your frustration. But I would approach it in a different way - if a stats reviewer doesn’t understand your model, can you expect the readership of your paper to understand it? My guess is that you can’t. Maybe you can do a better job of explaining what the model does - in particular, try to find a broadly accessible, non-stats-heavy description that lets people build intuition.
I am attaching an example (under review) of how I tried to give such an accessible explanation of a hierarchical linear model in the supplement of a paper. I tried to err on the side of explaining too much rather than too little. This is then followed by a section with the usual dense formulas.
excerpt_accessible.pdf (380.0 KB)
Since I’d like to do more reviewing, I’d be happy to serve as a reviewer, though my expertise is questionable :-)
Thanks! I actually ended up finding earlier papers describing the modeling approach, so I added this:
I actually think I disagree somewhat with the suggestion that the reviewer’s confusion means I need to write a better description. Acknowledging that I’m probably partially motivated to avoid more work, I do think there is an argument to be made that a written report is not the be-all and end-all of scientific communication, and that it should not be the aim of authors to ensure that every reader from every background achieves complete understanding of the material (n.b. I’m possibly extending your suggestion into a straw man here). This echoes @betanalpha’s recent tweet that folks need to be more comfortable outsourcing statistical expertise and working in teams of experts; the same goes for when you are a reader. I think it’s helpful to look to fields that are more mature, like physics, where team-based research is the norm, no one is expected to be a jack-of-all-trades, and complicated mathematical/statistical tools are published alongside more colloquially expressed reports. For the paper I have under review, the data and well-commented code are available for experts to delve into, and it would have distracted from the primary results to insert a full tutorial into the manuscript.
There obviously is a trade-off, and you are definitely free to choose one that suits you :-) My personal experience (N=2) is that explaining the model in accessible prose is rewarded with very nice words from the practitioners who’ve read it. My general feeling is that we as a statistical profession are mostly terrible at communicating models. I often have trouble understanding what people did from the descriptions in their papers. Code helps, but not necessarily a lot, unless it is well written, commented, and structured, all of which are IMHO in short supply.
I further believe that part of the problem is that we insist on describing models in convoluted ways - huge joint likelihood formulas, anyone? Using Greek letters that then don’t match the code, having multiple names for the same thing and many names with multiple meanings, etc. I think there is a lot of low-hanging fruit where we could increase the number of people who understand us without much additional effort. But well, that’s a different topic - best of luck with the paper!