A quick post.
I recently had an interesting experience with the journal Neuropsychologia, which led to a personal decision that some of my colleagues will probably think is a bit rash (to which my answer is: hey, it's me, what do you expect?!).
We submitted a manuscript that related pre-existing biases in spatial perception to the effects of transcranial magnetic stimulation (TMS) on spatial perception performance. The results are interesting (we think), even though there are some 'weaknesses' in the data: one of the significant effects is reliable in itself but doesn't dissociate significantly from another condition that is non-significant. For this reason we were careful in our interpretation. The study was also reasonably well powered compared with other studies in the field.
The paper was eventually rejected after two rounds of review. Once the initial downer of being rejected had passed, I realised that the reason for the rejection was simple: it wasn't that our methodology was flawed or incomplete; it was that the data didn't meet the journal's standard of perfection.
This obsession with data perfection is one of the main reasons why we face a crisis of replicability and dodgy practices in psychology and cognitive neuroscience.
So after some consideration, I wrote to the action editor and the editor-in-chief and officially severed my relationship with the journal. The email is below. I'm a bit sad to do this because I've published with Neuropsychologia before, and reviewed for them many times -- and they have published some good work.
However, my gripe isn't with either of the editors personally, or even with the reviewers of our specific paper (on the contrary, I am extremely grateful for the time and effort everyone invested). My problem is with the culture of perfection itself.
For that reason I'm leaving Neuropsychologia behind and I urge you to do the same.
_________
Dear Jennifer and Mick,
I wanted to write to you concerning our rejected Neuropsychologia manuscript: NSY-D-12-00279R "The predictive nature of pseudoneglect for visual neglect: evidence from parietal theta burst stimulation".
Let me say at the outset that I am not seeking to challenge the decision. The reviewers make some excellent points and I'm very grateful for their considered assessment of the paper. I'm also grateful that you sought an additional review for us when the decision seemed to be a clear 'reject' based on the second review alone. That said, I would like to make a couple of comments.
First, the expectations of reviewers 2 and 3 about what a TMS study can achieve are fundamentally unrealistic. Indeed, it is precisely such unrealistic expectations for 'perfect' data that have created the file drawer problem and replicability crisis in psychology and cognitive neuroscience. It is also this pressure that encourages bad practices such as significance chasing, flexible data analyses, and cherry picking. All of the reviewers commented that our study was well designed, and it is manifestly well powered with 24 participants. If we had simply added another 10 subjects and shown 'cleaner' results, I wonder how many of the reviewers would have spotted the fatal flaw in doing so without correcting for data peeking. I suspect none.
Second, a number of the reviewers' comments are misplaced. For instance, in commenting on the fact that we found a reliable effect of right AG TMS but not left AG TMS on line bisection performance, Reviewer 3 notes that "One cannot state that two effects are statistically different if one is significant and the other is not. A direct comparison is necessary." This is true, but it is also a straw man: we never state (or require) that the effects of left AG and right AG TMS differ statistically from each other. Our predictions were relative to the Sham condition, and we focus our interpretation on those reliable effects. Similarly, Reviewer 2 challenges the categorisation of our participants into left and right deviants, noting the variable performance in the initial baseline condition. But this variation is expected, and we show with additional analyses that it cannot explain our results. Reviewer 2 simply disagrees, and that disagreement was apparently deemed sufficient grounds for rejection.
Overall, however, my main concern isn't with our specific paper (I am confident we will publish it elsewhere, for instance in PLoS One, where 'perfect' data are not expected). My real problem is that by rejecting papers based on imperfect results, Neuropsychologia reinforces bad scientific practice and promotes false discoveries. It worries me to think how many other Neuropsychologia submissions get rejected for similar reasons. As Uri Simonsohn and colleagues note in their recent Psychological Science paper on 'false-positive psychology', "Reviewers should be more tolerant of imperfections in results. One reason researchers exploit researcher degrees of freedom is the unreasonable expectation we often impose as reviewers for every data pattern to be (significantly) as predicted. Underpowered studies with perfect results are the ones that should invite extra scrutiny." (Simmons, Nelson, & Simonsohn, Psychol Sci. 2011 Nov;22(11):1359-66.)
Based on my previous experiences as both an author and reviewer for Neuropsychologia, I have long suspected that a culture of 'data perfection' dominates at the journal. In fact, I have to admit that, for me, the current submission served as a useful test of whether this culture would prevail for a study that is robust in design but has 'imperfect' (albeit statistically significant) results.
For this reason, my main purpose in writing is to inform you that I will no longer be submitting manuscripts to Neuropsychologia or reviewing for the journal, and I will be encouraging my colleagues to do the same. Please note that this is in no way a criticism of you personally; rather, it is a personal decision to oppose what I see as a culture in need of active reform. I felt I owed you the courtesy of letting you know.
best wishes, Chris
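_________
A brief technical footnote on the 'data peeking' point in the letter, for readers who haven't met it before. The sketch below is purely illustrative -- it isn't from the submitted study, and the one-sample t-test, starting sample of 24, and "add two more subjects and re-test" rule are my own arbitrary choices for the demo. It simulates studies in which there is no true effect, where the experimenter tests the data, adds a few more subjects if the result isn't significant, and stops as soon as p < .05.

```python
# Minimal sketch of how uncorrected 'data peeking' (optional stopping) inflates
# the false-positive rate when there is no true effect. Sample sizes and the
# peeking schedule are illustrative assumptions, not the study's actual design.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_simulations = 5000   # simulated 'studies' with no true effect
n_start = 24           # initial sample size
n_max = 34             # after 'adding another 10 subjects'
batch = 2              # peek again after every 2 extra subjects

false_positives = 0
for _ in range(n_simulations):
    data = rng.normal(0.0, 1.0, n_max)   # null data: the true effect is zero
    n = n_start
    while n <= n_max:
        p = stats.ttest_1samp(data[:n], 0.0).pvalue
        if p < 0.05:                     # stop and 'publish' at the first significant peek
            false_positives += 1
            break
        n += batch

print(f"Nominal alpha: 0.05, observed false-positive rate: "
      f"{false_positives / n_simulations:.3f}")
```

Even under the null, this stopping rule pushes the false-positive rate noticeably above the nominal 5%. The exact figure depends on how often you peek, but the direction is the point: every uncorrected look at the data buys another chance at a false positive, which is why 'just adding 10 more subjects' to clean up a result is not the harmless fix it appears to be.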