tl;dr – all future post-doctoral vacancies in my lab will require candidates
to show a track record in open science practices in order to be shortlisted for interview. This applies to two posts I am currently advertising, and to all such positions henceforth.
Twitter never ceases to amaze me. The other day I posted a fairly typical complaint about publication bias, which I expected to be ignored, but instead it all went a bit berserk. Many psychologists (and other scientists) are seriously pissed off about this problem, as well they should be.
My tweets were based on a manuscript we just had rejected from the Journal of Experimental Psychology: Applied because the results were convincingly negative in one experiment, and positive but “lacked novelty” in the other. Otherwise our manuscript was fine – we were complimented for tackling an important question, using a rigorous method, and including a thorough analysis.
But, of course, we all know that good theory and methodology are not enough to get published in many journals. In the game of academic publishing, robust methods are no substitute for great results.
The whole experience is both teeth-grindingly frustrating and tediously unremarkable, and it reminds us of three home truths:
1) That this can happen in 2016 shows how the reproducibility movement still exists in an echo chamber that has yet to penetrate the hermetically sealed brains of many journal editors.
2) To get published in the journals that psychologists read the most, you need positive and novel results.
3) This is why psychologists p-hack, HARK and selectively publish experiments that “work”.
So what, I hear you cry. We’ve heard it all before. We’ve all had papers rejected for stupid reasons. Get over it, get over yourself, and get back to cranking the handle.
Not just yet. First I want to make a simple point: this can’t be explained away as a “cultural problem”. Whenever someone says publication bias is a cultural problem, all they are really saying is, “it’s not my problem”. Apparently we are all sitting around the great Ouija board of Academia, fingers on the glass, and watching the glass make stupid decisions. But of course, nobody is responsible – the glass just moved by itself!
Publication bias isn’t a cultural problem; it is widespread malpractice by senior, privileged individuals, just as Ted Sterling defined it back in 1959. Rejecting a paper based on results is a conscious choice made by an editor who has a duty to be informed about the state of our field. It is a choice that damages science and scientists. It is a choice that punishes honesty, incentivizes dishonesty and hinders reproducibility.
I’m a journal editor myself. Were I to reject a paper because of the results of the authors’ hypothesis tests, I would not deserve to hold such a position. Rejecting papers based on results is deliberate bias, and deliberate bias – especially by those in privileged positions – is malpractice.
How to change incentives
Malpractice it may be, but publication bias is acceptable malpractice to many researchers, so how do we shift the incentives to eliminate it?
Here are just three initiatives I’m part of which are helping to incentivize open practices and eliminate bias:
Registered Reports: many journals now offer an article format in which peer review happens before data collection and analysis. High-quality study protocols are then accepted before research outcomes are known, which eliminates publication bias and prevents many forms of research bias. To date, more than 20 journals have joined the Registered Reports programme, with the first ‘high-impact’ journal coming on board later this year.
TOP guidelines: more than 500 journals and 50 organisations have agreed to review their adherence to a series of modular standards for transparency and reproducibility in published research. For background, see our TOP introductory article.
PRO initiative: led by Richard Morey of Cardiff University (follow him), this grassroots campaign calls for peer reviewers to withhold comprehensive review of papers that either fail to archive study data and materials, or fail to provide a public reason for not archiving. You can read our paper about the PRO initiative here at Royal Society Open Science. If you want to see open practices become the norm, then sign PRO.
Registered Reports, TOP and PRO are much needed, but they aren’t enough on their own because they only tackle the demand side, not the supply side. So I’m going to add another personal initiative, following in the (pioneering) footsteps of Felix Schönbrodt.
Hiring practices
If we’re serious about research transparency, we need to start rewarding transparent research practices at the point where jobs and grants are awarded. This means senior researchers need to step up and make a commitment.
Here is my commitment. From this day forward, all post-doctoral job vacancies in my research group, on grants where I am the principal investigator, will be offered only to candidates with a proven track record in open science – one which can be evidenced by having pre-registered a study protocol, or by having publicly archived data / materials / code at the point of manuscript publication.
This isn’t me blowing smoke in the hope that I’ll get some funding one day to try such a policy. I’m lucky enough to have funding right now, so I’m starting this today.
I am currently advertising two four-year, full-time post-doctoral positions on my European Research Council Consolidator grant. The adverts are here and here. Both job specifications include the following essential criterion: “Knowledge of, and experience applying, Open Science practices, including public data archiving and/or study pre-registration.” Putting this in the essential criteria means I won’t be shortlisting anyone who hasn’t done at least some open science.
Now, before we go any further, let’s deal with the straw man that certain critics are no doubt already building. This policy doesn’t mean that every paper published by an applicant has to be pre-registered, or that every data set has to have been archived. It means the candidate must show at least one instance in which an open practice has been applied.
I also realise that many promising early-career scientists won’t have had the opportunity to adopt open practices, simply because they come from labs that follow the status quo. We all know labs like this; I used to work in a place surrounded by them (hell, I used to be one of them) – labs that chase glamour and status, or that just don’t care about openness. It’s not your fault if you’re stuck in one of these labs. Therefore I’ve set a closing date of April 30 to give interested candidates time to generate a track record in open science before applying. Maybe it’s time to test your powers of persuasion by convincing your PI to do something good for science over and above furthering their own career.
If you’re a PI like me, I humbly invite you to join me in adopting the same hiring policy. By acting collectively, we can ensure that a commitment to open science is rewarded as it should be.