Monday, 25 April 2016

The things you hate most about submitting manuscripts

A few days ago I asked the twittersphere what rubs people the wrong way when it comes to submitting manuscripts to peer reviewed academic journals. Oh let us count the ways. From the irritation of having to reformat references to fit some journal’s arbitrary style, to consigning figures and captions to the end of a submission as though it really is still 1988, to the pointlessness of cover letters where all you want to say is “Dear Editor, here is our paper” but feel the need to throw in some gumpf about how amazing your results are. (Hint: aside from when the cover letter has a specific purpose, such as summarising a response to reviewers or conveying vital information about a key issue, I can tell you that a lot of editors -- maybe most -- ignore this piece of puffery).

The tweet proved a lot more popular than I expected and for a good two days you could see a steam of delicious rage rising from my timeline. 

I had an ulterior motive in seeking out this information from your good selves. As most of you will know, one of my aims is to help improve the transparency and reproducibility of published research, and one of the journals I edit for is working through its (future) adoption of the new Transparency and Openness Promotion (TOP) guidelines. The TOP guidelines are a self-certification scheme in which journals voluntarily report their level of policy compliance with a series of transparency standards, such as data sharing, pre-registration, and so forth. TOP is currently endorsed by over 500 journals and promises to make the degree of transparency adopted by journals itself more transparent. I guess you could call this "meta-transparency".

Now, in putting together our TOP policy at this journal at which I serve, we realised that it involves the addition of some new submission bureaucracy for authors. There will be a page of TOP guidelines to read beforehand and a 5-minute checklist to complete when actually submitting. We realise extra forms and guidelines are annoying for authors, so at the same time as introducing TOP we are going to strive to cut as much of the other (far less important) shit as possible. 

Here are the things you hated the most, and your most popular recommendations. For fun, I calculated an extremely silly and invalid score for each one, adding up RTs, favourites and the number of independent mentions of the specific point:

1. Abolish trivial house style requirements, including stipulations on figure dimensions & image file types (especially for the initial submission), as well as arbitrary house referencing and in-text citation styles. This was by far the most popular response. (score 112)

2. Allow in-text figures and tables according to their natural position until the very final stage of submission. (score 61)

3. Abolish all unnecessary duplication of information about the manuscript (e.g. word count, keywords), main author details and (most especially) co-author contact details that are otherwise given on the title page or could be calculated automatically; abolish any requirement to include postal addresses of co-authors at least until the final stage (affiliation and email address should be sufficient, and should be readable from the title page without requiring additional form completion); eliminate fax numbers altogether because, seriously, WTF are those fossils doing there anyway. (score 50)

4. Abolish requirement for submissions to be in MS Word format only. (score 36)

5. Abolish endnotes and either replace with footnotes or cut both. (score 33)

6. Allow submission of LaTeX files. (score 29)

7. Allow submission of single integrated PDF until the final stage of acceptance. (score 27)

8. Abolish cover letters for initial submissions. (score 21)

9. Abolish the Highlights section altogether because 
* Highlights are Stupid 
* Everyone knows Highlights are Stupid
* I can't think of anything else to say here, so I'll just repeat the conclusion that Highlights are Stupid (score 18) 

10. Remove maximum limits on the number of cited references. (score 7) 

11. Abolish the requirement for authors to recommend reviewers. (score 7) 

12. Increase speed of user interface. (score 6)
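For the record, the tally behind each score is nothing more sophisticated than this kind of back-of-envelope sum. Here is a toy Python sketch of it; the suggestion names and interaction counts below are invented for illustration, not the real tweet data:

```python
# Toy sketch of the "extremely silly and invalid" score:
# retweets + favourites + independent mentions, all weighted equally.
# Names and counts are made up purely for illustration.

def silly_score(retweets, favourites, mentions):
    """Add up every kind of interaction with no weighting whatsoever."""
    return retweets + favourites + mentions

suggestions = {
    "abolish house style rules": (60, 40, 12),
    "allow in-text figures/tables": (30, 25, 6),
}

# Rank suggestions from highest to lowest score.
ranked = sorted(
    ((name, silly_score(*counts)) for name, counts in suggestions.items()),
    key=lambda item: item[1],
    reverse=True,
)

for name, score in ranked:
    print(f"{name}: {score}")
```

As the post says, this is not a valid metric of anything; it just gives a rough sense of which gripes resonated most.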

Not all of these apply to our journal, but we’ll try and improve on the things that do, and which we can change. 

Oh, and lucky number 13, which actually scored the same as abolishing cover letters, goes to Sanjay Srivastava: “Getting rejected, can you do away with that?” Alas that is beyond my current lowly powers, although... cough... I am getting there.*


* Shameless plug alert: At one journal I edit for (Cortex), submitting a pre-registered article called a Registered Report greatly increases your chances of being published. The rejection rate for standard (unregistered) research reports? Just over 90%. The rejection rate for the 50% of Registered Reports that pass editorial triage and proceed to in-depth Stage 1 peer review? About 10%.

The reason the rejection rate is so low for Registered Reports isn’t because our standards are any lower (if anything they are higher, in my opinion) but because the format attracts particularly good submissions and also gives authors the opportunity to address reviewer criticisms of their experimental design before they do their research – a point made by Dorothy Bishop, who recently published an excellent Registered Report with Hannah Hobson.

Thursday, 7 April 2016

So you've been scooped

It’s the moment every junior researcher dreads – and more than a few senior ones too. You’re on the verge of submitting that amazing paper describing a new and exciting finding, or a hot new method, and someone beats you to the post.

That sinking feeling when you read the abstract in a zeitgeist journal announcing that “Here we show for the first time…” followed by something achingly similar to what you have done. The rug has been ripped out. You’ve been cruelly gazumped, with nothing left but doubts and self-recriminations. They will get all the credit and nobody will care what you did. You’ll be seen as some lame copycat following in their illustrious slipstream, even though you conceived your idea long before they published theirs. If only you’d worked harder. Worked more Sundays instead of spending time with family or friends. Written faster. Spent less time on Twitter. And the worst part is you had no clue that you were about to be gazumped. You’ve been blindsided.

The chances are, if you work in a busy or popular area using techniques that are widely available, this is going to happen to you at some point. And I’m going to try to convince you that unless your research falls within a very narrow set of parameters, it doesn’t matter. Not one bit.

It really doesn’t. Despite all the feelings of frustration and disappointment it provokes, this is all in your head. It is your own ego screaming into the void. On the contrary, there are several positive sides to being “scooped”. (Note that by “scooped” I mean the kind of inadvertent gazumping that can happen when multiple researchers work independently but in parallel – I am not referring to the deliberate theft of ideas, which is extremely rare if it happens at all.)

Here are some tips for junior researchers on how to come to grips with being scooped and why you shouldn’t feel so bad.

1.    It means you are doing something other people care about. Getting scooped is a sign that your research is important and that you are probably asking the right questions. If someone finds something similar to what you found, it also adds to the convergent validity of your methods and suggests you may be doing work that is reproducible. Note: the reverse is not the case – just because you never get scooped doesn’t mean your research is unimportant. You might have cornered the market in a particular technique, or the field might be small, or your approach might be unusual or specialised in some other way.

2.    Being first isn’t necessarily a sign of being a good scientist. Why? Because many initial discoveries are wrong or overclaimed. As a post-doc, I was the “first” to show that TMS of the right inferior frontal cortex can impair response inhibition in healthy people. So what? Does that make my methods or results more convincing, or any better than later convergent findings? Does it make me a better scientist? Nope, nope, nope. If anything, my paper is weaker because it overclaimed. When my co-authors and I wrote it we knew we were the first to report this particular effect, so we aimed “high” with journals and over-egged the pudding. We initially submitted it to a bunch of zeitgeist journals where it was predictably rejected, one after another (after all, we were only repeating what had already been concluded on the basis of brain injury). The spin remained, though, even after it found its way into a specialist journal, and on the basis of the results we claimed evidence for a selective role of the IFG in response inhibition. We were wrong, as we and others later discovered – the original results turned out to be repeatable but our explanation was trite and erroneous.

3.    Most senior scientists know this. Many PIs – me included – are sceptical of researchers who claim to be the first to show something. For one thing it is almost never the case; the vast majority of science is a process of derivative, incremental advance, despite whatever spin the authors cake their abstracts in. When I’m assessing fellowship applications or job applications by junior researchers, the type of questions I’m asking are: is this research important either to theory or applications? Is it robust, feasible and transparent? Is the applicant an excellent communicator? I am not asking whether they were the first at making previous claims. I couldn’t care less. Knowing what I do about statistics and research culture, I know that s/he who claims they are first most likely did a small study, did not take the time to replicate their findings, fell prey to research bias, benefited from publication bias, and probably exaggerated the implications. Are these attractive characteristics in a scientist?

4.    In the vast majority of cases you don’t show you are a brilliant scientist or intellectual force by being the first to claim something. You prove your mettle by shaping the theoretical landscape in which everyone works. You set the scene, in one of two ways. One way is by accruing a coherent body of important and credible work that changes the way people think about a topic (and not just by publishing a long list of glamour publications, but through the transparent accumulation of knowledge). Or, you construct a robust and falsifiable theory that could explain something better than all the other theories out there, and then set about trying to disconfirm it. If it is brilliant, others will try doing the same, and if nobody can disconfirm it then you've probably discovered something for real.

5.    There are a few cases where being the first might matter and can have career benefits. If you’re the first to describe an amazing new technique, or the first to make a Nobel-level discovery then scooping might count. But how many of us fall into that category? 0.0001%? The rest of us are labouring away in the trenches. Our discoveries are small and, frankly, none of us individually matter a great deal. Our value lies in our collective contribution as scientists. A large part of getting over being scooped is getting over yourself and realising that you are a small cog in a very big machine.

6.    Remember that what matters in science is the discovery, not the discoverer. That’s why the public pays your salary or stipend. When someone scoops you, it provides an opportunity for you to reflect on their findings in preparing your own paper. What can you learn from what they found, or from the data itself? If you have access to their data, can you perform a meta-analysis to aggregate evidence usefully between their study and your own? Might they be someone you could collaborate with on a future study to do something even bigger and better than either of you could do alone? Remember that in the quest to make discoveries, competition is for climbers and egomaniacs. Cooperation beats competition every time.

7.    Finally, if you really feel you have an idea for a study that is unique and you want to declaw the Scoop Monster, consider submitting it as a Registered Report. This might seem counterintuitive – after all, aren’t Registered Reports only for incremental research or replications? Aren't you risking being scooped by sharing your amazing idea with reviewers? Actually, you're more protected than you think, and Registered Reports are not limited to replications; they are simply an avenue for robust, transparent, hypothesis-driven research, and they can (and often do) describe novel ideas or critical tests of theory. Aside from all those benefits, Registered Reports offer something very simple that asserts intellectual primacy: when they are published, the date that the initial Stage 1 protocol was first received is published in the margin, right above all the other received and accepted dates. This means that if anyone publishes anything similar in the meantime, you will always be able to prove – if it really matters – that you had your idea before they published theirs. Plus your study will probably be three times the size and relatively bias-free.

Now, get back to sciencing (or chilling out) and leave the worrying about scooping to scientists who don't really understand how science works or why they are doing it.

Wednesday, 17 February 2016

My commitment to open science is valuing your commitment to open science

tl;dr – to be shortlisted for interview, all future post-doctoral vacancies in my lab will require candidates to show a track record in open science practices. This applies to two posts I am currently advertising, and to all such positions henceforth.

Twitter never ceases to amaze me. The other day I posted a fairly typical complaint about publication bias, which I expected to be ignored, but instead it all went a bit berserk. Many psychologists (and other scientists) are seriously pissed off about this problem, as well they should be.

My tweets were based on a manuscript we just had rejected from the Journal of Experimental Psychology: Applied because the results were convincingly negative in one experiment, and positive but “lacked novelty” in the other. Otherwise our manuscript was fine – we were complimented on it tackling an important question, using a rigorous method, and including a thorough analysis.

But, of course, we all know that good theory and methodology are not enough to get published in many journals. In the game of academic publishing, robust methods are no substitute for great results.

The whole experience is both teeth-grindingly frustrating and tediously unremarkable, and it reminds us of three home truths:

1) That this can happen in 2016 shows how the reproducibility movement still exists in an echo chamber that has yet to penetrate the hermetically sealed brains of many journal editors.
2) To get published in the journals that psychologists read the most, you need positive and novel results.
3) This is why psychologists p-hack, HARK and selectively publish experiments that “work”.

So what, I hear you cry. We’ve heard it all before. We’ve all had papers rejected for stupid reasons. Get over it, get over yourself, and get back to cranking the handle.

Not just yet. First I want to make a simple point: this can’t be explained away as a “cultural problem”. Whenever someone says publication bias is a cultural problem, all they are really saying is, “it’s not my problem”. Apparently we are all sitting around the great Ouija board of Academia, fingers on the glass, and watching the glass make stupid decisions. But of course, nobody is responsible – the glass just moved by itself!

Publication bias isn’t a cultural problem, it is widespread malpractice by senior, privileged individuals, just as Ted Sterling defined it back in 1959. Rejecting a paper based on results is a conscious choice made by an editor who has a duty to be informed about the state of our field. It is a choice that damages science and scientists. It is a choice that punishes honesty, incentivizes dishonesty and hinders reproducibility.

I’m a journal editor myself. Were I to reject a paper because of the results of the authors’ hypothesis tests, I would not deserve to hold such a position. Rejecting papers based on results is deliberate bias, and deliberate bias – especially by those in privileged positions – is malpractice. 

How to change incentives 

Malpractice it may be, but publication bias is acceptable malpractice to many researchers, so how do we shift the incentives to eliminate it?

Here are just three initiatives I’m part of which are helping to incentivize open practices and eliminate bias: 

Registered Reports: many journals now offer a format of article in which peer review happens before data collection and analysis. High quality study protocols are then accepted before research outcomes are known, which eliminates publication bias and prevents many forms of research bias. To date, more than 20 journals have joined the Registered Reports programme, with the first ‘high-impact’ journal coming on board later this year. 

TOP guidelines: more than 500 journals and 50 organisations have agreed to review their adherence to a series of modular standards for transparency and reproducibility in published research. For background, see our TOP introductory article. 

PRO initiative: led by Richard Morey of Cardiff University (follow him), this grassroots campaign calls for peer reviewers to withhold comprehensive review of papers that either fail to archive study data and materials, or which fail to provide a public reason for not archiving. You can read our paper about the PRO initiative here at Royal Society Open Science. If you want to see open practices become the norm, then sign PRO.

Registered Reports, TOP and PRO are much needed, but they aren’t enough on their own because they only tackle the demand side, not the supply side. So I’m going to add another personal initiative, following in the (pioneering) footsteps of Felix Schönbrodt. 

Hiring practices 

If we’re serious about research transparency, we need to start rewarding transparent research practices at the point where jobs and grants are awarded. This means senior researchers need to step up and make a commitment.

Here is my commitment. From this day forward, all post-doctoral job vacancies in my research group, on grants where I am the principal investigator, will be offered only to candidates with a proven track record in open science – one which can be evidenced by having pre-registered a study protocol, or by having publicly archived data / materials / code at the point of manuscript publication.

This isn’t me blowing smoke in the hope that I’ll get some funding one day to try such a policy. I’m lucky enough to have funding right now, so I’m starting this today.

I am currently advertising two four-year, full-time post-doctoral positions on my European Research Council Consolidator grant. The adverts are here and here. Both job specifications include the following essential criterion: “Knowledge of, and experience applying, Open Science practices, including public data archiving and/or study pre-registration.” By putting this in the essential criteria, I won’t be shortlisting anyone who hasn’t done at least some open science.

Now, before we go any further, let’s deal with the straw man that certain critics are no doubt already building. This policy doesn’t mean that every paper published by an applicant has to be pre-registered, or that every data set has to have been archived. It means that the candidate must be able to point to at least one instance where an open practice has been achieved.

I also realise many promising early-career scientists won’t have had the opportunity to adopt open practices, simply because they come from labs that follow the status quo. We all know labs like this; I used to work in a place surrounded by them (hell, I used to be one of them) – labs that chase glamour and status, or that just don't care about openness. It’s not your fault if you’re stuck in one of these labs. Therefore I’ve included a closing date of April 30 to give those so interested the time to generate a track record in open science before applying. Maybe it's time to test your powers of persuasion in convincing your PI to do something good for science over and above furthering their own career.

If you’re a PI like me, I humbly invite you to join me in adopting the same hiring policy. By acting collectively, we can ensure that a commitment to open science is rewarded as it should be.

Thursday, 26 November 2015

It's nice to be nice but it's more important to be honest

Science is hard, and if you're on the receiving end of criticism it can be especially hard. As scientists we need to have thick skins because we deal with harsh criticism every day - we are bombarded with critical comments from reviewers (usually anonymously) when they tear down our latest grant applications or papers. We get critical questions at conferences. We argue with our friends, colleagues, and people we don't even know. We disagree a lot. We get frustrated. It's fair to say that disagreement and frustration are hallmarks of the job.

As a junior scientist this can take some getting used to. Most of the time the disagreement is good-natured, but occasionally it can creep into the personal.

This morning we saw an example of this when a prominent study just published in PNAS drew some flak on Twitter. Small N, no replication, big story. Personally I saw it as just another day at the office -- just another unremarkable exemplar of the low empirical standards we set for ourselves in cognitive neuroscience. I realise that sounds harsh but that's just how I feel about it. We need to set higher standards, and step one is being publicly honest about our reactions to published work.

Our field is peppered with small studies pumped out by petty fiefdoms, each vying for a coveted spot in high impact journals so we can have careers and get tenure and maybe make a few discoveries along the way. It would be disingenuous to say that I'm any different. I've got my own fiefdom, just like the rest. It's no less petty; I am no better than anyone else.

When I look at fMRI studies like the one this morning, I see how far we need to come as a field. Does that sound arrogant? I don't care. I wrote about this recently because reproducibility is a huge problem in biomedical science and something a lot of people (but not enough) are working hard to fix. It is a bigger problem than anyone's ego, bigger than anyone's career.

Some folks get upset at the direct nature of post publication peer review. They might know the scientists involved; they might think they're careful; they might like them. And they might think such criticism is an attack on the integrity of the researchers -- that robust post-publication-peer-review, pointing out probable bias or low reproducibility, is tantamount to an accusation of misconduct. 

This is false because questionable practices aren't the same as fraud, and bias isn't the same as misconduct. Much, if not most, research bias happens unconsciously. It can and does distort our results despite our best efforts because we're humans rather than robots. I believe many in our community are not only blind to unconscious bias, they're blind to the possibility of unconscious bias. They think that because they're careful, their studies are robust. But once you know the extent of your own bias it changes your mindset in a deep way. We learned this in my lab some time ago, which is why we now pre-register our studies.

Twitter is a great social leveller, allowing all kinds of voices to be heard. This is tremendous for science because it adds a layer of immediacy and diversity to peer review that busts conventions and blows traditional (stuffy) forms of interaction, and traditional hierarchies, right out the window. 

So while I agree with the sentiment that it's nice to be nice to each other, I believe it's even more important to be honest. If you wave what I see as bullshit in my face I will probably call it bullshit, and I expect you to do the same to me. In fact I expect you to do the same for me because by being honest you are doing me a favour.

Thursday, 30 July 2015

Be sure to have your say on the future of the Guardian Science blog network

It is one of the great privileges of my career to be a writer on the Guardian science blogs, where I contribute mainly to the psychology blog Head Quarters. It's nearly two years since we launched Head Quarters, and it has been great fun for all of us on the writing team (Thalia, Pete, Molly, and me) and, most importantly, I hope readers have enjoyed our posts.

The science blogs overall have been a great success for the Guardian, and we're now entering an interesting period where the structure of the blog network is being reviewed and some aspects may be revised. We now need your input as readers to ensure that any changes we make are the right ones, so if you read any of the science blogs, please have your say by completing our readers' survey.

Next year we will also be launching an exciting citizen science platform, fronted by a new section of the Guardian tentatively called "Guardian Experiments". The platform will provide a workspace for hosting large-scale online research studies, including (for instance) psychology experiments, polls, and citizen science initiatives. We'll provide regular updates on this initiative, and in the meantime you can read about one of the initial projects we plan to launch on the platform here.

Tuesday, 16 June 2015

The first rule of Tim Hunt is…

I see a lot of people at the moment saying we should stop talking about the Tim Hunt affair and focus on the Real Issues facing women in science. As though condescending arseholes at the top of the profession aren't one of those issues.

Even Brian Cox is doing it. Lucky I don’t idolise anyone or my illusions might just be shattered.

All of you saying we should move on, or that the response to Hunt was “disproportionate” (if I never hear that word again it will be too soon) need to take a good hard look.

Many of us will only “move on” from Tim Hunt as soon as there is a serious recognition that Hunt’s remarks at the WCSJ were serious and damaging enough to warrant the sanctions that have been applied. Spare me the world’s smallest violin, but a white male professor FRS Nobel Laureate having an unremunerated honorary position taken away together with a couple of positions of influence on the ERC and Royal Society does not an excommunication make. I don’t want to hear any more self-pitying bullshit about him being “hung out to dry” or “removed from society”.

I haven’t said much about the Tim Hunt affair. To be honest I’ve been busy listening to the reactions from others, particularly women in science. And as a privileged white male professor at a leading UK university I honestly don’t feel that my opinion counts for much. But I do have one, and for what it’s worth here it is:

1 – Hunt’s comments were unacceptable and stupid. He has yet to offer a full apology, which just shows how little recognition he has of sexism in science. Oh but he's old, right, so that's ok? Fuck that. My dad is the same age as Hunt, has one less Nobel prize, grew up in 1950s Australia (AKA Betty Crocker Central) and could teach him a thing or two about equality.

2 – the prompted resignations from UCL, Royal Society, and the ERC were appropriate. Some have criticised them for being too quick. Bullshit. They were fast because the case was clear. They did the right thing and I applaud them.

3 – there has been no witch hunt, no lynch mob, no burnings or beheadings. Just people, including lots of women scientists, expressing their displeasure with Tim Hunt’s comments on social media. And often with great humour.

4 – I am deeply disappointed by some of the defences of Hunt emerging from various Establishment figures, publicly and in private. A lot of these defences are being expressed behind the scenes and consist of “He’s a nice guy; he has no media training and was lost at sea; I’ve never seen any evidence of him behaving in a sexist manner so everything is fine”. Many of these people are sending these messages in the hope that the recipients will use their influence to defend him on their behalf. Stop it. If you want to defend Tim Hunt, at least have the spine to do it yourself.

5 – To those calling for more evidence of wrongdoing before "condemning" Hunt, just stop. The comments are evidence enough that he is not fit to hold ambassadorial roles in science. Being a great scientist does not justify being a purveyor of 1950s sexism.

6 – Those telling us to move on or pay attention to something else would do well to examine the privilege of their own vantage point. Why exactly do you want to move this debate on so quickly? And here's some fun Bingo to play while you’re at it.

7 – We are all sexist. I know I am because I was raised in 1980s Melbourne surrounded by gender stereotypes and it is an ongoing battle combatting these in work and life. Avoiding benevolent sexism is particularly challenging. I will be working hard to teach my 9-month old son to fight these stereotypes as he grows up, rather than accept them as I did.

8 – Fuck off, Boris Johnson. You tedious populist fart. There, that was easy.

9 – Athene Donald has published a fantastic list of actions we can all take to further the cause of women in science. My only proviso is that she predicates it all on a very shaky defence of Hunt, who is clearly her friend. But the list is excellent and I’ve reproduced it below without the unnecessary "Hunt is a really nice guy" baggage:

  • Call out bad behaviour whenever and wherever you see it – in committees or in the street. Don’t leave women to be victimised;

  • Encourage women to dare, to take risks;

  • Act as a sponsor or mentor (if you are just setting out there will still always be people younger than you, including school children, for whom you can act);

  • Don’t let team members get away with demeaning behaviour, objectifying women or acting to exclude anyone;

  • Seek out and remove microinequities wherever you spot them;

  • Refuse to serve on single sex panels or at conferences without an appropriate level of female invited speakers;

  • Consider the imagery in your department and ensure it represents a diverse group of individuals;

  • Consider the daily working environment to see if anything inappropriate is lurking. If so, do something about it;

  • Demand/require mandatory unconscious bias training, in particular for appointment and promotion panels;

  • Call out teachers who tell girls they can’t/shouldn’t do maths, physics etc;

  • Don’t let the bold (male or female) monopolise the conversation in the classroom or the apparatus in the laboratory, at the expense of the timid (female or male);

  • Ask schools about their progression rates for girls into the traditionally male subjects at A level (or indeed, the traditionally female subjects for boys);

  • Nominate women for prizes, fellowships etc;

  • Tap women on the shoulder to encourage them to apply for opportunities they otherwise would be unaware of or feel they were not qualified for;

  • Move the dialogue on from part-time working equates to ‘isn’t serious’ to part-time working means balancing different demands;

  • Recognize the importance of family (and even love) for men and women;

  • Be prepared to be a visible role model;

  • Gather evidence, data and anecdote, to provide ammunition for management to change;

  • Listen and act if a woman starts hinting there are problems, don’t be dismissive because it makes you uncomfortable;

  • Think broadly when asked to make suggestions of names for any position or role.


Monday, 30 March 2015

Why I am resigning from the PLOS ONE editorial board

Today I tendered my resignation as an Academic Editor at PLOS ONE. 

It's a slightly sad day for me. As I explained to Damian Pattinson in an email, I remain as much a supporter of the PLOS ONE mission as when I joined the editorial board over two years ago. PLOS ONE has done more than any other journal to combat publication bias and to normalise open data practices. Sure, the PLOS ONE mechanism doesn't always work perfectly, but in terms of philosophy it is light years ahead of most other journals in the social and life sciences.

The reason I'm leaving PLOS ONE isn't because they did anything wrong (although it must be said that the volume of editorial requests is unfeasibly high). Instead, the Registered Reports initiative is really starting to gain traction and I increasingly find myself helping other journals launch the initiative, or serving on editorial boards that offer the format. So I have decided to focus my efforts on editing for journals that offer, or plan to offer, Registered Reports. For now, at least, PLOS ONE isn't willing or able to do so. Meanwhile, the list of adopting journals continues to grow; the latest exciting addition is Royal Society Open Science, which will be launching Registered Reports across all sciences later this year.

In the interests of transparency I should say that, for the same reason that I am leaving PLOS ONE, I also declined last week to join the editorial board of Nature Scientific Reports. Upon being invited to join their editorial board, I responded that I would be happy to do so if they would consider offering Registered Reports, and that I would be delighted to help them set up the format. I had hoped their response might be positive given the stated mission of the journal to avoid setting "a threshold of perceived importance to the papers that it publishes; rather, it publishes all papers that are judged to be technically valid." Unfortunately they responded: "We have considered venturing into the world of registered reports, but it isn’t something we’re able to get involved with right now." 

Fair enough, but then I'm afraid I can't (in good conscience) join your editorial board. A growing number of journals claim to celebrate scientific validity and transparency above the standard values (like "novelty" and "impact" of findings) -- in fact, the banner of transparency could almost be said to be in vogue right now -- but I find that the real litmus test is whether such journals are willing to accept papers before the results are known. If not then some small part of them still wants to selectively publish "good results". There is no room for fine print on the transparency banner.

What I have recently done is join the editorial board of Collabra, an interesting new open access journal being launched by the University of California Press. Collabra have agreed to offer Registered Reports, and we will be updating the Open Science Framework information hub for Registered Reports as soon as there is further news.

So - my thanks and a fond farewell to PLOS ONE. And my message to any other journals: if you want Chris Chambers on your editorial board (not that anyone really should of course!) then you need to either offer Registered Reports or plan to do so in the future. 

Trust me, you won't regret it.*


* Well, you won't regret offering Registered Reports. I, on the other hand, am an entirely different matter...