Monday, 11 November 2019

Guest post by Claudio Tennie: Why I am resigning as Associate Editor from Proceedings B today


The following is a guest post published at the request of my colleague, Dr Claudio Tennie, University of Tübingen.
 
I have always been fond of Royal Society Proceedings B. And yet, today I am resigning in protest as one of their Associate Editors. What happened? 
Earlier this year, a group of people, spearheaded by Dr. Ljerka Ostojic, approached Proceedings B with a well-argued request: that the journal should adopt Registered Reports. Yet, to our dismay, they declined to do so.
We are now all too aware of the replication crises unfolding in many fields. A lack of robust findings is not surprising; indeed, it is the logical outcome of the current system. To be blunt, this system actively selects for bad science. To explain (once again) how and why Registered Reports can drastically improve this situation, a comparison between science and car crash testing might be helpful.
It is safe to say that none of us would like to live in a world where all cars are advertised as having five-star crash test ratings but where, in reality, many should really rate zero stars. Yet this is exactly what we should expect in a world where crash test outcomes were measured by car makers and selectively reported by car sellers. Why? Because capitalistic forces would select both for invalid crash testing and for biased crash test reporting. Allowing Registered Reports is the logical equivalent of checking the crash test dummies before they are used in car crashes, and then publicising the outcomes of all crash tests.
Likewise, in science, we want to know which hypotheses find support and which do not, and we want to use the best methods to arrive at these conclusions. Currently, we often use suboptimal methods, which, alongside the inherent bias towards publishing positive findings, select for bad science. As a result, it is not even clear what proportion of the suspiciously large mountain of positive findings is valid. The current situation is absurd and truly unbearable – it wastes time, money and energy galore. We urgently need to change it.
Of course, an especially efficient policy is to properly check the crash test dummies before testing, and to publish all crash test results. Registered Reports create exactly this situation for science. In a Registered Report, the methods are properly checked before they are applied, and the eventual publication must report all results – and will be published regardless of the specific outcomes. While this does not mean that every study can be a Registered Report – there are exceptions to the rule (see the FAQ section here) – many should be. As a result of this simple and compelling logic, the number of journals adopting Registered Reports is constantly increasing.
I was therefore very disappointed to witness Proceedings B refuse to adopt Registered Reports. Moreover, they did so on the very unconvincing grounds that one of their sister journals (Royal Society Open Science) already allows them. The general problem persists with every (suitable) journal that refuses to allow Registered Reports. Proceedings B should adopt Registered Reports. Because they refuse to do so, I must protest. I am therefore resigning as an Associate Editor at Proceedings B.
Goodbye.

Claudio Tennie

Tuesday, 19 March 2019

The battle for reproducibility over storytelling in cognitive neuroscience


Here is my Twitter thread on our upcoming Discussion Forum on reproducibility in cognitive neuroscience at the journal Cortex. I've posted it to my blog because, weirdly, the thread appears broken on Twitter in some browsers (but not others!). To see it on Twitter, start here.

--------------

A late-night thread on reproducibility in cognitive neuroscience, including our upcoming series of (rather punchy) comment pieces at the journal Cortex. Gather round, all ye.

Here is my editorial introducing the seven commentaries. I’m going to move through each of them in turn; stick around to the end of the thread to hear about two new initiatives we’re launching this year in response /1

First up, Huber et al. report how they tried to replicate a study published in . After one editor had invited them beforehand to run & submit the study, a different editor then desk rejected it once the (non-replication) results were in. /2
 
Sidebar: we later published Huber et al.’s replication study at Cortex (thanks , we’re happy to help you out any time). You can read the paper here: /3

Good for them, but Huber & co believe the problem w/ replication in cog neurosci is deep & serious. They call for more stringent checks on reproducibility *before* publication & dynamic tracking of rep attempts & outcomes. Their full comment here: /4

Next, pushes back a little at the suggestion to select what gets published based on results, even when doing so is based on replicability. Instead he calls for a “pending replication” stamp to be placed on unverified exploratory studies /5

But wait...what about the tools we’re using? argues that the reliability of our research cannot exceed the reliability of the methods we employ. And in cognitive neuroscience this is poorly understood. It's not just about publication culture. /6

Nevertheless the often obstructive nature of peer review isn’t terribly helpful. weighs in to point out the value of adversarial collaborations for reducing bias & encouraging better theory, especially when submitted as Registered Reports /7

Do reforms to how science works take into account the scientists who DO the work – the early career researchers? & argue that unless reforms work for ECRs, they will fail. M&T suggest “replication & extension” as one solution /8

But it’s not all about incentives. calls for cognitive neuroscientists to rise above their egos and fallibilities, embrace error correction & champion reproducibility over reputation. And he is someone who practices what he preaches /9

In particular, you can read ’s recent Registered Report at Cortex where he tests the reproducibility of one of his own previous findings & concludes that the original result may be a false positive. Almost nobody ever does this in cogneuro. /10

And finally, , a former editor, takes on the newsroom culture of sci publishing. Huber et al.’s fixes will help but only superficially. To really fix these problems, he says, scientists need to take back control from publishers /11

Where does all this leave us? Cortex has been at the forefront of initiatives such as , Exploratory Reports, TOP guidelines & badges. But these are NOT enough and this year we’ll be launching two new initiatives. /12

The first is an Accountable Replications policy – ’s now famous "pottery barn rule" of publishing, which we recently introduced at Royal Society Open Science. In a nutshell: if Cortex published the original study, we’ll publish the replications of that study. /13

The second is an entirely new initiative, again the creation of : Verification Reports. Short articles with the sole purpose of testing the reproducibility & robustness of original studies using the exact SAME data. /14

These steps aren’t a total answer but they move us in the right direction. The recent launch of – together with the wide support the network is receiving from funders, publishers & regulators – means that reproducibility is going to be a Big Deal for many years. /15

That’s why cognitive neuroscientists need to be at the forefront of those discussions. And it’s why cog neurosci journals need to work harder to support reproducibility. That means adopting , Exploratory Reports, TOP guidelines, replication initiatives & more. /16

I will end this very long thread there! Hope you enjoy the articles (which are all available as preprints in the tweets above) and thanks to all the wonderful contributors for weighing in. Onward. /end

Wednesday, 19 December 2018

My manifesto as would-be editor of Psychological Science






**Update 8 Feb 2019: After my initial application, I'm happy to report that I've progressed to the next stage of consideration. Obviously I'm still a long way from the destination, and it will undoubtedly be an extremely competitive field of candidates, but I am one step closer. Special thanks to all the colleagues who took the time to support my nomination! I will update again when there are further developments.**

**Update 18 June 2019: I have just heard that the role has been offered to another candidate. A big thank you to everyone who supported me and to the APS for considering me. I wish the new editor -- whoever they are -- the very best of luck and would urge them to consider implementing at least some of the initiatives in my agenda below.**

This week I received a nice email informing me that I have been elected as a Fellow of the Association for Psychological Science. A warm thanks to whoever nominated me -- I have no idea who you are, but I appreciate your faith in me. 

In the spirit of using this position to achieve something meaningful, I have put myself forward for consideration as Editor-in-Chief of the journal Psychological Science. The turnover of the Journal's editorship offers the opportunity to elevate Psychological Science from being the flagship journal of the APS to becoming a global beacon for the most important, open and reliable research in psychology -- an example not just for other journals in psychology but for science as a whole.

On December 18, I submitted the following statement to the Search Committee:


I am a professor of psychology and cognitive neuroscience at the School of Psychology, Cardiff University (see here for homepage including basic CV). I currently serve as a senior section editor at six peer-reviewed academic journals: BMJ Open Science, Collabra: Psychology, Cortex, European Journal of Neuroscience, NeuroImage, and Royal Society Open Science. I previously served on the editorial boards of PLOS ONE and AIMS Neuroscience. Among other initiatives, I co-founded Registered Reports, the Transparency and Openness Promotion (TOP) guidelines, the Peer Reviewers’ Openness Initiative, and the accountable replications policy at the Royal Society. In total I have edited ~200 submissions, including ~120 Registered Reports. As a senior editor of Registered Reports, I am experienced at managing teams of editors. I am a fellow of the British Psychological Society (BPS) and was recently awarded fellowship of the Association for Psychological Science (member #119281). In 2007 I was awarded the BPS Spearman Medal, and in 2018 my book on the need for reform in psychology won the BPS Book Award (Best Academic Monograph category). As chief editor of Psychological Science, I would complete the important mission that Steve Lindsay began, implementing a range of policy reforms to maximise the quality and impact of research published in the Journal.


Steve Lindsay, and Eric Eich before him, have done a superb job introducing the APS and Psychological Science to the world of open science. I am standing for consideration as chief editor on a manifesto that will consolidate and extend the mission that they began.  

1. Full implementation of Registered Reports

Psychological Science currently offers a limited version of Registered Reports in which the format is available only for direct replications of selected previous studies published in the Journal. I will expand the format to offer full Registered Reports and I will appoint a dedicated Registered Reports editor. 

2. Registered Reports Funding Models

I will commence discussions with funding agencies to support Registered Reports grant models in which a Stage 1 Registered Report to Psychological Science is simultaneously assessed by the Journal and the funder, with successful proposals achieving provisional acceptance and funding support on the same day. 

3. Accountable Replications Policy

I will introduce an Accountable Replications Policy in which Psychological Science guarantees to publish any rigorous, methodologically sound replication of any previous study published in the Journal. This initiative will be similar to the policy I recently launched at Royal Society Open Science and the European Journal of Neuroscience. 

4. Exploratory Reports

Hypothesis-testing is just one way of doing science. I will introduce a new Exploratory Reports format, similar to the initiative I helped shape at Cortex, to provide a dedicated home for transparent exploratory research employing inductive or abductive methods. This format will focus on generating ideas and testable predictions for future studies. 

5. OSF Badges

I will review the Journal’s current policy concerning OSF Badges, seeking to raise standards for the awarding of the Open Data, Open Materials and Preregistered badges. I will appoint a dedicated Reproducibility Editor to oversee this review and the badges programme. 

6. TOP Guidelines

Psychological Science is a signatory of the Transparency and Openness Promotion (TOP) guidelines. I will implement the TOP guidelines at the Journal, achieving a minimum of Level 2 across all eight standards. Among other requirements, this will mean that all empirical articles must either make anonymised study data, analytic code, and digital study materials freely available in a publicly accessible repository, or the authors must explain in the article the legal and/or ethical barriers to archiving. The appointed Reproducibility Editor will oversee the implementation and compliance with TOP. 

7. Open Peer Review

I will implement a simple policy of open review in which all reviews and editorial decision letters are published alongside the corresponding articles, with the action editor identified and reviewers retaining the choice to either sign their reviews or remain anonymous. 

8. Verification Reports

I will launch a new ultra-short report format in which independent authors are given the opportunity to repeat and expand the analyses of original data in published articles in the Journal. The format will serve to verify or challenge the original authors’ conclusions and subject the results to robustness checks. 

I am standing for this role because I believe that psychology faces one of two possible futures. In one, we fail to reform our research culture and diminish. The legacy of psychology will eventually be forgotten, along with its enormous potential in understanding the mind and helping society. In an alternate future, we seize this moment -- right now -- and lead the way in placing quality and reproducibility at the heart of our scientific mission. I’m reminded of that signature episode of Star Trek Voyager when the Doctor says to Harry Kim, desperately trying to alter the timeline and save his crew: “Somebody has got to knuckle down and change history, and that somebody is you”.

If that somebody is you, then let’s do this together. Email editorsearch@psychologicalscience.org and support my nomination as the future editor of Psychological Science.

Tuesday, 6 November 2018