Saturday, 16 November 2013

Bringing study pre-registration home to roost


Earlier this year I committed my research group to pre-registering all studies in our recent BBSRC grant, which includes fMRI, TMS and TMS-fMRI studies of human cognitive control. We will also publicly share our raw data and analysis scripts, consistent with the principles of open science. As part of this commitment I’m glad to report that we have just published our first pre-registered study protocol at the Open Science Framework.

For those unfamiliar with study pre-registration, the rationale is simply this: that to prevent different forms of human bias creeping into hypothesis-testing we need to decide before starting our research what our hypotheses are and how we plan to test them. The best way to achieve this is to publicly state the research questions, hypotheses, outcome measures, and planned analyses in advance, accepting that anything we add or change after inspecting our data is by definition exploratory rather than pre-planned.

To many scientists (and non-scientists) this may seem like the bleeding obvious, but the truth is that the life sciences are suffering a crisis in which research that is purely exploratory and non-hypothesis-driven masquerades as hypothetico-deductive. That’s not to say that confirmatory (hypothesis-driven) research is necessarily worth any more than exploratory (non-hypothesis-driven) research. The point is that we need to be able to distinguish one from the other, otherwise we build a false certainty in the theories we produce. Psychology and cognitive neuroscience are woeful at making this distinction clear, in part because they ascribe such a low priority to purely exploratory research.

Pre-registration helps solve a number of specific problems inherent in our publishing culture, including p-hacking (covertly mining data for statistical significance) and HARKing (reinventing hypotheses to predict unexpected results). These practices are common in psychology because it is difficult to publish anything in ‘top journals’ where the main outcome is p > .05 or isn’t based on a clear hypothesis.

Evidence of such practices can be found in the literature and all around us. Just last week at the Society for Neuroscience conference in San Diego, I had at least three conversations where presenters at posters would say something like: “Look at this cool effect. We tested 8 subjects and it looked interesting, so we added another 8 and it became significant”. Violating stopping rules in this way is just one example of how we think like Bayesians while being tied to frequentist statistical methods that don’t allow us to do so. This bad marriage between thought and action endangers our ability to draw unbiased inferences and, without appropriate correction for Type I error, elevates the rate of false discoveries.
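
To make the cost of this practice concrete, here is a minimal simulation sketch (my own illustration with assumed numbers, not anything from those conversations) showing how testing, peeking, and adding more subjects inflates the false positive rate even when no true effect exists:

```python
# A minimal sketch of optional stopping under a true null effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims, alpha = 20_000, 0.05
fixed_hits = peek_hits = 0

for _ in range(n_sims):
    first = rng.normal(0, 1, 8)   # 8 subjects, true effect = 0
    extra = rng.normal(0, 1, 8)   # 8 more, added after peeking
    p1 = stats.ttest_1samp(first, 0).pvalue
    p2 = stats.ttest_1samp(np.concatenate([first, extra]), 0).pvalue
    fixed_hits += p2 < alpha                    # planned n = 16, test once
    peek_hits += (p1 < alpha) or (p2 < alpha)   # test, peek, add, re-test

print(f"Fixed n:           {fixed_hits / n_sims:.3f}")  # ~0.05, as advertised
print(f"Optional stopping: {peek_hits / n_sims:.3f}")   # roughly 0.08, > alpha
```

Even with a single unplanned peek, the nominal 5% error rate is quietly inflated; with repeated peeking it climbs far higher.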

In May, the journal Cortex launched a new format of article that attempts to solve these problems by incentivising pre-registration. Unlike conventional publishing models, Registered Reports are peer reviewed before authors conduct their experiments, and the journal offers provisional acceptance of final papers based solely on the proposed protocol. The model at Cortex not only prevents p-hacking and HARKing – it also solves problems caused by low statistical power, lack of data transparency, and publication bias. Similar initiatives have been launched or approved by several other journals, including Perspectives on Psychological Science, Attention, Perception, & Psychophysics, and Experimental Psychology. I’m glad to say that 10 other journals are currently considering similar formats, and so far no journal to my knowledge has decided against offering pre-registration.

In June, I wrote an open letter to the Guardian with Marcus Munafò and more than 80 of our colleagues who sit on editorial boards. Together we called for all journals in the life sciences to offer pre-registered article formats. The response to the article was overall neutral or positive, but as expected not everyone agreed. One of the most striking features of the negative responses to pre-registration was how often the critics targeted a version of pre-registration we did not propose. For instance, some felt that the Cortex model would prevent publication of serendipitous findings or exploratory analyses (it doesn't), that authors would be “locked” into publishing with Cortex (they aren’t), or that the model we proposed was intended to be mandatory or universal (it is explicitly neither). I would ask those who responded negatively to reconsider the details of the Cortex initiative, because we don’t disagree nearly as much as it seems. In regular seminars I give on Registered Reports at Cortex I include a 19-point list of FAQs and responses to these points, which you can read here. I will update this link regularly as new FAQs are added.

I believe we are in the early stages of a revolution in the way we do research – one not driven by pre-registration per se, and certainly not by me, but by the combination of converging future-oriented approaches, including emphasis on replication (and replicability), open science, open access publishing, and pre-registration. The pace of evolution in scientific practices has shifted up a gear. Clause 35 of the revised Declaration of Helsinki now explicitly requires some form of study pre-registration for medical research involving human participants. Although much work in psychology and cognitive neuroscience isn’t classed as ‘medical’, many of the major journals that publish basic research also ask authors to adhere to the Declaration, including the Journal of Neuroscience, Cerebral Cortex, and Psychological Science.

The revised Declaration of Helsinki has caused some concern among psychologists, and I should make it clear that those of us promoting pre-registration as a new option for journals had no role in formulating these revised ethical guidelines. However, we shouldn’t necessarily see them as a problem. There are many simple and non-bureaucratic ways to pre-register research (such as the OSF), even if the journal-based route is the only one that rewards authors with advance publication.

One valid point that has been made in this debate is that those of us who are promoting pre-registration should practice what we preach, even when there is no journal option currently available (and for me there isn’t another option because Cortex – where I am a section editor – is so far the only cognitive neuroscience journal offering pre-registered articles). Some researchers, such as Marcus Munafò, already pre-register on a routine basis and have done for some time. For my group it is a newer venture, and here is our first attempt. Our protocol describes an fMRI experiment on response inhibition and action updating that forms the jumping-off point for several upcoming studies involving TMS and concurrent TMS-fMRI. We are registering this protocol prior to data collection. All comments and criticisms are welcome.

Writing a protocol for an fMRI experiment was challenging because it required us to nail down in advance our decisions and contingencies at all stages of the analysis. The sheer number of seemingly arbitrary decisions also reinforced my belief that many, if not most, fMRI studies are contaminated by bias (whether conscious or unconscious) and undisclosed analytic flexibility. I found pre-registration rewarding because it helped us refine exactly how we would go about answering our research questions. There is much to be said for taking the time to prepare science carefully, and time spent now will be time saved when it comes to the analysis phase.

Most of the work in our first pre-registration was undertaken by two extremely talented young scientists in my team: PhD student Leah Maizey and post-doctoral researcher Chris Allen. Leah and Chris deserve much praise for having the courage and conviction to take on this initiative while many of our senior colleagues 'wait and see'.

Pre-registration is now a normal part of the culture in my lab, and I hope you’ll consider making it a part of yours too. Embracing the hypothetico-deductive method helps protect the outcome of hypothesis-driven research from our inherent weaknesses as human practitioners. It also prompts us to consider deeper questions. As a community we need to reflect on what sort of scientific culture we want future generations to inherit. And when we look at the status quo of questionable research practices, we are led to ask one simple question: Who are we serving, us or them?

Wednesday, 7 August 2013

A quick wave to all my new twitter followers


Hello! I really hope you’ll enjoy our new blog over at the Guardian. It’s a privilege to be able to write about psychology for such a broad audience, and to do so alongside such talented colleagues as Pete Etchells, Thalia Gjersoe and Molly Crockett. I'll do my best not to disappoint!

NeuroChambers is my personal blog, where I write mostly about science-related things but occasionally post more personal stuff.

First, a bit about me. I’m a researcher at the Cardiff University School of Psychology. I’m originally from Australia, where I did a PhD about 10 years ago in an area called ‘psychoacoustics’ – the psychology of auditory perception. After that I got interested in the relationship between the brain and cognition, so I moved to an area called cognitive neuroscience, which bridges the gap between neurobiology and traditional experimental psychology. I now run a research group in Cardiff, where we use brain imaging and brain stimulation methods to understand human cognitive control and attention. At the moment I’m particularly interested in the psychology and neuroscience of response inhibition, impulse control, and addiction.

I started NeuroChambers in 2012 after taking part in a debate on science journalism at the Royal Institution. Following some energetic arguments in the press about the good, bad, and ugly of science reporting, we came to the conclusion that scientists and journalists need to cooperate far more constructively in the service of public understanding (you can watch the debate here and read more about it here). One area, in particular, that I feel scientists need to work on is the process of communicating science to non-scientists. And a great way to do this, of course, is through blogging.

There are four main types of article I post here on my personal blog: 

1. Research Briefings: these are (hopefully) accessible summaries of our recent research. Whenever we publish an article in a scientific journal that I think might have broader appeal, I write an overview of the work for a general audience. Here are a few I wrote about human vision, impulse control, and human brain stimulation. I’m not the only scientist to do this – Mark Stokes at Oxford University also does it over at Brain Box (and does it well!) 

2. Calls to Arms: I’m a psychologist and I think psychology is an important and fascinating discipline. But I’m actually quite critical about what passes for acceptable research practices these days, and lately I’ve been working on possible solutions. One approach I’ve been advocating is called study pre-registration. In short, what this means is that scientists should specify the predictions and statistical tests in their experiments before they conduct them. Doing so helps us stay true to the scientific method and avoid fooling ourselves into believing that we’ve discovered something real when in fact we're only staring at the reflection of our own bias. For me, study pre-registration is common sense but not everyone agrees. Psychological science is in the midst of a revolution, and revolutions are never easy. We’ll be writing more about this at Head Quarters as we gradually reform the field.

Another area that I’ve been fairly vocal about recently is the importance of evidence-based policy in government. Last year, Mark Henderson, head of Communications at the Wellcome Trust, published a very important book called The Geek Manifesto, which explains why science is so important and yet so undervalued in modern politics. Mark’s book inspired me and many other scientists to do something proactive to address this issue. Together with Tom Crick and several colleagues – as well as 60 generous donors from across the UK – I helped coordinate a campaign to send one copy of The Geek Manifesto to each elected member of the National Assembly for Wales. I’m also following up on this initiative with Natalia Lawrence at the University of Exeter. Natalia and I are aiming to establish a rapid-response ‘evidence information service’ for politicians and civil servants.

3. Advice columns for students and junior scientists: These posts will have less general appeal as they're usually written for those already pursuing a career in science. Still, my most popular post on this blog has been a (probably overly) blunt list of do’s and don’ts for the aspiring PhD student. 

4. Whinges: I’ve lived in Britain long enough to cherish the art of a good whinge, and part of being a scientist is challenging bullshit. I occasionally write critical pieces questioning (what I see as) flawed or overegged science, or bad practice. You’ll see more of this style of piece over at Head Quarters as well.

Also, a warning. As you’ll have noted above, I’m a bit sweary at times (for which you can blame my Australian upbringing). Apologies in advance if I write or say something that offends! Don't worry, my Guardian posts will be more civilised – usually!

So that’s a quick overview of me and the things I write about at NeuroChambers. Meanwhile stay tuned for more posts at Head Quarters – we’ve got some exciting topics in the pipeline.

Finally, for no reason whatsoever, here’s a picture of our two cats...doing what cats do best. 

Wednesday, 19 June 2013

Research Briefing: Is there a neural link between ‘neglect’ and ‘pseudoneglect’?


Source Article: Varnava, A., Dervinis, M. & Chambers, C.D. (2013). The predictive nature of pseudoneglect for visual neglect: Evidence from parietal theta burst stimulation. PLOS ONE 8(6): e65851. doi:10.1371/journal.pone.0065851. [pdf] [data and analyses] 
-----------------------


I’m excited about this latest research briefing for several reasons.

First, as I’ll explain below, I think the study tells us something new about how the human brain represents space, with potential clinical applications in neuropsychology. Second, the study represents my group’s first excursion into the world of open access publishing and open science (including open data sharing) – something I feel strongly about and have committed to pursuing in our recently awarded BBSRC project. And finally, the manuscript itself has a rocky history that left me disillusioned with the journal Neuropsychologia and, soon after, motivated me to join others in calling for publishing reform. 

The Research 

Let’s start by talking about the science. Our aim in this study was to test for a link between two types of visual spatial bias called ‘neglect’ and ‘pseudoneglect’.

Neglect (also known as ‘unilateral neglect’) is a neurological syndrome that arises after brain injury – most often a stroke that permanently damages the right hemisphere. Patients with neglect present with a striking lack of attention to, and awareness of, objects on the left side of their midline. Such behaviours may include ignoring food on the left side of a dinner plate or failing to draw the left side of objects. Importantly, the patients aren’t simply blind on their left side. The visual parts of the brain are generally intact while the damage is limited to parietal, temporal, or frontal cortex.

Neglect has been studied for many years and we know a lot about how and why it arises. But one unanswered question is how the spatial bias of neglect relates to other spatial biases that are completely normal. We felt this was an important question because we don’t know enough at a basic level about how the brain represents space, so testing for neurocognitive links between spatial phenomena helps us build better theories. Furthermore, if there happens to be a predictive relationship between neglect and other forms of bias, we may be able to estimate the likely severity of neglect before a person has a stroke. This could have a range of useful applications in clinical therapy and management.

Enter ‘pseudoneglect’. Pseudoneglect is a normal bias in which people ignore a small part of their left or right side of space. One simple way to measure it is to ask someone to mark the midpoint of a straight horizontal line. Most people will misbisect the line to the left or right of its true centre. This effect is tiny (on the order of millimetres) but reliable.
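
Out of interest, here is how such a bias might be scored in practice – a hypothetical sketch with simulated numbers, not the analysis code from our paper:

```python
# Hypothetical line-bisection scoring: bias = mark position minus true
# centre, in millimetres; negative values indicate a leftward bias.
import numpy as np
from scipy import stats

line_length_mm = 200.0
true_centre = line_length_mm / 2

# Simulated mark positions for one participant over 20 trials,
# drawn with a small leftward bias of about 2 mm (assumed values).
rng = np.random.default_rng(0)
marks = rng.normal(true_centre - 2.0, 4.0, size=20)

bias = marks - true_centre
t, p = stats.ttest_1samp(bias, 0.0)
print(f"mean bias = {bias.mean():+.1f} mm (t = {t:.2f}, p = {p:.3f})")
```

A millimetre-scale mean deviation that survives a test against zero is exactly the kind of small-but-reliable effect described above.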

In this study we wanted to know whether patterns of pre-existing bias, as reflected by pseudoneglect, predict the patterns of actual neglect following neurological interference. Of course, we couldn’t give our participants permanent brain injury, so we decided to use transcranial magnetic stimulation (TMS) to simulate some of the effects of a brain lesion. Using a particular kind of repetitive TMS called ‘theta burst stimulation’, we temporarily suppressed activity in parts of the brain while people did tasks that measured their spatial bias. To see if there was a link between systems, we then related these effects of TMS on spatial bias to people’s intrinsic pseudoneglect.

Consistent with previous studies, we found that TMS of the right parietal cortex induced neglect-like behaviour – compared to a sham TMS condition (placebo), people bisected lines more to the right of centre, indicating that TMS caused a subtle neglect of the left side of space. This effect lasted for an hour (upper figure on the left). But what was particularly striking was that the effect occurred only in participants who already showed an intrinsic pattern of left pseudoneglect. In contrast, those with right pseudoneglect at baseline were immune to the effects of TMS (lower figure on the left).

There were a number of other aspects to the study too. We compared the effect of TMS using two different methods of estimating bias, and we also asked whether the TMS influenced people’s eye movements (it didn't). I won’t go into these details here but the paper covers them in depth.

What do these results mean? I think they have two implications. First, they provide evidence that neglect and pseudoneglect arise from linked or common brain systems for representing space – and they provide a biological substrate for this association in the right parietal cortex. Second, the results provide a proof of principle that initial spatial biases can predict subsequent effects of neurological interference. In theory, this could one day lead to pre-diagnostic screening to determine whether a person is at risk of more severe neglect symptoms in the event of suffering a stroke.

All that said, we need to be cautious. There is a world of difference between the subtle and reversible effects of TMS and the dramatic effects of brain injury. We simply don't know whether the predictive relationship found here would translate to patients – that remains to be established. Also, our study had a small sample size, has yet to be replicated, and provides no indication of diagnostic or prognostic utility. But I think these preliminary results provide enough evidence that this avenue is worth pursuing. 

Open Access, Open Science, and Publishing Reform 

Apart from the science, our paper represents a milestone for my group in terms of our publishing practices. This is our first article in PLOS ONE and our first publication in an open access journal. Also, it is our first attempt at open science. Interested readers can download our data and analyses from Figshare (linked here and in the article itself). I increasingly feel that scientists like me who conduct research using public funds have an obligation to make our articles and data publicly available.

This paper also represents a turning point for me in terms of my attitude to scientific publishing. We originally submitted this manuscript in 2012 to the journal Neuropsychologia, where it was rejected because some of our results were statistically non-significant. Rejecting papers on the basis of ‘imperfect’ results is harmful to science because it entrenches publication bias and pushes authors toward a host of questionable research practices to generate outcomes that are neat and eye-catching. With some ‘finessing’ of the analyses, we could probably have published our paper in a more ‘traditional’ outlet. But we decided to play a straight bat, and when we were penalised for doing so I realised on a very personal level that there was something deeply wrong with our publishing culture. As a consequence I severed my relationship with Neuropsychologia.

A short time later, I was contacted by Sergio Della Sala, the Editor-in-chief of Cortex, who read my open letter to Neuropsychologia. Sergio very kindly offered me an associate editorship (never let it be said that blogging is a waste of time!) and together we built the Registered Reports initiative. Our hope is that this new option for authors will help reform the incentive structure of academic publishing. Since then we’ve been part of a growing movement for change, alongside Perspectives on Psychological Science and their outstanding Registered Replications project, the Open Science Framework, and a special issue at Frontiers in Cognition which has adopted a variant of the Cortex pre-registration model.

In early June this year, Marcus Munafò and I, together with more than 80 of our colleagues, published an article in the Guardian calling for Registered Reports to be offered by journals across the life sciences. I’m delighted to report that the journal Attention, Perception, & Psychophysics and two other academic journals are now on the verge of launching their own Registered Reports projects.

My small part in this reform traces back to having this manuscript rejected by Neuropsychologia editor Jennifer Coull in September 2012. So, in a very true sense, I owe Jennifer a debt of gratitude for giving me the kick in the butt I needed. Sometimes rock bottom can be a great launching pad.

___

Wednesday, 10 April 2013

Scientific publishing as it was meant to be


Last October I joined the editorial board of Cortex, and my first order of business was to propose a new format of article called a Registered Report. The essence of this new format is that experimental methods and proposed analyses are pre-registered and peer reviewed before data is collected. This publication model has the potential to cure a host of bad practices in science.

In November the publisher approved the new article format and I’m delighted to announce that Registered Reports will officially launch on May 1st. I’m especially proud that Cortex will become the first journal in the world to adopt this publishing mechanism.

For those encountering this initiative for the first time, here are some links to background material:

1. The open letter I wrote last October proposing the idea. 
2. A panel discussion I took part in last November at the Spot On London conference, where I spoke about Registered Reports.
3. My freely-accessible editorial article where we formally introduce the initiative (March 2013).
4. **Update 03/05** Finalised author and reviewer guidelines. 
5. **Update 26/04**: Slides from my talk at Oxford where I spoke about the initiative.

Why should we want to review papers before data collection? The reason is simple: as reviewers and editors we are too easily biased by the appearance of data. Rather than valuing innovative hypotheses or careful procedures, we too often find ourselves applauding “impressive results” and dismissing null effects as boring. For most journals, issues such as statistical power and technical rigour are outshone by the novelty and originality of findings.

What this does is furnish our environment with toxic incentives. When I spoke at the Spot On conference last year, I began by asking the audience: What is the one aspect of a scientific experiment that a scientist should never be pressured to control? After a pause – as though it might be a trick question – one audience member answered: the results. Correct! But what is the one aspect of a scientific experiment that is crucial for publishing in a high-ranking journal? Err, same answer. Novel, ground-breaking results.

The fact that we force scientists to touch the untouchable is unworthy of a profession that prides itself on behaving rationally. As John Milton says in The Devil’s Advocate, it’s the goof of all time. Somehow we've created a game whose rules are set in opposition to its goals.

The moment we incentivise the outcome of science over the process itself, other vital issues fall by the wayside. A priori statistical power becomes neglected, as Kate Button and Marcus Munafò show today in their compelling analysis of neuroscience studies (and see excellent coverage of this work by Ed Yong and Christian Jarrett).

With little chance of detecting true effects, experimentation reduces to an act of gambling. Driven by the need to publish, researchers inevitably mine underpowered datasets for statistically significant results. No stone is left unturned; we p-hack, cherry-pick, and even reinvent study hypotheses to "predict" unexpected results. Strange phenomena begin appearing in the literature that can only be explained by such practices – poor repeatability, an overwhelming prevalence of studies that support their stated hypotheses, and a preponderance of articles in which obtained p values fall just below the significance threshold. More worryingly, a recent study by John et al. shows that these behaviours are not the actions of a naughty minority – they are the norm.
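
To see why low power is so corrosive, consider the positive predictive value of a significant result – the probability that a "discovery" is real. The sketch below follows the logic of Button et al.'s analysis, but the numbers plugged in are my own assumptions for illustration:

```python
# Back-of-the-envelope PPV calculation (assumed numbers, illustrative only).
def ppv(power: float, alpha: float, prior: float) -> float:
    """Positive predictive value: P(effect is real | p < alpha)."""
    true_hits = power * prior          # real effects correctly detected
    false_hits = alpha * (1 - prior)   # null effects crossing the threshold
    return true_hits / (true_hits + false_hits)

alpha, prior = 0.05, 0.2               # assume 1 in 5 tested effects is real
for power in (0.2, 0.5, 0.8):
    print(f"power = {power:.1f}: PPV = {ppv(power, alpha, prior):.2f}")
# power = 0.2: PPV = 0.50  -> half of the significant findings are false
# power = 0.5: PPV = 0.71
# power = 0.8: PPV = 0.80
```

Under these assumptions, an underpowered field can expect something like half of its "discoveries" to be false – before any p-hacking is even added to the mix.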


None of this even remotely resembles the way we teach science in schools or undergraduate courses, or the way we dress it up for the public. The disconnect between what we teach and what we practice is so vast as to be overwhelming. 

Registered Reports will help eliminate these bad incentives by making the results almost irrelevant in reaching editorial decisions. The philosophy of this approach is as old as the scientific method itself: If our aim is to advance knowledge then editorial decisions must be based on the rigour of the experimental design and likely replicability of the findings – and never on how the results looked in the end.

We know that other journals are monitoring Cortex to gauge the success of Registered Reports. Will the format be popular with authors? Will peer reviewers be engaged and motivated? Will the published articles be influential? This success depends on you. We'll need you to submit your best ideas to Cortex – well thought-out proposals that address important questions – and, crucially, before you’ve collected the data. We need your support to help steer scientific publishing toward a better future.

For my part, I’m hugely excited about Registered Reports because it offers hope that science can evolve; that we can be self-critical, open-minded, and determined to improve our own practices. If Registered Reports succeeds then together we can help reinvent publishing as it was meant to be: rewarding the act of discovery rather than the art of performance.
___
 
I am indebted to many people for supporting the Registered Reports initiative, and my sincere apologies if I have left anyone off this list. For generating or helping to inspire the ideas (for which I take no personal credit), I’m grateful to Neuroskeptic, Marcus Munafò, Pete Etchells, Mark Stokes, Frederick Verbruggen, Petroc Sumner, Alex Holcombe, Ed Yong, Dorothy Bishop, Chris Said, Jon Brock, Ananyo Bhattacharya, Alok Jha, Uri Simonsohn, EJ Wagenmakers, Eric Eich, and Brian Nosek. I’m grateful also to Toby Charkin from Elsevier for working hard to facilitate the administrative aspects of the initiative. I also want to thank Zoltan Dienes for joining the editorial board. Zoltan will provide expert advice as part of the initiative for studies involving Bayesian statistical methods, and his paper on the advantages of Bayesian techniques over conventional NHST is a must-read. My thanks as well to many members of the Cortex editorial board for their advice and valuable consultation, including especially Rob McIntosh and Jason Mattingley, and to Dario Battisti for the cover art accompanying the Cortex editorial (pictured above). Finally, I am especially grateful to the Editor-in-chief of Cortex, Sergio Della Sala, for having the vision and courage to support this idea and see it to fruition. A determined and progressive EIC is crucial for the success of any new publishing format, particularly one as ambitious as Registered Reports.

Sunday, 3 March 2013

Research Briefing: How safe is transcranial magnetic stimulation?


Source Article: Maizey, L., Allen, C.P.G., Dervinis, M., Verbruggen, F., Varnava, A., Kozlov, M., Adams, R.C., Stokes, M., Klemen, J., Bungert, A., Hounsell, C.A., Chambers, C.D. (2013). Comparative incidence rates of mild adverse effects to transcranial magnetic stimulation. Clinical Neurophysiology, 124, 536-544.  [pdf] [monitoring forms]

 -----------------------

When I moved to Cardiff University back in 2008, the first thing I did was set up two labs for doing human transcranial magnetic stimulation (TMS). I’d been using TMS since 2002 and it was (and continues to be) a major part of my research programme. Unlike brain imaging techniques such as fMRI or MEG, TMS interferes with brain activity. This means that the effect of TMS on behaviour can tell us which parts of the brain are necessary for different cognitive functions. In my lab we use TMS to study processes such as perception, attention, consciousness, decision-making, and response inhibition.

In the process of setting up TMS – a new technique for Cardiff at the time – I had to submit a lengthy application for ethics approval. After several weeks of discussion, the committee and I decided that building a TMS lab offered the opportunity to do some novel research on the side effects of brain stimulation.

Since it was developed in 1985, TMS has been generally considered safe for human use. Serious adverse effects, such as seizures, are rare, and few incidents have been reported since international guidelines for TMS safety were established in 1998 (updated in 2009). However, TMS has been suspected to cause a range of more mild adverse effects, such as headache and nausea. Much less is known about these lesser side effects, even though they can be very unpleasant for participants.

So back in 2008 we decided to put in place a system for monitoring side effects. After every experimental session involving TMS, participants were given a form to complete that listed a series of possible symptoms occurring within 24 hours of the session (the forms can be downloaded here). Then, when the participant returned for their next session, we collected and archived these forms. Over several years of TMS experiments – and many different variants of the technique – we amassed more than 1000 such forms from over 100 unique participants. Last year, four years on, we decided we had enough data to commence the analysis.

I’m now happy to report that the paper documenting this analysis has appeared in the journal Clinical Neurophysiology, written primarily by my PhD student, Leah Maizey. To our knowledge this paper reports the largest TMS safety study yet conducted by a single research team.

Overall, participants in our study reported mild adverse effects (or MAEs) following ~5% of sessions, although 39% of participants reported at least one MAE at some point during their experimental regime. When MAEs did occur, the most common was headache (41%). Rates of adverse effects were higher for active TMS compared to sessions involving ‘sham’ (placebo) TMS, although a small number of adverse effects could nevertheless be attributed to coincidence or placebo effects. 
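
For readers wondering how a ~5% session-level rate squares with 39% of participants reporting at least one MAE: the two numbers answer different questions. A toy aggregation sketch (invented data, not our real monitoring forms) makes the distinction concrete:

```python
# Toy example: session-level vs participant-level MAE incidence.
from collections import defaultdict

# (participant_id, any_mild_adverse_effect_reported) per session
forms = [("p1", False), ("p1", True), ("p1", False),
         ("p2", False), ("p2", False),
         ("p3", True)]

session_rate = sum(mae for _, mae in forms) / len(forms)

by_participant = defaultdict(bool)
for pid, mae in forms:
    by_participant[pid] |= mae   # did this person ever report an MAE?
participant_rate = sum(by_participant.values()) / len(by_participant)

print(f"MAEs per session:      {session_rate:.0%}")     # 33% in this toy set
print(f"participants with >=1: {participant_rate:.0%}") # 67% in this toy set
```

Because most participants complete many sessions, a low per-session rate can still mean that a substantial fraction of people experience at least one MAE over a whole experimental regime.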

Two other findings are notable and may be of special interest to TMS researchers. First, MAEs were more likely to occur following a participant’s first session, even after controlling for various extraneous factors. We believe this tendency could be explained by anxiety when receiving TMS for the first time, so steps taken by researchers to ensure that participants are relaxed and comfortable are likely to help.

Second – and most striking – nearly 80% of MAEs were reported after participants had left the laboratory at the end of their session. We don’t have a good explanation for why this is, but 80% is too big to ignore. Maybe the physiological aftereffects of TMS are longer lasting than is generally assumed, or maybe the immediate aftereffects can have knock-on effects to other physiological systems. This was a serendipitous finding, so it will be important to see whether other researchers can independently replicate such long-lasting effects.

The good news for TMS researchers is that our study adds to a body of evidence that TMS is safe for human use under carefully controlled conditions. The adverse effects we did observe were mostly very minor (no seizures!) and only a few participants withdrew from the experiments. Our main recommendation is that it would be useful for the TMS community to monitor adverse effects more closely and to adopt standard methods for doing so. We provide relevant monitoring forms as part of our paper.

---


* Special thanks to Matthew Rushworth for helpful discussion at the outset of this project.

Tuesday, 19 February 2013

Rejected!


Whether you’re a teenager getting dumped or a Nobel prize nominee trundling in second, rejection sucks.

I’m none of the above (I used to be an expert at one of them) but today I had a funding application rejected by the Wellcome Trust.

The application was for a research fellowship in basic biomedical science. I planned to study two things: first, how training people to inhibit actions toward food and alcohol changes their brain chemistry and physiology; and second, how we might combine brain stimulation with inhibition training to help people recover from alcohol addiction and obesity. I felt these were closely linked themes: a basic strand followed by an applied strand that took the results to the streets.

I felt a bit like Sauron forging the One Ring with this application, though with rather less malice. Into it I poured everything I had done and learned over the last ten years of my research. I used all of my (very helpful and generous) connections and collaborators to devise a project that included such aspects as:
  • a mass online experiment that would have been hosted by the Guardian and provided the world’s largest study of human inhibition to date
  • the use of simultaneous brain stimulation (TMS) and brain imaging (fMRI) to study the effects of inhibition training on key connections in the brain
  • randomised controlled trials on the effects of brain stimulation and inhibition training in alcoholism and obesity (a promising combination)
Still, my application wasn’t good enough – not even to make it to interview – and there’s a lesson in that. Science doesn’t care about effort, only about outcomes. I won’t quote the feedback from the Wellcome Trust's Expert Review Group, as it is intended to be confidential (for all concerned). But suffice it to say, I felt the single paragraph of feedback rather misunderstood the project and made some factual errors. Of course, this is not the Committee’s fault – it is mine. In science, if you fail to communicate your message clearly then you have only yourself to blame.

I’ve had a lot of rejections in my career – far more than I've had successes – and I think you can learn a lot about yourself from how you deal with them. In the junior years they feel like getting shot (sometimes stabbed), but with time the trauma gives way to the gentle thud of "not good enough" meteorites bouncing off your own rhinoceros hide.

A few tips for beginners: 

1)   Remember it probably isn’t personal. Even if the reasons for rejecting your application are unfounded or based on a misunderstanding, it’s rare for decisions to be driven by personal grudges.

2)   The decision makers are human like you. They will make mistakes. Sometimes those mistakes will go in your favour and the panel will overlook a genuine weakness in your application. Other times they will pounce on non-existent problems. We have to accept that this decision process is noisy, like every other biological system.

3)   The basis for decisions is never entirely random, so getting things wrong helps you get them right next time. When I repackage and resubmit my application somewhere else, I'll use the feedback from the Trust to make it stronger. Never just blindly resubmit your application; always try to learn something from the rejection and improve it. Dealing constructively with rejection will make you a better scientist. 

4)   Resist the urge, implicit or explicit, to take out your disappointment on others. This is a surprisingly easy trap to fall into and I suspect many scientists do. Next time a grant application (particularly one from the Wellcome Trust) lands on my desk to review, I might be tempted to treat it particularly harshly because I feel I was treated the same way. Or, what if I happen to be editing a manuscript submitted to Cortex or PLOS ONE by a member of the panel? We must resist being led by (natural) negative emotions, because "an eye for an eye" is the antithesis of science.

5)   Finally, remember that reviewers and panel members are ultimately doing you a favour, whatever the outcome. They took the time to read something you wrote. They thought about it and gave you feedback on it. This is actually a pretty remarkable thing and we should be grateful.

So, my feeling about today’s grant rejection is that yes, it sucks! And yes I think the committee made a mistake because I could have settled all those concerns within the first minute of an interview (and yes, of course, I would say that!)

But I’m also grateful for the feedback and I recognise that comparing funding applications is difficult and noisy. Could my application have been stronger? Nope, I gave it everything I had. Could I have done a better job reviewing grants than this panel? No, definitely not. Science is a human enterprise on all fronts.

So I'm going to wallow for another day or so, then I'm going to scrape myself off the floor and rework that application.