Friday 30 March 2012

Brain stimulation to solve impossible problems?

No. But this is what happens when a flawed scientific experiment is combined with generous helpings of public-relations nonsense.

Richard Chi and Allan Snyder from the University of Sydney have claimed that a form of brain stimulation known as transcranial direct current stimulation (tDCS) can help people “escape the tricks our minds impose on us”. An extraordinary piece of PR puffery from the University of Sydney headlines with: 'Impossible' problem solved after non-invasive brain stimulation.

These are precisely the kinds of claims that damage public understanding of science. I'm getting déjà vu just writing this.

In their latest outing, published in the journal Neuroscience Letters, Chi and Snyder asked whether tDCS could help people solve a puzzle called the 'nine dots problem', which requires people to connect a 3 x 3 grid of dots using four straight lines. They report that participants who received tDCS of their anterior temporal lobes were better able to solve the puzzle than those who had ‘sham’ stimulation. They conclude that this effect was “due to inhibiting [brain] networks associated with top down imposition of prior knowledge, knowledge that inclines us to join the dots up within the square.”

Really? I've read their paper several times now, back to front, even sideways a couple of times. And I still can't find any evidence to support this claim. Instead all I found was a long list of flaws. Here are some of the worst offenders.

1. No control task for effects of "prior knowledge"

To conclude that tDCS disrupted the imposition of “prior knowledge”, they would need to include a control task that is matched on every possible dimension (task difficulty, attentional requirements, etc.) but which isn’t prone to the imposition of “prior knowledge”. They didn’t. They also fail to provide any cogent theoretical framework for what neural mechanisms are responsible for “imposing prior knowledge”. In the Discussion, the authors seem to acknowledge this lack of experimental control: "While there could well be alternative explanations, we would like to emphasize that our results that tDCS enabled over 40% of participants to solve the nine-dot problem stand on their own." I beg to differ.

2. Actually, no suitable control tasks at all

Participants could have performed better because tDCS increased their alertness or arousal, or altered their attentional state in any one of many possible ways. But the authors included no control tasks to test whether these processes were affected, either positively or negatively. 

3. Claims based on mysterious pilot data

The authors describe in handwaving fashion a pilot study that compared different forms of brain stimulation, and then conclude that this pilot study confirms their previous findings. But the results of the pilot study are not shown and are not even statistically analysed. Did the reviewers of this paper even read it?

4. Dubious statistical analysis

Their use of one-tailed significance testing is highly questionable. A directional prediction alone is not sufficient justification for one-tailed tests. This is because one-tailed testing is only legitimate if the opposite result to that expected is either impossible or irrelevant. Neither is the case here, or arguably ever in cognitive neuroscience: had brain stimulation impaired rather than enhanced performance then the authors would naturally still have interpreted their data. The authors also appear confused about the nature of probabilities. For the key analysis, comparing the number of participants who solved the puzzle between the sham and experimental groups, they report a p value of .018, and then conclude bizarrely that the likelihood of observing the results by chance was less than one in 10,000. Pardon?
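
To make the arithmetic concrete, here is a minimal sketch in Python (using scipy). The cell counts are illustrative assumptions only, chosen to be consistent with the figures quoted above (no solvers among 11 sham participants versus 5 of 11, i.e. "over 40%", in the stimulated group); they are not taken verbatim from the paper. Under these assumptions Fisher's exact test gives a one-tailed p of roughly .018 and a two-tailed p of roughly twice that, and a p of .018 corresponds to odds of about 1 in 57 under the null hypothesis, nowhere near 1 in 10,000.

```python
# A minimal sketch, not a reanalysis of the authors' data: the 0/11 vs 5/11
# split is an assumption consistent with the "over 40%" figure quoted above.
from scipy.stats import fisher_exact

#            solved  not solved
table = [[0, 11],    # sham group (assumed: 0 of 11 solved)
         [5,  6]]    # tDCS group (assumed: 5 of 11 solved)

_, p_one_tailed = fisher_exact(table, alternative="less")       # directional test
_, p_two_tailed = fisher_exact(table, alternative="two-sided")  # non-directional test

print(f"one-tailed p = {p_one_tailed:.3f}")                     # ~0.018
print(f"two-tailed p = {p_two_tailed:.3f}")                     # ~0.035
print(f"odds implied by one-tailed p: about 1 in {1 / p_one_tailed:.0f}")  # ~1 in 57
```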

5. Bizarre inclusion of additional participants

The actual experiment was undertaken on 22 participants. Eleven of these participants were in the control group (they received ‘sham’ tDCS in which the current is switched off soon after being started), and the other eleven were in the tDCS group. But in the Results, the authors suddenly add another 11 participants to the sample without providing any detail of the experimental procedures.

6. I don't even know what to call this problem. Appeal to...prose?

Part of the results consists of an anecdote of what one participant had to say about his "cognitive style". This section is rather optimistically titled “Case Report”.

I am honestly struggling to find any reasonable explanation for how this paper could have survived even a modestly critical peer-review process. I could give this paper to any one of my PhD students and they would find more flaws than I've even listed here. But perhaps I shouldn’t be surprised, as Snyder's group have previously published a similarly uncontrolled study, combined with a healthy dose of exaggeration and media hype. I wonder if the Editor of Neuroscience Letters would care to enlighten us on what went wrong this time.

For my part, I would advise scientists, journalists and the public to treat this piece of research by Chi and Snyder with extreme caution.

The press office at the University of Sydney also has much to answer for. Many of us in the scientific community are striving to improve the quality of science news by ensuring that we avoid exaggeration and hyperbole in our dealings with the media. It helps nobody when press offices produce cynical puff pieces with stupid, sensationalistic headlines.

Monday 26 March 2012

You can't replicate a concept

Recent argy-bargy about a failed replication has exposed a disturbing belief in some corners of psychological research: that one experiment can be said to “conceptually replicate” another, even if it uses a completely different methodology. 

John Bargh, a professor of psychology at Yale, made waves recently with a stinging attack on virtually everyone associated with a failed attempt to replicate one of his previous findings. The specifics of this particular tango de la muerte can be found elsewhere, and I won’t repeat them here, except to say that I thought Bargh’s misrepresentation of the journal PLoS One was outrageous, offensive, and an extraordinary own goal.

That aside, Bargh-gate has drawn out a more important issue concerning the idea of “conceptual replication”. Ed Yong's article, and the comments beneath, exposed an unusual disagreement, with some (including Bargh himself) claiming that Bargh et al.'s original findings had been replicated many times over, while others claimed that they had never been replicated.

How is this possible? Clearly something is awry.

All scientists, and many non-scientists, will be familiar with the basic idea of replication: that the best way to tell whether a scientific discovery is real is to repeat the experiment that originally found it. Replication is one of the bedrocks of science. It helps scientists achieve consensus and it acts like an immune system, eliminating findings that are irreproducible due to methodological error, statistical error or fraud.

It also goes without saying that the most important aspect of replication is to repeat the original experiment as closely as possible. This is why scientific journal articles contain a Method section, so that other scientists can precisely reproduce your experimental conditions.

Enter the notion of “conceptual replication”. If you are a scientist and you’ve never heard of this term, you are not alone. The other day I did a straw poll of my colleagues, who are mostly experimental psychologists and neuroscientists, and got blank looks in response.

The basic idea is this: that if an experiment shows evidence for a particular phenomenon, you can “conceptually” replicate it by doing a completely different experiment that someone – the experimenter, presumably – believes measures a broadly similar phenomenon. Add a pinch of assumption and a healthy dose of subjectivity, and voilà, you’ve just replicated the original ‘concept’.

I must admit that when I first heard the term “conceptual replication”, I felt like shining a spotlight on the clouds and calling for Karl Pilkington. Psychology is already well known for devaluing replication and we do ourselves no favours by attempting to twist the notion of replication into something it isn’t, and shouldn’t be.

Here are four reasons why.

1. Conceptual replication is assumption-bound and subjective

From a logical point of view, a conceptual replication can only hold if the different methods used in two different studies are measuring the same phenomenon. For this to be the case, definitive evidence must exist that they are. But how often does such evidence exist?

Even if we meet this standard (and the bar seems high), how similar must the methods be for a study to qualify as being conceptually replicated? Who decides and by what objective criteria?

2. Conceptual replications can be “unreplicated”


A reliance on conceptual replications can be easily shown to produce absurd conclusions.

Consider the following scenario. We have three researchers, Smith, Jones, and Brown, who publish three scientific papers in a sequence.

Smith gets the ball rolling by showing evidence for a particular phenomenon.

Jones then comes along and uses a different method to show evidence for a phenomenon that looks a bit like the one that Smith discovered. The wider research community decide that the similarity crosses some subjective threshold (oof!) and so conclude that Jones conceptually replicates Smith.

Enter Brown. Brown isn’t convinced that Smith and Jones are measuring the same phenomenon and hypothesises that they could actually be describing different phenomena. Brown does an experiment and obtains evidence suggesting that this is indeed the case.

We now enter the ridiculous, and frankly embarrassing, situation where a finding that was previously replicated can become unreplicated. Why? Because we assumed without evidence that Smith and Jones were measuring the same phenomenon when they were not. It’s odd to think that a community of scientists would actively engage in this kind of muddled thinking. 

3. Conceptual replications exacerbate confirmation bias


Conceptual replications are vulnerable to a troubling confirmation bias and a logical double-standard.

Suppose two studies draw similar conclusions using very different methods. The second study could then be argued to "conceptually replicate" the first.

But suppose the second study drew a very different conclusion. Would it be seen to conceptually falsify the first study? Not in a million years. Researchers would immediately point to the multitude of differences in methodology as the reason for the different results. And while we are all busily congratulating ourselves for being so clever, Karl Popper is doing somersaults in his grave.

4. Conceptual replication substitutes for and devalues direct replication


I find it depressing and mystifying that direct replication of specific experiments in psychology and neuroscience is so vital yet so grossly undervalued. Like many cognitive neuroscientists, I have received numerous rejection decisions over the years from journals, explaining in reasonable-sounding boilerplate that their decision "on this occasion" was due to the lack of a sufficiently novel contribution.

Replication has no place because it is considered boring. Even incremental research is difficult to publish. Instead, reproducibility has been trumped by novelty and the quest for breakthroughs. Certainty has given way to the X factor. At dark moments, I wonder if we should just hand over the business of science to Simon Cowell and be done with it.

The upshot


First, we must jettison the flawed notion of conceptual replication. It is vital to seek converging evidence for particular phenomena using different methodologies. But this isn’t replication, and it should never be regarded as a substitute for replication. Only experiments can be replicated, not concepts.

Second, journals should specifically ask reviewers to assess the likely reproducibility of findings, not just their significance, novelty and methodological rigor. As reviewers of papers, we should be vocal in praising, rather than criticising, a manuscript if it directly replicates a previous result. Journal editors should show some spine and actively value replication when reaching decisions about manuscripts. It is not acceptable for the psychological community to shrug its shoulders and complain that "that's the way it is" when the policies of journal editing and manuscript reviewing are entirely in our own hands.

Psychology can ill afford the kind of muddled thinking that gives rise to the notion of conceptual replication. The field has taken big hits lately, with prominent fraud cases such as that of Diederik Stapel producing very bad publicity. The irony of the Stapel case is that if we truly valued actual replication, rather than Krusty-brand replication, his fraud could have been exposed years sooner, before he had made such a damaging impact. This makes me wonder how many researchers have, with the very best of intentions, fallen prey to the above problems and ‘conceptually’ replicated Stapel's fraudulent discoveries.

Thursday 15 March 2012

It's not a job.


One of the greatest misconceptions about science is that it is a job.

As Steven Seagal said in Under Siege (that timeless cinematic masterpiece etched into the brain of every boy who was 15 years old in 1992, mostly because of the cake scene): "It's not a job, it's an adventure".

Actually I think he was referring to being a Navy SEAL rather than a scientist. But let's not quibble.

Being a scientist is a lifestyle. And it isn’t even a single lifestyle, it is several lifestyles rolled into one. In fact, I’m not even sure how many lifestyles – I’ve never sat down and counted them. This, above all, is the main reason I enjoy being a scientist: because it is truly impossible to get bored.

One moment I can be meeting with a PhD student about their latest experiment or set of results. The next I can be explaining lateralization of brain function to a couple of hundred undergraduates while they watch me having my motor cortex stimulated with electric currents. The next I can be spitballing ideas with a colleague over a whiteboard or a pint (or both). The next I can disappear into a world of seclusion and questionable musical taste while writing a grant application or research paper. The next I can be standing in the stunning Palazzo Fedrigotti in Rovereto, giving a talk on how the parietal cortex enables selective attention. Running through all of these activities is a thread that unites scientists everywhere: that child-like buzz of discovering things about nature that we simply didn’t know before.

Being a panelist at the Royal Institution this week for Alok Jha's event on science journalism reminded me what a privilege it is to be part of such a diverse and interesting profession. Here I am in the lecture theatre where Faraday spoke, engaging in a public forum with such luminaries in science journalism and communication as Ananyo Bhattacharya, Fiona Fox, Ed Yong, and Alice Bell.

Someone pinch me.

The Cardiff team at the Royal Institution debate on 13 March. From left, Emma Cheetham, Petroc Sumner, Jacky Boivin, and Fred Boy.

Me and Ananyo chatting before kick off. When they heard I was speaking, the Ri really should have warned the audience to bring their snow goggles.

I did a lot more listening than talking and it was a fascinating and humbling experience. I want to extend a big thank you to Jayshan Carpen for his excellent organisation of the event, and to the many scientists, journalists and press officers who attended. It was a stimulating discussion, and stay tuned for Alice Bell’s wrap-up piece, coming soon to the Guardian Science blog.

Alok Jha introduces the debate. He's really not as sinister as he looks in this photo.

Perhaps I’m being overly optimistic – I’ve been told this is one of my character flaws – but I think this debate could be a defining moment. The main reason I'm positive is that the argument has shifted from some fairly heated finger-pointing to a focus on self-scrutiny.

For me, the whole story began with a spirited debate in the Guardian about copy checking and the importance of accuracy as a starting point in science journalism. We were prompted to enter the discussion after we had a pretty diabolical experience with some newspapers in the reporting of our previous research. The omnipresence of the Leveson Inquiry raised the room temperature, and it was an exciting and at times heated debate. It has now become more measured and constructive, with each side asking itself what changes it can implement to improve the standard of science news coverage.

Me. Talking.

Changing yourself isn't easy, but what is most important is that we see it as possible. I refuse to buy into the argument of disempowerment: that scientists and journalists are pawns on some enormous chessboard controlled by remote gods in the clouds, with any attempt to implement change perceived as naïve, foolish, and futile. In the words of Frank Costello in The Departed, “I don’t want to be a product of my environment, I want my environment to be a product of me”. This is street wisdom at its best. Scientists need to realize that there is much we can do to change our environment and improve the quality of science news. We made nine suggestions in our pre-debate Guardian article, and as the vivacious Fiona Fox declared on Tuesday night, the answer is to “Engage, Engage, Engage!”

In my presentation at the debate I suggested three ways that scientists can improve the quality of science news. The first was to accept ultimate accountability for the quality of our own press releases, including a specific section: "What this study does not show". As scientists we need to work closely with press officers and respect their unique talents, but we also need to take as much ownership of press releases as we do our own research articles. This argument for accountability echoes that of Ed Yong, who proposes the same for journalists: that regardless of any and all obstacles, those who pen news stories are responsible for the quality of what they pen. There may be explanations for why this can fail, but there are no excuses. These are vital aspirations on both sides.

Our second recommendation was that more scientists should blog. Blogs have many benefits, and one is to provide a supplemental knowledge base for journalists. I will be using this blog to practice what I preached - and I've also just now joined Twitter.

Finally, we emphasized how important it is that scientists are vocal in challenging bad science and bad science reporting. On this we need to stand up in public and be heard. Ben Goldacre is a pioneer in neighbourhood watch, as is the brilliant Dorothy Bishop, but they shouldn’t be going it alone. I made a start on this last year in the Guardian, and I'll be doing more of it. And one additional point I should have made on Tuesday night is that all UK scientists should join the database for the Science Media Centre. They are a remarkable group of people who do vital work. Please support them!

The three things scientists could start doing now to improve the quality of science news coverage.

So what will my blog be about? I plan to write articles on science from many different angles, from evidence-led discussions about how the human brain exercises impulse control, free will, and attention, to behind-the-scenes insights into what it's like to be me: to work as a researcher and group leader at the School of Psychology at Cardiff University. I’ll provide slides from talks I’ve given and some photos and videos too. You can find more about my specific research and publications here.

I’ll address general myths and misconceptions about science as they become topically relevant. When we have a key paper published, I’ll blog about it and provide some additional context and insights for journalists and other interested readers. I’ll talk about science politics and my experiences engaging with industry.

Along the way I’ll try to offer some advice to up-and-coming scientists and postgraduate students. I’ll critique what I see as bad science (with the disclaimer that you mustn’t expect me to be as intelligent or articulate as Ben Goldacre!) and I’ll tell you about mistakes I’ve made in this profession and what I learned from them. I’ll write about science politics and the peer review process, and I will try to give frank and accurate insights into the inner workings of the science ‘machine’. Some of these posts might surprise readers who are outside science. I’ll also offer my opinions on various issues relating to science journalism.

I’m going to be irreverent and some of what I write will make me a troublemaker. I will also occasionally rant. As an Australian, I am prone to dropping the occasional expletive so apologies in advance if you have a sensitive disposition.

For those seeking originality, I can’t promise that this blog will raise any issues that you haven’t seen or read about before. I grinned when I read a tweet from Stephen Curry about my three points at the Ri debate: “Chris Chambers so far a master of the bleedin' obvious in his presentation in pts 1-3”. Very true – in fact this is true of all nine points that we raised. When it comes to exploring ways that scientists can help improve science journalism, I’m not going to parade in front of you as some kind of revolutionary thinker. I’m not an expert on science and media, and I'm not a genius. I’m just a regular scientist who cares about this issue and wants to get involved.

Post-debate discussions. Ananyo Bhattacharya in the foreground. In accordance with best practice, the discussions swiftly adjourned to the pub.

This is why we’re commencing research of our own on how science is represented in press releases. Rather than standing on a soapbox, we’re going to let our evidence do the talking, and you can follow the progress of our research at www.insciout.com. Don’t hesitate to contact me if you have any ideas about our research or would like to join our team.

Is this whole discussion of science journalism an exercise in navel-gazing? Are we naive to think we can change anything? Was the debate destined to be a “dog and pony show”, as predicted by one Guardian reader?

I would answer No on all fronts, but I’ll let you be the judge of that. All I will say is this: if we keep an open mind and a clear focus on what we want to achieve, and how, who knows where this could lead.

I hope you enjoy my blog.