Friday, 30 March 2012

Brain stimulation to solve impossible problems?

No. But this is what happens when a flawed scientific experiment is combined with generous helpings of public-relations nonsense.

Richard Chi and Allan Snyder from the University of Sydney have claimed that a form of brain stimulation known as transcranial direct current stimulation (tDCS) can help people “escape the tricks our minds impose on us”. An extraordinary piece of PR puffery from the University of Sydney headlines with: 'Impossible' problem solved after non-invasive brain stimulation.

These are precisely the kinds of claims that damage public understanding of science. I'm getting deja vu just writing this.

In their latest outing, published in the journal Neuroscience Letters, Chi and Snyder asked whether tDCS could help people solve a puzzle called the 'nine dots problem', which requires people to connect a 3 x 3 grid of dots using four straight lines, drawn without lifting the pen from the paper. They report that participants who received tDCS of their anterior temporal lobes were better able to solve the puzzle than those who had ‘sham’ stimulation. They conclude that this effect was “due to inhibiting [brain] networks associated with top down imposition of prior knowledge, knowledge that inclines us to join the dots up within the square.”

Really? I've read their paper several times now, back to front, even sideways a couple of times. And I still can't find any evidence to support this claim. Instead, all I found was a long list of flaws. Here are some of the worst offenders.

1. No control task for effects of "prior knowledge"

To conclude that tDCS disrupted the imposition of “prior knowledge”, they would need to include a control task that is matched on every possible dimension (task difficulty, attentional requirements, etc.) but which isn’t prone to the imposition of “prior knowledge”. They didn’t. They also failed to provide any cogent theoretical framework for what neural mechanisms are responsible for “imposing prior knowledge”. In the Discussion, the authors seem to acknowledge this lack of experimental control: "While there could well be alternative explanations, we would like to emphasize that our results that tDCS enabled over 40% of participants to solve the nine-dot problem stand on their own." I beg to differ.

2. Actually, no suitable control tasks at all

Participants could have performed better because tDCS increased their alertness or arousal, or altered their attentional state in any one of many possible ways. But the authors included no control tasks to test whether these processes were affected, either positively or negatively. 

3. Claims based on mysterious pilot data

The authors describe in handwaving fashion a pilot study that compared different forms of brain stimulation, and then conclude that this pilot study confirms their previous findings. But the results of the pilot study are not shown and are not even statistically analysed. Did the reviewers of this paper even read it?

4. Dubious statistical analysis

Their use of one-tailed significance testing is highly questionable. A directional prediction alone is not sufficient justification for one-tailed tests. This is because one-tailed testing is only legitimate if the opposite result to that expected is either impossible or irrelevant. Neither is the case here, or arguably ever in cognitive neuroscience: had brain stimulation impaired rather than enhanced performance, the authors would naturally still have interpreted their data. The authors also appear confused about the nature of probabilities. For the key analysis, comparing the number of participants who solved the puzzle between the sham and experimental groups, they report a p value of .018, and then conclude bizarrely that the likelihood of observing the results by chance was less than one in 10,000. Pardon?
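
To put that figure in perspective, here is a quick illustrative calculation. This is not the authors' actual analysis (the paper's statistical details are unclear); it is simply a Fisher's exact test on two hypothetical groups of 11, with counts chosen only so that the one-tailed p value lands near the reported .018:

    # Illustrative only: hypothetical counts, not data taken from the paper.
    # Rows: tDCS group, sham group; columns: solved, did not solve.
    from scipy.stats import fisher_exact

    table = [[5, 6],    # hypothetical: 5 of 11 tDCS participants solve the puzzle
             [0, 11]]   # hypothetical: none of 11 sham participants solve it

    _, p_one_tailed = fisher_exact(table, alternative='greater')
    _, p_two_tailed = fisher_exact(table, alternative='two-sided')

    print(f"one-tailed p = {p_one_tailed:.3f}")  # ~0.018, i.e. roughly 1 in 57
    print(f"two-tailed p = {p_two_tailed:.3f}")  # ~0.035, i.e. roughly 1 in 28

Whatever test the authors actually ran, a p value of .018 corresponds to odds of roughly 1 in 57 under the null hypothesis (about 1 in 28 two-tailed), which is nowhere near one in 10,000.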

5. Bizarre inclusion of additional participants

The actual experiment was undertaken on 22 participants. Eleven of these participants were in the control group (they received ‘sham’ tDCS in which the current is switched off soon after being started), and the other eleven were in the tDCS group. But in the Results, the authors suddenly add another 11 participants to the sample without providing any detail of the experimental procedures.

6. I don't even know what to call this problem. Appeal to...prose?

Part of the results consists of an anecdote of what one participant had to say about his "cognitive style". This section is rather optimistically titled “Case Report”.

I am honestly struggling to find any reasonable explanation for how this paper could have survived even a modestly critical peer-review process. I could give this paper to any one of my PhD students and they would find more flaws than I've even listed here. But perhaps I shouldn’t be surprised, as Snyder's group have previously published a similarly uncontrolled study, combined with a healthy dose of exaggeration and media hype. I wonder if the Editor of Neuroscience Letters would care to enlighten us on what went wrong this time.

For my part, I would advise scientists, journalists and the public to treat this piece of research by Chi and Snyder with extreme caution.

The press office at the University of Sydney also has much to answer for. Many of us in the scientific community are striving to improve the quality of science news by ensuring that we avoid exaggeration and hyperbole in our dealings with the media. It helps nobody when press offices produce cynical puff pieces with stupid, sensationalistic headlines.

2 comments:

  1. Quoting point 2 above ("Actually, no suitable control tasks at all"):
    "Participants could have performed better because tDCS increased their alertness or arousal, or altered their attentional state in any one of many possible ways. But the authors included no control tasks to test whether these processes were affected, either positively or negatively."

    Wouldn't this be a good example of when the authors could have used conceptual replication to good effect? If it's really "top down knowledge" that's the key variable here, it would help to show it using a new task that relies on top down knowledge, but does not rely on other candidate variables, like attention.

    Off the top of my head, how about the Navon letters task (for an example, see this article: http://www.wired.com/wiredscience/2011/11/need-to-create-get-a-constraint/). It's a task where participants are presented with a large letter made up of small letters, and the question is whether they automatically respond to the large or small letter. If the tDCS makes people less likely to see the big letter, doesn't that provide additional evidence for the authors' claim? If it does not, doesn't that undermine their claim?
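
    (For readers who haven't seen Navon figures, here is a toy sketch of the stimulus structure; my own illustration, not anything from the paper. A large "H" is built out of small "S" letters, so that responses to the global and local letters can be pitted against each other.)

        # Toy illustration of a Navon-style stimulus: a big "H" drawn with small "S"s.
        def navon(small: str) -> str:
            big_h = ["X...X", "X...X", "XXXXX", "X...X", "X...X"]
            return "\n".join(
                "".join(small if c == "X" else " " for c in row) for row in big_h
            )

        print(navon("S"))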

  2. Thanks for this patient critique, Chris. Cathartic for others too - and we didn't have to do the work!

    Your criticisms apply just as much to other papers from this group too (see list at http://www.centreforthemind.com/publications/publications.cfm ). And as you ask: How did any of these papers survive peer review? What were the reviewers and editors at Neuroscience Letters, PLoS One, Brain Research etc thinking of?

    One thing that's different about this particular example of journalistic puffery of neuroscience research, though, is that this group would have been in favour of it, since they do a lot of such puffery themselves. See http://www.centreforthemind.com/whoweare/index.cfm which is hilarious if one is in the right mood. So my guess is that the people who published this research would have been perfectly happy with what the University of Sydney wrote.
