Wednesday, 17 February 2016

My commitment to open science is valuing your commitment to open science


tl;dr – to be shortlisted for interview, all future post-doctoral vacancies in my lab will require candidates to show a track record in open science practices. This applies to two posts I am currently advertising, and to all such positions henceforth. 

Twitter never ceases to amaze me. The other day I posted a fairly typical complaint about publication bias, which I expected to be ignored, but instead it all went a bit berserk. Many psychologists (and other scientists) are seriously pissed off about this problem, as well they should be.

My tweets were based on a manuscript we just had rejected from the Journal of Experimental Psychology: Applied because the results were convincingly negative in one experiment, and positive but judged to “lack novelty” in the other. Otherwise our manuscript was fine – we were complimented for tackling an important question, using a rigorous method, and including a thorough analysis.

But, of course, we all know that good theory and methodology are not enough to get published in many journals. In the game of academic publishing, robust methods are no substitute for great results.

The whole experience is both teeth-grindingly frustrating and tediously unremarkable, and it reminds us of three home truths:

1) That this can happen in 2016 shows how the reproducibility movement still exists in an echo chamber that has yet to penetrate the hermetically sealed brains of many journal editors.
2) To get published in the journals that psychologists read the most, you need positive and novel results.
3) This is why psychologists p-hack, HARK (hypothesise after the results are known) and selectively publish experiments that “work”.

So what, I hear you cry. We’ve heard it all before. We’ve all had papers rejected for stupid reasons. Get over it, get over yourself, and get back to cranking the handle.

Not just yet. First I want to make a simple point: this can’t be explained away as a “cultural problem”. Whenever someone says publication bias is a cultural problem, all they are really saying is, “it’s not my problem”. Apparently we are all sitting around the great Ouija board of Academia, fingers on the glass, and watching the glass make stupid decisions. But of course, nobody is responsible – the glass just moved by itself!

Publication bias isn’t a cultural problem, it is widespread malpractice by senior, privileged individuals, just as Ted Sterling defined it back in 1959. Rejecting a paper based on results is a conscious choice made by an editor who has a duty to be informed about the state of our field. It is a choice that damages science and scientists. It is a choice that punishes honesty, incentivizes dishonesty and hinders reproducibility.

I’m a journal editor myself. Were I to reject a paper because of the results of the authors’ hypothesis tests, I would not deserve to hold such a position. Rejecting papers based on results is deliberate bias, and deliberate bias – especially by those in privileged positions – is malpractice. 

How to change incentives 

Malpractice it may be, but publication bias is acceptable malpractice to many researchers, so how do we shift the incentives to eliminate it?

Here are just three initiatives I’m part of which are helping to incentivize open practices and eliminate bias: 

Registered Reports: many journals now offer an article format in which peer review happens before data collection and analysis. High-quality study protocols are then accepted before research outcomes are known, which eliminates publication bias and prevents many forms of research bias. To date, more than 20 journals have joined the Registered Reports programme, with the first ‘high-impact’ journal coming on board later this year. 

TOP guidelines: more than 500 journals and 50 organisations have agreed to review their adherence to a series of modular standards for transparency and reproducibility in published research. For background, see our TOP introductory article. 

PRO initiative: led by Richard Morey of Cardiff University (follow him), this grassroots campaign calls for peer reviewers to withhold comprehensive review of papers that either fail to archive study data and materials, or which fail to provide a public reason for not archiving. You can read our paper about the PRO initiative here at Royal Society Open Science. If you want to see open practices become the norm, then sign PRO.

Registered Reports, TOP and PRO are much needed, but they aren’t enough on their own because they only tackle the demand side, not the supply side. So I’m going to add another personal initiative, following in the (pioneering) footsteps of Felix Schönbrodt. 

Hiring practices 

If we’re serious about research transparency, we need to start rewarding transparent research practices at the point where jobs and grants are awarded. This means senior researchers need to step up and make a commitment.

Here is my commitment. From this day forward, all post-doctoral job vacancies in my research group, on grants where I am the principal investigator, will be offered only to candidates with a proven track record in open science – one which can be evidenced by having pre-registered a study protocol, or by having publicly archived data / materials / code at the point of manuscript publication.

This isn’t me blowing smoke in the hope that I’ll get some funding one day to try such a policy. I’m lucky enough to have funding right now, so I’m starting this today.

I am currently advertising two four-year, full-time post-doctoral positions on my European Research Council Consolidator grant. The adverts are here and here. Both job specifications include the following essential criterion: “Knowledge of, and experience applying, Open Science practices, including public data archiving and/or study pre-registration.” Putting this in the essential criteria means I won’t be shortlisting anyone who hasn’t done at least some open science.

Now, before we go any further, let’s deal with the straw man that certain critics are no doubt already building. This policy doesn’t mean that every paper published by an applicant has to be pre-registered, or that every data set has to have been archived. It means that the candidate must show at least one instance in which an open practice has been achieved.

I also realise many promising early-career scientists won’t have had the opportunity to adopt open practices, simply because they come from labs that follow the status quo. We all know labs like this; I used to work in a place surrounded by them (hell, I used to be one of them) – labs that chase glamour and status, or that just don't care about openness. It’s not your fault if you’re stuck in one of these labs. Therefore I’ve included a closing date of April 30 to give anyone interested the time to generate a track record in open science before applying. Maybe it's time to test your powers of persuasion in convincing your PI to do something good for science over and above furthering their own career.

If you’re a PI like me, I humbly invite you to join me in adopting the same hiring policy. By acting collectively, we can ensure that a commitment to open science is rewarded as it should be.

Thursday, 26 November 2015

It's nice to be nice but it's more important to be honest

Science is hard, and if you're on the receiving end of criticism it can be especially hard. As scientists we need to have thick skins because we deal with harsh criticism every day - we are bombarded with critical comments from reviewers (usually anonymously) when they tear down our latest grant applications or papers. We get critical questions at conferences. We argue with our friends, colleagues, and people we don't even know. We disagree a lot. We get frustrated. It's fair to say that disagreement and frustration are hallmarks of the job.

As a junior scientist this can take some getting used to. Most of the time the disagreement is good-natured, but occasionally it can creep into the personal.

This morning we saw an example of this when a prominent study just published in PNAS drew some flak on Twitter. Small N, no replication, big story. Personally I saw it as just another day at the office -- just another unremarkable exemplar of the low empirical standards we set for ourselves in cognitive neuroscience. I realise that sounds harsh but that's just how I feel about it. We need to set higher standards, and step one is being publicly honest about our reactions to published work.

Our field is peppered with small studies pumped out by petty fiefdoms, each vying for a coveted spot in high impact journals so we can have careers and get tenure and maybe make a few discoveries along the way. It would be disingenuous to say that I'm any different. I've got my own fiefdom, just like the rest. It's no less petty; I am no better than anyone else.

When I look at fMRI studies like the one this morning, I see how far we need to come as a field. Does that sound arrogant? I don't care. I wrote about this recently because reproducibility is a huge problem in biomedical science and something a lot of people (but not enough) are working hard to fix. It is a bigger problem than anyone's ego, bigger than anyone's career.

Some folks get upset at the direct nature of post-publication peer review. They might know the scientists involved; they might think they're careful; they might like them. And they might think such criticism is an attack on the integrity of the researchers -- that robust post-publication peer review, pointing out probable bias or low reproducibility, is tantamount to an accusation of misconduct. 

This is false because questionable practices aren't the same as fraud and bias isn't the same as misconduct. Much, if not most, research bias happens unconsciously. It can and does distort our results despite our best efforts because we're humans rather than robots. I believe many in our community are not only blind to unconscious bias, they're blind to the possibility of unconscious bias. They think that because they're careful, their studies are robust. But once you know the extent of your own bias it changes your mindset in a deep way. We learned this in my lab some time ago, which is why we now pre-register our studies.

Twitter is a great social leveller, allowing all kinds of voices to be heard. This is tremendous for science because it adds a layer of immediacy and diversity to peer review that busts conventions and blows traditional (stuffy) forms of interaction, and traditional hierarchies, right out the window. 

So while I agree with the sentiment that it's nice to be nice to each other, I believe it's even more important to be honest. If you wave what I see as bullshit in my face I will probably call it bullshit, and I expect you to do the same to me. In fact I expect you to do the same for me because by being honest you are doing me a favour.

Thursday, 30 July 2015

Be sure to have your say on the future of the Guardian Science blog network

It is one of the great privileges of my career to be a writer on the Guardian science blogs, where I contribute mainly to the psychology blog Head Quarters. It's nearly two years since we launched Head Quarters, and it has been great fun for all of us on the writing team (Thalia, Pete, Molly, and me) and, most importantly, I hope readers have enjoyed our posts.

The science blogs overall have been a great success for the Guardian, and we're now entering an interesting period where the structure of the blog network is being reviewed and some aspects may be revised. We now need your input as readers to ensure that any changes we make are the right ones, so if you read any of the science blogs, please have your say by completing our readers' survey.

Next year we will also be launching an exciting citizen science platform, fronted by a new section of the Guardian tentatively called "Guardian Experiments". The platform will provide a workspace for hosting large-scale online research studies, including (for instance) psychology experiments, polls, and citizen science initiatives. We'll provide regular updates on this initiative, and in the meantime you can read about one of the initial projects we plan to launch on the platform here.

Tuesday, 16 June 2015

The first rule of Tim Hunt is…


I see a lot of people at the moment saying we should stop talking about the Tim Hunt affair and focus on the Real Issues facing women in science. As though condescending arseholes at the top of the profession aren't one of those issues.

Even Brian Cox is doing it. Lucky I don’t idolise anyone or my illusions might just be shattered.

All of you saying we should move on, or that the response to Hunt was “disproportionate” (if I never hear that word again it will be too soon) need to take a good hard look.

Many of us will only “move on” from Tim Hunt when there is genuine recognition that Hunt’s remarks at the WCSJ were serious and damaging enough to warrant the sanctions that have been applied. Spare me the world’s smallest violin, but a white male professor FRS Nobel Laureate having an unremunerated honorary position taken away, together with a couple of positions of influence at the ERC and the Royal Society, does not an excommunication make. I don’t want to hear any more self-pitying bullshit about him being “hung out to dry” or “removed from society”.

I haven’t said much about the Tim Hunt affair. To be honest I’ve been busy listening to the reactions from others, particularly women in science. And as a privileged white male professor at a leading UK university I honestly don’t feel that my opinion counts for much. But I do have one, and for what it’s worth here it is:

1 – Hunt’s comments were unacceptable and stupid. He has yet to offer a full apology, which just shows how little recognition he has of sexism in science. Oh but he's old, right, so that's ok? Fuck that. My dad is the same age as Hunt, has one less Nobel prize, grew up in 1950s Australia (AKA Betty Crocker Central) and could teach him a thing or two about equality.

2 – the prompted resignations from UCL, the Royal Society, and the ERC were appropriate. Some have criticised them for being too quick. Bullshit. They were fast because the case was clear. They did the right thing and I applaud them.

3 – there has been no witch hunt, no lynch mob, no burnings or beheadings. Just people, including lots of women scientists, expressing their displeasure with Tim Hunt’s comments on social media. And often with great humour.

4 – I am deeply disappointed by some of the defences of Hunt emerging from various Establishment figures, publicly and in private. A lot of these defences are being expressed behind the scenes and consist of “He’s a nice guy; he has no media training and was lost at sea; I’ve never seen any evidence of him behaving in a sexist manner so everything is fine”. Many of these people are sending these messages in the hope that the recipients will use their influence to defend him on their behalf. Stop it. If you want to defend Tim Hunt, at least have the spine to do it yourself.

5 – To those calling for more evidence of wrongdoing before "condemning" Hunt, just stop. The comments are evidence enough that he is not fit to hold ambassadorial roles in science. Being a great scientist does not justify being a purveyor of 1950s sexism.

6 – Those telling us to move on or pay attention to something else would do well to examine the privilege of their own vantage point. Why exactly do you want to move this debate on so quickly? And here's some fun Bingo to play while you’re at it.

7 – We are all sexist. I know I am because I was raised in 1980s Melbourne surrounded by gender stereotypes and it is an ongoing battle combatting these in work and life. Avoiding benevolent sexism is particularly challenging. I will be working hard to teach my 9-month old son to fight these stereotypes as he grows up, rather than accept them as I did.

8 – Fuck off, Boris Johnson. You tedious populist fart. There, that was easy.

9 – Athene Donald has published a fantastic list of actions we can all take to further the cause of women in science. My only proviso is that she predicates it all on a very shaky defence of Hunt, who is clearly her friend. But the list is excellent and I’ve reproduced it below without the unnecessary "Hunt is a really nice guy" baggage:

  • Call out bad behaviour whenever and wherever you see it – in committees or in the street. Don’t leave women to be victimised;

  • Encourage women to dare, to take risks;

  • Act as a sponsor or mentor (if you are just setting out there will still always be people younger than you, including school children, for whom you can act);

  • Don’t let team members get away with demeaning behaviour, objectifying women or acting to exclude anyone;

  • Seek out and remove microinequities wherever you spot them;

  • Refuse to serve on single sex panels or at conferences without an appropriate level of female invited speakers;

  • Consider the imagery in your department and ensure it represents a diverse group of individuals;

  • Consider the daily working environment to see if anything inappropriate is lurking. If so, do something about it.

  • Demand/require mandatory unconscious bias training, in particular for appointment and promotion panels;

  • Call out teachers who tell girls they can’t/shouldn’t do maths, physics etc;

  • Don’t let the bold (male or female) monopolise the conversation in the classroom or the apparatus in the laboratory, at the expense of the timid (female or male);

  • Ask schools about their progression rates for girls into the traditionally male subjects at A level (or indeed, the traditionally female subjects for boys);

  • Nominate women for prizes, fellowships etc;

  • Tap women on the shoulder to encourage them to apply for opportunities they otherwise would be unaware of or feel they were not qualified for;

  • Move the dialogue on from part-time working equates to ‘isn’t serious’ to part-time working means balancing different demands;

  • Recognize the importance of family (and even love) for men and women;

  • Be prepared to be a visible role model;

  • Gather evidence, data and anecdote, to provide ammunition for management to change;

  • Listen and act if a woman starts hinting there are problems, don’t be dismissive because it makes you uncomfortable;

  • Think broadly when asked to make suggestions of names for any position or role.

 



Monday, 30 March 2015

Why I am resigning from the PLOS ONE editorial board

Today I tendered my resignation as an Academic Editor at PLOS ONE. 

It's a slightly sad day for me. As I explained to Damian Pattinson in an email, I remain as much a supporter of the PLOS ONE mission as when I joined the editorial board over two years ago. PLOS ONE has done more than any other journal to combat publication bias and to normalise open data practices. Sure, the PLOS ONE mechanism doesn't always work perfectly, but in terms of philosophy it is light years ahead of most other journals in the social and life sciences.

The reason I'm leaving PLOS ONE isn't because they did anything wrong (although it must be said that the volume of editorial requests is unfeasibly high). Instead, the Registered Reports initiative is really starting to gain traction and I increasingly find myself helping other journals launch the initiative, or even serving on editorial boards that offer the format. So I have decided to focus my efforts on editing for journals that offer, or plan to offer, Registered Reports. For now, at least, PLOS ONE isn't willing or able to do so. Meanwhile, the list of adopting journals continues to grow; the latest exciting addition is Royal Society Open Science, which will be launching Registered Reports across all sciences later this year.

In the interests of transparency I should say that, for the same reason that I am leaving PLOS ONE, I also declined last week to join the editorial board of Nature Scientific Reports. Upon being invited to join their editorial board, I responded that I would be happy to do so if they would consider offering Registered Reports, and that I would be delighted to help them set up the format. I had hoped their response might be positive given the stated mission of the journal to avoid setting "a threshold of perceived importance to the papers that it publishes; rather, it publishes all papers that are judged to be technically valid." Unfortunately they responded: "We have considered venturing into the world of registered reports, but it isn’t something we’re able to get involved with right now." 

Fair enough, but then I'm afraid I can't (in good conscience) join your editorial board. A growing number of journals claim to celebrate scientific validity and transparency above the standard values (like "novelty" and "impact" of findings) -- in fact, the banner of transparency could almost be said to be in vogue right now -- but I find that the real litmus test is whether such journals are willing to accept papers before the results are known. If not then some small part of them still wants to selectively publish "good results". There is no room for fine print on the transparency banner.

What I have recently done is join the editorial board of Collabra, an interesting new open access journal being launched by the University of California Press. Collabra have agreed to offer Registered Reports and we will be updating the Open Science Framework information hub for Registered Reports as soon as there is further news. 

So - my thanks and a fond farewell to PLOS ONE. And my message to any other journals: if you want Chris Chambers on your editorial board (not that anyone really should of course!) then you need to either offer Registered Reports or plan to do so in the future. 

Trust me, you won't regret it.*

___________


* Well, you won't regret offering Registered Reports. I, on the other hand, am an entirely different matter...

Monday, 19 May 2014

Comments on study pre-registration and Registered Reports


** You can download our 25-point Q&A about Registered Reports here **

As part of today's Guardian post on study pre-registration in psychology, I sought feedback on three questions from a number of colleagues. Due to space constraints I couldn’t do their insights justice, so I’ve reproduced their complete answers below. 

At the bottom of the post I've included a full list of journals offering Registered Reports and related initiatives. Enjoy! 

Question 1: What would you say to critics who argue that pre-registration puts "science in chains"? Are their concerns justified? 

Professor Dorothy Bishop, University of Oxford 

I think there's a widespread misunderstanding of pre-registration. Its main function is to distinguish hypothesis-testing analyses from exploratory analyses. It should not stop exploratory research, but should make it clear what is exploratory and what is not. Most of the statistical methods that we use make basic assumptions that are valid only in a hypothesis-testing context. If we explore a multidimensional dataset, decide on that basis what is interesting, and then apply statistical analysis, we run a high risk of obtaining spurious 'significant' findings. Currently science is not so much in chains as bogged down in a mire of non-replicable findings, and we need to find ways to deal with this. I increasingly find myself reading papers and wondering just what I can believe - particularly in areas of neuroscience where there are huge multidimensional datasets and multiple researcher degrees of freedom in choosing how to analyse findings. I would not insist that pre-registration is mandatory, but I think it's great to have that option and I hope that as the new generation of scientists learn more about it, they will come to embrace it as a way of clarifying scientific findings and achieving better replicability of research. 

Professor Tom Johnstone, University of Reading 

I think the concern that scientists have of being "put in chains" is understandable. We've all probably had the frustrating experience of confronting a reviewer or editor who believes there's one way, and one way only, to collect data or perform analysis, for example. Creativity and adaptive thinking and problem solving are very much a part of science, and mustn't be stifled. 

Yet the solution is to make sure that the move towards pre-registration is accompanied by an expansion of the ways in which researchers can openly report innovative exploratory research, and the iterative development of new methods. As you've pointed out, if we didn't try to shoehorn all of our research into the hypothesis-testing model, then we'd relieve a lot of the pressure for people to engage in post hoc hypothesis creation. 

Dr Daniël Lakens, Eindhoven University of Technology

Science is like a sonnet. There is a structure within which scientists work, but that does not have to limit our creativity. As Goethe remarked: ‘In der Beschränkung zeigt sich erst der Meister’ - Mastery is seen most clearly when constrained. 

Dr Brendan Nyhan, Dartmouth College 

I think the idea that pre-registration will put “science in chains” is attacking a straw man. No one is proposing that it should be the only way to conduct research. There will still be every opportunity to pursue unanticipated findings. The widespread availability of pre-registered journal articles will more clearly distinguish between true hypothesis-testing and exploratory research. For instance, a researcher might observe an unanticipated result and then pre-register a replication study to test the effect more systematically.

Professor Dan Simons, University of Illinois 

Frankly, this criticism is nonsense. Pre-registration just eliminates the ability to fool yourself into thinking some post-hoc decision was actually an a priori one. Specifying a plan in advance just means that you actually did plan your "planned" analyses. As psychologists, we should know how easily we can convince ourselves that the analysis that worked was the logical one to do, after the one we first thought to try didn't work. If your theory makes a prediction, you should be able to specify it in advance and you should be able to specify what outcomes would support it. Yes, it takes more work up front to pre-register a plan. But, if you truly are conducting planned analyses, all you are doing is shifting when you do that work, not what you're doing.  

Nothing about pre-registration prevents a researcher from conducting additional exploratory analyses that were not part of the registered plan. Pre-registration just makes clear which analyses were planned and which ones were exploratory. How does that constrain science in any way? 

Question 2: Do you think pre-registration will influence the future of publishing in psychology, neuroscience and beyond?  

Professor Tom Johnstone, University of Reading 

I do think that the move towards registered studies will be of benefit to science, not only because it will encourage better research practice, but also because it will lessen the file-drawer problem by ensuring that "null" results are published. It will also hopefully catalyse a shift towards more informative statistics than standard NHST. That's not to say there won't be problems; undoubtedly there will be (concerns about research timelines especially for junior researchers need to be tackled head-on, for example). 

Dr Daniël Lakens, Eindhoven University of Technology

It will complement the way we work in important ways. Especially in ‘hot’ research areas, which are at a higher risk of increased Type 1 errors (Ioannidis, 2005), pre-registration will greatly facilitate our understanding of how likely it is things are true. 

Dr Brendan Nyhan, Dartmouth College 

Pre-registration could transform the future of publishing if funders, government agencies, reviewers, editors, and tenure and promotion committees demand it. The movement will only succeed if it changes expectations about research credibility among a wider group of scholars and stakeholders than its most devoted advocates. It should also take further steps to broaden its appeal to researchers - most notably, by encouraging journals to adopt formats like Registered Reports that reduce risk to scholars concerned about their ability to publish pre-registered null results given the publication biases in scientific journals. 

Professor Dan Simons, University of Illinois 

Pre-registration effectively eliminates hypothesizing after the results are known. It keeps us from convincing ourselves that an exploratory analysis was a planned one. It is perhaps the best way to keep yourself from inadvertent p-hacking and to convince others that your hypotheses predicted rather than followed from your results. Ideally, more journals will begin reviewing the registered plans as the basis for publication decisions. Doing so would effectively eliminate the file drawer problem. If a study is well designed, its results should be published.  

Question 3: Why do you think psychology and neuroscience are spearheading these initiatives, rather than other sciences? 

Professor Dorothy Bishop, University of Oxford 

I think there are two reasons. First, most psychologists (though not neuroscientists in general) get a good grounding in statistics at undergraduate level, so they have been quicker to appreciate the problems that are inherent in 'false positive psychology'. Second, psychologists study how people think and are aware of how easy it is to deceive yourself at all kinds of levels: after all, one of the first things that many students learn about is the Müller-Lyer visual illusion, where you are convinced that two lines are different lengths when in fact they are the same. That should make us more vigilant about always questioning whether our findings are correct; we are taught to look for counter-evidence rather than just confirming our pre-conceptions. 

Professor Tom Johnstone, University of Reading 

As to why this is being led by psych/neuro, hard to say. Probably a case of the right combination of factors coinciding (e.g. recent high-profile spotlight on QRPs and fraud in social psychology, links to medical research and associated ethics, in which registration has recently been enforced, a few people willing to actively push this forward), plus peculiarities of psych research compared to some other disciplines (for example, speaking with my physics training hat on, the almost complete reliance on NHST in psychology and neuroscience, rather than accurate quantitative description of effects, and the almost total lack of replication). There is, I think, a research culture difference here. That will be difficult to change, but one has to start somewhere. 

Dr Daniël Lakens, Eindhoven University of Technology

According to Parker (1989), ‘psychology is in a continuous crisis’. Psychology has a tradition of self-criticism. It is sometimes remarked that psychology’s greatest contribution is methodology (e.g., Scarr, 1997), so it is not surprising we are at the forefront of methodological improvements in the current debate about ways to improve our science.

Dr Brian Nosek, University of Virginia

The reproducibility challenges facing science are strongly influenced by the incentives and social context that shape scientists' behavior.  Understanding and altering incentives, motivations, and social context are psychological challenges.  Psychologists are ahead because they are just applying their domain expertise on themselves. 

Links to Registered Reports initiatives and related formats 

Journal: AIMS Neuroscience 
Detailed guidelines: http://www.aimspress.com/reviewers.pdf (Nb. The AIMS website is currently down but I am told it will be back up soon).
Editorial: http://orca.cf.ac.uk/59475/1/AN2.pdf 

Journal: Attention, Perception and Psychophysics
Detailed guidelines: http://link.springer.com/content/pdf/10.3758%2Fs13414-013-0502-5.pdf 

Journal: Journal of Experimental Psychology: General 
Announcement inviting registered replications: http://www.apa.org/pubs/journals/xge/ 

Journal: Perspectives on Psychological Science 
Guidelines: http://www.psychologicalscience.org/index.php/replication 
Guidelines: To come...

Friday, 31 January 2014

Research Briefing: Does TMS-induced ‘blindsight’ rely on ancient reptilian pathways?


Source Article: Allen C.P.G., Sumner P., & Chambers C.D. (2014). Timing and neuroanatomy of conscious vision as revealed by TMS-induced blindsight. Journal of Cognitive Neuroscience, in press.  [pdf] [study data]  

-----------

One of the things I find most fascinating about cognitive neuroscience is the way it is shaping our understanding of unconscious sensory processing: brain activity and behaviour caused by imperceptible stimuli. Lurking below the surface of awareness is an army of highly organised activity that influences our thoughts and actions.

Unconscious systems are, by definition, invisible to our own introspection, but that doesn’t make them invisible to science. One simple way to unmask them is to gradually weaken an image on a computer screen until a person reports seeing nothing. Then, when the stimulus is imperceptible, you ask the person to guess what type of stimulus it is, for instance, whether it is “<” or “>”. What you find is that people are remarkably good at telling the difference. They’ll insist they see nothing, yet they discriminate the invisible stimuli at rates well above chance – often at 70-80% correct. It’s really quite head-scratching.
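
As a rough illustration of what “better than chance” means here, the sketch below runs a simple one-sided binomial test on hypothetical guessing data; the trial counts and accuracy are made-up numbers chosen to fall in the 70-80% range described above, not data from any particular study.

```python
# Minimal sketch (hypothetical numbers): is "blind" guessing of a two-alternative
# stimulus ("<" vs ">") better than the 50% expected by chance?
from scipy.stats import binomtest

n_trials = 200    # hypothetical number of trials where the person reported seeing nothing
n_correct = 150   # hypothetical 75% correct, in the 70-80% range described above

result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
print(f"Accuracy = {n_correct / n_trials:.0%}, one-sided p = {result.pvalue:.2g}")
```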

Back in the 1970s, a psychologist named Larry Weiskrantz found that this contrast between conscious and unconscious processing was thrown into sharp relief following damage to a part of the brain called the primary visual cortex (V1). Weiskrantz (and later others) found that patients with damage to V1 would report being blind to one part of their visual field, yet, when push came to shove, they could discriminate stimuli above chance or even navigate successfully around invisible objects in a room. He coined this intriguing phenomenon “blindsight”.

Since then, blindsight has drawn the attention of psychologists, neurologists and philosophers. One of the major debates in the literature has centred on the neurophysiology of the phenomenon: how, exactly, is this unconscious vision achieved? Blindsight proved that information was somehow influencing behaviour without being processed by V1.

Two schools of thought took shape. One argued that, during blindsight, unconscious information reached higher brain systems by activating spared islands of cortex near the damaged V1. An opposing school argued that the information was taking a different road altogether: an ancient reptilian route known as the retinotectal pathway, which bypasses visual cortex to reach frontal and parietal regions.

In our latest study, published in the Journal of Cognitive Neuroscience, we sought to pit these accounts against each other by generating blindsight in healthy people with transcranial magnetic stimulation (TMS). The study was originally conceived by Chris Allen, then a PhD student in my lab and now a post-doctoral researcher. We hadn’t used TMS like this before but we knew from the work of Tony Ro’s lab that it could be done with a particularly powerful type of TMS coil.

Knocking out conscious awareness with TMS was one thing – and apparently doable – but how could we tell which brain pathways were responsible for whatever visual ability was left over? Fortunately I’d recently moved to Cardiff University where Petroc Sumner is based. Some years earlier, Petroc had developed a clever technique to isolate the role of different visual pathways by manipulating colour. When presented under specific conditions, these coloured stimuli activated a type of cell on the retina that has no colour-opponent projections to the superior colliculus. These stimuli, known as “s-cone stimuli”, were invisible to the retinotectal pathway (1). We teamed up with Petroc, and Chris set about learning how to generate these stimuli.

Now that we had a technique for dissociating conscious and unconscious vision (TMS), and a type of stimulus that bypassed the retinotectal pathway, we could bring them together to contrast the competing theories of blindsight. Our logic was this: if the retinotectal pathway is a source of unconscious vision then blindsight should not be possible for s-cone stimuli because, for these stimuli, the retinotectal pathway isn’t available. On the other hand, if blindsight arises via cortical routes at (or near) V1 then blocking the retinotectal route should be inconsequential: we should find the same level of blindsight for s-cone stimuli as for normal stimuli (2).

There were other aspects to the study too (including an examination of the timecourse of TMS interference), but our main result is summarised in the figure below. When we delivered TMS to visual cortex about a tenth of a second after the onset of a normal stimulus, we found textbook blindsight: TMS reduced awareness of the stimuli while leaving unaffected the ability to discriminate them on ‘unaware’ trials. 

Crucially, we found the same thing for s-cone stimuli: blindsight occurred even for these specially coloured stimuli that bypass the retinotectal route. Since blindsight occurred for stimuli that weren’t processed by the retinotectal pathway, our results allow us to reject the retinotectal hypothesis in favour of the cortical hypothesis. This suggests that blindsight in our study arose from unperturbed cortical systems rather than the reptilian route.

Our key results. The upper plot shows conscious detection performance when TMS was applied to visual cortex at 90-130 milliseconds after a stimulus appeared. Compared to "sham" (the control TMS condition), active TMS reduced conscious detection for both the normal stimuli and the S-cone stimuli that bypass the retinotectal pathway. The lower plot shows the corresponding results for discrimination of unaware stimuli; that is, how accurately people could distinguish "<" from ">" when also reporting that they didn't see anything. For both normal and S-cone stimuli, this unconscious ability was unaffected by TMS. And because this TMS-induced blindsight was found for stimuli that bypass the retinotectal route, we can conclude that the retinotectal pathway isn't crucial for the blindsight found here.

While the results are quite clear, there are nevertheless several caveats to this work. There is evidence from other sources that the retinotectal pathway can be important and our results don’t explain all of the discrepancies in the literature. What we do show is that blindsight can arise in the absence of afferent retinotectal processing, which disconfirms a strong version of the retinotectal hypothesis.

Also, we don’t know whether the results will translate to blindsight in patients following permanent injury. TMS is a far cry from a brain lesion – unlike brain damage, it is transient, safe and reversible, which of course makes it highly attractive for this kind of research but also distances it from work in clinical patients. Furthermore, even though we can rule out a role of the retinotectal pathway in producing blindsight as shown here, we don’t know which cortical pathways did produce the effect. 

Finally, our paper reports a single experiment that has yet to be replicated – so appropriate caution is warranted as always.

Still, I’m rather proud of this study. I take little of the intellectual credit, which belongs chiefly to Chris Allen. Chris brought together the ideas and tackled the technical challenges with a degree of thoroughness and dedication that he’s become well known for in Cardiff. This paper – his first as primary author – is a nice way to kick off a career in cognitive neuroscience.


1. By “afferent” I mean the initial “feedforward” flow of information from the retina. It’s entirely possible (and likely) that s-cone stimuli activate retinotectal structures such as the superior colliculus after being processed by the visual cortex and then feeding down into the midbrain. What’s important here is that s-cone stimuli are invisible to the retinotectal pathway in that initial forward sweep. 

2. Stats nerds will note that we are attempting to prove a version of the null hypothesis. To enable us to show strong evidence for the null hypothesis, we used Bayesian statistical techniques developed by Zoltan Dienes that assess the relative likelihood of H0 and H1.
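
For readers curious what such a calculation looks like in practice, below is a minimal sketch of a Dienes-style Bayes factor for a single mean difference, assuming a half-normal prior on the true effect under H1; the function name, prior scale and example numbers are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of a Dienes-style Bayes factor (BF10) for one mean difference,
# comparing H1 (half-normal prior on the true effect) against H0 (true effect = 0).
# All numbers below are illustrative assumptions, not values from the paper.
from scipy.stats import norm
from scipy.integrate import quad

def dienes_bf10(mean_diff, se, prior_sd):
    """Bayes factor in favour of H1 for an observed mean difference with
    standard error `se`, using a half-normal(0, prior_sd) prior under H1."""
    # Likelihood of the observed difference if the true effect is exactly zero (H0)
    like_h0 = norm.pdf(mean_diff, loc=0.0, scale=se)

    # Marginal likelihood under H1: average the likelihood over the prior on the effect
    def integrand(theta):
        prior = 2.0 * norm.pdf(theta, loc=0.0, scale=prior_sd)  # half-normal density, theta >= 0
        return norm.pdf(mean_diff, loc=theta, scale=se) * prior

    like_h1, _ = quad(integrand, 0.0, 10.0 * prior_sd)
    return like_h1 / like_h0

# Example: a near-zero TMS effect on unaware discrimination (hypothetical numbers)
bf10 = dienes_bf10(mean_diff=0.5, se=2.0, prior_sd=5.0)
print(f"BF10 = {bf10:.2f}  (values below ~1/3 are conventionally read as support for H0)")
```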