Wednesday, 18 July 2012

The Dirty Dozen: A wish list for psychology and cognitive neuroscience


It’s been quite a month in science. 

On the bright side, we probably discovered the Higgs boson (or at least something that smells pretty Higgsy), and in the last few days the UK Government and EU Commission have made a strong commitment to supporting open-access publishing. In two years, so they say, all published science in Britain will be freely available to the public rather than being trapped behind corporate paywalls. This is a tremendous move and I applaud David Willetts for his political courage and long-term vision.

On the not-so-bright side, we’ve seen a flurry of academic fraud cases. Barely a day seems to pass without yet another researcher caught spinning yarns that, on reflection, did sound pretty far-fetched in the first place. What’s that? Riding up rather than down an escalator makes you more charitable? Dirty bus stops make you more racist? Academic fraudsters are more likely to have ground-floor offices? Ok, I made that last one up (or rather, Neuroskeptic did) but if such findings sound like bullshit to you, well funnily enough they actually are. Who says science isn’t self-correcting?

We owe a great debt to Uri Simonsohn, the one-man internal affairs bureau, for judiciously uncovering at least three cases of fraudulent practice in psychological research. So far his investigations have led to two resignations and counting. Bravo. This is a thankless task that will win him few friends, and for that alone I admire him.

And as if to remind us that fraud is by no means unique to psychology, enter the towering Godzilla of mega-fraud – Japanese anaesthesiologist, Yoshitaka Fujii, who has achieved notoriety by becoming the most fraudulently productive scientist ever known.

(As an aside, has anyone ever noticed how the big frauds in science always seem to be perpetrated by men? Are women more honest or do they just make savvier fraudsters?)

Along with all the talk of fraud in psychology, we have had to tolerate the usual line-up of  ‘psychology isn’t science’ rants from those who ought to learn something before setting hoof to keyboard. Fortunately we have Dave Nussbaum to sort these guys out, which he does with a steady hand and a sharp blade. Thank you, Dave!

With psychological science facing challenges and shake-ups on so many different fronts, the time seems ripe for some self-reflection. I used to believe we had a firm grasp on methodology and best practice. Lately I’ve come to think otherwise.

So here’s a dirty dozen of suggested fixes for psychology and cognitive neuroscience research that I’ve been mulling over for some time. I want to stress that I deserve no credit for these ideas, which have all been proposed by others.

1.     Mandatory inclusion of raw data with manuscript submissions

No ifs. No buts. No hiding behind the lack of ethics approval, which can be readily obtained, or the vagaries of the Data Protection Act. Everyone knows data can be anonymised.

2.     Random data inspections

We should conduct fraud checks on a random fraction of submitted data, perhaps using the methodology developed by Uri Simonsohn (once it is peer reviewed and judged statistically sound – as I write this, the technique hasn’t yet been published). Any objective test for fraud must have a very low false discovery rate because the very worst thing would be for an innocent scientist to be wrongly indicted. Fraudsters tend to repeat their behaviour, so the likelihood of false positives in multiple independent data sets from the same researcher should (hopefully) be infinitesimally small.
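That "infinitesimally small" intuition is just the multiplication rule for independent tests. As a toy sketch (the numbers here are purely illustrative and have nothing to do with Simonsohn's unpublished method): if an honest data set is wrongly flagged with probability alpha, then k independent data sets from the same researcher are all wrongly flagged with probability alpha to the power k.

```python
# Toy illustration of the multiplication rule for independent fraud checks.
# alpha and k are illustrative only -- not from any actual fraud-detection method.
def repeated_false_positive_rate(alpha: float, k: int) -> float:
    """Probability that k independent honest data sets are ALL wrongly flagged."""
    return alpha ** k

print(repeated_false_positive_rate(0.01, 1))  # -> 0.01
print(repeated_false_positive_rate(0.01, 3))  # ~1e-06: vanishingly small
```

So even a test with a non-trivial per-data-set error rate becomes very unlikely to indict an innocent researcher across several independent data sets, provided the checks really are independent.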

3.     Registration of research methodology prior to publication

Some time ago, Neuroskeptic proposed that all publishable research should be pre-registered prior to being conducted. That way, we would at least know from the absence of published studies how big the file-drawer is. My first thoughts on reading this were: why wouldn’t researchers just game the system, “pre” registering their research after the experiments are conducted? And what about off-the-cuff experiments conjured up over a beer in the pub?

As Neuroskeptic points out, the first problem could be solved by introducing a minimum 6-month delay between pre-registration and data submission. Also, all prospective co-authors of a pre-registration submission would need to co-sign a letter stating that the research has not yet been conducted.

The second problem is more complicated, but also tractable. My favourite solution is one posed by Jon Brock. Empirical publications could be divided into two categories, Experiments and Observations. Experiments would be the gold standard of hypothesis-driven research. They would be pre-registered, with methods (including sample size) and proposed analyses pre-reviewed and unchangeable without further re-review. Observations would be publishable but carry lower weight. They could be submitted without pre-registration, and to protect against false positives, each experiment from which a conclusion is drawn would be required to include a direct internal replication.

4.     Greater emphasis on replication

It’s a tired cliché, but if we built aircraft the way we do psychological research, every new plane would start life exciting and interesting before ending in an equally exciting fireball. Replication in psychology is dismally undervalued, and I can’t figure out why, when everyone, even journal editors, admits how crucial it is. It’s as though we’re trapped in some kind of groupthink and can’t get out. One solution, proposed by Nosek, Spies and Motyl, is the development of a metric called the Replication Value (RV). The RV would tell us which effects are most worth replicating. To quote directly from their paper, which I highly recommend:

Metrics to identify what is worth replicating. Even if valuation of replication increased, it is not feasible – or advisable – to replicate everything. The resources required would undermine innovation. A solution to this is to develop metrics for identifying Replication Value (RV) – what effects are more worthwhile to replicate than others? The Open Science Collaboration (2012b) is developing an RV metric based on the citation impact of a finding and the precision of the existing evidence of the effect. It is more important to replicate findings with a high RV because they are becoming highly influential and yet their truth value is still not precisely determined. Other metrics might be developed as well. Such metrics could provide guidance to researchers for research priorities, to reviewers for gauging the “importance” of the replication attempt, and to editors who could, for example, establish an RV threshold that their journal would consider as sufficiently important to publish in its pages.

I think this is a great idea. As part of the manuscript reviewing process, reviewers could assign an RV to specific experiments. Then, on a rolling basis, the accepted studies that are assigned the highest weightings would be collated and announced. Journals could have special issues focusing on replication of leading findings, with specific labs invited to perform direct replications and the results published regardless of the outcome. This method could also bring in adversarial collaborations, in which labs with opposing agendas work together in an attempt to reproduce each other’s results.

5.     Standardise acceptable analysis practices

Neuroimaging analyses have too many moving parts, and it is easy to delude ourselves that the approach which ends up ‘working’ (after countless reanalyses) is the one we originally intended. Psychological analyses have fewer degrees of freedom but this is still a major problem. We need to formulate a consensus view on gold standard practices for excluding outliers, testing and reporting covariates, and inferential approaches in different situations. Where multiple legitimate options exist, supplementary information should include analyses of them all, and raw data should be available to readers (see point 1).

6.     Institute standard practices for data peeking

Data peeking isn't necessarily bad, but if we do it then we need to correct for it. Uncorrected peeking runs riot in psychology and neuroimaging because the pressure to publish and the dependence of publication on significant results have made chasing p-values the norm. We can see it in other areas of science too. Take the Higgs. Following initial hints at 3-sigma last year, the physicists kept adding data until they reached 5-sigma. The fact that their alpha is so stringent in the first place provides reassurance that they have genuinely discovered something. But if they peeked and chased then it simply isn’t the 5-sigma discovery that was advertised. (As a side note: how about we ditch Fisher-based stats altogether and go Bayesian? That way we can actually test that pesky null hypothesis.)
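To see just how badly uncorrected peeking inflates Type I error, here is a minimal simulation. All the parameters (starting sample of 10, peeking every 5 participants up to 50, a normal approximation in place of a proper t-test) are my own illustrative choices, not from the post or any cited study: we generate pure null data, test after every batch, and stop as soon as p dips below .05.

```python
# Simulate uncorrected "data peeking": test null data repeatedly while
# adding participants, stopping the moment p < .05.
# Illustrative parameters only -- not taken from any study discussed above.
import math
import random

def approx_p(sample):
    """Two-sided one-sample test of mean 0, using a normal approximation
    to the t distribution (adequate for this sketch)."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    t = mean / math.sqrt(var / n)
    return math.erfc(abs(t) / math.sqrt(2))

def peeking_experiment(start_n=10, max_n=50, step=5, alpha=0.05, rng=random):
    """Return True if null data ever reaches 'significance' under peeking."""
    sample = [rng.gauss(0, 1) for _ in range(start_n)]
    while True:
        if approx_p(sample) < alpha:
            return True   # false positive: the null is true by construction
        if len(sample) >= max_n:
            return False
        sample += [rng.gauss(0, 1) for _ in range(step)]

random.seed(1)
runs = 2000
hits = sum(peeking_experiment() for _ in range(runs))
print(f"False-positive rate with peeking: {hits / runs:.3f}")  # well above .05
```

With nine looks at the data, the realised false-positive rate ends up far above the nominal 5%, which is exactly why a peeked-and-chased "5-sigma" is not the 5-sigma it claims to be.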

7.     Officially recognise quality of publications over quantity

Everyone agrees that quality of publications is paramount, but we still chase quantity and value ‘prolific’ researchers. So how about setting a cap on the number of publications each researcher or lab can publish per year? That way we would truly have an incentive to make sure of results before publishing them. It would also encourage us to publish single papers with multiple experiments and more definitive conclusions.

8.     Ditch impact factor and let us never speak of it again

As scientists who purportedly know something about numbers, we should be collectively ashamed of ourselves for being conned by journal impact factors (IF). Nowhere is the ludicrous doublethink of the IF culture more apparent than in the current REF, where the advice from universities amounts to “IF of journals is not taken into account in assessing quality of your REF submissions” while simultaneously advising us to “ensure that your four submissions are from the highest impact journals”. Complete with helpful departmental emails reminding us which journals are going up in IF (which is all of them as far as I can tell), the situation really is quite stupid and embarrassing. Here’s a fact shown by Bjorn Brembs: IF correlates better with retraction rate than citation rate. We should replace IF with article-specific merits such as post-publication ratings, article citation count, or – shock horror – considered assessment of the article after reading the damn thing.

9. Open access publication

Much has been said and written in the last few days about open access, with the Government making important steps toward an open scientific future in the UK (I recommend following the blogs of Stephen Curry and Mike Taylor for the latest developments and analysis).  For my part, I think the sooner we eliminate corporate publishers the better. I simply don’t see what value they add when all of the reviewing and editing is done by us at zero cost.

10. Stop conflating research inputs with research outputs

Getting a research grant is great, but we need to stop counting grants as outputs. They are inputs. We need to start assessing the quality of science by balancing outputs against inputs, not by adding them together.

11. Rethink authorship

Academic authorship is antiquated and not designed for collaborative teams. By rank-ordering authors from first to last, we make it impossible for multiple co-authors to make a genuinely equal contribution (Ah, I hear you cry, what about that little asterisk that flags equal contributions? Well, sorry, but…um…nobody really takes much notice of those).

I think a better approach would be to list authors alphabetically on all papers and simply assign % contributions to different areas, such as experimental design, analysis, data collection, interpretation of results, and manuscript preparation. Some journals already do this in some form, but I would like to see this completely replace the current form of authorship.

12. Revise the peer review system

Independent peer review may be the best mechanism we currently have for triaging science, but it still sucks. For one thing, it’s usually not independent. I often get asked to review papers by scientists I know or have even worked with. I’ve even been asked to review my own papers on occasion, and was once asked to review my own grant application! (You’ll be glad to know I declined all such instances of self-review.) The review process is random and noisy, and based on such a pitifully small sample of comments that the notion of it providing meaningful information is, statistically speaking, quite ridiculous.

I personally favour the idea of cutting down on the number of detailed reviewers per manuscript and instead calling on a larger number of ‘speed reviewers’, who would simply rate the paper according to various criteria, without having to write any comments. As a reviewer, I often find that I can form an opinion of an article relatively quickly – it is writing the review that takes the most time.

Last week, Paul Knoepfler wrote a provocative blog post proposing an innovation in peer review in which authors review the reviewers. Could this help improve quality of reviews? Unfortunately, I don’t think Paul’s system would work (see my comment on his post here), but perhaps some kind of independent meta-review of reviewers could also be a good idea in a limited number of cases. 
__

What do you think? Got better ideas? Please leave any comments below. 

** Update 18/7/12, 14:30: On the issue of the gender imbalance in academic fraud, Mark Baxter has kindly reminded me of this case involving Karen M. Ruggiero. 

Thursday, 14 June 2012

Research Briefing: Can boosting motor inhibition help us resist temptation?


A lot of enjoyable things in life are risky and potentially addictive. So how do we control our impulses? And why do some people find it harder to say ‘no’ than others?

In a recent study we asked whether a key to self-control could lie in an unexpected place: a corner of our cognitive system that controls motor actions. We found that when people did a simple task that required starting and stopping finger movements, they also took less risk when gambling. The effect lasted at least two hours after training in so-called ‘motor inhibition’.

Why should the act of inhibiting simple movements lead to more cautious gambling behaviour? We don't yet know, but our working hypothesis is that it boosts or primes an inhibition system in the brain that regulates a range of functions - including complex decision-making. By strengthening motor inhibition through the mental equivalent of a ‘gym workout’ we may be able to open new avenues for treating problem gambling and other addictions.

-----------------------------------------------------------------------------------------------------------
Source article: Verbruggen, F., Adams, R., & Chambers, C.D. (2012). Proactive motor control reduces monetary risk taking in gambling. Psychological Science, 23, 805-815. [pdf] [press release]
-----------------------------------------------------------------------------------------------------------

Imagine the following scenario. You are driving to meet your financial adviser for a meeting about your investments. Along the way you encounter a series of obstacles that cause you to drive with extra caution: roadworks, speed cameras, and intermittent bursts of rain. When you eventually arrive and sit down with your adviser, she asks how you would like to spread your reserves between a number of low- and high-risk options. Choosing isn’t easy – the higher risk investments could pay for that much-needed vacation in the Maldives, but the market is unpredictable and you could lose out. You make your choices.

Clearly this decision is complex and based on many different sources of information. But ask yourself: would your decision have been the same if the journey to the meeting had been free of obstacles? Intuitively, you’re probably thinking “Huh? I would select my investments rationally, why should the drive there make any difference?” And most people would agree with you – society reinforces the notion that being able to make decisions rationally and without bias is part of what ‘makes us human’.

There’s just one problem with this argument: it doesn’t quite fit the evidence. Previous research tells us that multitasking impairs cognition, and we also know that priming people in various ways can bias social attitudes and financial decisions that we would intuitively ascribe to our free will. A recent study, for example, found that priming people with the mere image of a thinking man reduced their religious beliefs. At the same time, taxing self-control can cause what social psychologists call ego depletion, reducing our ability to resist temptation.

So, are you still sure that your cautious driving would have no effect on your investment decisions?


Spreading caution around


If our ability to make rational decisions can be influenced by cognitive interference, then you might assume that such effects should impair decision-making. Some evidence does indeed suggest that taxing executive control can make it harder for people to inhibit impulsive choices, although not all studies agree.

But what if we could specifically tailor a kind of multitasking that would improve your decision-making? In other words, what if the interference somehow biased you to take less risk, like the example above with cautious driving? To test this idea, we designed a laboratory task that brings together two different forms of decision-making: monetary gambling and basic stopping of a motor response.

Here’s how it worked. On each trial of the task, people were presented with six options below a series of yellow bars. Each option was a number of points that could be won, which – depending on the condition – ranged from 2 to 448. Higher amounts were intuitively more attractive but, crucially, also had a lower chance of winning. And if you lost the gamble then you forfeited half the amount wagered. So, for instance, if you picked ‘112’ you had only a 15% chance of winning the 112 points, but an 85% chance of losing 56 points. Whereas if you picked ‘2’ you had a 75% chance of winning those 2 points, and only a 25% chance of losing 1 point.

We didn’t tell people the exact probabilities of winning or losing, but we did tell them that the chances of winning were lower for higher amounts. We then calculated a simple betting score by averaging the ranks of the chosen options, from 1 (lowest risk) to 6 (highest risk). The higher the score, the more willing the participant was to take risks when gambling. At the end of the experiment participants were paid the overall amount they won, at a rate of 1000 points to £1.
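The payoff rule and the betting score lend themselves to a quick sketch. This is a toy illustration using only the two example probabilities given in the text; the function names and the example set of choices are mine, and losses follow the stated rule of forfeiting half the stake.

```python
# Expected value of a wager under the task's payoff rule:
# win probability p_win pays the stake; a loss forfeits half the stake.
def expected_value(p_win, stake):
    return p_win * stake - (1 - p_win) * (stake / 2)

print(expected_value(0.15, 112))  # the risky '112' example
print(expected_value(0.75, 2))    # the safe '2' example -> 1.25

# Betting score: mean rank of the chosen options, where ranks run
# from 1 (lowest risk) to 6 (highest risk).
def betting_score(chosen_ranks):
    return sum(chosen_ranks) / len(chosen_ranks)

print(betting_score([1, 2, 2, 4]))  # hypothetical choices -> 2.25
```

Note that under these example probabilities the safe option actually has the better expected value, so a high betting score really does index risk-seeking rather than shrewdness.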

On each trial of this task, participants were given a few seconds to reach a decision before the yellow bars started rising toward a white line. Once the bars reached the line they then pressed whichever key corresponded to their choice. This was followed by feedback as to how many points they won or lost on that trial, plus a readout of their overall points balance.

Our behavioural task for combining monetary gambling with motor inhibition. The upper panel (A) shows a typical sequence of stimuli on trials without motor inhibition (called ‘no-signal’ trials). The first screen (left) presented the various possible choices, ranging from smaller low-risk amounts to larger high-risk amounts. The letters below each option reminded participants which key on the keyboard corresponded to which choice. After 3.5 seconds the bars began to rise and participants made their response when the bars reached the white line. They then received feedback indicating whether their wager was successful or not, and their overall points balance. The lower panel (B) shows a sequence of stimuli on a trial that involved motor inhibition (‘signal’ trials). Everything is the same as (A), except that the bars now turn black just before reaching the white line.

To test how stopping of simple responses (i.e. motor inhibition) interacts with gambling decisions, we introduced an additional catch. Sometimes the bars would turn black just before reaching the white line. On these trials, participants were told to stop whatever decision they had planned. If they stopped successfully then they would win points, but if they failed to stop they would lose points.

The critical manipulation in this experiment was the expectation of stopping. To achieve this we further split the task into blocks of trials in which participants either expected ‘stop signals’ to occasionally occur (dual-task blocks, so named because these blocks included two tasks, gambling and stopping) or in which they were told in advance that signals would never occur (single-task blocks, so named because these blocks only included the gambling task).

We then compared the average betting scores between the blocks, focusing specifically on the trials without stop-signals. This allowed us to directly compare the effect on gambling of either expecting or not expecting to stop a response, while keeping everything else the same. In other words, the only thing that differed between these two conditions was the participant’s cognitive expectations.

So what did we predict would happen? There are two main possibilities. On the one hand, when people were in dual-task blocks they were now dividing their attention between two tasks. It is possible that this state of divided attention and cognitive ‘load’ could interfere with decision-making in the gambling task, making it harder for people to resist the more tempting, higher-risk options. We called this hypothesis the interference account.

On the other hand, we also know that when people expect to stop a response they become more cautious in their motor control – mainly, they slow down. So could this state of motor cautiousness transfer or spread to other forms of decision-making? If so, then when people expect to stop their response in the dual-task blocks, they might actually become more cautious and so take less risk than in the single-task blocks. We called this hypothesis the transfer account.

So which hypothesis won in the contest between interference and transfer? The results clearly supported the transfer account. When people expected to stop their motor response, their betting score decreased by 10-15% compared with when they knew they wouldn’t have to stop. So when people were expecting that they might have to stop their response, they freely chose to place less risky bets.

To be sure that this effect was specific to motor inhibition, rather than attention or other general effects of cognitive load, we also tested another group of participants in a ‘double-response’ control condition. Rather than stopping their response on signal trials, participants in the double-response group made an extra response. The double-response group showed no such reduction in risky gambling (in fact, it increased slightly), which helps tie the effects in the stop group to inhibition. And to be sure that these findings weren’t a statistical fluke, we ran the whole experiment twice in different participants to replicate the main finding.

The results of our first experiment. The left figure (A) plots the average betting score in the two groups of participants (double-response vs. stop) and for the two different conditions (single task vs. dual task). A higher betting score indicates riskier betting behaviour. Notice how the betting score is reduced under dual task vs. single task conditions in the stop group only (arrow; red bar). The right figure (B) shows the distribution of choices in the stop group, from the lowest risk (1) to the highest risk (6). Notice how expecting to stop a response  in the dual-task condition increased the proportion of lowest-risk responses (arrow) compared to blocks where participants never expected to stop (single task condition).

What do these results signify? From a theoretical perspective they reveal an overlap between different forms of inhibition: inhibition of motor responses causally shaped inhibition of risky gambling decisions. Previous studies have hinted that such links might exist, but much of this evidence relies on correlation rather than causation. For instance, people with a gambling addiction can sometimes show impairments in motor inhibition, but it is unclear whether these problems are causally related.

Having uncovered evidence for a causal link we next asked whether training people in motor inhibition could have a more lasting effect. If so, this would suggest that the relationship between motor inhibition and risk-taking behaviour might be developed as a complementary therapy for addiction.


Bootcamp for inhibition?


In the next series of experiments we asked whether training people to stop responses could reduce risk-taking later in time. The idea was to train people in motor inhibition for a short period (about 30 minutes), followed by monetary gambling. The gambling task was the same as described above but included single-task blocks only, i.e. the bars never turned black and participants never expected to stop their responses while gambling.

We began by dividing people into three training groups. The stop group did a standard motor inhibition task, called the stop-signal task. The double-response group did a different (non-inhibition) task on the same stimuli. The control group didn’t do any training – they just skipped straight to the gambling task.

The stop-signal task is a workhorse of experimental psychology made famous by Gordon Logan, and one of the most straightforward and elegant tests of cognitive function. In our version of the task, participants were shown a shape on a computer screen (square or diamond) and were asked to identify the shape as quickly as possible by pressing one of two buttons, e.g. left button for a square vs. right button for a diamond.

On a random third of trials, the shape turned bold after a short delay. These trials are called ‘signal trials’, and participants were instructed to try to stop their response. Successfully stopping your response is easy when the signal occurs immediately after the shape appears, but it becomes progressively more difficult as the delay between the shape and the stop-signal increases. This is because, at longer delays, you will be closer to executing your initial response by the time the signal occurs, so there is less time to countermand that response.

Our double-response group did a control task on the same stimuli: instead of trying to stop their response on signal trials, they instead executed a second response. So their task had similar attentional demands as the stop-signal task, but crucially without requiring motor inhibition.

The training phase in our second series of experiments. On ‘no-signal’ trials, participants decided as quickly as possible whether the stimulus was a square or a diamond. On a third of trials, the shape turned bold after a variable delay, termed a stimulus onset asynchrony (SOA). How participants responded on these ‘signal’ trials depended on which group they were in. Those in the stop group attempted to cancel their original response, while those in the double-response group made a second response. The numbers in the figure indicate the duration of the different events, in milliseconds.

So what might happen if we give participants the stop-signal task followed by the gambling task? If the effect of motor inhibition transfers over time to risk-taking behaviour then we expected training to make people more cautious in their gambling decisions, producing a similar effect to the first series of experiments. On the other hand, requiring people to continuously start and stop for 30 minutes might fatigue their inhibitory control and lead to increased risk-taking.

Once again the results were clear: motor inhibition training reduced risky gambling by 10-15%. Interestingly, we saw the same pattern even when we introduced a 2-hour gap between the end of the stop training and the start of the gambling task.

Training in motor inhibition reduced risk-taking in the gambling task by 10-15%. Note how the red bars are lowest when the gambling task immediately followed training (left set of bars), even after a two-hour delay was added between the training phase and the gambling phase (right set of bars).

A picture takes shape…


To summarise, we found that when people expected they might have to stop a motor response in a gambling task, they opted for less risky choices. And when we trained people to stop motor responses before doing the same gambling task, they also selected less risky options. This post-training aftereffect lasted for at least two hours. Overall then, these results indicate that these very different types of cognitive control are tightly coupled.  

Why such a link, you might ask. One possibility is that motor inhibition and risky decision-making draw on the same regulatory systems in the dorsolateral prefrontal cortex (DLPFC), a complex and mysterious part of the brain that coordinates a range of executive functions.

Of course, since these experiments are purely psychological, we can’t draw any conclusions about what might be changing in the DLPFC, but there are several possibilities to consider in future studies. For instance, recent work has found that more impulsive people tend to have lower levels of an inhibitory neurotransmitter called GABA in their DLPFC. Could motor cautiousness and inhibition training be somehow altering the expression of GABA in the DLPFC? Is motor cautiousness somehow tuning neural networks that regulate our behaviour, strengthening or biasing a computational ‘muscle’ that is used for decision-making? Perhaps inhibition training boosts the activity of DLPFC in regulating more primitive parts of the brain that respond to emotion and reward, such as the amygdala? Such questions are speculative, so to learn more we are now combining motor inhibition and gambling with a range of neuroscience methods, including transcranial magnetic stimulation (TMS), fMRI, simultaneous TMS-fMRI, and magnetic resonance spectroscopy.

As well as helping us understand more about cognitive control, our findings also have possible implications for treating gambling addiction. Related work by Katrijn Houben and Anita Jansen suggests that motor inhibition is linked to other compulsive behaviours, such as overeating and alcohol consumption. So could a regime of motor inhibition training help people overcome addiction? It seems possible, but we can’t claim from our results that motor inhibition provides a cure or treatment for any addiction. It is important to stress that all of the experiments in our study included healthy people only, and we currently have no data on whether motor inhibition training has any beneficial effect in a clinical situation. Furthermore, the effects we found are modest, just a 10-15% reduction in risk-taking. That said, we think the clinical angle is worth exploring and we may be able to tweak the design to make these effects larger and more clinically significant.

So can motor inhibition help us resist temptation? Possibly, yes. The next challenge is to figure out why and explore the implications – and applications – in clinical psychology and psychiatry.

____

* All comments and questions are welcome.

* Thanks to Frederick Verbruggen for comments on a previous draft of this post.

* The press release associated with this study follows a new format arising from the recent Royal Institution debate we took part in on science and the media, hosted by Alok Jha and Alice Bell, and also featuring Ed Yong, Fiona Fox, and Ananyo Bhattacharya.

Thursday, 31 May 2012

Tough Love II: 25 tips for early-career scientists


My first post in this series focused on how to make the most from your PhD. The PhD is a critical step in the career path of a scientist, but it is just that – a step. Doing well in your PhD will increase your chances of securing a good post-doc position but it won’t guarantee a successful academic career. It just buys you a ticket to the game.

So in this second ‘tough love’ post I'm going to focus on how you can get ahead in that game as an early-career post-doc. Let me say at the outset that a lot of this advice overlaps with my earlier blog post for PhD students, so I recommend you read that one first. There is also some useful advice for post-docs to be found here.

As with my earlier post, this advice is intended for readers who want a career in academia – i.e. those who want to be principal investigators (PIs) and run their own labs. And again, the guide is targeted to those in biomedical science, especially psychology and neuroscience. Some of the advice here is based on a seminar ‘How to get a research fellowship’, which I gave in 2010 at a Marie Curie FP7 Advanced Training Course on brain imaging.

Just who am I to be dishing out advice on how to succeed as an early-career scientist? The short answer is, nobody in particular! You can find out about my background here, but my track record is nothing exceptional among PIs in my field. Maybe this is actually a good thing because it shows that an independent research career is achievable and doesn’t require special academic pedigree or genius. In brief, I did my post-doctoral research from 2002-2005 at the University of Melbourne, before moving to the UK in 2006 and taking up a BBSRC research fellowship at University College London. Since 2008 I've directed my own research group at Cardiff University. To date I have managed three post-doctoral researchers to the completion of 2-4 year contracts. So, overall, the advice stems from three sources of knowledge: things I’ve done myself, things I’ve seen others do, and things I’ve encouraged my own post-docs to do.

Your early post-doctoral years are formative. One of the troubling aspects of academia is that many good, even brilliant, scientists struggle to cope with the unrelenting pressure of post-doctoral science. The salary is modest at best, depressing at worst. The clock ticks faster than ever, and the pressure to publish hangs over everything like a merciless force of nature.

That’s the down side. On the up side, the publication pressure is certainly motivating and the post-doctoral life brings some pretty unique opportunities. I’ve heard it said that, second only to an independent research fellowship, the post-doctoral years provide the greatest professional freedom you can experience as a scientist. Emerging fresh from your PhD, you have a finely honed set of skills and knowledge, while at the same time you are (as yet) unencumbered by a heavy teaching load and the grind of administration. In many respects you are in the ‘zone’. Regardless of whether you succeed or fail, your time in the zone is limited, so make the most of it!

Before we get into the specifics, one final warning. This post is about the real, not the ideal. There are many absurd and unfair aspects of the research culture in academia. You aren’t going to solve them as an early-career scientist, so I’m not going to discuss them here. Succeeding in your post-doc is about learning the rules of the game, such as they are, rather than moaning about them or trying to change them. 
_____________________________
 
1.     Throughout your academic career, nothing – and I repeat nothing – is more important than your publication profile. To succeed you need to think like a farmer and build a pipeline that includes periods for seeding (design), growth (experimentation), and harvesting (analysis and write-up). During your post-doc, ensure that you always have at least one paper under review and one in preparation. This means you are always waiting for reviews and writing. Always. When you begin your post-doc this pipeline will probably consist of papers from your PhD. Maintaining this pipeline will ensure a healthy output, and if you start falling behind then you need to take a good look in the mirror. Am I procrastinating about writing? Am I endlessly reanalysing old data rather than eyeing it pragmatically? Does my portfolio of studies include an adequate balance between slow-burn and rapid-fire experiments? Facing your problems is crucial; otherwise that blur in the corner of your eye will be your competitors racing past you.

2.     Be strategic about publishing. If, for instance, you have two studies that could be published either as two lower-impact papers or as one more definitive higher-impact paper, my general advice would be to combine them and shoot for a more prominent journal. You can always split them again later if the attempt fails. At the same time, keep an eye out for potboilers – results that will only ever be suitable for less prominent journals but which will be relatively quick and easy to publish. Don't waste precious time sending everything to Nature and Science.

3.     When it comes to publications, quality is paramount but that doesn’t make quantity unimportant. There is an obvious truth that junior researchers sometimes forget: publishing a lot of papers proves that you can write, and write fast. It declares to the world that you can communicate your science in an effective and efficient manner. This is absolutely crucial as you move forward in your academic career; you must be recognised by your peers and funders not only as an effective scientist, but also as a capable communicator.

4.     Aim to publish every year. Never allow a year's gap in your CV unless your career has been interrupted unavoidably or you have taken a justifiable career break. Otherwise a publication gap is extremely unattractive and akin to dangling a sign around your neck proclaiming “I struggle at publishing. You’ve been warned.” The rule of thumb in psychology and cognitive neuroscience is to publish four good papers per year, but this can vary. Whatever happens, be sure to publish something every year. If your experiments are slow or not yet producing publishable data, then publish a review paper.

5.     Aim for as many first-authorships as possible. It is crucial in your early professional life to build your own ‘brand’ as a scientist, and to do this you need to stake out your intellectual contribution. I recommend always bringing up the issue of authorship in job interviews with prospective PIs. This will show that you are ambitious and serious about achieving the output that is crucial for both you and your PI. And unless there are no other job options available, don’t ever take a post-doc position where the PI cannot guarantee you first authorship on the majority (and preferably 100%) of papers stemming from your own work.

6.     Be sure to publish everything possible from your PhD. When you start your post-doc, you will probably have a lot of PhD experiments left to write up. Many post-docs have a strong aversion to doing this, bemoaning how much they are "over it" or how imperfect it all was. As natural as this instinct is, it must be firmly repelled because your future depends on maintaining your publication pipeline. If your PI is generous, s/he may be happy with you using some of your work hours for writing up PhD publications, but more often than not you will need to do this in your own time.

7.     Minimise time spent on collaborations where you are neither first nor second author. In psychology and neuroscience, second authorships carry modest weight. But at a post-doc level, anything lower down the list is basically just padding. Watch out if your CV starts to fill up with middle authorships at the expense of first authorships. A CV dominated by middle authorships will earn you a reputation as a technician or assistant, and this can haunt you when applying for senior post-docs, fellowships or lectureships.

8.     Don’t waste time trying to be last author on papers. In the psych/biomedical world the final author position conveys seniority, but doing this as an early post-doc is like putting a toddler in a tuxedo. At best, readers will ignore it. At worst, you may come across as a careerist who elbowed their way into a position they haven’t yet earned.

9.     Even though you are a junior member of the academic pyramid, it’s important to realise that you are no longer a PhD student or research assistant. As a post-doc you are a research professional on the road to autonomy. Your PI will expect you to work relatively independently, showing research initiative and leadership. You shouldn’t be merely dancing to the beat of your PI’s drum.

10.  Be proactive with the media and get to know your university press officer. Talking to journalists can be daunting at first, but it will help build your communication skills and your confidence. Discussing your own research with journalists is the gold standard, but it isn’t the only way you can interact. For instance, when emails get sent around your department from a journalist seeking comments on a particular issue, come forward if you know about the topic (and remember, you don't need to be the world leader in the subject to have something worthwhile to say). If you’re based in the UK then register with the Science Media Centre. Many of your academic colleagues will, by default, shy away from such opportunities. But they do so at their peril, not least because funders are increasingly regarding public engagement as an important responsibility of professional scientists. Speaking up in the media is not without risks, but it is one way of fulfilling your public engagement obligations, while providing evidence of your independence and communication skills to a future employer or fellowship panel.

11. When it comes to running experiments, the motto of the day is parallelise, parallelise, parallelise. A great way of achieving this is to embrace the opportunities afforded by student supervision. Get involved in the co-supervision of PhD students with your PI, and be proactive in the supervision of undergraduate projects. Take on capable students as voluntary interns to run side projects. In my experience there is no shortage of intelligent and motivated undergraduates who are willing to give up their free time to get involved in research – it helps them and it can help you.

12.  One of the best nuggets of advice I got as a post-doc was to aim to become the first to do something. It doesn’t have to be discovering the Higgs boson, but aim to put your name to something new. This might be a new technique or behavioural task that overcomes limitations in existing paradigms, or a whole new way of thinking about a problem.

13.  Start a blog. It’s a great way of communicating with the public and honing your writing skills. It’s for you to decide whether this is best done under your real name or under a pseudonym. There are sensible arguments on both sides and many excellent pseudonymous bloggers in psychology and neuroscience (e.g. Neuroskeptic and scicurious). But keep in mind that working behind a pseudonym offers fewer benefits for your own career and limits your ability for public outreach.

14.  If you are employed on a grant held by your PI then negotiate carefully about which experiments are ‘hardwired’ and which can be modified through your input. This is a delicate piece of diplomacy. A good PI will listen if you bring creativity and insight to the table; on the other hand, s/he may be committed to prioritising certain aspects of the research grant even if your idea is better. So don’t be disheartened if your idea is shelved. Have a thick skin and return to it later.

15.  It may seem obvious but be sure to arrange regular meetings with your PI. If s/he is extremely busy then those meetings may be infrequent, but try to ensure regularity and keep in touch by email. Speaking with my PI hat on, I can say that I very much like being updated about progress and important developments without having to ask.

16.  Aim to write at least one successful grant application during your post-doc, either as PI or Co-I. Aside from the direct benefits to your research, being awarded grants is a sign of independence, creativity, and leadership potential.

17.  Unlike a PhD, a post-doc position is a job – at least that’s what your Human Resources division will tell you. However, if you go into a post-doc job interview with that mindset then you will struggle to compete with those who have similar CVs but approach science with zeal. A wise mentor once told me that when PIs appoint post-docs, they aren't looking for slaves; they're looking for junior versions of themselves. And since few PIs have a 9-5 mentality, few will employ post-docs with one.

18.  Aim to give at least 2-3 talks per year outside your own department. You don't need to wait for an invitation; you can always contact the seminar organisers directly and put yourself forward. Give local seminars as well. It is important to get noticed both within your own department and beyond.

19.  Networking in science is crucial and not always easy. The two factors that I think are most important are your publication record and your social confidence. There is some great advice about how to network effectively over at Scicurious’ blog. One comment in the discussion stood out for me: “As soon as you publish noteworthy papers as a first- or senior-author, people will want to talk to you.” As a post-doc I found this to be true. So my advice is to publish hard and well. Then try to go to one good conference per year and meet people both independently and through your PI. Organising seminars can also be a great way to build important links with other researchers. When your PI has a visiting collaborator or speaker, get to know them. Join in with dinners and drinks at the pub. Don’t be shy.

20.  Review papers for journals. If you haven't yet published enough to be invited to review, tell your PI that you would be happy to review papers that s/he is sent. Trust me, your PI will thank you!
           
21.  Give yourself time to think. It’s easy to get swept up in the “chickens go in, pies come out” mentality of academia, but knowing when to stop and think is crucial for having much sought ‘Eureka’ moments. Give yourself time for reading and pondering.

22.  Keep your website up to date. It never ceases to amaze me how many junior post-docs are sloppy with their web pages, failing to update their lists of publications, talks, or other achievements. If you don’t manage your public image, nobody else will do it for you.

23.  Start developing your own big ideas for the future. As a post-doc I kept a notebook of random musings and it’s amazing how many of them bore fruit in later years, leading to successful fellowship and grant applications.
           
24.  For better or worse, funding agencies are increasingly seeking to shift a portion of research costs to the private sector. If your PI has industry links then take advantage of opportunities to talk to, or work with, industry during your post-doc. These can be valuable links to forge as you advance in your career.

25.  Finally, as you progress through your post-doc, be aware of the double-edged sword that is ‘larger than life’ syndrome. If your supervisor is famous then you will have a stronger chance of publishing in prominent journals, but many readers will also attribute the work to your PI. As a post-doc, publishing in good journals is of course paramount, but there are some techniques you can deploy to draw attention to yourself. The key one is to ensure that you are both the first author and the corresponding author. This means you will be the point of contact for reprint requests and media enquiries about your work.

So there it is. I hope you found it helpful. Much of this advice is common sense - in fact, it isn't even particularly 'tough'. As a post-doctoral scientist, you’ve passed through a major bottleneck in science. If you do good science and publish well, you’ll go even further and could be a PI within 5 years.

Good luck! Please do comment and leave any of your own tips for post-doctoral success.