Comments on NeuroChambers: Tough love for fMRI: questions and possible solutions

1) Tim Hunt has, for decades, mentored and supported women in science. He has done more for women than 99.9% of those who called for his resignation.

2) Tim Hunt "was always immensely supportive of the ERC's work around gender equality" (Dame Athene Donald).

3) Tim Hunt made an assertion, based on over half a century of experience, that men and women working together in labs can be emotionally distracting for both sexes.

4) Tim Hunt commented that a problem he has had, working in labs in the past, is that women tend to cry more when confronted with criticism. Nevertheless he fully supports women in science. "No one seems to mention his main speech in Korea in which, according to the ERC President, he was 'very supportive towards women in science and he said that he hoped there was nothing that barred women from science'" (Dame Athene Donald).
He simply believes, based on his own considerable experience, that single-sex labs are more conducive to good scientific research.

5) We may disagree with what Tim says, but we should defend to the death his right to say it.

Please read the other side of the story here and, if you agree, sign the petition to help reinstate Sir Tim Hunt:

http://www.ipetitions.com/petition/bring-back-tim-hunt#scrollTo-upvote-1653069

(Posted by an ordinary chap and advocate of human rights for both sexes.)

-- Crusader, 2015-06-18 10:41

Another latecomer to this discussion. Interesting points are raised here, but not all of them are unique to neuroimaging (more on that below).

Cost: The idea that fMRI is expensive is often raised, but if you've ever opened a bioscience supplier catalogue you'd realize that, in relative terms, fMRI is not all that costly (certainly nowhere near the costs of particle physics, for instance), and indeed the bulk of most grant funding goes on staff, not scanning. There is, however, a problem with the traditional way of charging for scanner time: most centres set prices high enough to recover their costs on the assumption that studies use only a small number of subjects, and in doing so they actively discourage researchers from including large enough samples. A much better model would be to charge a fixed price for a given project and, within that, allow unlimited (within reason) scanning toward the project. Such an approach would go some way toward addressing the lack of statistical power. After all, a scanner costs no more to run than to keep on standby, and many MRI scanners are chronically underused (for example at night).
Unfortunately I don't know of any scanning centre, in the UK or elsewhere, that uses this model.

Looking at the rest of the points, none of them really has anything to do specifically with fMRI. For example, there is no doubt that pressure to publish provides a massive incentive toward fraud, but as far as I can tell fraud is no more widespread in the fMRI community than elsewhere (your point about the pressure to publish in high-impact journals, however, is absolutely right and is definitely a major problem).

In fact, most of the critique above relates to who does the research and the way they do it, not the technique itself. And much of it can be summarised as simply "poor science" or "science poorly done". Pre-registration, to me, sounds like admitting that neuroimaging scientists are unable to do honest science unless they are shamed into it, and it sets a slightly disturbing precedent (i.e. if you choose not to pre-register your study, then that must mean you are a fraud). Surely it would be better to train people to do statistics properly?

But I think Mike Cohen made a very good point: the lack of neuroscience underpinning much of fMRI work is at the core of the problem. On that note, I was a bit surprised that there was no mention of what in my mind is by far the biggest issue with the method - that the signal measured is only indirectly related to neural activity, represents a population average, and is very difficult to link with more direct measurements - issues that no amount of data sharing or pre-registration will address. There is an urgent need for more research in this area, but unfortunately the bulk of fMRI researchers seem only too happy to ignore these issues.
But even if one had a good and reliable way of linking fMRI data to neural activity, the application of cognitive science models to studying brain function is only going to be as useful as the extent to which those models actually map onto neural processing mechanisms. This implies that there needs to be a willingness to recognise that many of those models are likely to be fundamentally incorrect (other than as purely descriptive ones). Indeed, it is probably not much of an exaggeration to say that the majority of the poor fMRI studies that people focus on are just those that blindly set out to test some favourite cognitive science model. Conversely, the best imaging work tends to be that which is tightly linked to neuroscience. Fundamentally, this is a problem with psychology itself, which needs to embrace neuroscience rather than turning its back on it (the "neuron envy" that Ramachandran talks about); psychologists need to learn and understand neuroscience (and, conversely, neuroscientists need to get over their knee-jerk disdain for neuroimaging as a method and accept that it has some small virtues).

-- Anonymous, 2014-04-29 17:24

Thanks Mike, great comment. In full agreement.

-- Chris Chambers, 2014-03-23 14:49

Sorry, I'm about two months late coming across this post. Web-wise, I'm still in the late 90s.

I agree with most of your points, although I share the reservations about fewer, larger-scale studies that other commenters have pointed out. However, I was a bit surprised to see that a search for the word "theory" on this page returned zero hits.
I think the statistical power and multiple comparisons issues will only ever get worse, and boosting the N won't really help. Consider that in a few years, a standard single-subject dataset may include over a million voxels sampled at 2 Hz, and standard data analyses will include mass-univariate, connectivity, and pattern-based analyses like MVPA, and probably some others. Thus, the "effective power" will get much smaller even if the number of subjects increases. I write "effective" power because, to my knowledge, power analyses are done using a single statistic (e.g., a t-test), but let's be realistic here: if you (don't worry, not the accusatory "you," the general "you") want a finding that doesn't show up in a mass-univariate analysis, you'll also try connectivity (perhaps PPI, perhaps DCM), MVPA, etc. Thus, I would argue that boosting the N won't really address the unreliability, low power, gooey interpretability, and large multiple-comparisons problem.

Instead, I believe the problem is theoretical. Cognitive neuroscience is largely (though not entirely) deprived of useful and precise theories. A "soft" theoretical prediction that brain area A should be involved in cognitive process X is easy to confirm and difficult to reject. The level of neuroscience in cognitive neuroscience has not increased much in the past two decades, despite some amazing discoveries in neuroscience itself. FMRI data will become richer and more complex, and the literature will see more and more sophisticated data analyses. If theories are not made more precise and more neurobiologically grounded, the problems of low power and multiple comparisons will only grow.
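[Editor's aside] Mike's "effective power" point can be made concrete. A minimal sketch (Python, standard library only) treats the m analysis pipelines as m tests sharing a Bonferroni-corrected alpha budget and uses a normal approximation to the one-sample t-test; the numbers (n = 20 subjects, effect size d = 0.5) are illustrative assumptions, not figures from the post:

```python
from statistics import NormalDist

_N = NormalDist()  # standard normal distribution

def power_one_sample(n: int, d: float, alpha: float) -> float:
    """Approximate two-sided power for effect size d with n subjects
    (normal approximation to the one-sample t-test)."""
    z_crit = _N.inv_cdf(1 - alpha / 2)
    return 1 - _N.cdf(z_crit - d * n ** 0.5)

# Same data (n=20, d=0.5), but the alpha budget split across m analyses:
for m in (1, 10, 1000):
    print(f"m={m:4d} analyses -> power {power_one_sample(20, 0.5, 0.05 / m):.2f}")
```

Raising n raises each individual test's power, but if the number of implicit tests grows with the richness of the data, the corrected power keeps falling behind - which is the "effective power" problem in miniature.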
Thanks for keeping this blog, btw.

-- Mike, 2014-03-18 13:17

Problem 2: fMRI reliability isn't that poor; the way people look at reliability is! In a paper last year (http://www.sciencedirect.com/science/article/pii/S1053811912010890) we show that you can get decent reliability (and yes, I think it is worth looking at 'raw' data, beta, T, and thresholded maps rather than only one; and ICC is the least useful measure). Anyhow, your link points to something of interest: as in our paper, some paradigms have low reliability and some don't, and the causes can differ.

-- Dr Cyril Pernet, 2014-03-14 04:56

Thanks, great comment. There is much to agree with here. The main downside I see with small studies is low power, which limits the ability to answer any questions at all. So there is a trade-off between, on the one hand, preserving a tapestry of creativity and innovation by supporting lots of small groups, and on the other hand answering anything at all. Answering questions is what big studies do best. But of course, whether those questions are the right questions - and whether large groups stifle innovation - is another issue entirely. I'm not sure they do, but I agree it is a question worth asking.

-- Chris Chambers, 2014-01-25 13:07

"If we worked together": yes, and no.
First, fewer and bigger teams and huge projects can also mean that we are putting all our eggs in one basket. The US has tons of mega-teams, for example, but if you look at the data, the UK is doing much better than the US in terms of productivity per pound spent. In fact, the UK is doing better than anyone else. So arguing that we have to save money by having fewer, bigger teams is misleading. What is needed is more funding.

Second, we have greed and a credit-allocation problem. The big fish want to be even bigger. Since they are big, they feel they can impose their wishes on everyone else. Therefore, there is no incentive for smaller, creative teams to join a mega-team that would push to get control and credit for the entire project and the subsequent funding stream.

Third, what the Government, and most people, don't get is that science as a whole is the best algorithm we have for gathering new knowledge about the world. What societies should fund is the algorithm itself, not a select few of the little units (i.e., scientists) that implement it. All units are necessary, as a whole, to implement knowledge discovery. Think of it as the algorithm implemented by a colony of ants foraging: the algorithm works as a whole, even though most explorer ants discover nothing at all. It's not their fault; in doing their share, they contribute to the implementation of the algorithm. Societies need to understand that most experiments do not work out, but the system works as a whole. Some scientists will be lucky and run into something important. Most won't. It's nobody's fault, and it should not determine promotions or redundancies, especially not on a short time scale. Without a system that understands this basic fact, anything else is just a band-aid.
-- Anonymous, 2014-01-25 12:56

The biggest problem is the error in the data. The next biggest problem is the error generated by processing of the data. Statistical problems are tertiary.

Fixing the data will mean more expensive scanners and peripheral hardware. We have to stop buying our hardware right off the shelf from diagnostic imaging manufacturers. Building custom hardware to meet the specifications necessary for sufficient sensitivity is the norm in science.

Fixing the data may also mean restricting the subjects of investigation to a much smaller subset - those who can tolerate head restraint well enough to reduce motion-generated error sufficiently. What counts as sufficient? That needs much investigation.

-- Anonymous, 2014-01-20 18:54

The first sentence in "Problem 2" does not make sense. You say that "evidence from structural brain imaging implies that most fMRI studies have insufficient sample sizes..." You then cite a paper that included a large number of neuroscience studies (many of them structural imaging studies). Structural and functional brain data are not modeled or analyzed in a similar way; the techniques are very different. I have no doubt that you could make an argument that fMRI studies are underpowered (the argument has been made before), but your current point is a little disingenuous.
-- Anonymous, 2014-01-20 18:02

This post (bit.ly/1aBb8Gz) was on the tendency to do multi-way ANOVAs in ERP studies without correcting for the number of effects and interactions. I've seen as many as 5- or 6-way ANOVAs in that field, which is really setting yourself up for finding spurious effects.

The processing-pipeline flexibility also applies in ERP: it's accepted practice for many to select the filter, time window, electrode for analysis, method for identifying peaks, etc. after scrutinising the data. The referencing method can also dramatically alter the results. It gets worse still if people start analysing frequency bands, where results can depend heavily on things like the method of wavelet analysis, and there are lots of ways of defining frequency bands. This paper says a bit about this kind of issue in the context of MMN: 1.usa.gov/LpQHpK. A lot of people in the ERP field really don't recognise the problem: I've been asked by reviewers (and editors), for instance, to analyse a different electrode because "it looks like something is going on there", when instead I've based my selection on the prior literature.

-- deevybee, 2014-01-19 20:29

Thanks Michael. I think all of these arguments (except #1) hold for EEG. You might be interested in this comment by Matt Craddock: http://blogs.discovermagazine.com/neuroskeptic/2012/06/30/false-positive-neuroscience/#comment-795790144.
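[Editor's aside] The multi-way ANOVA point above can be quantified: a fully crossed k-way ANOVA tests 2^k - 1 main effects and interactions. A minimal sketch, assuming for simplicity that the effects behave as independent tests at alpha = 0.05 (in real designs they are correlated, so treat the probabilities as rough upper bounds):

```python
# Effects tested in a fully crossed k-way ANOVA, and the chance of at
# least one spurious "significant" effect when the null is true everywhere.
def n_effects(k: int) -> int:
    """Main effects plus all interactions: 2^k - 1."""
    return 2 ** k - 1

def p_spurious(k: int, alpha: float = 0.05) -> float:
    """P(at least one false positive) across all uncorrected effects."""
    return 1 - (1 - alpha) ** n_effects(k)

for k in (2, 5, 6):
    print(f"{k}-way ANOVA: {n_effects(k)} effects, "
          f"P(false positive) = {p_spurious(k):.2f}")
```

A 5-way design already probes 31 effects uncorrected; a 6-way design probes 63, at which point finding "something" is close to guaranteed.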
Others (including Dorothy Bishop) have also written about this.

-- Chris Chambers, 2014-01-18 08:55

Hi Chris,
Really nice post. I tweeted it, along with a question: do you think some of the same arguments can be made about my method of choice, EEG/ERPs? I think so, but I wondered whether you had given this any thought.

-- Michael Inzlicht, 2014-01-17 15:39

Resize your window to closely frame the text; then the text should be entirely on the lighter background.

Hope that helps!

-- Anonymous, 2014-01-17 13:16

Thanks for this comment. You make an important point.

I'm not arguing against the use of metrics per se. The fact is that metrics are essential for judging some aspects of science (and scientists), particularly by non-specialists. But we need to recognise the limitations of metrics and choose the best possible ones. Journal-level metrics are terrible indicators. There is no correlation, for instance, between journal impact factor (IF) and the citation rates of individual articles - but there is a correlation between IF and retraction rates due to fraud.

In terms of shortlisting down from 200 applicants, for research potential I would focus on article-level metrics, the h-index, and the m-value (the rate of increase of the h-index). I might also ask candidates at the initial application stage to write a short section on how often, and in what contexts, their work has been independently replicated by other research groups.

These aren't perfect indicators by any stretch.
There really is no substitute for having a specialist read the work, but article-level metrics are much better than assessing candidates based on how often they publish in prestigious high-IF journals that, more than anything, are slaves to publication bias.

-- Chris Chambers, 2014-01-17 11:55

Works fine on my Chrome.

-- Anonymous, 2014-01-17 11:40

Just look here: http://en.wikipedia.org/wiki/List_of_scientific_publications_by_Albert_Einstein - a good chunk of his papers are quite short. I'm not saying that they are lacking in content, but physics is quite different from neuroscience.

-- Anonymous, 2014-01-17 11:23

Related to Problem 4:
Einstein published 300 scientific papers... 60 years ago! So stop talking about publication pressure, please!

-- Anonymous, 2014-01-17 00:04

Great post. But I'm not sure I understand the solution to problem 4. We currently have a faculty search with 200 applicants. How can we efficiently create a shortlist of applicants without using heuristics? Read the papers? In practice, no. Most of us went into science because we love our research, mentoring, and teaching, not because we want to spend all our evenings and weekends evaluating others. I agree on the problem; I'm just not sure I see an easy solution. I'd love to hear other ideas.

-- Anonymous, 2014-01-16 23:25

Damn, sorry about that. I don't use Chrome myself (Firefox on Mac). Blogspot sucks.

-- Chris Chambers, 2014-01-16 20:11

This is a great post. Just wanted to give a heads-up that it didn't load correctly for me: the text is aligned left so that a good chunk of it is over the dark background and almost unreadable. Anyone else have that problem? I'm using Chrome.
-- Anonymous, 2014-01-16 19:24

Thanks for the clarification, Chris.

-- Anonymous, 2014-01-16 19:20

Thanks, that's a good point. To be clear, I didn't say they were justifiable - I said they could be considered justifiable. Big difference. And clearly they are considered justifiable by many scientists (and peer reviewers), or we wouldn't have a problem.

-- Chris Chambers, 2014-01-16 18:43

In Problem 3 you stated that "even the simplest fMRI experiment will involve dozens of analytic options, each of which could be considered legal and justifiable." The problem is that researchers in this field are not aware that this statement is incorrect. Most of the preprocessing steps are not "justifiable", in that many are known to be wrong in principle and many are simply unverified.

-- Anonymous, 2014-01-16 18:39

You forgot what is likely the biggest problem in fMRI: the data. The data and the processing of the data - a.k.a. the methods. For example, subject motion is a huge problem that will not be solved by any of your proposed solutions. Start with the data first.

-- Anonymous, 2014-01-16 18:29
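[Editor's aside] As a concrete handle on the subject-motion point raised in the closing comments, a widely used quality-control metric is framewise displacement (FD). The sketch below is one common formulation (not from the post); it assumes a 6-column motion-parameter array with translations in mm and rotations in radians, and uses the conventional 50 mm head radius to convert rotations into millimetres of arc:

```python
import numpy as np

def framewise_displacement(motion: np.ndarray,
                           head_radius_mm: float = 50.0) -> np.ndarray:
    """Framewise displacement per volume from rigid-body motion parameters.

    `motion` has one row per volume; columns 0-2 are translations (mm),
    columns 3-5 rotations (radians). Rotations are converted to arc length
    on a sphere of radius `head_radius_mm`. The first volume gets FD = 0.
    """
    diffs = np.abs(np.diff(motion, axis=0))   # volume-to-volume change
    diffs[:, 3:6] *= head_radius_mm           # radians -> mm of arc
    return np.concatenate([[0.0], diffs.sum(axis=1)])

# Hypothetical example: 3 volumes, a 0.2 mm translation jump, then stillness
params = np.array([
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
    [0.2, 0.0, 0.0, 0.0, 0.0, 0.0],
    [0.2, 0.0, 0.0, 0.0, 0.0, 0.0],
])
print(framewise_displacement(params))  # FD values: 0.0, 0.2, 0.0
```

Volumes whose FD exceeds a chosen threshold (often around 0.5 mm) can then be flagged or censored before analysis - one simple way to "start with the data first".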