Comments on NeuroChambers: Bringing study pre-registration home to roost (blog by Chris Chambers)

drbrocktagon (2013-11-19 11:56):
Yes, pretty much. I was thinking of Observations more for things that you find that you weren't setting out to find. But purely exploratory work would also fit in that category.

Zoltan Dienes (2013-11-19 10:23):
Hi Jona! As you say, with both Bayesian and frequentist methods one can use "inference by intervals": divide the possible DV values into two regions, a null region and an alternative region; if the confidence/credibility interval lies entirely in one of the regions, accept the corresponding hypothesis (null-region hypothesis or alternative hypothesis). (See: http://www.lifesci.sussex.ac.uk/home/Zoltan_Dienes/Dienes%20BF%20tutorial.pdf) Normal two-tailed significance testing is a degenerate version of this in which the null region is a point. In that case the alternative hypothesis (which consists of all values except an infinitesimally small point) is unfalsifiable, so by Popperian standards it is not even science. Things get better when the null point is extended to a region. But in general one still has to strictly respect stopping rules in order to avoid large error probabilities, and this is true for both Bayesian and orthodox intervals.
One can add a further restriction: make sure the confidence/credibility interval is not much larger than the null region; then one can check after every data point if one likes and still have good error probabilities.

Bayes has more up its sleeve than inference by intervals. One can test a point null using Bayes factors. In this case a point null becomes falsifiable (given a convention like: reject the null if B > 3). Point nulls may never be literally true, but they can be so close to the truth as to be a fine approximation. (Baguley, in his 2012 book on p. 369, gives the example of an ESP experiment with 28,000 subjects yielding a confidence interval of [.496, .502] around a nominal point-chance baseline of .5.) Thus point nulls, tested with Bayes factors, have their place too. Cheers! Zoltan

Anonymous (2013-11-18 18:37):
One thing I noticed when working with Prof. Dienes was that, fundamentally, the problem with stopping rules isn't Bayesian vs. frequentist but parameter estimation vs. hypothesis testing. If I have it right, a badly formulated H0 will be sensitive to "researcher-DoF stopping" with certain Bayesian hypothesis tests too, but a sensible stopping rule in a frequentist framework won't be. If your H0 is "exactly 0" and you compare it to "any value other than 0", you are fundamentally not doing sensible science (note that barely any Bayesian today would use such a badly designed H0). In contrast, a stopping rule such as "stop once the CI is narrow enough to fit inside a certain window" is not biased towards rejecting H0.
The problem is one of circularity: a stopping rule is biased if it implicitly favours one outcome over the other (for example, in a standard frequentist test, because you may only ever reach outcome 1, never outcome 2).

In sum, as a Popperian, I think parameter estimation vs. hypothesis testing is the more important battle, rather than Bayes vs. frequentist. Yes, a credible interval may be somewhat more intuitive than a confidence interval, but either is better than any hypothesis test in most cases.

Chris Chambers (2013-11-16 22:34):
Thanks Jon. I was chatting with Tom Johnstone (Reading) at SFN and he argued that one beneficial side effect of Registered Reports could be that they push the community to place higher value on overt exploratory research. I found myself nodding in agreement: non-hypothesis-driven research has great value in pushing back the frontiers. The problem is that our community has a pre-existing bias against it, which forces researchers to pretend their research is confirmatory when it isn't, and that they are more certain of their results than they really need to be (or should be).

This in turn gets me wondering whether we should also be launching an "Exploratory Reports" format: articles with only general questions (and with no hypotheses). They would report potential new phenomena or findings of interest and would provide the perfect material for later Registered Reports. Unless I'm mistaken, this is quite similar to your original idea of Experiments and Observations?

drbrocktagon (2013-11-16 21:03):
Chris,
Thanks for writing this, and thanks to you, Marcus, and others for pushing this forward. The discussions around pre-registration have been a real eye-opener for me.

The difficulty that I, and I think others, have with preregistration is that we're often not sure *exactly* what it is we're looking for before we start.

And this, I think, is your point.

It means that anything we do find should be considered exploratory, even if it was generally in line with predictions.

It also means that the results would need to be replicated before we could have confidence in them. But at least the second time around we'd be able to pre-register the report, saying exactly what we were looking for.

If nothing else, the movement towards preregistration should hopefully give greater status to replication attempts, because these would often be the first time the study was conducted under preregistration conditions.

But this is going to require a culture change, particularly given the importance (career-wise) attached to publishing in the right journals, i.e. those that prioritise novelty above anything else. The Cortex initiative is a really important step in this direction.

Hopefully we get to a situation where, instead of asking "but was it peer reviewed?" before we believe something, we ask "but was it preregistered?"
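[Editor's note: the two procedures Zoltan Dienes describes in the thread above, inference by intervals and testing a point null with a Bayes factor, can be sketched in a few lines of Python. This is an illustrative reconstruction, not code from the thread: the null region of plus or minus one percentage point, the uniform prior on the hit rate under H1, the function names, and the exact trial counts are all assumptions chosen for the example.]

```python
import math

def interval_decision(ci_low, ci_high, null_low, null_high):
    """Inference by intervals: accept whichever hypothesis the
    confidence/credibility interval falls entirely inside; otherwise
    suspend judgement (and, under a valid stopping rule, keep collecting)."""
    if null_low <= ci_low and ci_high <= null_high:
        return "accept null-region hypothesis"
    if ci_high < null_low or ci_low > null_high:
        return "accept alternative hypothesis"
    return "suspend judgement"

def bf01_point_null_binomial(k, n):
    """Bayes factor for the point null p = 0.5 against a uniform prior on p,
    given k successes in n binomial trials. B > 3 is the conventional
    threshold mentioned in the thread."""
    log_choose = math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
    log_m0 = log_choose + n * math.log(0.5)  # P(data | p = 0.5)
    log_m1 = -math.log(n + 1)                # binomial integrated over a uniform prior = 1/(n+1)
    return math.exp(log_m0 - log_m1)

# The ESP example cited from Baguley (2012): roughly 28,000 trials with a
# hit rate of about 0.499 against a chance baseline of 0.5.
n, k = 28000, 13972          # k/n = 0.499
p_hat = k / n
se = math.sqrt(p_hat * (1 - p_hat) / n)
ci_low, ci_high = p_hat - 1.96 * se, p_hat + 1.96 * se

# Null region: within one percentage point of chance (an assumed choice).
print(interval_decision(ci_low, ci_high, 0.49, 0.51))  # accept null-region hypothesis
print(bf01_point_null_binomial(k, n) > 3)              # True
```

With these assumed numbers the two verdicts agree: the interval lies inside the null region, and the Bayes factor comes out far above the B > 3 convention, so the data positively support (a region around, or the point of) chance performance rather than merely failing to reject it.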