Follow the science?

We report the results of a forecasting experiment about a randomized controlled trial that was conducted in the field. The experiment asks Ph.D. students, faculty, and policy practitioners to forecast (1) compliance rates for the RCT and (2) treatment effects of the intervention. The forecasting experiment randomizes both the order of the questions about compliance and treatment effects and whether forecasters are told that a pilot experiment had been conducted and produced null results. Forecasters were excessively optimistic about treatment effects and unresponsive to item order as well as to information about the pilot. Those who declare themselves expert in the area relevant to the intervention are particularly resistant to new information that the treatment is ineffective. We interpret our results as suggesting that we should exercise caution when undertaking expert forecasting, since experts may have unrealistic expectations and may be inflexible in altering these even when provided with new information.

Even at current margins, researcher fallibility remains an undertreated topic. It should be at the center of any approach to method or philosophy of science, rather than the abstract principles we are usually fed. In any case, that is from a new paper by Mats Ahrenshop, Miriam Golden, Saad Gulzar, and Luke Sonnet.

Via the excellent Kevin Lewis.
