Confirmation bias refers to the well-known phenomenon that we tend to favor information that confirms our beliefs. A while ago, I blogged about potential confirmation bias in statistical analysis, and the steps taken by large projects such as the LHC to avoid it. Another place where confirmation bias can become relevant for scientific studies is during data acquisition, for example when we have to decide things like “is this shade-tolerant plant growing in a shady location?”, “which crown form do I see for this tree, which typically has a round crown?”, “what bird species do I hear in this place where European Robins were reported last year?”, or “do nestmates that shouldn’t be aggressive behave aggressively or not?”
Whether there is a confirmation bias in studies published on the last question was analyzed by Ellen van Wilgenburg and Mark Elgar in a meta-analysis recently published in PLOS ONE (ht Volker Nehring). As they explain:
Nestmate recognition experiments typically involve intra- and inter colony aggression assays with the a priori expectation that there should be little or no aggression among nestmates. Since little or no aggression is expected among nestmates, we expect aggression to be less frequently reported in trials involving nestmates that are not conducted blind, compared with those conducted blind – that is, the experimenter has no knowledge of whether the ants involved in the assay comprise nestmates only, or a mixture of nestmates and non-nestmates.
The result looks pretty impressive – a huge difference between blind and non-blind studies.
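A toy simulation can illustrate how the mechanism described in the quoted passage plays out: if a non-blind observer occasionally explains away ambiguous aggression between nestmates because none is expected, the reported aggression rate drops well below what a blind observer would record. All numbers below are made up purely for illustration; this is a sketch of the general idea, not the authors’ actual analysis.

```python
import random

random.seed(42)

def run_trials(n_trials, p_aggression, expectation_bias):
    """Simulate nestmate aggression assays.

    p_aggression: true probability that a trial shows aggression.
    expectation_bias: probability that an aggressive nestmate trial is
    scored as 'no aggression' because the observer expects nestmates
    to be peaceful (0.0 for a blind observer). Hypothetical parameter.
    """
    reported = 0
    for _ in range(n_trials):
        aggressive = random.random() < p_aggression
        if aggressive and random.random() < expectation_bias:
            # the biased observer reclassifies the behaviour to fit expectations
            aggressive = False
        reported += aggressive
    return reported / n_trials

# Invented numbers, for illustration only
blind = run_trials(10_000, p_aggression=0.3, expectation_bias=0.0)
open_ = run_trials(10_000, p_aggression=0.3, expectation_bias=0.5)
print(f"blind scoring:     {blind:.2f}")
print(f"non-blind scoring: {open_:.2f}")
```

Even though every individual judgment call seems minor, the aggregate difference between the two scoring regimes is large – which is exactly the pattern the meta-analysis found between blind and non-blind studies.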
The authors remark that
Less than a third of the studies in our sample were conducted blind, a statistic similar to that published over 20 years ago for this kind of research. This is surprising, since confirmation bias is widely documented, and textbooks on scientific methods and experimental design encourage blind experimentation. While the nature of some experiments or sampling observations in animal behaviour would make it technically impossible to conduct them blind, there may be other explanations why blinding is so rare. Some researchers may choose to conduct open trials in the belief that the behaviour in question is easy to classify and therefore not prone to bias. Such a view is most likely mistaken, as confirmation bias occurs more or less unintentionally and scientists generally do not distort data intentionally.
While it is plausible that studies of behavior are particularly prone to confirmation bias because researchers often have to classify complex behavioral patterns into simple categories, I would think the issue exists in other (plant) studies as well, starting with the selection of plots and continuing with things like taxonomic identification or the measurement of “non-metric” variables. In some cases it will be impossible to remove this bias; in others (I’m thinking about fertilization experiments, for example) it would be no problem at all. Still, I’m not sure how many plant experiments are actually conducted blind – it would be interesting to hear opinions from people who have a better overview of that.
As a last comment: when teaching experimental design, an example that has worked really well for me to demonstrate the importance of confirmation bias is classic rock songs played backwards, such as the one shown in this video.
Impressive, isn’t it?