Confirmation bias refers to the well-known phenomenon that we tend to favor information that confirms our beliefs. A while ago, I blogged about potential confirmation bias in statistical analysis, and the steps taken by large projects such as the LHC to avoid it. Another place where confirmation bias can become relevant for scientific studies is during data acquisition, for example when we have to decide things like “is this shade-tolerant plant growing in a shady location?”, “which crown form do I see for this tree that typically has a round crown?”, “what bird species do I hear in this place where European Robins were reported last year?”, or “do nestmates that shouldn’t be aggressive behave aggressively or not?”
Whether there is a confirmation bias in studies published on the last question was analyzed by Ellen van Wilgenburg and Mark Elgar in a meta-analysis recently published in PLOS ONE (ht Volker Nehring). As they explain:
Nestmate recognition experiments typically involve intra- and inter-colony aggression assays with the a priori expectation that there should be little or no aggression among nestmates. Since little or no aggression is expected among nestmates, we expect aggression to be less frequently reported in trials involving nestmates that are not conducted blind, compared with those conducted blind – that is, the experimenter has no knowledge of whether the ants involved in the assay comprise nestmates only, or a mixture of nestmates and non-nestmates.
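In practice, blinding here just means decoupling the aggression scoring from the pairing key. A minimal sketch of how that bookkeeping could look (colony labels and file names are made up for illustration):

```python
# Toy sketch of blinded trial bookkeeping: an assistant generates coded
# pairings and withholds the key; the observer scores aggression knowing
# only the trial codes, and the key is merged back in after scoring.
import csv
import random

pairings = [("colony_A", "colony_A"),  # nestmates only
            ("colony_A", "colony_B"),  # mixture
            ("colony_B", "colony_B"),  # nestmates only
            ("colony_B", "colony_C")]  # mixture
random.shuffle(pairings)

with open("trial_key.csv", "w", newline="") as key_file:  # withheld from the observer
    writer = csv.writer(key_file)
    writer.writerow(["trial_id", "ant_1_colony", "ant_2_colony"])
    for i, (colony_1, colony_2) in enumerate(pairings):
        writer.writerow([f"trial_{i:03d}", colony_1, colony_2])

# The observer only ever sees the coded trial IDs:
print([f"trial_{i:03d}" for i in range(len(pairings))])
```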
The result looks pretty impressive – a huge difference between blind and non-blind studies:
In the study, blind experiments were significantly more likely to report aggression in the controls than those not conducted blind (11 out of 15, or 73%, versus 9 out of 42, or 21%; P < 0.001).

[Figure 1 of the paper, doi:10.1371/journal.pone.0053548.g001]
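As a quick sanity check of that contrast (the test used to re-derive the P value below is my assumption; only the counts and the reported P < 0.001 come from the paper):

```python
# Re-derive the blind vs. non-blind contrast from the reported counts.
from scipy.stats import fisher_exact

#                    [aggression reported, not reported]
blind_trials     = [11, 4]   # 11 of 15 blind studies reported nestmate aggression
non_blind_trials = [9, 33]   # 9 of 42 non-blind studies did

odds_ratio, p_value = fisher_exact([blind_trials, non_blind_trials])
print(f"odds ratio = {odds_ratio:.1f}, P = {p_value:.5f}")
# odds ratio around 10, P well below 0.001, consistent with the reported result
```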
The authors remark that
Less than a third of the studies in our sample were conducted blind, a statistic similar to that published over 20 years ago for this kind of research [18]. This is surprising, since confirmation bias is widely documented, and textbooks on scientific methods and experimental design encourage blind experimentation [12]–[18]. While the nature of some experiments or sampling observations in animal behaviour would make it technically impossible to conduct them blind, there may be other explanations why blinding is so rare. Some researchers may choose to conduct open trials in the belief that the behaviour in question is easy to classify and therefore not prone to bias. Such a view is most likely mistaken, as confirmation bias occurs more or less unintentionally and scientists generally do not distort data intentionally [45].
While it is plausible that studies of behavior are particularly prone to confirmation bias because researchers often have to classify complex behavioral patterns into simple categories, I would think the issue is present in other (e.g. plant) studies as well, starting with the selection of plots and continuing with things like taxonomic identification or the measurement of “non-metric” variables. In some cases it will be impossible to remove this bias; in others (I’m thinking about fertilization experiments, for example) it would be no problem. Still, I’m not sure how many plant experiments are actually conducted blind – it would be interesting to hear opinions from people who have a better overview of that.
As a last comment: when teaching experimental design, an example that has worked really well for me to demonstrate the importance of confirmation bias is the case of classic rock songs played backwards, such as the example shown in this video.
Impressive, isn’t it?
What I find even more interesting is that with experience, researchers tend to develop an intuition for what may or may not work in the expected way – prior to setting up an experiment. So, basically, the entire experimental design is then built on top of these intuitive “gut feelings”. That’s good because it decreases the chances of getting a “no results” result, but it also just confirms reality as we know it. This is why it’s great to have young PhDs who haven’t yet developed such a “skilled” approach :)
Hi Anna,
I guess as long as the gut feeling is right, i.e. the experiment is clean and the effect is really there, it doesn’t really seem like confirmation bias to me, but rather a “testing the known” bias, which is yet another problem.
I think new ideas are clearly great and important, but I wouldn’t dismiss the “skill” you are speaking about too readily; in many cases it may not be ecological “conservatism” that makes their experiments work, but rather a lot of experience with what can and cannot be shown statistically. It’s very common that PhD students propose experimental setups for which an experienced researcher can immediately say that one cannot expect significant results with the proposed sample size (see the toy power calculation below).
So, I think there are two things we have to separate: good experimental and statistical skills (which everyone should listen to), and conservatism of ideas (which one should listen to, but not always follow).
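To make the sample-size point concrete, here is the kind of back-of-envelope power check I mean (effect size, alpha, and power target are assumed illustrative values, not from any real proposal):

```python
# How many replicates per group does a two-sample t-test need to detect a
# "medium" standardized effect? (All numbers below are assumptions.)
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,  # Cohen's d, a medium effect
    alpha=0.05,       # significance level
    power=0.8,        # desired probability of detecting the effect
)
print(f"needed per group: {n_per_group:.0f}")  # ~64; a 10-replicate design has little chance
```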
There is, perhaps, a variation on this theme. In discussing divergent reactions to our report of a “weak and variable” relation between species richness and ecosystem productivity, we made the distinction between “theory demonstrations” versus “theory investigations” (http://www.sciencemag.org/content/335/6075/1441.3.full.pdf). In the first of these there is commonly a selective process of attempting to find supporting examples wherever they may crop up. Everyone should recognize that these will be biased exercises, but with some justification, since an adherent to a theory may be looking to see when and where its predictions apply. “Theory investigations,” in contrast, have a different motivation: to evaluate the explanatory adequacy and limitations of theories so as to improve them. Here the standard for both objectivity and investigative adequacy is higher, I think.
Just a thought about a different perspective on the context behind some long-standing debates.
Jim Grace
Hi Jim, interesting points … I think it’s natural that, if one has derived a new theory, one looks for support, but I wonder whether it’s really good practice.
The reason is that a “theory demonstration”, even if labeled as such, is clearly designed to help individuals draw attention to their ideas. I don’t really see how science as a whole benefits from them. It may be necessary to do a bit of “cherry picking” to help a good idea across the barrier of conservatism, but apart from that, I think at any stage of discovery we should consider all data and all possible alternative explanations; we might have much less of a mess of alternative theories and indices floating around if people did this from the beginning.
Florian, I agree with you completely. I like the distinction as a way of trying to create awareness in those who don’t understand how powerful confirmation bias is — sort of a Trojan horse concept. But, the mess created by theory confirmation bias is a huge problem I think. My colleagues and I are deep in it right now and the difficulty of getting people to even admit basic truths is depressing.
Pingback: Friday links: confirmation bias confirmed, peak reading, diatom art, and more | Dynamic Ecology
As I said over at Dynamic Ecology today, I’d be curious to know the range of reactions to this study from researchers working in this area. What proportion would have the following reactions?
1. “Yup, that’s why we always do our studies blind.”
2. “Wow, guess I’d better start doing my studies blind.”
3. “This just shows what happens if you observe animal behavior without the sort of training and experience that my students and I have. Blinding is for people who don’t really know their organism.”
4. “Ok, but this just shows behavioral studies are hard, you can’t expect them all to be perfect, or make the perfect the enemy of the good-enough.”
5. “Whatever. Even non-blind studies get the direction of the effect right on average, so this is just nitpickers carping about trivialities.”
haha, well, you could of course always ask over at Dynamic Ecology?
Careful not to verbally bias the bias questions though, and of course there may always be people who say 1 and think 3 😉 . I don’t envy our colleagues from the social sciences; in ecology, at least, our subjects themselves should be unaffected by our own biases.
Yes, I freely admit that my phrasing of the possible reactions is both tongue-in-cheek and likely to bias what people would admit to. 🙂 But in all seriousness, it would be interesting to know about this. Scientists really do vary in their attitudes about this and other methodological issues, and it would be interesting to quantify that variation.