Statistical analysis with blinded data – a way to go for ecology?


Justitia, 1556, by Maarten van Heemskerck

In my last post about the Higgs rumors, I referred to an excellent blog post by Matt Strassler that features a long comment exchange between him and Peter Woit about the legitimacy of leaking information about experimental results before the data analysis has been completed. One thing that got me thinking was Matt’s point about “blinding the data”. From the context, I could understand what they were referring to, but confirming my intuition on Wikipedia made me aware of how common such blinded analyses seem to be in particle physics. From the article about blind experiments:

Modern nuclear physics and particle physics experiments often involve large numbers of data analysts working together to extract quantitative data from complex datasets. In particular, the analysts want to report accurate systematic error estimates for all of their measurements; this is difficult or impossible if one of the errors is observer bias. To remove this bias, the experimenters devise blind analysis techniques, where the experimental result is hidden from the analysts until they’ve agreed—based on properties of the data set other than the final value—that the analysis techniques are fixed.

They give an example of this:

One example of a blind analysis occurs in neutrino experiments, like the Sudbury Neutrino Observatory, where the experimenters wish to report the total number N of neutrinos seen. The experimenters have preexisting expectations about what this number should be, and these expectations must not be allowed to bias the analysis. Therefore, the experimenters are allowed to see an unknown fraction f of the dataset. They use these data to understand the backgrounds, signal-detection efficiencies, detector resolutions, etc. However, since no one knows the “blinding fraction” f, no one has preexisting expectations about the meaningless neutrino count N′ = N × f in the visible data; therefore, the analysis does not introduce any bias into the final number N which is reported.
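
To make the mechanics concrete, here is a minimal sketch of such hidden-fraction blinding in Python; the dataset, the seed, and the range from which f is drawn are illustrative assumptions, not part of the SNO procedure:

```python
import random

def blind_split(events, rng):
    """Withhold a random, undisclosed fraction of the data.

    Analysts only ever see the visible subset; since nobody knows the
    blinding fraction f, the event count in that subset carries no
    information about the true total N.
    """
    f = rng.uniform(0.2, 0.8)                     # blinding fraction, kept sealed
    visible = [e for e in events if rng.random() < f]
    return visible, f                             # f is revealed only at unblinding

rng = random.Random(2012)                         # seed held by a third party (illustrative)
events = list(range(10_000))                      # stand-in for the full dataset, true N = 10,000
visible, f_sealed = blind_split(events, rng)

# Analysts tune cuts, background estimates and error models on `visible`
# alone; once the analysis pipeline is frozen, f is revealed and the
# final result is computed on the full dataset.
print(f"visible count N' = {len(visible)} (true N and f stay hidden)")
```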

That seems to me a very reasonable approach for ecology as well; in fact, for every type of experimental or empirical work, but particularly for work that draws on large datasets and databases, what you might call synthesis ecology.

I wonder if anyone has seriously done this, or at least thought about it, in ecology … ? And my cynical self wonders by how much the percentage of significant results would drop if everyone blinded their data before the analysis ;).

8 thoughts on “Statistical analysis with blinded data – a way to go for ecology?”

  1. Pingback: Should ecologists do blinded data analyses? | Dynamic Ecology

  2. I see what you’re suggesting as formalizing the separation of exploratory and confirmatory analyses in the practice of ecology. We already act as though this is what we do, but it would be very nice to see it broadly practiced and rewarded. I don’t know, but would be very interested in figuring out, what effect it has on bias in testing to use a posterior based on the exploratory part of the data as the prior in a Bayesian analysis.

    Intuitively, if you use almost the entire dataset for exploration, then your analysis would be almost entirely exploratory; if you instead assume a model based on the experiment and take your priors from elsewhere, then your analysis would be entirely confirmatory. Maybe there’s a happy medium? It would probably make people more inclined to carry out the analysis “blinded” if the exploratory part of the data weren’t “just thrown away” (see the sketch below).
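
    A minimal sketch of such an exploratory/confirmatory split, in Python; the simulated data and the 30/70 split are illustrative assumptions, not a recommendation:

    ```python
    import random
    import statistics

    rng = random.Random(1)
    data = [rng.gauss(0.3, 1.0) for _ in range(200)]   # simulated measurements

    rng.shuffle(data)
    n_explore = int(0.3 * len(data))                   # exploratory fraction (arbitrary)
    explore, confirm = data[:n_explore], data[n_explore:]

    # Exploration: look at this part freely and settle on a hypothesis
    # (here: the mean is positive) and on the analysis to run.
    print("exploratory mean:", statistics.mean(explore))

    # Confirmation: test the now-fixed hypothesis on untouched data only.
    m = statistics.mean(confirm)
    se = statistics.stdev(confirm) / len(confirm) ** 0.5
    print(f"confirmatory mean = {m:.3f}, t = {m / se:.2f}")
    ```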


    • hmm … I agree that exploratory analysis is a typical application in ecology, and it’s an open question how to deal with hypotheses that result from such analyses.

      However, this is more of an additional point. As far as we are speaking about a blinded analysis as described above, there is no exploratory part really; the assumption is that the hypothesis / model is already fixed before the analysis. The question we want to answer is whether we can reject this hypothesis, or rather, what the effect sizes/parameters of the model are. To that end, only a subset of the data is provided to the analysts to fix the statistical assumptions (e.g. the statistical error of the measurement device).


    • Sorry, just to add to this – the point really is that the blinded data are informative only about things that relate to the error model, but not about the parameters that relate to the actual physical effect sizes we want to estimate.

      So, you couldn’t really use it as a prior for the subsequent analysis of the effect size; the whole point of the blinding is that the analyst can’t draw any sensible conclusions from the data about the parameters we are actually interested in, so that they can’t adjust model assumptions to make those parameters fit better to what they believe is true.

      As far as error parameters such as variance parameters are concerned, we could of course encode this in a prior, but usually the model is later applied to the whole dataset anyway, including the former subset, so it’s not really necessary.


      • That first point doesn’t sound quite right. The blinding is very nicely done in that you can’t say anything about the physical parameters you want to estimate (because you don’t know what fraction of the data you have), but in carrying out the complete analysis you do know how much data was withheld, so (conditional on f) the analysis on the blinded data is informative.

        That said, I didn’t catch on initially that the point of doing blinding this way was then to re-use the whole data set for the final analysis. That makes a lot of sense (too).


        • OK, what’s clear is that you can’t specify a correct prior based on the blinded data alone, because this is what blinding is supposed to achieve. Of course, once someone tells you the fraction f, you can include this in your analysis, recalculate your results and then use them as a prior, but note that by doing so, you have included additional information that has “unblinded” the data for you.

          Given the practical problems of correctly transferring posterior MCMC samples into new priors, however, I don’t think it makes an awful lot of sense to go down that road; in most cases it is probably easier to redo the analysis on the full dataset (see the sketch below).
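
          A minimal worked example of the unblinding step, in Python; the numbers are purely illustrative:

          ```python
          # Once the blinding fraction f is revealed, the blinded count is
          # trivially rescaled, so anything derived from it is no longer blind.
          n_visible = 4_980                     # N' counted in the visible subset (made up)
          f_revealed = 0.498                    # blinding fraction, disclosed at unblinding (made up)
          n_estimate = n_visible / f_revealed   # estimate of the true total N
          print(f"unblinded estimate N ≈ {n_estimate:.0f}")   # -> 10000
          ```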


  3. Pingback: Trust and trustability « theoretical ecology

  4. Pingback: Seing is believing – or is believing seing? Confirmation bias in ecological data | theoretical ecology
