Last year, I blogged about “Correlation and process in species distribution models: bridging a dichotomy”, a paper that we published in a recent special issue of the Journal of Biogeography.
Broadly speaking, the paper discusses properties and issues around different approaches to modeling species distributions, with a focus on the extent to which those models merely describe associations between species and their environment (correlative) as opposed to including explicit causal mechanisms (process-based). Our concentration on this particular aspect is, among other things, what I take to be the main objection of a correspondence by Kriticos et al., which has now been published together with a response from us.
Both comments are fairly short, and I don’t want to repeat the points made in detail here (although I’d be happy to comment on them). Much of the discussion seemed to circle around words, classifications and focus. I can’t say that I was particularly persuaded to deviate from the angle taken in the original paper, but I think that’s fair enough: as a reader, I usually find it really useful to get a range of opinions on a topic, which a single paper simply can’t provide.
Other aspects of the exchange, however, are more the kind of thing on which some agreement would actually be useful, although I’m not sure we reached it. One particular aspect I want to highlight is the often-repeated claim that mechanistic models are in some sense intrinsically superior, and better at extrapolation, than correlative models by virtue of their mechanistic nature. I think much could be said about that already (see e.g. here), but the additional question that pops up in the context of this paper is: what about calibrated mechanistic models? Kriticos et al. state that:
Fitted process-based models such as CLIMEX and STASH (Sykes et al., 1996; Sutherst et al., 2007) are able to draw on the strengths of both correlative and mechanistic modelling paradigms. They allow the modeller to inductively fit ecologically relevant range-limiting functions to species distribution data in a similar manner to many correlative methods.
Admittedly, we make a similar point with a more Bayesian twist in a recent paper on Bayesian calibration of process-based models, where we write in the conclusions:
The importance of prior knowledge about parameters, and also about model structure, however, will remain an area where DVMs differ significantly from correlative modelling approaches. We therefore think that inverse modelling methods will not, as one might fear, reduce DVMs to merely a ‘very complicated’ version of a correlative model that is blindly adjusted to data.
While the latter two statements do express what one would hope for as a process-based modeler (and from a Bayesian viewpoint, I maintain that this hope is, in principle, justified, because a mechanism is simply another name for strong prior information, which should improve the inference), I think it is not unreasonable to take a more critical look at this question, which we do in the response to Kriticos et al.:
Fitted process-based models may create an illusion of predictive power by reference to their mechanistic underpinning, but if the process-based model structure and independent ecological knowledge do not sufficiently constrain potential outcomes, fitting the model parameters to observed species distributions may produce drawbacks in terms of transferability and extrapolation that are similar to those in purely correlative models. Therefore, we maintain that fitted process-based models lie somewhere in between completely correlative and completely forward process-based models.
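As an aside, the “mechanism equals strong prior information” claim in the parenthesis above can be made a bit more concrete with a minimal toy sketch (entirely my own illustration, not from either paper; the Gaussian response curve, the data and the priors are invented): the same sparse occurrence data typically constrain a process parameter much more tightly once independent ecological knowledge, say from physiological measurements, enters as an informative prior.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Toy "process model": occurrence probability as a Gaussian response to
# temperature, with an unknown thermal optimum t_opt to be calibrated
def occurrence_prob(temp, t_opt, width=4.0):
    return np.exp(-0.5 * ((temp - t_opt) / width) ** 2)

# Sparse presence/absence data from a truncated temperature gradient
temps = rng.uniform(10, 16, size=15)
presence = rng.binomial(1, occurrence_prob(temps, t_opt=18.0))

# Grid approximation of the posterior over t_opt
t_grid = np.linspace(0, 40, 801)
dt = t_grid[1] - t_grid[0]
log_lik = np.array([stats.bernoulli.logpmf(presence, occurrence_prob(temps, t)).sum()
                    for t in t_grid])

def posterior(log_prior):
    log_post = log_lik + log_prior
    p = np.exp(log_post - log_post.max())
    return p / (p.sum() * dt)

flat_prior = np.zeros_like(t_grid)                       # no mechanistic knowledge at all
mech_prior = stats.norm.logpdf(t_grid, loc=19, scale=2)  # e.g. physiological measurements,
                                                         # assumed here to be roughly right

for name, lp in [("flat prior", flat_prior), ("mechanistic prior", mech_prior)]:
    post = posterior(lp)
    mean = (t_grid * post).sum() * dt
    sd = np.sqrt(((t_grid - mean) ** 2 * post).sum() * dt)
    print(f"{name:17s}: t_opt = {mean:5.1f} +/- {sd:4.1f}")
```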
In the end, it all boils down to structural rigidity and correctness, as well as to model sensitivity. Do we have sufficient control over these in practical situations? I’m not sure, but I think it’s a point worth discussing at a time when the statistical and the mechanistic camps are moving ever closer together, through more complicated statistical model structures on the one hand and the possibility of statistically fitting mechanistic models on the other.
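In the same toy spirit, here is a crude way to probe the transferability worry from the quote above (again entirely my own illustration; the “process form”, the polynomial and all numbers are invented): fit a flexibly parameterised process-type response and a purely correlative polynomial to presence/absence data from a truncated temperature gradient, and then compare what the two predict where no data exist. How far they diverge, and how sensitive the process-type fit is to its freely fitted parameters, is a rough check on whether the mechanistic structure actually constrains the extrapolation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(2)

# "True" response: a thermal optimum that lies above the sampled range
def true_prob(t):
    return np.exp(-0.5 * ((t - 22.0) / 5.0) ** 2)

temps = rng.uniform(5, 18, 200)          # sampled gradient is truncated at 18 degrees
y = rng.binomial(1, true_prob(temps))

# Negative Bernoulli log-likelihood for any response model
def nll(model, params):
    p = np.clip(model(temps, params), 1e-9, 1 - 1e-9)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# "Fitted process-based" model: Gaussian response, optimum and width both free
def gauss_model(t, par):
    t_opt, log_width = par
    return np.exp(-0.5 * ((t - t_opt) / np.exp(log_width)) ** 2)

# Purely correlative model: cubic polynomial on the logit scale
def poly_model(t, par):
    return expit(np.polyval(par, (t - 12.0) / 10.0))

fit_gauss = minimize(lambda p: nll(gauss_model, p), x0=[15.0, np.log(5.0)])
fit_poly = minimize(lambda p: nll(poly_model, p), x0=np.zeros(4))

# Extrapolate into the unsampled part of the gradient and compare;
# the point is the comparison itself, not a predetermined winner
for t in (20.0, 25.0, 30.0):
    print(f"T={t:4.1f}  fitted process: {gauss_model(t, fit_gauss.x):.2f}  "
          f"correlative: {poly_model(t, fit_poly.x):.2f}  truth: {true_prob(t):.2f}")
```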
Possibly, the continuum could also be infinite in the direction of process-based models. An explicitly modeled process (e.g. plant growth as a function of light, water, nutrients, …) could just as well be regarded as a correlative model of the various sub-processes involved (e.g. light absorption, water and nutrient uptake, internal transport, …), with these sub-processes again being correlative models of further sub-processes, and so forth. (Comparable to the philosophical concept of holons, which are always a whole and a part at the same time, and hence sit in an infinite continuum in both directions. By defining species distributions as the system-level property of interest, however, the model continuum is closed in one direction by completely correlative SDMs.) One could therefore also try to distinguish models (on a continuum) by asking: to what extent are processes modeled explicitly, i.e. described by an explicit correlative submodel (or: how fundamental is the level at which we describe what happens)? And a second question could be: to what extent is fitting involved? Even a purely correlative model might, in the simplest case, not be fitted but imposed in a forward manner (i.e. another continuum, but one closed at both ends).
Not surprisingly, there was a heated discussion about how to define “process” and “mechanism”, and whether there is a difference between the two. An earlier version of the ms had the quote “one man’s mechanism is another man’s phenomenology” preceding the paper, but that got dropped at some stage; I actually don’t know why. In the end, we went with the definition via “biological meaning”, which is a bit tautological to my mind, but many others have defined it that way as well.
The point about how “fundamental” a model is, is also a good one; it boils down to questions about the scale of the mechanism, the scale of the emergent pattern, and how many levels of emergence lie in between. We had all sorts of discussions about that as well, e.g. are models that postulate causality, but on an aggregate level (e.g. dynamic range models), fundamentally less “process-oriented” than those that model individuals and scale up later?
In the end, I would say that the position on the process–correlation continuum is really not much more than an indicator that bundles different things, as we also acknowledge in the reply. It should somehow reflect the extent to which we believe a correlation to be causal, as well as emergence, scaling, generality, and also our uncertainty about those, which relates to the fitting. Fitting could indeed be a separate axis, but as we argue in the reply, we think it also affects the position on the process–correlation continuum, because if you fit, you implicitly admit that you are not 100% sure about your processes.
This all sounds pretty hand-wavy, but on the other hand: the placement of the different models on the continuum created the least discussion among us, and I would actually think that if you put a number of people in a room, they would come up with a similar ordering. And anyway, the point of this whole exercise doesn’t seem to me to be putting a number such as 2.356 on each model, but rather realizing that statistical and process-based models, particularly when the latter are calibrated inversely, are not that different any more, and that there are many newer approaches now that we would classify as a mixture of “pure” process thinking à la Bossel and “pure” correlative thinking, where you try to concentrate only on the data.