There have been too many interesting papers in the last weeks for a lone blogger to cover, so I’m glad that other interesting stuff has been discussed elsewhere, e.g. by EEB & flow on the controversy around “Novel Ecosystems”, NIBMS explaining why there may be no species below 1 mm, and Nature reporting on the Tara Oceans project.
Plus, I wanted to mention a recently published preprint by Frederic Barraquand et al. on “Lack of quantitative training among early-career ecologists: a survey of the problem and potential solutions”, which makes an interesting empirical reply to E.O. Wilson’s claim that math may be of lesser importance for ecologists. I have been harassing Fred to write a guest post about his paper once he returns from his holidays, and as there are certain indications that I was successful, I will leave it at that for the moment.
The paper I want to talk about in a bit more detail now would certainly have passed under my radar if Google Scholar hadn’t suggested it to me. In fact, it took me a while to realize why it appeared in my suggestions. After skimming it, however, I became quite interested – Lavine et al. use “synthetic likelihood” as one of their methods to fit a stochastic epidemiological model to data.
The idea of synthetic likelihoods is very simple (note that synthetic likelihood is the term used by Wood 2010; we call the same thing parametric likelihood approximations in our 2011 EL review): if the exact likelihood for a stochastic simulation is intractable, one creates samples for the data by simulating from the stochastic model, fits a distribution to those samples, and reads off the likelihood from this distribution. We use the same method in a paper that I blogged about recently, with very favorable results. It seems from the references that this kind of inference is not uncommon in epidemiological modeling, which was interesting to see – I haven’t seen it in ecology so far, although we argue that there may be ample potential for this method due to its lower computational costs compared to alternatives such as approximate Bayesian computation (ABC).
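To make the recipe concrete, here is a minimal sketch in Python of the Wood (2010) idea – simulate, summarize, fit a Gaussian to the simulated summaries, read off the log density of the observed summary. The toy Poisson model, the choice of summary statistics, and all function names are my own illustration, not code from any of the papers mentioned:

```python
import numpy as np

def synthetic_loglik(theta, s_obs, simulate, summarize, n_sims=200, seed=0):
    """Synthetic log-likelihood of theta: fit a multivariate Gaussian to
    summary statistics of n_sims simulations and evaluate s_obs under it."""
    rng = np.random.default_rng(seed)
    sims = np.array([summarize(simulate(theta, rng)) for _ in range(n_sims)])
    mu = sims.mean(axis=0)
    # small ridge on the covariance keeps it invertible
    cov = np.cov(sims, rowvar=False) + 1e-8 * np.eye(sims.shape[1])
    diff = s_obs - mu
    _, logdet = np.linalg.slogdet(cov)
    d = len(mu)
    return -0.5 * (diff @ np.linalg.solve(cov, diff)
                   + logdet + d * np.log(2 * np.pi))

# toy "intractable" stochastic model: 50 Poisson counts with rate theta
def simulate(theta, rng):
    return rng.poisson(theta, size=50)

def summarize(x):
    # sample mean and variance as summary statistics
    return np.array([x.mean(), x.var()])

obs = simulate(4.0, np.random.default_rng(1))   # pretend this is the data
s_obs = summarize(obs)

# the synthetic log-likelihood should peak near the true rate of 4
lls = {th: synthetic_loglik(th, s_obs, simulate, summarize)
       for th in [2.0, 4.0, 8.0]}
```

The same evaluated surface could then be handed to any optimizer or MCMC sampler, which is essentially how the approach is used for fitting in practice.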
What was also new to me is that this approach is implemented in the pomp package for R, which provides a number of non-standard methods for fitting state-space models – I clearly should have a closer look at it, it looks really interesting.