Notes from France

View from just below the Mont Sec, close to Grenoble

I’ve just returned from two weeks in France: the first week at the International Statistical Ecology Conference 2014 in Montpellier, and the second at the Laboratoire d’Écologie Alpine (LECA) in Grenoble, visiting the groups of Wilfried Thuiller and Sébastien Lavergne. Both visits were great.

Some impressions from the ISEC:

  • First of all, my compliments to the local organizers, who handled their job with enchanting Mediterranean charm.
  • For the future: don’t be afraid to present at an ISEC if you’re not a hardcore statistician. The majority of people were stats users rather than developers or statisticians, and it was overall a very friendly crowd.
  • Nearly everyone was using hierarchical Bayesian models.
  • In general, the presentations confirmed one problem I see with the current practice of Bayesian inference: it allows specifying and fitting very sophisticated models, but involves few checks of the assumptions of those models. I’m thinking of checking residuals, model selection, cross-validation. All of this is possible, but rarely done in practice, clearly also because of the computational burden. I see this as a real problem, because what are all the advances in model specification worth if you can’t systematically validate the fitted models? (A minimal sketch of such a check follows after this list.)
  • There were surprisingly few talks on ABC and related topics. I would have expected that more people are working on that. But we had two great keynotes on simulation-based inference, one by Marc Beaumont on ABC, and one by Simon Wood on his synthetic likelihood approach. I have argued earlier that the parallels between them are not sufficiently recognized. Neither keynote really attempted to bridge this gap, although Marc Beaumont made a few comments in that direction (a toy comparison of the two approaches follows after this list). Marc also presented some new ABC application together with Richard Sibly, using ABC to fit an earthworm model. I guess it’s this model, but I’m not sure. It looked interesting.
  • Perry de Valpine gave a great plenary. He also presented a new Bayesian modeling framework, NIMBLE, that uses the BUGS model specification and translates it into C++ (similar to Stan), but offers the possibility to specify your own sampling / simulation algorithms. Hence, it is a kind of hybrid between a general programming language and a DAG model specification language. Seemed worth trying out.
  • On the topic of new frameworks: quite a bit of buzz about AD Model Builder (ADMB). I had heard of it but haven’t used it yet. From what I understand, it does fast MLE inference for nonlinear or hierarchical models via the Laplace approximation (a sketch of the underlying idea follows after this list). People I talked to were very positive about it, but I am still a bit skeptical whether a Laplace approximation is stable for more complicated problems. Another thing to try out.
  • Ben Bolker gave a keynote on statistical machismo, citing the discussion initiated by Brian McGill as well as his own recent post on statistical software over at Dynamic Ecology. In the beginning I thought this could get controversial, but then it went in all-too-familiar directions for my taste. At least it got a good discussion going on why we, the editors and reviewers, are always pushing for new methods (not that this hasn’t been said before).
  • And there were many other interesting talks that I won’t be able to cover here, including keynotes by Marti Anderson, Nicholas Gotelli and Chris Wikle.
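
Since I keep complaining about the lack of model checking, here is what I mean by a posterior predictive check (Bayesian p-value), as a minimal sketch. Everything in it is invented for illustration (a conjugate Poisson-Gamma toy model, with the variance as discrepancy statistic); it is not code from any ISEC talk.

```python
# Minimal posterior predictive check for a Poisson model; the data and
# model here are synthetic, purely for illustration.
import numpy as np

rng = np.random.default_rng(1)
y = rng.poisson(4.0, size=50)  # 'observed' counts (synthetic)

# Stand-in for an MCMC sample: with a Gamma(a, b) prior on the Poisson
# rate, the posterior is Gamma(a + sum(y), b + n), so we can draw from
# it directly instead of running a sampler.
a, b = 1.0, 1.0
post_rate = rng.gamma(a + y.sum(), 1.0 / (b + len(y)), size=2000)

# For each posterior draw, simulate a replicate data set and compare a
# discrepancy statistic (the variance, sensitive to overdispersion)
# between replicated and observed data.
T_obs = y.var()
T_rep = np.array([rng.poisson(lam, size=len(y)).var() for lam in post_rate])

# Bayesian p-value: the fraction of replicates at least as extreme as the
# data; values near 0 or 1 flag a model that cannot reproduce this statistic.
print("posterior predictive p-value:", (T_rep >= T_obs).mean())
```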
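
To make the ABC / synthetic likelihood parallels a bit more concrete: both replace the intractable likelihood of a simulator with a quantity computed from simulated summary statistics, ABC by rejection on a distance, synthetic likelihood by a Gaussian approximation to the summaries. The toy below is my own construction (a normal simulator with mean and standard deviation as summaries, and an arbitrary 1% acceptance quantile), not taken from either keynote.

```python
# Toy contrast of ABC rejection and synthetic likelihood on one simulator;
# all modelling choices are illustrative, not taken from the keynotes.
import numpy as np

rng = np.random.default_rng(2)

def simulate(theta, n=100):
    """Simulator whose likelihood we pretend is intractable."""
    return rng.normal(theta, 1.0, size=n)

def summaries(x):
    """Summary statistics shared by both methods."""
    return np.array([x.mean(), x.std()])

obs = simulate(3.0)  # 'observed' data, true theta = 3
s_obs = summaries(obs)

# ABC rejection: keep prior draws whose simulated summaries land close.
prior_draws = rng.uniform(0, 6, size=20000)
dist = np.array([np.linalg.norm(summaries(simulate(t)) - s_obs)
                 for t in prior_draws])
eps = np.quantile(dist, 0.01)  # accept the closest 1% (arbitrary choice)
abc_post = prior_draws[dist <= eps]

# Synthetic likelihood: Gaussian approximation to the summary distribution.
def synthetic_loglik(theta, n_rep=100):
    S = np.array([summaries(simulate(theta)) for _ in range(n_rep)])
    d = s_obs - S.mean(axis=0)
    cov = np.cov(S.T)
    return -0.5 * (d @ np.linalg.solve(cov, d) + np.log(np.linalg.det(cov)))

grid = np.linspace(1, 5, 81)
sl = np.array([synthetic_loglik(t) for t in grid])

print("ABC posterior mean:      ", abc_post.mean().round(2))
print("synthetic likelihood MLE:", grid[sl.argmax()].round(2))
```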
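
And for those who, like me, had only heard of ADMB: the core trick is the Laplace approximation, which integrates random effects out of the likelihood via a Gaussian approximation around their conditional mode. The sketch below illustrates the idea on a toy Poisson model with a single random intercept; this is my own Python construction under assumed values (known random-effect SD, one group), not ADMB’s actual implementation.

```python
# Sketch of the Laplace approximation used by tools like ADMB to integrate
# random effects out of the likelihood; the toy model and values here are
# assumed for illustration, this is not ADMB's implementation.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
sigma = 0.5  # random-effect SD, treated as known here
y = rng.poisson(np.exp(1.0 + rng.normal(0, sigma)), size=30)  # true beta = 1

def neg_log_joint(b, beta):
    """-log[ Poisson(y | exp(beta + b)) * Normal(b | 0, sigma^2) ], up to constants."""
    lam = np.exp(beta + b)
    return len(y) * lam - y.sum() * (beta + b) + b**2 / (2 * sigma**2)

def laplace_loglik(beta):
    """Laplace approximation to the marginal log-likelihood of beta."""
    # Inner optimization: conditional mode of the random effect.
    opt = minimize_scalar(neg_log_joint, args=(beta,))
    # Analytic curvature of the negative log joint at the mode.
    h = len(y) * np.exp(beta + opt.x) + 1 / sigma**2
    return -opt.fun + 0.5 * np.log(2 * np.pi) - 0.5 * np.log(h)

grid = np.linspace(0.0, 2.0, 101)
ll = np.array([laplace_loglik(b) for b in grid])
print("approximate MLE of beta:", grid[ll.argmax()].round(2))
```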

I was very happy to give my own talk on inference in chaotic state-space models right after Simon’s keynote, which largely dealt with the same topic. The talk was motivated by the story around this comment that we sent to PNAS last year, but concentrated more on the underlying problem of estimating state-space models when the dynamics are chaotic. If you are interested, here are the slides.
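
To give an idea of the underlying problem: with chaotic dynamics, trajectories simulated at nearly identical parameter values diverge exponentially, so a naive trajectory-matching objective becomes extremely jagged and essentially unsearchable. The toy below, a logistic map of my own construction (not the model from the talk or the PNAS comment), illustrates this.

```python
# Why chaos makes fitting hard: a logistic map in the chaotic regime,
# purely illustrative (not the model from the talk or the PNAS comment).
import numpy as np

def trajectory(r, x0=0.2, n=50):
    """Iterate the logistic map x -> r * x * (1 - x)."""
    x = np.empty(n)
    x[0] = x0
    for t in range(1, n):
        x[t] = r * x[t - 1] * (1 - x[t - 1])
    return x

obs = trajectory(3.8)  # 'observed' series, chaotic regime

# Trajectory-matching error on a fine parameter grid: due to sensitive
# dependence on r, the surface is essentially noise away from the truth.
grid = np.linspace(3.75, 3.85, 2001)
sse = np.array([((trajectory(r) - obs) ** 2).sum() for r in grid])

# Even parameters within +/- 0.0005 of the truth produce large errors,
# so there is almost nothing smooth for an optimizer to follow.
near = np.abs(grid - 3.8) < 5e-4
print("minimum SSE near truth:", sse[near].min().round(4))
print("median SSE near truth: ", np.median(sse[near]).round(2))
```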

3 thoughts on “Notes from France”

  1. Reasonable criticism. Unfortunately I’m not a very extreme person, so it’s harder for me to say really controversial things. What are some controversial opinions that you might have expressed if you were giving a version of that talk?

    • Hi Ben,

      I would think of it more as personal taste than as criticism. I thought your keynote was a good one: enjoyably presented, easy to follow, balanced with many good points, and it sparked a lively discussion at the end. Probably a more appropriate choice than any controversy I might have raised.

      And anyway, I was not looking for controversy, but I do feel it is important that we reach a more tangible agreement about what is appropriate model / method complexity. If I take up Brian’s points, he named three things in particular that were all over the place at ISEC: a) observation models, b) spatial models, and c) to a lesser extent, comparative analysis. So, where do we stand: are we using too little, just enough, or too much of these methods? The answer is of course too little, it doesn’t work anyway because autocorrelation is inhomogeneous, and just enough, respectively; but any other definite answer would probably have created just as much disagreement.

      I think the reason why we can’t agree, apart from people being invested in particular viewpoints, is that we don’t agree about the purpose / ideal of statistical modelling. It would have interested me to see a poll of the percentage of people who agree with

      a) a good model/method should correctly represent the data-generating mechanism (structural realism)
      b) a good model/method should produce the lowest possible error in the inference when repeatedly applying it to new situations

      I feel that the dominant applied philosophy at ISEC was a), although I guess many people would also support the frequentist-type ideal b) (I liked Perry de Valpine’s comment about closet frequentists). In an ideal world, there should be no trade-off between a) and b), but I feel that in practice, with little control on structural uncertainty, there may well be one. As said above in the comment about Bayesian analysis, I think we Bayesians are currently not doing the best possible job of quantifying this structural uncertainty, and the same may be true for more complex frequentist approaches.

  2. Pingback: Bayesian model checking via posterior predictive simulations (Bayesian p-values) with the DHARMa package | theoretical ecology
