10 thoughts on “Explaining the ABC-Rejection Algorithm in R”

  1. Dear Florian,

    thank you for your transparent application of this algorithm in R. I observed some inconsistencies in your attached R code: as provided it does not run because of changed variable names. The observed summary is computed from “data”, which should be “observedData”, and the deviation is calculated with “observedSummarystatistics” rather than “observedSummary”.
    Hope I am right with my observations; otherwise I did not get the calculations and apologise for my comment.



    • Hi Matthias, thanks for telling me. Yes, I still had the old variable names in my workspace and didn’t notice that I hadn’t changed them everywhere, a bit sloppy. I’m changing that as I write. Thanks for the hint! F


  2. Pingback: Explaining the ABC-Rejection Algorithm in R ← Patient 2 Earn

  3. Pingback: struggling with problems already partly solved by others | Hypergeometric

  4. Pingback: A simple explanation of rejection sampling in R | theoretical ecology

  5. Dear Florian,

    Thank you for the very detailed R code, which has been very helpful after reading many theoretical papers on the topic. I have one question regarding reference tables. For this example, the reference table is “fit”, isn’t it? And what if I want to make a model choice with ABC? Let’s assume I want to compare m models: do I have to simulate “fit” m times and make a comparison afterwards?

    Thank you in advance,

    Best Regards



    • Hi Nutsa,

      if you want to compare several models, set up the ABC procedure for each model, decide on prior model weights (typically equal), and count how often the different models are accepted. This is described in Toni, T. & Stumpf, M. P. H. (2010) Simulation-based model selection for dynamical systems in systems and population biology. Bioinformatics, 26, 104-110.
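      A minimal sketch of this counting scheme in R. The simulators model1/model2, the observed summary, the tolerance eps and all other names here are made-up placeholders for illustration, not code from the post:

      ```r
      # Sketch of ABC model choice by rejection, assuming two hypothetical
      # simulators model1() / model2() that each return one summary statistic.
      # All names and the tolerance eps are illustrative only.

      set.seed(1)
      observedSummary <- 0          # stands in for the real observed summary
      eps <- 0.1                    # acceptance tolerance (made up)
      n   <- 10000                  # number of simulations

      model1 <- function() rnorm(1, mean = rnorm(1, 0, 1), sd = 1)  # toy model 1
      model2 <- function() rnorm(1, mean = rnorm(1, 2, 1), sd = 1)  # toy model 2

      accepted <- character(0)
      for (i in 1:n) {
        m   <- sample(c("M1", "M2"), 1)   # equal prior model weights
        sim <- if (m == "M1") model1() else model2()
        if (abs(sim - observedSummary) < eps) accepted <- c(accepted, m)
      }

      # Posterior model probabilities are approximated by acceptance frequencies
      table(accepted) / length(accepted)
      ```

      The same counting idea extends to m models by sampling the model index from the prior model weights over all m candidates.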

      The whole procedure has one problem, however: summary statistics that are sufficient for parameter estimation within each model are not necessarily sufficient for model selection. This was not recognized in the earlier papers. Thus, for a valid inference, you have to check (probably via simulations) that your summary statistics are suitable for model comparisons. A reference is Robert, C. P., Cornuet, J.-M., Marin, J.-M. & Pillai, N. S. (2011) Lack of confidence in approximate Bayesian computation model choice. Proceedings of the National Academy of Sciences, 108, 15112-15117.
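      One way to run such a simulation-based check in R, sketched with entirely hypothetical toy models (the function and variable names are placeholders, not code from the post): generate “observed” data under a known model, run the ABC model choice, and verify that the true model is recovered.

      ```r
      # Sketch of a calibration check for ABC model choice: simulate the
      # "observed" summary under a known model and see whether ABC favours
      # that model. All names, models and the tolerance are illustrative.

      set.seed(2)
      eps <- 0.1                            # acceptance tolerance (made up)
      n   <- 5000                           # simulations per ABC run

      model1 <- function() rnorm(1, 0, 1)   # toy simulator for M1
      model2 <- function() rnorm(1, 2, 1)   # toy simulator for M2

      abcModelChoice <- function(obs) {
        acc <- character(0)
        for (i in 1:n) {
          m   <- sample(c("M1", "M2"), 1)   # equal prior model weights
          sim <- if (m == "M1") model1() else model2()
          if (abs(sim - obs) < eps) acc <- c(acc, m)
        }
        table(factor(acc, levels = c("M1", "M2"))) / max(length(acc), 1)
      }

      # "Observed" summary generated under M1: M1 should receive the
      # larger posterior weight if the summary statistic is adequate
      trueM1obs <- model1()
      abcModelChoice(trueM1obs)
      ```

      Repeating this over many simulated data sets (and both models) gives an empirical check of whether the chosen summary statistic discriminates between the models.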


  6. Dear Florian,

    Thank you for the quick response. I have read about the problems of standard ABC model selection and I am going to use the newer ABC Random Forest approach for that purpose; however, I had difficulties in creating the so-called “reference table” with model indices and parameters. I will read the paper by Toni, T. & Stumpf, M. P. H., thank you for the advice.

    Best Regards


    • Hi Nutsa,

      I have seen the RF ABC papers, but haven’t read them in detail – does the approach indeed solve the problem discussed in Robert et al. 2011?

      Just an idea – if you are looking at this anyway, maybe a guest post explaining / demonstrating RF ABC in comparison to standard ABC model selection could fit nicely here? Get in touch via email if you’re interested.

      Best, F

