95 thoughts on “A simple Metropolis-Hastings MCMC in R”

  1. Pingback: Setting up OpenBugs and R « theoretical ecology

  2. Pingback: MCMC chain analysis and convergence diagnostics with coda in R « theoretical ecology

  3. Thank you very much for an excellent blog post about MCMC. I have noticed, however, something that seems a bit strange to me: in the “prior” function you return a sum, not a vector, of the obtained values (return(aprior+bprior+sdprior)). Shouldn’t that be a vector? Otherwise, how else would you call this function again when you do the sampling?
    Anna
    PhD student in evolutionary biology


    • Hi Anna,

      glad you like the post.

      Regarding your question: in principle, I could also return a vector with the prior probability for each parameter separately, but those would be summed up later anyway.

      The posterior (which is what we are working on with the MCMC) is given by

      P(Parameters|Data) ∝ L(Data|Parameters) * Prior(Parameters)

      where Prior(Parameters) is the prior probability that a certain combination of parameters will occur together. As we assume here that the priors for the individual parameters are independent, Prior(Parameters) = Prior(Parameter1) * Prior(Parameter2) * … , or, if you work with the log of the probabilities, the sum. This is why I return the sum here: it is basically already the joint (log) prior for a certain combination of parameters to occur together, based on the individual priors we assumed.
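
      For illustration, here is a minimal sketch of such a prior function (the distributions follow the post, but the exact uniform ranges here are just assumptions for this sketch):

      # Joint log prior: sum of the individual log densities, assuming independent priors
      prior = function(param){
        a = param[1]; b = param[2]; sd = param[3]
        aprior = dunif(a, min = 0, max = 10, log = T)    # log density of the uniform prior for a
        bprior = dnorm(b, sd = 5, log = T)               # log density of the normal prior for b
        sdprior = dunif(sd, min = 0, max = 30, log = T)  # log density of the uniform prior for sd
        return(aprior + bprior + sdprior)                # sum of logs = log of the joint prior
      }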

      Cheers,
      Florian


  4. Wonderful post, it helped me a lot. But I have a question. In the section “Defining the prior”, you said “To make it easy, I used uniform distributions for all three parameters, but not that for the standard deviation”. For parameter “b”, you define bprior = dnorm(b, sd = 5, log = T) in the prior function. I’m confused.


  5. Pingback: A simple Approximate Bayesian Computation MCMC (ABC-MCMC) in R « theoretical ecology

  6. Great post. It helped me a lot, but I have a question: how can we use MCMC when we want to estimate the parameters of a multiple regression with, for example, 4 variables?


  7. Hi Florian,

    Thank you very much for a very clear introduction on MCMC. However, after playing around with your code, I have noticed a problem:

    The performance of the algorithm seems to get progressively worse as the sample size increases, e.g. with sampleSize <- 301 vs. sampleSize <- 3001. The acceptance rate becomes extremely low in the latter case. As the log-likelihoods become large negative numbers, so do their differences (i.e., posterior(proposal) - posterior(chain[i,]) is a very large negative number for most proposed steps).

    When sample size is large, it seems to be difficult to get the chains to mix well again by reducing the variances of the proposal distributions.

    Am I missing something, or is this a common problem? Is there a way to get around it, or is it a matter of fine-tuning the proposal variances?

    Best regards,
    Wilson


    • Hi Wilson,

      Your observation is correct. The reason is that if you have more data, the posterior gets narrower. As a result, the proposal function that I suggested is too wide and produces lots of rejections; adjust it and the mixing will improve (professional samplers like JAGS do this automatically for you). I explain this in detail in a follow-up post https://theoreticalecology.wordpress.com/2011/12/09/mcmc-chain-analysis-and-convergence-diagnostics-with-coda-in-r/ .

      Also, note that with so much data, your initial guess might be much worse (in terms of relative goodness of fit) than before, so you might need to use a longer burn-in time.

      As a third comment, with that amount of data it becomes very clear that a flat prior is not uninformative, i.e. it carries a preference towards larger values for the standard deviation. If you want a more uninformative prior, a 1/sd scaling for the prior of the standard deviation would probably correct that and lead asymptotically to a better retrieval of the original parameters.
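
      To make the first point concrete, here is a rough sketch of what adjusting the proposal could look like (the shrinkage factor of 0.1 is an arbitrary illustration, not a recommendation for your data):

      # Narrower random-walk proposal: scale down the standard deviations of the steps.
      # The post uses sd = c(0.1, 0.5, 0.3); with more data, smaller steps are needed.
      proposalfunction = function(param){
        return(rnorm(3, mean = param, sd = 0.1 * c(0.1, 0.5, 0.3)))
      }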


  8. Pingback: Olhando o MCMC mais de perto… « Recologia

  9. Hello Florian, the explanation is superb and very helpful!!! I have one question though: I was trying to use the same code to solve another (similar) simple regression problem, and I ran into trouble, especially with the proposal function. I suspect the issue is in the choice of the standard deviations. In your example above, how did you choose the sd values sd = c(0.1,0.5,0.3) in the proposal function? Thank you in advance.


    • Hi Ngesa, first have a look at the last section of this post https://theoreticalecology.wordpress.com/2011/12/09/mcmc-chain-analysis-and-convergence-diagnostics-with-coda-in-r/ which explains the basic problem.

      Basically, you want your proposal to fit the shape of the posterior you sample from. Very briefly, one could say that for unimodal posterior distributions, a proposal with a similar shape as the posterior (including correlations between parameters), but somewhat shrunk compared to the posterior, usually works well.

      How much you should shrink depends on the number of parameters you vary in the proposal, the shape of the posterior, and how well your proposal mimics that shape; typically you will have to adjust the proposal after running your sampler for the first time (that is all right as long as you discard all samples drawn during the adjustment). As an initial guess (don’t quote me for that) I would start, for a typical regression problem with 3-8 parameters and all parameters varied at once, with 1/10th of the posterior uncertainty that I expect, but again, this depends a lot on how many parameters you have and how much correlation I would expect between them.

      For more details see

      Gelman, Andrew, et al. Bayesian data analysis. Chapman & Hall/CRC, 2004.

      Andrieu, Christophe, and Johannes Thoms. “A tutorial on adaptive MCMC.” Statistics and Computing 18.4 (2008): 343-373.


  10. I was wondering: what if, instead of log = T, I substitute log = F? How does that affect the overall results?
    If you could please explain, I would greatly appreciate it.
    Thank you


    • In principle it’s the same, of course; it’s simply that computers have problems storing very small numbers, as explained in the paragraph “why we work with likelihoods”.

      Try it out: if you create a sufficiently large dataset, you will get rounding or underflow problems when working with log = F (of course you have to update the formulas, e.g. instead of probab = exp(posterior(proposal) - posterior(chain[i,])) you should have probab = posterior(proposal) / posterior(chain[i,])). We sometimes do this as an exercise in our courses.
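
      A quick way to see the underflow (a minimal sketch; the sample size here is arbitrary):

      set.seed(1)
      x = rnorm(10000)
      prod(dnorm(x))           # the product of the densities underflows to 0
      sum(dnorm(x, log = T))   # the log-likelihood is still a perfectly usable number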


  11. This may be a basic question, but in the above you call a the intercept. Isn’t it the slope of the regression line? Forgive me if there is something obvious I am missing.


  12. Pingback: MCMC for maximum likelihood in R | Trees 'r' Stats

  13. Pingback: Running Bayesian models | Sam Clifford

  14. Super helpful post!
    I have been reading books, chapters, presentations, and papers about MCMC for the last 2 months. This post came just in time to clear up the cloudy picture of MCMC in my brain.


  15. Pingback: MCMC | Letian's Blog

    • Looks roughly the same to me, except that your plot window has a different size; this may depend on your OS / editor / output device.

      I would say everything is fine. Small differences in the results are expected, of course, because, as you say, we haven’t fixed the random seed, so each time we run this we get a slightly different dataset.


  16. Pingback: A simple explanation of rejection sampling in R | theoretical ecology

  17. Pingback: Making the Most of MCMC | Sweet Tea, Science

  18. Thanks a lot for this very helpful post. I have just one doubt: is it normal that the prior distributions are not updated through the iterations of the chain?
    Thanks again
    Fabio


  19. Amazing, great post. Is it possible to re-calculate it in Excel, so that it would be easy to understand for those who are unfamiliar with R?
    Thanks
    Haolia


    • Hi Haolia, glad you liked the post! About Excel: I guess it would be possible, but I have to admit I don’t see the point. I would probably have to use Visual Basic to do this in Excel, and if you understand VBA, you should also be able to read R code. In general, if you want to do serious stats, the best thing to do is to stop using Excel for anything other than data entry, and get used to R.


  20. Great post. Really cleared things up for me. One question, however: if a non-symmetric proposal function were used, a “correction factor” would be required, correct? I’m familiar with how to do it for a single parameter but, if required, how would it be implemented in the multi-parameter case you’ve described? Thanks.


    • Yes, if you have a non-symmetric proposal distribution g(x'|x) for going from x to x', you will have to multiply the Metropolis ratio by the correction factor g(x|x') / g(x'|x). This works for any number of dimensions.

      In practice, this is rarely used though. Most situations where a non-symmetric proposal would be useful can better be dealt with by applying a transformation on the parameter space.
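
      To sketch how that correction enters the acceptance step, here is a single MH update for one positive parameter with a lognormal random-walk proposal (logPosterior and the sdlog value are assumptions for illustration, not part of the code in the post):

      # One Metropolis-Hastings step with an asymmetric (lognormal) proposal.
      # logPosterior() is an assumed function returning the log posterior density.
      mhStepLognormal = function(current, logPosterior, sdlog = 0.2){
        proposal = rlnorm(1, meanlog = log(current), sdlog = sdlog)
        # log acceptance ratio = log posterior ratio + log of the correction g(x|x') / g(x'|x)
        logRatio = logPosterior(proposal) - logPosterior(current) +
          dlnorm(current, meanlog = log(proposal), sdlog = sdlog, log = T) -
          dlnorm(proposal, meanlog = log(current), sdlog = sdlog, log = T)
        if (log(runif(1)) < logRatio) proposal else current
      }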


  21. Hello, that’s a great tutorial and example. Thank you! But in the likelihood function, you are using the true data x as the prediction. What should I use if x is unknown? I only have data y, and know that y is generated from an unknown x. Thanks.


    • Hi Jerome – in a regression, the assumption is that you know both the observed response (y) and the predictor (x), and what you want to estimate is the unknown effect of x on y.

      The problem you have seems different, but I’m afraid you will have to be a bit more specific about what you want to achieve to get sensible help.

      If you want to do something that is completely different from a regression, I would suggest that a forum such as stats.stackexchange.com/ is a more suitable place to get help than this post.


  22. Very helpful post! I read several articles and book chapters on MCMC, but this post makes it more concrete for me. Can you help me with a question though? Why didn’t you update each parameter separately? Thank you!


    • Glad this was useful to you Guoguo.

      About your question: sure, one can update one parameter at a time. This algorithm is called Metropolis-within-Gibbs. It is one of the many extensions of the standard Metropolis algorithm that aim at improving the speed of convergence for higher-dimensional, more complex target distributions. For low-dimensional, roughly normal targets it will converge slightly slower though.
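
      A minimal sketch of the idea, reusing the objects from the post (posterior, chain, iterations) and the post’s proposal step sizes; this is meant as an illustration, not optimized code:

      # Metropolis-within-Gibbs: propose and accept/reject one parameter at a time
      stepSizes = c(0.1, 0.5, 0.3)
      for (i in 1:iterations){
        chain[i+1,] = chain[i,]
        for (j in 1:3){
          proposal = chain[i+1,]
          proposal[j] = rnorm(1, mean = chain[i+1,j], sd = stepSizes[j])
          # accept the single-parameter update with the usual Metropolis probability
          if (runif(1) < exp(posterior(proposal) - posterior(chain[i+1,]))){
            chain[i+1,] = proposal
          }
        }
      }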


  23. Oh my GOD!! This is an extremely well-executed post! Thank you very much for making it so clear! This is a perfect example of how to teach.


  24. Hi,

    thanks a lot for the very helpful code. I just have one question: which parts of the code, apart from the number of parameters and the size of the matrix, should be changed to make it suitable for estimating just one parameter?


  25. I’m not sure this is a Metropolis-Hastings implementation rather than Metropolis only. The former is for asymmetric distributions and includes the marginal probabilities in the accept/reject ratio. In this case, because you are using symmetric distributions (normal and uniform), they are equivalent (I think, anyway), but if you were to include asymmetric distributions (e.g. beta, gamma, lognormal) then this would not be a Metropolis-Hastings example.

    Your line of code below does not include the marginal probabilities.
    probab = exp(posterior(proposal) - posterior(chain[i,]))

    See Hobbs and Hooten 2015 Ch. 7. Please correct me if I’m wrong.


    • I guess that’s true: I didn’t code the correction term because I used a symmetric proposal (in this case, the correction is always 1), but it might indeed be more pedagogical to add it, or to change the name.

      Note, however, that the correction works with the proposal distribution; you seem to refer to the prior distribution, and the prior is accounted for here.

      I didn’t get what you mean by “Your line of code below does not include the marginal probabilities …”.


  26. Excellent post with nice responses! Many of my questions were answered in the discussion below!

    However, I still don’t understand how you chose sd = c(0.1,0.5,0.3) in the proposal. I understand that you will adapt this based on the uncertainty you expect:

    “As an initial guess (don’t quote me for that) [sorry I quoted you here…] I would start for a typical regression problem with 3-8 parameters and all parameters varied at once with 1/10th of the posterior uncertainty that I expect”

    How should I “guess” those values and how do you get a posterior uncertainty?

    Also, why are you comparing the probability of acceptance with a random uniform number (between 0 and 1)? Is it just a convenience in the coding? I bet that in JAGS or STAN it’s way more complex than that!

    if (runif(1) < probab){ # If the probability we get is higher than a random probability
      chain[i+1,] = proposal
    } else {
      chain[i+1,] = chain[i,]
    }

    Thanks again!


    • About the guessing: I often have a rough idea, but if you don’t know, I would start with 1/20 of the prior uncertainty or so.

      About the runif(1) < probab: yes, this is one option to code a draw from a Bernoulli trial; another option would be to use rbinom(). As for any random number generation, there are many algorithms that do the job, and optimizing this is quite an art. I'm simply using this because it is a standard way to code this, and I wouldn't be surprised to see the same code in other software as well. I doubt this is a runtime-critical part of the code for any sampler.
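
      For illustration, these two lines are equivalent ways to draw the accept/reject decision (probab as in the post; note that rbinom() needs the probability capped at 1):

      accept = runif(1) < probab                                 # TRUE with probability min(probab, 1)
      accept = rbinom(1, size = 1, prob = min(probab, 1)) == 1   # the same Bernoulli draw via rbinom()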


  27. Hi. You use a different seed for every iteration of the random uniform. I’m wondering why you didn’t use the same seed for each random variate. Thanks, Brent


  28. Thank you for this concrete example of the code underlying MCMC. It was extremely helpful. It took me a while to “unpack” and understand your likelihood and prior functions, since I am not fully conversant with functions in R, but it was definitely worth the trouble.


  29. Thank you for this example, extremely useful for a newbie to R and MCMC. Could I ask how you would use this type of model in practice?

    In my mind, we could use MCMC to see reasonable, acceptable results for y given the same x.

    To that end, how could we use our chain results to produce a sequence of y? Or would we need to edit the functions to produce a chain of y values?

    Please correct me if I have got this wrong!


  30. Pingback: Metropolis Hastings algorithm for double sigmoidal curve – program faq

  31. Terrific article. It’s so helpful to code an MCMC from scratch to really understand how it works.

    Sorry to nitpick about the code, but you should really pass the y vector as an argument to the likelihood function. It took me a while to figure out what y was within that function. (Also, x is used globally but then again locally for slopevalues, which was a touch confusing. And “sd” is already a base R function; consider calling it stdev or something.)

    The explanation of the mathematical process is really clear. Kudos on a great article.


  32. Pingback: #ENVR2018: A Recap – Sweet Tea, Science

  33. Hi, I’m fairly new to Bayesian statistics and R, but I’m very studious in my attempt to climb this learning curve. Do you know of any great conceptual/intuitive books or video series that help teach R as it relates to Bayesian analysis?


  34. Thank you very much for this example. I wonder if you have any links to a simplified MCMC for deterministic SIR models, where you estimate gamma and beta?


  35. How about having only a prior of 1/sigma^2 for ε? I tried to make the algorithm work only using one prior on sigma, and I could not make it converge…


  36. I’ve been trying to get my head around Bayesian MCMC regression for a while. I’d grasped the basics of Bayesian MCMC, but was struggling to apply this to regression. The following sentence really filled in the missing piece:

    we have to return the probability of obtaining the test data above under this model (this sounds more complicated as it is, as you see in the code, we simply calculate the difference between predictions y = b + a*x and the observed y, and then we have to look up the probability densities (using dnorm) for such deviations to occur.

    Thank you for your useful post!


  37. Pingback: A Slightly More Advanced MCMC Example – Data Science Austria

  38. Pingback: A Slightly More Advanced MCMC Example | R-bloggers

  39. Thanks for the excellent text. I suggest you add “na.rm=T” in line 8 of the log-likelihood script, i.e. “sumll = sum(singlelikelihoods, na.rm=T)”. I had some problems with NAs being generated in my data. I also suggest removing the duplicated comma in line 4 of the plotting script: “hist(chain[-(1:burnIn),1],nclass=30, , main=”Posterior of a”, xlab=”True value = red line” )”; it gives a warning message.
    I would like to ask whether I can use part of your script in my paper, and how I should cite your text. Is there a public paper with it, or can I cite this page? Thank you.


    • Hi Victor,

      thanks for your comments and sorry for the late reply, I somehow missed this comment.

      Of course you can use this code for the paper, but may I suggest the samplers we implement here https://cran.r-project.org/web/packages/BayesianTools/index.html instead; they are more powerful and also more stable. In these samplers, we also check for NAs etc. The code here is just meant as a demo.

      For the same reason, I’d prefer to keep the code as it is; it is really just a demo to explain the principles, not something to be adjusted for all kinds of models (for that, I recommend using the more “professional” samplers implemented in various R packages).


  40. Great post; Helping folks understand the black box of MCMC since 2012!

    Curious… why not remove duplicate values from the chain output? In other words, what’s the motivation for repeating parameter values when proposals are not accepted?


    • Hi Brian, sorry, I had somehow missed this comment. In MCMC sampling, you want to get an idea of the shape of the target distribution. In the MH sampler, the point is that you stay at good values longer, until you accept a new proposal, thus adding more of these “good” values (as duplicates) to the chain. This is why these good values show up more prominently in the final plots.
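
      As a tiny illustration (chain and burnIn as in the post): the histogram of the raw chain, duplicates included, is the posterior estimate; dropping the duplicates would distort exactly those relative frequencies.

      hist(chain[-(1:burnIn), 1], nclass = 30)   # posterior of a, estimated from the full chain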


  41. Great Post! You explained very well how the likelihood and prior distributions are used. One thing I’m not entirely clear on is where the p(data) term in the denominator of Bayes’ equation is used.


    • Hi Trevor, p(data) can be expanded as p(data) = int p(data|x) p(x) dx, i.e. the integral of the likelihood times the prior over the parameter space, and doing this we see that it is identical to the integral over the unnormalized posterior. In other words, p(data) is just the normalization constant that is calculated when transforming the posterior samples into a density estimate.


  42. Hi there, I’m trying to do Metropolis-Hastings on a weather time series, with a model of the form Y_t = Y_{t-1} + N(0,1). Is this approach adaptable to that? Can I use the data I have to forecast future data using this method?

