It’s of course not possible in every case, but it allows you to systematically synthesise the evidence in your data with previous knowledge, if you want to do so. I think in some cases, it can be important to alert the reader to the fact that there is some evidence for X in the study, but that this evidence may not (yet) be enough to overrule expectations that come from other studies or biological plausibility. It systematises what any good discussion section is doing verbally.

– we should avoid binary thinking and talk about evidence

– p-values can sort of be used to quantify evidence, since there is a decent enough mapping to some Bayes factors (the minimum Bayes factor)

– people are familiar with p-values but less so with Bayes factors

– let's just use p-values then

and your point is – no, we should really not use this shortcut and instead use Bayes factors. This seems reasonable
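For reference, the p-value-to-Bayes-factor mapping alluded to above is usually the Sellke–Bayarri–Berger bound: for p < 1/e, the Bayes factor in favour of the null can be no smaller than −e·p·ln(p). A minimal sketch (the function name is mine, not from the post):

```python
import math

def min_bayes_factor(p):
    """Sellke-Bayarri-Berger lower bound on the Bayes factor in favour
    of the null hypothesis; valid for 0 < p < 1/e."""
    assert 0 < p < 1 / math.e
    return -math.e * p * math.log(p)

# At p = 0.05, the Bayes factor for H0 is at least ~0.41, i.e. the
# evidence against the null is at most roughly 2.5 : 1.
print(round(min_bayes_factor(0.05), 3))  # 0.407
```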

Hi Florian,

Ok, I’ll do that!

Thanks for the reply and for the post, again!

Hi Bernado,

As shown in the picture, a Ridge regression is mathematically the same as a Bayesian regression with a normal prior on the regression slopes. The width of the prior then acts as the shrinkage penalty: the smaller the prior width, the larger the penalty.
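This equivalence is easy to check numerically: the ridge estimate with penalty λ coincides with the posterior mode under a N(0, τ²) slope prior when λ = σ²/τ². A minimal sketch with simulated data (all values here are illustrative, not from the post):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 3
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0, 0.5])
sigma = 1.0                      # residual sd, assumed known here
y = X @ beta_true + rng.normal(scale=sigma, size=n)

tau = 0.5                        # prior sd of the slopes
lam = sigma**2 / tau**2          # equivalent ridge penalty

# Ridge estimate: minimises ||y - X b||^2 + lam * ||b||^2
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Posterior mode under b ~ N(0, tau^2 I), y ~ N(X b, sigma^2 I)
beta_bayes = np.linalg.solve(X.T @ X + (sigma**2 / tau**2) * np.eye(p),
                             X.T @ y)

print(np.allclose(beta_ridge, beta_bayes))  # True
```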

The only difference from the Bayesian perspective is how you set the width of the prior (= shrinkage penalty). I would say that there are 2-3 solutions in practice:

1) set a light shrinkage penalty a priori (this is known as weakly informative priors). If you do this, most people don’t even use the word shrinkage, but effectively, if you do, you have to be less worried about overfitting / parameter selection in a GLMM setting

2) set a stronger shrinkage prior, and get the value from something else (e.g. cross-validation). This is rarely done in my experience

3) set adaptive shrinkage priors, where you make the shrinkage another parameter that is estimated, and set a common prior for the parameters. See for example https://mc-stan.org/rstanarm/reference/priors.html#hierarchical-shrinkage-family
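Option 2 could look like the following sketch, using scikit-learn's RidgeCV as a stand-in for whatever model is actually being fit: the penalty is chosen by cross-validation and then translated back into an implied prior width (σ is assumed to be 1 for simplicity):

```python
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(1)
n, p = 100, 5
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + rng.normal(size=n)

# Choose the ridge penalty by (leave-one-out) cross-validation
fit = RidgeCV(alphas=np.logspace(-3, 3, 13)).fit(X, y)
lam = fit.alpha_

# Translate the penalty back into an implied prior sd via lam = sigma^2 / tau^2
sigma = 1.0
tau = sigma / np.sqrt(lam)
print(lam, tau)
```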

I don’t really have a good review at hand. I would suggest googling for “Bayesian shrinkage prior Stan” or similar to get examples with code.

Thanks for the post!

I am quite new to shrinkage modeling – most of the studies I’ve done so far in ecology are either based on model selection or have some Bayesian elements – even though very basic, just to allow a little more flexibility over the frequentist approach.

I’ve been collaborating with statisticians who recommend and use methods with shrinkage (Lasso, Ridge), but they are still kind of “mysterious” in practice. Do you have any beginner’s reading suggestion on that?

Once these basics are understood, having a Bayesian version of that seems quite interesting!

Hi Johannes, thanks for the hints. I added the library(DHARMa), good point. The missing quote seems to be a problem with how WordPress renders the code; when you press “view raw”, it’s there.
