
Gill's book, Bayesian Methods, is even more dismissive, and even hostile towards Frequentist methods. Whereas I've never seen a frequentist book dismissive of Bayes methods. (Counterexamples welcome!)

It boils down to whether you give precedence to the likelihood principle or the strong repeated sampling principle (Bayes prefers the likelihood principle and Frequentist prefers repeated sampling). See Cox and Hinkley's Theoretical Statistics for a full discussion, but basically the likelihood principle states that all conclusions should be based exclusively on the likelihood function; in layman's terms, on the data themselves. This specifically omits what a frequentist would call important contextual metadata, like whether the sample size is random, why the sample size is what it is, etc.
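The canonical illustration (often attributed to Lindley and Phillips) is 9 heads in 12 coin flips: a fixed-n binomial design and a "flip until 3 tails" negative binomial design yield proportional likelihoods, so the likelihood principle demands identical inference about p, yet the frequentist p-values differ because each design sums over a different set of hypothetical outcomes. A quick sketch:

```python
from math import comb

# Two sampling designs for the same observed data (9 heads, 3 tails):

def binom_pmf(k, n, p):
    # fixed number of flips n; k heads observed
    return comb(n, k) * p**k * (1 - p)**(n - k)

def negbinom_pmf(k, r, p):
    # flip until the r-th tail; k heads observed along the way
    return comb(k + r - 1, k) * p**k * (1 - p)**r

# The likelihoods are proportional in p (the ratio doesn't depend on p),
# so the likelihood principle says inference about p must be identical:
ratio_1 = binom_pmf(9, 12, 0.6) / negbinom_pmf(9, 3, 0.6)
ratio_2 = binom_pmf(9, 12, 0.9) / negbinom_pmf(9, 3, 0.9)
assert abs(ratio_1 - ratio_2) < 1e-9

# But the one-sided p-values against H0: p = 0.5 differ, because each
# design sums over a different set of hypothetical repetitions:
p_binom = sum(binom_pmf(k, 12, 0.5) for k in range(9, 13))
p_negbinom = sum(negbinom_pmf(k, 3, 0.5) for k in range(9, 500))
print(p_binom, p_negbinom)  # ~0.073 vs ~0.033
```

Same data, same likelihood up to a constant, but the two p-values straddle the 0.05 line, which is exactly the design-metadata dependence described above.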

The strong repeated sampling principle states that the goodness of a statistical procedure should be evaluated based on performance under hypothetical repetitions. Bayesians often dismiss this as: "what are these hypothetical repetitions? Why should I care?"

Well, it depends. If you're predicting the results of an election, it's a one-off event; it isn't obvious what a repetition would mean. If you're analyzing an A/B test, it's easy to imagine running another test, some other team running the same test, etc. Frequentist statistics values consistency here, more so than Bayesian methods do.
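To make the A/B case concrete, here's a rough simulation (the rates and sample size are invented) of what the repeated sampling principle actually scores: rerun the same experiment many times and check that a 95% confidence interval for the lift covers the true difference about 95% of the time.

```python
import random

# Sketch: simulate many hypothetical reruns of the same A/B test and check
# that a normal-approximation 95% confidence interval for the difference in
# conversion rates covers the true difference about 95% of the time.

def covers(p_a, p_b, n, z=1.96):
    """One hypothetical repetition: run the test with n visitors per arm,
    return True if the CI for (p_b - p_a) contains the true difference."""
    a = sum(random.random() < p_a for _ in range(n)) / n
    b = sum(random.random() < p_b for _ in range(n)) / n
    se = (a * (1 - a) / n + b * (1 - b) / n) ** 0.5
    return (b - a) - z * se <= p_b - p_a <= (b - a) + z * se

random.seed(0)
coverage = sum(covers(0.10, 0.12, 500) for _ in range(1000)) / 1000
print(coverage)  # hovers near 0.95
```

The guarantee is about the procedure across repetitions, not about any single interval, which is both its strength (calibration) and the thing Bayesians find beside the point.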

That's not to come out in support of one vs the other. You need to understand the strengths and drawbacks of each and decide situationally which to use. (Disclaimer: I consider myself a Frequentist but sometimes use Bayesian methods.)



> Whereas I've never seen a frequentist book dismissive of Bayes methods

Nearly every Frequentist book I have that mentions Bayesian methods attempts to write them off pretty quickly as "subjective" (Wasserman comes immediately to mind, but there are others), which falsely implies that Frequentist methods are somehow more "objective" (ignoring the parts of your modeling that are subjective does not somehow make you more objective). The very name of the largely frequentist method "Empirical Bayes" is a great example of this: it's an ad hoc method whose name implies that plain Bayes is somehow not empirical (Gelman et al. specifically call this out).

Until very recently, Frequentist methods have near universally been the entrenched orthodoxy in most fields. Most Bayesians have spent a fair bit of their lives having their methods rejected by people who don't really understand the foundations of their own testing tools, but rather treat those tools as if they came from divine inspiration and ought not to be questioned. Bayesian statistics generally does not rely on any ad hoc testing mechanism; it can all be derived pretty easily from first principles. It's funny you mention A/B tests as a good frequentist example, when most marketers absolutely prefer their results interpreted as the "probability that A > B", which is the Bayesian interpretation. Likewise the extension from A/B testing to multi-armed bandits falls out trivially from the Bayesian approach to the problem.
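For what it's worth, the marketer-friendly quantity is cheap to compute. A minimal sketch with made-up counts, assuming flat Beta(1, 1) priors on the two conversion rates:

```python
import random

# Sketch with invented counts: flat Beta(1, 1) priors on each conversion
# rate make the posteriors Beta(conversions + 1, misses + 1), and the
# "probability that one variant beats the other" is a quick Monte Carlo.

random.seed(1)
a_conv, a_miss = 120, 880   # variant A: 120 conversions in 1000 visitors
b_conv, b_miss = 150, 850   # variant B: 150 conversions in 1000 visitors

draws = 100_000
wins = sum(
    random.betavariate(b_conv + 1, b_miss + 1)
    > random.betavariate(a_conv + 1, a_miss + 1)
    for _ in range(draws)
)
prob_b_beats_a = wins / draws
print(prob_b_beats_a)  # posterior probability that B's rate exceeds A's
```

Thompson sampling, the multi-armed bandit extension, is the same draw put to work: sample one rate per arm from its posterior and serve the arm whose sample is largest.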

Your "likelihood" principle discussion is also a bit confusing here for me. In my experience Fisherian schools tend to be the highest champions of likelihood methods. Bayesians wouldn't need tools like Stan and PyMC if they were exclusively about likelihood since all likelihood methods can be performed strictly with derivatives.


This sounds to me very much like a political debate between people arguing for the best method, rather than focusing on the results that you can get with either method.

As long as this debate is still fuelled by emotional and political discourse, nothing useful will come out of it.

What is really needed is an assessment which method is best suited for which cases.

The practitioner wants to know “which approach should I use”, not “which camp is the person I’m listening to in?”


"Whereas I've never seen a frequentist book dismissive of Bayes methods. (Counterexamples welcome!)"

Indeed! There's a lot of Bayesian propaganda floating around these days. While I enjoy it, I would also love to see some frequentist propaganda (ideally with substantive educational content...).


All of Statistics by Larry Wasserman is a great introductory book from the frequentist tradition that includes some sections on Bayesian methods. It's definitely not frequentist propaganda - more like a sober look at the pros and cons of the Bayesian point of view.


My first year of grad school I ordered a textbook but what I got was actually All of Statistics with the wrong cover bound on.

I skimmed through a couple chapters before returning it for a refund. I sometimes regret not keeping it as a curio, but I was a poor grad student at the time and it was an expensive book.


https://archive.org/details/springer_10.1007-978-0-387-21736...

Statistics & machine learning book authors seem to be really good at providing a free, electronic copy.


> Indeed! There's a lot of Bayesian propaganda floating around these days. While I enjoy it, I would also love to see some frequentist propaganda

I think that frequentist statistics doesn't need marketing. It's the default way to do statistics for everyone and, frankly, Bayesian software is still quite far away from frequentist software in terms of speed and ease of use. Speed will be fixed by Moore's law and better software, and ease of use will also be fixed by better software at some point. McElreath and Gelman and many others do a great job of getting more people into Bayesian statistics, which will likely result in better software in the long run.


A book by Deborah Mayo "Statistical Inference as Severe Testing" might fit.


I've read it. Unfortunately, I thought it was terribly written. Also, it's a philosophy book, not a guide for practitioners.


In my opinion, books for practitioners are not the place for such discussions. Deborah's book might be poorly written, but if we want to get to where the foundations of the disagreements are, we have to reach philosophy. Bayesian advocates are also often philosophers, e.g. Jacob Feldman.

Among theoretical statisticians, Larry Wasserman is more on the frequentist side. See for example his response on Deborah's blog [1]. But he doesn't advocate for it in his books. So yeah, besides Deborah, I am not aware of any other frequentist "propagandist".

[1] https://errorstatistics.com/2013/12/27/deconstructing-larry-...


> Gill's book, Bayesian Methods, is even more dismissive, and even hostile towards Frequentist methods.

I'm skeptical of this because Frequentist (likelihood) methods are a special case of Bayesian methods, with flat/uniform priors for parameters (and the "flatness" of a parameter is dependent on your chosen parameterization anyway; it's not a fixed fact about the model). So it's reasonably easy to figure out when frequentist methods will be effective enough (based on Bayesian principles), and when they won't.
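For one simple case this is easy to check numerically (the counts below are arbitrary): with a flat Beta(1, 1) prior on a binomial proportion, the posterior mode equals the maximum-likelihood estimate.

```python
# Numerical check of the flat-prior claim for one simple case: with a
# Beta(1, 1) prior on a binomial proportion, observing s successes in n
# trials gives a Beta(s + 1, n - s + 1) posterior, whose mode s / n is
# exactly the maximum-likelihood estimate.

s, n = 7, 20
mle = s / n                                 # argmax of the likelihood
alpha, beta = s + 1, (n - s) + 1            # flat-prior posterior
map_est = (alpha - 1) / (alpha + beta - 2)  # mode of Beta(alpha, beta)
assert map_est == mle
print(mle, map_est)  # 0.35 0.35
```

Reparameterize to log-odds, though, and the prior that was flat in p is no longer flat, which is the parameterization caveat above.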


> I consider myself a Frequentist

Grab the pitchforks!


>Whereas I've never seen a frequentist book dismissive of Bayes methods.

I think it has more to do with the long history of anti-Bayesianism championed by Fisher. He was a powerhouse who did a lot to undermine its use. The Theory That Would Not Die goes into some of these details.


Yeah, I mean...Fisher was a pretty big jerk. It seems like he got in fights with everyone!


Thank you! Comments like this are why I come here.



