
The Bayesian vs Frequentist Debate: Uniquenesses

Source: https://xkcd.com/1132/. Frequentists, please don't mind the Bayesian inclination of this joke. As of now, I swing between the two ideologies, but let us accept it: Bayesian jokes are fun.

Reference to past blog: In the previous blog in this series, I explained the two ways in which Bayesians have approached the problem of statistical significance. In this blog, I share my views on the Bayesian vs Frequentist debate.

In 2016, while doing independent research in statistics with my mentor, I was first introduced to Bayesian statistics through a few introductory Coursera classes. A few days later, while I was working on a paper, my mentor asked me to calculate the probability of two noisy signals (effect) having the same information source (cause). I sat with the problem for two days straight, tumbling the assumptions and the variables in my head, when I realized that it could be solved with the only Bayesian equation I had learned: Bayes' rule. As it happened, the paper I was working on never got published, but my Bayesian solution to the problem did.

Since then I have held a deep association with the Bayesian approach, and I completely understand that my opinions might be slightly biased (something I am trying to balance). In this and a few upcoming blogs, I want to express my understanding of the Bayesian vs Frequentist debate. I also understand that my views might be limited, because it is such a vast field, and I request the reader to correct me wherever they find me wrong.

The Bayesian vs Frequentist debate is complex and multi-dimensional. Almost all problems in statistics that have a Frequentist solution also have a Bayesian solution. Proponents of both sides often find themselves in irreconcilable debates. There are also those who work on unifying the two fields and developing solutions that incorporate both perspectives. Finally, there are the epistemologists, who do not care about the methodology but argue that the final learnings from both methods are similar.

In this note, I outline a few interesting points about what makes Frequentist and Bayesian methods unique, drawn from my research. I leave it to the reader to choose a side, and I will try my best to stay even-handed.

Is truth subjective?

In my understanding, the most fundamental difference between the two approaches is that Frequentists acknowledge the randomness in the world and seek to resolve that randomness through their search for the true mean of a process. They believe that this true mean is always a point estimate, an objective truth. The challenge of statistics, as viewed by the Frequentists, is that the truth can never be exactly captured with samples, and hence they define confidence intervals. They believe in repeating a process "frequently" and observing the outcome to get closer and closer to the objective truth. That, I think, is why it is called Frequentist statistics.
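To make that long-run flavour concrete, here is a minimal Python sketch (my own illustration, assuming a normally distributed process and a 95% normal-approximation interval) showing that the coverage guarantee belongs to the repeated procedure, not to any single interval:

```python
import numpy as np

rng = np.random.default_rng(42)
true_mean = 5.0          # the fixed, objective truth a Frequentist seeks
n, trials = 50, 1000     # sample size per experiment, number of repetitions

covered = 0
for _ in range(trials):
    sample = rng.normal(loc=true_mean, scale=2.0, size=n)
    # 95% confidence interval using the normal approximation
    half_width = 1.96 * sample.std(ddof=1) / np.sqrt(n)
    lo, hi = sample.mean() - half_width, sample.mean() + half_width
    covered += (lo <= true_mean <= hi)

print(f"{covered / trials:.1%} of intervals contain the true mean")
# ~95% of the intervals cover the truth in the long run,
# even though any single interval either contains it or does not.
```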

Bayesians, on the other hand, approach statistics from a different angle. They believe that we always hold some belief about every parameter in the world, where the belief represents how likely we feel each estimate is. When we observe the world, we update our beliefs with what we learn. The challenge of statistics, as viewed by the Bayesians, is that truth itself is subjective and changes heavily across contexts. They believe in constantly updating their beliefs by holistically comparing the observed evidence against their belief system.
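For instance, here is a minimal sketch (my own illustration, assuming a coin-flip setting with a conjugate Beta prior) of how a belief over a parameter gets updated by evidence:

```python
from scipy import stats

# Prior belief about a coin's bias: Beta(2, 2), mildly centered on "fair"
a, b = 2, 2

# Observe evidence: 7 heads in 10 flips
heads, flips = 7, 10

# Conjugate update: the posterior is again a Beta distribution
a_post, b_post = a + heads, b + (flips - heads)
posterior = stats.beta(a_post, b_post)

print(f"Posterior mean belief: {posterior.mean():.3f}")
print(f"95% credible interval: {posterior.interval(0.95)}")
```

The output is not a point estimate but an entire updated belief distribution, which is exactly the Bayesian framing of the problem.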

But it is a hard debate to settle: stripped of randomness, is the truth really objective or subjective? It often feels that these are just two ways of seeing the same thing, and in the end, it would not really matter. Hence, I myself swing towards epistemology (the camp that does not care about methodology) every now and then, amazed at how often the results from both approaches come out the same.

Whether the underlying truth behind everything is objective or subjective is a philosophical question that one needs to decide. For instance, do you believe that when two people disagree on something, there is an absolute truth that can tell who is right, if only it could be reached (objectivity)? Or is the absolute truth contextually different for the two people (subjectivity)?

Occam's Razor

Source: https://arthagyaipcw.wordpress.com/2022/02/01/occams-razor/

The debate around Occam's Razor comes in when you look at the Bayesian vs Frequentist debate from the perspective of hypothesis tests. From a hypothesis standpoint, Bayesians often carry models in their minds and update the parameters as and when they observe evidence. Frequentists, on the other hand, start with a hypothesis and stick with the simplest one until the deviations from that explanation become impossible to ignore. Some believe that because Frequentists give unwarranted precedence to the null hypothesis, they often take longer to reach a conclusion. Bayesians, rather than privileging a null hypothesis, depend on causal models such as Bayesian networks to quickly converge on their parameters.

Occam's Razor is an interesting concept that justifies the Frequentist precedence to the null hypothesis. It says that as long as the evidence is not starkly against it, the simplest model with the fewest parameters should be preferred for any causal phenomenon.

Occam's Razor has a profound philosophy behind it as well. Randomness is so constructed that the deeper you look into it, the more patterns you find just by chance. For instance, as Nassim Nicholas Taleb tells it, suppose you have an infinite number of monkeys typing for an infinitely long time on typewriters. You can probabilistically argue that one monkey would eventually end up writing the Odyssey or the Iliad. Occam's razor protects us from such chance patterns by always putting stronger precedence on simpler models with less complexity.

The good thing about Frequentist null hypothesis testing is that it follows Occam's razor and protects us from crowning "by chance" winners.
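A small simulation (my own illustration, assuming two identical normal groups compared with a t-test) shows how easily "by chance" winners appear when the null is actually true:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
num_tests, n = 100, 30

false_wins = 0
for _ in range(num_tests):
    # Both groups come from the SAME distribution: the null is true
    a = rng.normal(0.0, 1.0, size=n)
    b = rng.normal(0.0, 1.0, size=n)
    _, p = stats.ttest_ind(a, b)
    false_wins += (p < 0.05)

print(f"{false_wins} of {num_tests} null comparisons look 'significant'")
# Roughly 5 out of 100 by construction: patterns appear by chance,
# which is why a strong default precedence for the null is protective.
```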

Closing the backward probability loop

The third uniqueness in my discussion comes from the perspective of forward and backward probabilities. Recently, I realized a fundamental flaw in how the Frequentist methodology is often applied, one that is worth understanding. I will explain it with an example from Nassim Nicholas Taleb's book Fooled by Randomness. Suppose there is a disease that occurs with a 1-in-1000 probability in the population, and a test that gives 5% false positives and no false negatives. If you test 1001 people (1 of whom actually has the disease), you will get about 51 positives: 1 true positive, plus roughly 50 false positives from the 1000 healthy people. An interesting question now comes up: the backward probability of a positive result actually belonging to a diseased patient is only 1/51, or about 2%.
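The same 2% drops straight out of Bayes' rule; here is the arithmetic as a short Python sketch (my own illustration of the numbers above):

```python
# Backward probability for the disease example via Bayes' rule
p_disease = 1 / 1000          # prior prevalence in the population
p_pos_given_disease = 1.0     # no false negatives
p_pos_given_healthy = 0.05    # 5% false positive rate

# Total probability of a positive test (law of total probability)
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# P(disease | positive): the backward probability
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(f"P(disease | positive) = {p_disease_given_pos:.1%}")  # ~2.0%
```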

This example, to me, clearly outlined the limitations of Frequentist p-values. Let me explain with the cartoon shown in today's feature image. The Frequentist in the cartoon observes that there is a very low probability that the detector would lie (1/36), and hence concludes that the sun must likely have exploded. He misses the fact that the probability of the sun exploding is far lower still.

The event you are observing might be very rare under the null hypothesis (the 5% false positive rate in the disease example; the 1/36 chance that the detector lies), but the alternative hypothesis itself might be rarer still (the 0.1% disease prevalence in the population; the sun exploding).
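In the posterior-odds form of Bayes' rule, this comparison becomes explicit. Here is a sketch for the cartoon's detector (the prior for the sun exploding is a hypothetical number of my own choosing, purely for illustration):

```python
# Posterior odds form of Bayes' rule:
# odds(H1 | E) = [P(E | H1) / P(E | H0)] * odds(H1)

p_lie = 1 / 36                    # detector lies: both dice show six
prior_sun_exploded = 1e-9         # hypothetical tiny prior (my assumption)

# Likelihoods of the detector answering "YES"
p_yes_given_exploded = 1 - p_lie  # it tells the truth
p_yes_given_fine = p_lie          # it lies

likelihood_ratio = p_yes_given_exploded / p_yes_given_fine  # = 35
prior_odds = prior_sun_exploded / (1 - prior_sun_exploded)
posterior_odds = likelihood_ratio * prior_odds

print(f"Posterior odds the sun exploded: {posterior_odds:.2e}")
# Still astronomically small: a factor-of-35 update cannot overcome
# a vanishingly small prior.
```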

It made me realize the impact of priors in our calculations. I also realized that a forward probability question is often incomplete without its backward probability counterpart. In the disease example, it does not suffice to ask how often the test gives a positive (effect) for a non-diseased patient (cause). One also needs to ask: out of all the positives given out, how many are actually diseased? The answer often depends on how prevalent the disease actually is in the population.

The way ahead

I leave it to the reader to consider the arguments in the Bayesian vs Frequentist debate and choose a side. I keep my priors wide on the topic and am willing to learn about both ideologies as I go.

In the next part of this series, I will outline the criticisms both methods have faced in the statistical community. Bayesians have often been criticized for their subjectivity and the need to define a prior, while Frequentists have been criticized mostly over their p-values.