An Introduction to Bayesian Inference — Defining the MAP Estimate

Helene
9 min read · Dec 17, 2021

In the last article, we saw the difference between Frequentist and Bayesian Inference, and how much of Bayesian Inference rests on Bayes' Theorem. We also saw a few concrete examples of Bayesian Inference, such as estimating the mean of a normal distribution and estimating the bias of a coin. In this article, we will look more closely at a central method in Bayesian Inference: Maximum a Posteriori (MAP) estimation.

Revisiting Maximum Likelihood Estimation (MLE)

Before we move on to the actual goal of this article, understanding the motivation and the method behind MAP, we will first revisit the concept of MLE. Recall that MLE is a method for determining the values of the parameters of our model: we choose the parameters that maximize the probability (likelihood) that our model generated the observed data. Let us take an illustrative example and imagine that we are given some data:
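For concreteness, here is a minimal sketch (in Python, using hypothetical simulated data, since the exact dataset is not spelled out here) of the kind of data we might be handed:

```python
import numpy as np

# Hypothetical data: 500 observations that we will simply treat as "given".
# (For illustration they are drawn from a normal distribution whose
# mean 5.0 and standard deviation 2.0 are unknown to us as analysts.)
rng = np.random.default_rng(seed=0)
data = rng.normal(loc=5.0, scale=2.0, size=500)

print(data[:5])   # a peek at the first few observations
print(len(data))  # 500
```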

We are then interested in finding the parameters that are most likely to have generated this data. Concretely, suppose we want to find the mean and standard deviation of the normal distribution that is most likely to have generated our data. Let us look at an example:
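As a minimal sketch (again assuming Python with NumPy and SciPy, and the same hypothetical data as above), this is how MLE singles out one normal distribution: for a Gaussian, the maximizing parameters turn out to be the sample mean and the (biased) sample standard deviation, and SciPy's numerical maximum-likelihood fit agrees with the closed form.

```python
import numpy as np
from scipy.stats import norm

# Re-create the same hypothetical data as in the sketch above.
rng = np.random.default_rng(seed=0)
data = rng.normal(loc=5.0, scale=2.0, size=500)

# For a normal distribution, the MLE has a closed form:
# the sample mean and the (biased, ddof=0) sample standard deviation.
mu_mle = data.mean()
sigma_mle = data.std(ddof=0)

# scipy.stats.norm.fit performs the same maximum-likelihood fit.
mu_fit, sigma_fit = norm.fit(data)

print(f"MLE mean:  {mu_mle:.3f}  (scipy fit: {mu_fit:.3f})")
print(f"MLE sigma: {sigma_mle:.3f}  (scipy fit: {sigma_fit:.3f})")
```

The printed estimates should be close to, but not exactly, the values 5.0 and 2.0 used to simulate the data: MLE only tells us which parameters best explain this particular sample, which is exactly the limitation that motivates bringing in a prior with MAP.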
