Machine Learning — The Intuition of Markov’s Inequality

Helene
4 min read · Sep 6, 2021

In simple terms, probability theory is the mathematical study of uncertainty: it allows us (and thereby also the computer) to reason and make decisions in situations where complete certainty is impossible. Probability theory plays a center-stage role in machine learning theory, as many learning algorithms rely on probabilistic assumptions about the given data. In this article, we will consider a specific probability bound: Markov's Bound.

This article aims to explain the inequality behind the bound, the so-called Markov's Inequality, and to give a good mathematical and intuitive understanding of it. In two follow-up articles, we will consider two further bounds: Chebyshev's Inequality and Hoeffding's Inequality, with the latter having an especially great impact on the theory of machine learning.
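As a concrete preview of where we are headed: Markov's Inequality says that for a non-negative random variable X and any a > 0, we have P(X ≥ a) ≤ E[X] / a. A minimal simulation sketch in Python (the distribution and threshold below are illustrative choices, not from the article) shows the bound holding empirically:

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw samples from a non-negative distribution (exponential with mean 2).
samples = rng.exponential(scale=2.0, size=100_000)

a = 5.0
empirical = np.mean(samples >= a)  # Monte Carlo estimate of P(X >= a)
markov_bound = samples.mean() / a  # E[X] / a, Markov's upper bound

print(f"P(X >= {a}) ~ {empirical:.4f}, Markov bound = {markov_bound:.4f}")
```

Note that the bound is loose here (roughly 0.4 versus a true tail probability near 0.08); Markov's Inequality trades tightness for the very weak assumptions it requires.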

Remembering Random Variables

To understand Markov's Inequality, we first have to recall some theory from the realm of probability. In particular, we need to understand random variables, since they are at the heart of Markov's Inequality. So, let us ask ourselves: what is a random variable?

Informally, a random variable is a quantity whose value depends on the outcome of a random experiment; formally, it is a function that assigns a numerical value to each possible outcome. This might not seem like a very clear definition, so let us take an example.
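A classic example (chosen here for illustration, since the article's own example is cut off) is rolling a fair six-sided die: the experiment is the roll, and the random variable X maps each outcome to the number shown. A small Python sketch:

```python
import random

# Random experiment: rolling a fair six-sided die.
# The random variable X maps each outcome to the number shown (1..6).
outcomes = [1, 2, 3, 4, 5, 6]

def X():
    """Run the experiment once and return the value of the random variable."""
    return random.choice(outcomes)

# Its expected value is E[X] = (1 + 2 + ... + 6) / 6 = 3.5.
expected_value = sum(outcomes) / len(outcomes)
print(X(), expected_value)
```

The key point is that X is not a single number but a rule attaching numbers to random outcomes, which is exactly the object Markov's Inequality makes statements about.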
