An Introduction to Optimization For Convex Learning Problems in Machine Learning

Helene
8 min read · Dec 2, 2021


In machine learning, we are often interested in getting better performance from our models. There is therefore a natural link between machine learning and the theory of optimization. In this article, we will explore optimization, specifically a few topics from gradient-based optimization. We also assume that we are working with a convex learning problem, i.e., that our loss function and our hypothesis class are convex. If this theory is unfamiliar to you, you can start by reading the following article: Convex Learning Problems.

Understanding The Basic Definitions

Before we take a deep dive into optimization, we should first get a solid understanding of the basic definitions. When we want to improve our model, what we are actually doing is optimizing a function. Let us talk a bit more about this function.

Imagine that we have a function, f, which we can define like so:

f : M → ℝ

Where we can say that:

M ⊆ ℝ^d

Now, don’t worry about these notations. They simply mean that we have a function which takes us from our set M to the real line. In other words, the function takes something from the set as an input and outputs a real number. Also, our set, M, is simply a subset of d-dimensional space. So, when we say that the space has d dimensions, it means that every point x in the space is a vector with d coordinates:

x = (x_1, x_2, …, x_d) ∈ ℝ^d
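To make this concrete, here is a minimal sketch in Python (the choice of f(x) = ||x||², and all names in the snippet, are illustrative assumptions rather than anything taken from the article) of a function that maps a d-dimensional vector to a single real number:

```python
import numpy as np

def f(x: np.ndarray) -> float:
    """A convex function f : R^d -> R.

    The squared Euclidean norm, f(x) = ||x||^2, is used here purely
    as an assumed, illustrative example of a map that takes a
    d-dimensional vector and outputs a single real number.
    """
    return float(np.dot(x, x))

# A point in R^3 (so d = 3): a vector with three coordinates.
x = np.array([1.0, -2.0, 0.5])

print(f(x))  # 5.25, a single number on the real line
```

Even though x lives in ℝ^3 here, the output is always one real number, which is exactly what the notation f : M → ℝ captures.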
