Lecture 11: Convex vs Non-Convex Optimization

1. Convex Optimization – Basics

A set C is convex if for any two points \(x_1, x_2 \in C\), the line segment joining them is also in C. Mathematically:
\( \theta x_1 + (1-\theta)x_2 \in C, \ \forall \theta \in [0,1] \).
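This can be checked numerically. A minimal sketch (the unit disk in \( \mathbb{R}^2 \) is chosen here as an illustrative convex set; the points and step count are arbitrary):

```python
import numpy as np

# Convex set C = {x in R^2 : ||x|| <= 1} (the unit disk).
# For any two points in C, every point on the segment between
# them should also lie in C.
x1 = np.array([0.8, 0.0])
x2 = np.array([0.0, -0.9])

thetas = np.linspace(0.0, 1.0, 101)          # theta in [0, 1]
segment = [t * x1 + (1 - t) * x2 for t in thetas]

# Every convex combination stays inside the disk.
assert all(np.linalg.norm(p) <= 1.0 for p in segment)
```

A non-convex set (e.g., the disk with its center removed, or a crescent shape) would fail this test for some pair of points.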

A function \(f(x)\) is convex if, for all \(x_1, x_2\) in its domain and all \(\theta \in [0,1]\): \( f(\theta x_1 + (1-\theta)x_2) \leq \theta f(x_1) + (1-\theta)f(x_2) \).
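The inequality can be sanity-checked by random sampling. A minimal sketch using \( f(x) = x^2 \) (the convex example discussed later; the sample range and count are arbitrary):

```python
import numpy as np

f = lambda x: x ** 2  # a convex function

rng = np.random.default_rng(42)
for _ in range(1000):
    x1, x2 = rng.uniform(-10.0, 10.0, size=2)
    theta = rng.uniform(0.0, 1.0)
    lhs = f(theta * x1 + (1 - theta) * x2)           # value at the mix
    rhs = theta * f(x1) + (1 - theta) * f(x2)        # mix of the values
    assert lhs <= rhs + 1e-9  # chord lies above the graph
```

Note that passing random samples does not prove convexity; a single violating triple \((x_1, x_2, \theta)\), however, does disprove it.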

A convex optimization problem has:

  • A convex objective function \(f(x)\) to be minimized.
  • A convex feasible set (the constraints define a convex region).

2. Properties of Convex Problems

  • Every local minimum is also a global minimum.
  • The set of optimal solutions is itself convex.
  • Standard algorithms (e.g., gradient descent, interior-point methods) converge reliably.

3. Example of Convex and Non-Convex Functions

Convex Example: \( f(x) = x^2 \) → Single global minimum at \(x=0\).

Non-Convex Example: \( f(x) = x^4 - 3x^2 + 2 \) → Two minima at \( x = \pm\sqrt{3/2} \), separated by a local maximum at \( x = 0 \).

4. Applications in Machine Learning

  • Convex: linear regression (least squares), logistic regression, SVMs with standard losses.
  • Non-convex: training neural networks, matrix factorization, k-means clustering.

5. Convex vs Non-Convex – Comparison

Convex Optimization

  • Easy to solve (polynomial time algorithms).
  • No risk of local minima traps.
  • Guaranteed global optimum.

Non-Convex Optimization

  • Hard to solve, may get stuck in local minima/saddle points.
  • Useful in complex models (e.g., deep learning).
  • Global optimum not guaranteed.

6. Interactive Gradient Descent Demo

Pick one of the example functions above, choose a starting point \(x_0\), and apply gradient descent updates \( x \leftarrow x - \eta \, f'(x) \) to see where the iterates converge.
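As a static stand-in for the interactive demo, here is a minimal sketch of the update rule applied to both example functions (the function name `grad_descent` and the step size `eta` are illustrative choices, not from the original demo):

```python
# Gradient descent: repeatedly apply x <- x - eta * f'(x).
def grad_descent(dfdx, x0, eta=0.05, steps=2000):
    x = x0
    for _ in range(steps):
        x -= eta * dfdx(x)
    return x

# Convex case: f(x) = x^2, f'(x) = 2x.
# Converges to the unique global minimum x = 0 from any start.
print(grad_descent(lambda x: 2 * x, x0=5.0))

# Non-convex case: f(x) = x^4 - 3x^2 + 2, f'(x) = 4x^3 - 6x.
# The result depends on the starting point: there are two basins
# of attraction, one around each minimum x = ±sqrt(3/2) ≈ ±1.2247.
print(grad_descent(lambda x: 4 * x**3 - 6 * x, x0=2.0))   # ≈ +1.2247
print(grad_descent(lambda x: 4 * x**3 - 6 * x, x0=-2.0))  # ≈ -1.2247
```

Running the non-convex case from starting points on opposite sides of \(x = 0\) lands in different minima, which is exactly the "local minima trap" the comparison above describes.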