Lecture 13: Quadratic Programming (QP) and KKT Conditions
1. Introduction
Quadratic Programming (QP) is a special type of optimization problem where
the objective function is quadratic and the constraints are linear.
It is one of the most important optimization frameworks in machine learning,
especially in Support Vector Machines (SVMs), portfolio optimization, and control systems.
2. General Form of Quadratic Programming
Minimize: f(x) = (1/2) xᵀQx + cᵀx
Subject to: Ax ≤ b, Ex = d
Q → Symmetric matrix (defines the curvature of the quadratic term)
c → Vector of linear coefficients
A, b → Inequality constraints
E, d → Equality constraints
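As a minimal sketch of the general form above, the following solves a small convex QP numerically with SciPy's SLSQP solver. The matrices Q, c, A, b here are illustrative values chosen for this example, not data from the lecture; SLSQP expects inequality constraints written as fun(x) ≥ 0, so Ax ≤ b is passed as b − Ax ≥ 0.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative problem data (not from the lecture):
# minimize (1/2) x^T Q x + c^T x  subject to  A x <= b
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, -4.0])
A = np.array([[1.0, 1.0]])
b = np.array([2.0])

def f(x):
    # Quadratic objective (1/2) x^T Q x + c^T x
    return 0.5 * x @ Q @ x + c @ x

def grad(x):
    # Gradient Qx + c (Q is symmetric)
    return Q @ x + c

# SLSQP convention: inequality constraints are fun(x) >= 0
cons = [{"type": "ineq", "fun": lambda x: b - A @ x}]

res = minimize(f, x0=np.zeros(2), jac=grad, constraints=cons, method="SLSQP")
print(res.x)  # optimal point; here the constraint x1 + x2 <= 2 is active
```

The unconstrained minimizer of this objective is (1, 2), which violates the constraint, so the solver returns its projection onto the boundary x1 + x2 = 2.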
3. Types of Quadratic Programming
Convex QP: If Q is positive semidefinite, the problem is convex and any local minimum is a global minimum.
Non-convex QP: If Q has negative eigenvalues, the objective is indefinite and multiple local minima and maxima may exist.
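The convex/non-convex distinction above reduces to the eigenvalues of Q, which can be checked directly. A small sketch (the function name `classify_qp` and the tolerance are my own choices, not from the lecture):

```python
import numpy as np

def classify_qp(Q, tol=1e-10):
    """Classify a QP by the definiteness of its Hessian Q."""
    # Symmetrize first: only the symmetric part of Q affects x^T Q x
    eigenvalues = np.linalg.eigvalsh((Q + Q.T) / 2)
    if np.all(eigenvalues >= -tol):
        return "convex"        # Q positive semidefinite
    return "non-convex"        # Q has at least one negative eigenvalue

print(classify_qp(np.eye(2)))            # identity is positive definite
print(classify_qp(np.diag([1.0, -1.0]))) # indefinite: eigenvalues 1 and -1
```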
4. Properties of Quadratic Programming
Convex QPs can be solved efficiently using interior-point or active-set methods.
Non-convex QPs are NP-hard in general.
Solution depends heavily on the definiteness of matrix Q.
KKT conditions provide necessary (and for convex case, sufficient) conditions for optimality.
5. Karush–Kuhn–Tucker (KKT) Conditions
The KKT conditions are first-order necessary conditions for a solution in nonlinear
programming to be optimal, under certain regularity conditions.
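For an equality-constrained convex QP (only Ex = d, no inequalities), the KKT conditions reduce to a linear system: stationarity Qx + c + Eᵀλ = 0 together with feasibility Ex = d. A minimal sketch, reusing illustrative data of the form introduced in Section 2 (the specific numbers are my own example, not from the lecture):

```python
import numpy as np

# Illustrative equality-constrained QP:
# minimize (1/2) x^T Q x + c^T x  subject to  E x = d
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, -4.0])
E = np.array([[1.0, 1.0]])
d = np.array([2.0])

n, m = Q.shape[0], E.shape[0]

# KKT system:  [Q  E^T] [x]   [-c]
#              [E   0 ] [λ] = [ d]
KKT = np.block([[Q, E.T], [E, np.zeros((m, m))]])
rhs = np.concatenate([-c, d])

sol = np.linalg.solve(KKT, rhs)
x, lam = sol[:n], sol[n:]
print(x, lam)
```

Solving one linear system recovers both the primal solution x and the Lagrange multiplier λ; with inequality constraints an active-set or interior-point method is needed instead, since it is not known in advance which inequalities hold with equality.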