Convex Optimization

Convexity

Definition 30

A subset $C\subseteq\mathbb{R}^n$ is convex if it contains the line segment between any two points in the set.

$$\forall \mathbf{x}_1, \mathbf{x}_2\in C,\ \lambda\in[0, 1],\quad \lambda \mathbf{x}_1+(1-\lambda)\mathbf{x}_2 \in C$$

Several operations preserve convexity.

Theorem 10

If $C_1,\cdots,C_m$ are convex sets, then their intersection $C = \bigcap_{i=1,\cdots,m}C_i$ is also a convex set.

Theorem 11

If a map $f:\mathbb{R}^n\to\mathbb{R}^m$ is affine and $C \subset \mathbb{R}^n$ is convex, then $f(C) = \{ f(\mathbf{x}): \mathbf{x}\in C \}$ is convex.

Theorems 10 and 11 are important because they let us prove that sets are convex by building them out of sets we already know are convex. For example, Theorem 11 tells us that the projection of a convex set onto a subspace is convex, since projection is a linear operator.

Definition 31

A function $f:\mathbb{R}^n\to\mathbb{R}$ is convex if its domain is a convex set and for all $\mathbf{x}, \mathbf{y}$ in the domain and $\lambda\in[0, 1]$,

$$f(\lambda \mathbf{x} + (1-\lambda)\mathbf{y}) \leq \lambda f(\mathbf{x}) + (1-\lambda)f(\mathbf{y})$$

Loosely, convexity means the function is bowl-shaped, since the line segment connecting any two points on the graph lies above the graph itself. A concave function is one for which $-f$ is convex; these look like a “hill”. Under the extended-value convention, a convex function is assigned the value $+\infty$ outside its domain, which keeps the defining inequality valid everywhere.
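The defining inequality is easy to spot-check numerically. Below is a minimal Python sketch for the convex function $f(\mathbf{x}) = \|\mathbf{x}\|_2^2$ (the function and sample sizes are illustrative; random sampling only provides evidence of convexity, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # f(x) = ||x||_2^2, a standard example of a convex function
    return np.dot(x, x)

for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    lam = rng.uniform()
    lhs = f(lam * x + (1 - lam) * y)     # function value on the segment
    rhs = lam * f(x) + (1 - lam) * f(y)  # chord value
    assert lhs <= rhs + 1e-9             # Definition 31 holds at this sample
```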

Theorem 12

A function $f$ is convex if and only if its epigraph $\{(\mathbf{x}, t) : f(\mathbf{x}) \leq t\}$ is a convex set.

Just as with convex sets, certain operations preserve convexity for functions.

Theorem 13

If $f_i:\mathbb{R}^n\to\mathbb{R}$ are convex functions, then $f(\mathbf{x}) = \sum_{i=1}^m\alpha_i f_i(\mathbf{x})$ where $\alpha_i\geq 0$ is also convex.

A similar property to Theorem 11 exists for convex functions.

Theorem 14

If $f:\mathbb{R}^n\to\mathbb{R}$ is convex, then $g(\mathbf{x}) = f(A\mathbf{x}+\mathbf{b})$ is also convex.

We can also look at the first- and second-order derivatives to determine the convexity of a function.

Theorem 15

If $f$ is differentiable, then $f$ is convex if and only if

$$\forall \mathbf{x}, \mathbf{y},\quad f(\mathbf{y}) \geq f(\mathbf{x}) + \nabla_x f(\mathbf{x})^T (\mathbf{y}-\mathbf{x})$$

Theorem 15 can be understood geometrically as saying that the graph of $f$ is bounded below everywhere by its tangent hyperplanes.
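As a one-dimensional sanity check, take $f(x) = x^2$, so $f'(x) = 2x$. Then

$$f(y) - f(x) - f'(x)(y - x) = y^2 - x^2 - 2x(y - x) = (y - x)^2 \geq 0,$$

so the first-order condition holds at every pair of points, consistent with the convexity of $x^2$.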

Theorem 16

If $f$ is twice differentiable, then $f$ is convex if and only if the Hessian $\nabla^2 f(\mathbf{x})$ is positive semi-definite everywhere.

Geometrically, the second-order condition says that $f$ looks bowl-shaped.
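As an illustration of Theorem 16, consider $f(\mathbf{x}) = \mathbf{x}^TQ\mathbf{x}$, whose Hessian is the constant matrix $2Q$. A short numerical sketch of the positive semi-definiteness check (the matrix $Q$ below is made up for illustration):

```python
import numpy as np

# Hessian of f(x) = x^T Q x is the constant matrix 2Q, so f is convex
# iff 2Q (equivalently Q) is positive semi-definite.
Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])
hessian = 2 * Q

# For a symmetric matrix, PSD <=> all eigenvalues are nonnegative.
eigenvalues = np.linalg.eigvalsh(hessian)
print(eigenvalues)               # both eigenvalues are positive here
print(np.all(eigenvalues >= 0))  # True => f is convex
```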

Theorem 17

A function $f$ is convex if and only if its restriction to any line, $g(t)=f(\mathbf{x}_0+t\mathbf{v})$, is convex.

Theorem 18

If (fα)αA(f_\alpha)_{\alpha\in\mathcal{A}} is a family of convex functions, then the pointwise maximum f(x)=maxαAfα(x)f(\mathbf{x}) = \max_{\alpha\in\mathcal{A}} f_\alpha(\mathbf{x})is convex.
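For example, the pointwise maximum of finitely many affine functions, $f(\mathbf{x}) = \max_{i}(\mathbf{a}_i^T\mathbf{x} + b_i)$, is convex even though it is not differentiable; this is how piecewise-linear convex functions arise.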

Because of the nice geometry that convexity provides, optimization problems involving convex functions and sets are reliably solvable.

Definition 32

A convex optimization problem in standard form is

$$p^* = \min_{\mathbf{x}}f_0(\mathbf{x}) \quad : \quad \forall i\in[1,m],\ f_i(\mathbf{x}) \leq 0,\quad A\mathbf{x} = \mathbf{b}$$

where $f_0, f_1, \cdots$ are convex functions and the equality constraints are affine.

Since the constraints form a convex set, Definition 32 is equivalent to minimizing a convex function over a convex set $\mathcal{X}$.
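As a concrete instance, here is a small problem in the form of Definition 32 modeled with the cvxpy library (a sketch; the data and the choice of cvxpy are illustrative, and any convex solver would do):

```python
import cvxpy as cp
import numpy as np

x = cp.Variable(2)
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

objective = cp.Minimize(cp.sum_squares(x))  # f_0(x) = ||x||_2^2, convex
constraints = [cp.norm(x, 2) - 2 <= 0,      # f_1(x) = ||x||_2 - 2 <= 0
               A @ x == b]                  # affine equality Ax = b
problem = cp.Problem(objective, constraints)
problem.solve()

print(problem.value)  # p* = 0.5
print(x.value)        # approximately [0.5, 0.5]
```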

Theorem 19

A locally optimal solution to a convex problem is also globally optimal, and the set of optimal solutions is convex.

Theorem 19 is why convex problems are nice to solve.

Optimality

When problems are convex, we can define conditions that any optimal solution must satisfy.

Theorem 20

For a convex optimization problem with a differentiable objective function $f_0(\mathbf{x})$ and feasible set $\mathcal{X}$,

$$\mathbf{x} \text{ is optimal } \Leftrightarrow \forall \mathbf{y}\in\mathcal{X},\ \nabla_xf_0(\mathbf{x})^\top(\mathbf{y}-\mathbf{x}) \geq 0$$

Since the gradient points in the direction of greatest increase, requiring a nonnegative inner product between the gradient and the difference $\mathbf{y}-\mathbf{x}$ for every feasible $\mathbf{y}$ means that moving toward any other feasible point can only increase the value of $f_0(\mathbf{x})$.
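For example, consider minimizing $f_0(x) = x^2$ over $\mathcal{X} = [1, 2]$. At $x = 1$ we have $f_0'(1) = 2$ and $f_0'(1)(y-1) = 2(y-1) \geq 0$ for every $y\in[1,2]$, so $x = 1$ is optimal; at any $x > 1$, choosing a feasible $y < x$ makes the inner product negative, so the condition fails. For unconstrained problems, we can make this condition even sharper.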

Theorem 21

In a convex unconstrained problem with a differentiable objective function $f_0(\mathbf{x})$, $\mathbf{x}$ is optimal if and only if $\nabla_xf_0(\mathbf{x}) = \boldsymbol{0}$.
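For example, for the convex least-squares objective $f_0(\mathbf{x}) = \|A\mathbf{x}-\mathbf{b}\|_2^2$, Theorem 21 gives $\nabla_x f_0(\mathbf{x}) = 2A^T(A\mathbf{x}-\mathbf{b}) = \boldsymbol{0}$, i.e., the normal equations $A^TA\mathbf{x} = A^T\mathbf{b}$.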

Conic Programming

Conic programming is the class of optimization problems in which the variables are constrained to lie in a second-order cone.

Definition 33

An $n$-dimensional second-order cone is the set

$$\mathcal{K}_n = \{(\mathbf{x}, t),\ \mathbf{x}\in\mathbb{R}^n,\ t\in\mathbb{R}:\ \|\mathbf{x}\|_2 \leq t\}$$

By Cauchy–Schwarz, $\|\mathbf{x}\|_2 = \max_{\mathbf{u}:\|\mathbf{u}\|_2\leq 1} \mathbf{u}^T\mathbf{x}$, so $(\mathbf{x}, t)\in\mathcal{K}_n$ if and only if $\mathbf{u}^T\mathbf{x} \leq t$ for every $\mathbf{u}$ with $\|\mathbf{u}\|_2\leq 1$. This means that second-order cones are convex sets, since each such inequality defines a half-space and the cone is the intersection of these half-spaces. In dimension 3 and higher, we can rotate these cones.

Definition 34

A rotated second-order cone in $\mathbb{R}^{n+2}$ is the set

$$\mathcal{K}_n^r = \{(\mathbf{x}, y, z),\ \mathbf{x}\in\mathbb{R}^n,\ y\in\mathbb{R},\ z\in\mathbb{R}:\ \mathbf{x}^T\mathbf{x} \leq yz,\ y\geq 0,\ z \geq 0 \}.$$

The rotated second-order cone can be interpreted as a rotation because the hyperbolic constraint $\|\mathbf{x}\|_2^2\leq yz$ can be expressed equivalently as

$$\left\lVert\begin{bmatrix}2\mathbf{x} \\ y - z\end{bmatrix}\right\rVert_2 \leq y+z.$$
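To verify the equivalence, square both sides (both are nonnegative when $y, z \geq 0$):

$$4\|\mathbf{x}\|_2^2 + (y-z)^2 \leq (y+z)^2 \iff 4\|\mathbf{x}\|_2^2 \leq 4yz \iff \|\mathbf{x}\|_2^2 \leq yz.$$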

Definition 35

The standard second-order cone (SOC) constraint is

$$\|A\mathbf{x}+\mathbf{b}\|_2 \leq \mathbf{c}^T\mathbf{x} +d.$$

An SOC constraint confines $\mathbf{x}$ to a second-order cone since if we let $\mathbf{y} = A\mathbf{x}+\mathbf{b} \in \mathbb{R}^m$ and $t = \mathbf{c}^T\mathbf{x}+d$, then $(\mathbf{y}, t)\in\mathcal{K}_m$.

Definition 36

A second-order cone program in standard inequality form is given by

$$\min_{\mathbf{x}} \mathbf{c}^T\mathbf{x} \text{ such that } \forall i\in[1,m],\ \|A_i\mathbf{x}+\mathbf{b}_i\|_2 \leq \mathbf{c}_i^T\mathbf{x}+d_i.$$

An SOC program is a convex problem since its objective is linear, and hence convex, and the SOC constraints are also convex.
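A minimal cvxpy sketch of an SOCP with a single SOC constraint in the form of Definition 35 (all data, including the offset $d = 10$, are made up for illustration):

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 4
A = rng.normal(size=(m, n))
b = rng.normal(size=m)
c = np.array([1.0, 1.0, 1.0])

x = cp.Variable(n)
# SOC constraint ||Ax + b||_2 <= c^T x + d, with d = 10
constraints = [cp.norm(A @ x + b, 2) <= c @ x + 10]
problem = cp.Problem(cp.Minimize(c @ x), constraints)
problem.solve()
print(problem.value, x.value)
```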

Quadratic Programming

A special case of SOCPs is the quadratic program. These programs have constraints and an objective function which can be expressed as quadratic functions. In SOCP form, they look like

$$\begin{aligned} \min_{\mathbf{x}, t} &\quad \mathbf{a}_0^T\mathbf{x} + t\\ \text{s.t. } & \left\lVert \begin{bmatrix}2Q_0^{\frac{1}{2}}\mathbf{x}\\ t-1 \end{bmatrix}\right\rVert_2 \leq t+1\\ & \left\lVert \begin{bmatrix}2Q_i^{\frac{1}{2}}\mathbf{x}\\ b_i-\mathbf{a}_i^T\mathbf{x}-1 \end{bmatrix}\right\rVert_2 \leq b_i - \mathbf{a}_i^T\mathbf{x} + 1\end{aligned}$$

Since they are a special case of SOCPs, Quadratic Programs are also convex.

Definition 37

The standard form of a quadratically constrained quadratic program is

$$\min_\mathbf{x} \mathbf{x}^TQ_0\mathbf{x} + \mathbf{a}_0^T\mathbf{x} \quad : \quad \forall i\in[1,m],\ \mathbf{x}^TQ_i\mathbf{x} + \mathbf{a}_i^T\mathbf{x} \leq b_i$$

For the problem to be convex, the matrices $Q_i$ must be positive semi-definite. If $Q_i=0$ in the constraints, then we get a normal quadratic program.

Definition 38

The standard form of a quadratic program is given by

$$\min_\mathbf{x}\frac{1}{2}\mathbf{x}^TH\mathbf{x} + \mathbf{c}^T\mathbf{x} \quad : \quad \forall i\in[1,m],\ \mathbf{a}_i^T\mathbf{x} \leq b_i$$

Its SOCP form looks like

$$\begin{aligned} \min_{\mathbf{x}, y} &\quad \mathbf{c}^T\mathbf{x} + \frac{1}{2}y\\ \text{s.t. } &\left\lVert \begin{bmatrix}2H^{\frac{1}{2}}\mathbf{x} \\ y - 1 \end{bmatrix}\right\rVert_2 \leq y + 1,\\ & \mathbf{a}_i^T\mathbf{x} \leq b_i\end{aligned}$$

Here the cone constraint encodes $\mathbf{x}^TH\mathbf{x} \leq y$, so minimizing $\mathbf{c}^T\mathbf{x} + \frac{1}{2}y$ recovers the objective of Definition 38.
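In practice one would hand such a problem to a solver rather than perform the SOCP conversion by hand. A sketch of the QP of Definition 38 in cvxpy, with made-up data:

```python
import cvxpy as cp
import numpy as np

H = np.array([[2.0, 0.0],
              [0.0, 4.0]])  # positive definite, so the QP is convex
c = np.array([-1.0, -1.0])
a = np.array([[1.0, 1.0]])  # one constraint a^T x <= b
b = np.array([1.0])

x = cp.Variable(2)
objective = cp.Minimize(0.5 * cp.quad_form(x, H) + c @ x)
problem = cp.Problem(objective, [a @ x <= b])
problem.solve()
print(x.value)  # approximately [0.5, 0.25]; the constraint is inactive here
```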

In the special case where $H$ is positive definite and we have no constraints, we can complete the square:

$$\frac{1}{2}\mathbf{x}^TH\mathbf{x} + \mathbf{c}^T\mathbf{x} + d = \frac{1}{2}(\mathbf{x} + H^{-1}\mathbf{c})^TH(\mathbf{x} + H^{-1}\mathbf{c}) + d - \frac{1}{2}\mathbf{c}^TH^{-1}\mathbf{c}$$

Thus

$$\text{argmin}_\mathbf{x} \frac{1}{2}\mathbf{x}^TH\mathbf{x} + \mathbf{c}^T\mathbf{x} + d = -H^{-1}\mathbf{c}$$
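A quick numerical confirmation of this closed form (reusing the made-up $H$ and $\mathbf{c}$ from the sketch above; solving the linear system is preferable to forming $H^{-1}$ explicitly):

```python
import numpy as np

# Closed-form minimizer of the unconstrained QP: x* = -H^{-1} c.
H = np.array([[2.0, 0.0],
              [0.0, 4.0]])  # positive definite
c = np.array([-1.0, -1.0])

x_star = np.linalg.solve(H, -c)  # solve H x = -c
print(x_star)                    # [0.5, 0.25]

# Gradient at x*: H x* + c should vanish (Theorem 21).
print(H @ x_star + c)            # approximately [0, 0]
```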

Linear Programming

If the matrix in the objective function of a quadratic program is 0 (and there are no quadratic constraints), then the resulting objective and constraints are affine functions. This is a linear program.

Definition 39

The inequality form of a linear program is given by

$$\min_\mathbf{x} \mathbf{c}^T\mathbf{x} + d \quad : \quad \forall i\in[1,m],\ \mathbf{a}_i^T\mathbf{x} \leq b_i$$

Since a linear program is a special case of a quadratic program, it can also be expressed as an SOCP.

$$\begin{aligned} \min_\mathbf{x} &\quad \mathbf{c}^T\mathbf{x}\\ \text{s.t. } &\quad \forall i\in[1,m],\ \|0\mathbf{x} + \mathbf{0}\|_2 \leq b_i - \mathbf{a}_i^T\mathbf{x}\end{aligned}$$

Because each constraint defines a half-space, the feasible set of a linear program is a polyhedron, the intersection of finitely many half-spaces. Thus linear programs are also convex.
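A tiny LP in inequality form, modeled with cvxpy (the data are illustrative):

```python
import cvxpy as cp
import numpy as np

c = np.array([1.0, 2.0])
A = np.array([[-1.0, 0.0],    # -x1 <= 0        (x1 >= 0)
              [0.0, -1.0],    # -x2 <= 0        (x2 >= 0)
              [-1.0, -1.0]])  # -x1 - x2 <= -1  (x1 + x2 >= 1)
b = np.array([0.0, 0.0, -1.0])

x = cp.Variable(2)
problem = cp.Problem(cp.Minimize(c @ x), [A @ x <= b])
problem.solve()
print(problem.value)  # 1.0, attained at x = (1, 0)
```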
