Fundamentals of Optimization

Definition 23

The standard form of an optimization problem is

p^\star = \min_\mathbf{x} f_0(\mathbf{x}) \text{ such that } f_i(\mathbf{x}) \leq 0, \quad \forall i\in[1,m]

  • The vector \mathbf{x}\in\mathbb{R}^n is known as the decision variable.

  • The function f_0:\mathbb{R}^n\to\mathbb{R} is the objective.

  • The functions f_i:\mathbb{R}^n\to\mathbb{R} are the constraints.

  • p^\star is the optimal value, and the \mathbf{x}^\star which achieves the optimal value is called the optimizer.
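
As a hypothetical numerical illustration (the data, the functions, and the use of scipy.optimize are my own, not part of the notes), a small instance of the standard form can be solved as follows.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical instance of the standard form:
#   minimize   f0(x) = ||x - a||^2
#   subject to f1(x) = 1^T x - 1 <= 0
a = np.array([2.0, 1.0])

def f0(x):
    return np.sum((x - a) ** 2)

def f1(x):
    return np.sum(x) - 1.0

# SciPy's "ineq" convention is g(x) >= 0, so we pass -f1.
result = minimize(f0, x0=np.zeros(2), method="SLSQP",
                  constraints=[{"type": "ineq", "fun": lambda x: -f1(x)}])

x_star = result.x    # optimizer x*
p_star = result.fun  # optimal value p*
print(x_star, p_star)
```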

Definition 24

The feasible set of an optimization problem is

\mathcal{X} = \{\mathbf{x}\in\mathbb{R}^n:\ f_i(\mathbf{x}) \leq 0,\ \forall i\in[1,m] \}

Definition 25

A point \mathbf{x} is \epsilon-suboptimal if it is feasible and satisfies

p^\star \leq f_0(\mathbf{x}) \leq p^\star + \epsilon
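
For instance, a minimal sketch of an \epsilon-suboptimality check (the helper name and interface are hypothetical):

```python
def is_eps_suboptimal(x, f0, constraints, p_star, eps, tol=1e-9):
    """Return True if x is feasible (all f_i(x) <= 0, up to tol)
    and f0(x) <= p_star + eps."""
    feasible = all(fi(x) <= tol for fi in constraints)
    return feasible and f0(x) <= p_star + eps
```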

Definition 26

An optimization problem is strictly feasible if \exists \mathbf{x}_0 such that all constraints are strictly satisfied (i.e., inequality constraints hold with strict inequality, and equality constraints are satisfied).

Problem Transformations

Sometimes, an optimization problem in a particular formulation does not admit an easy solution. In such cases, we can sometimes transform it into an easier problem from which we can easily recover the solution to the original. In many cases, we introduce additional “slack” variables and constraints to massage the problem into a form that is easier to analyze.

Theorem 7 (Epigraphic Constraints)

\min_\mathbf{x} f_0(\mathbf{x}) is equivalent to the problem with epigraphic constraints

\min_{\mathbf{x}, t} t \quad : \quad f_0(\mathbf{x}) \leq t

Theorem 7 works because by minimizing t, we are also minimizing how large f_0(\mathbf{x}) can get since f_0(\mathbf{x}) \leq t, so at the optimum, f_0(\mathbf{x}) = t. It can be helpful when f_0(\mathbf{x}) \leq t can be massaged further into constraints that are easier to deal with.
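
As a sketch of why this is useful (my own example, assuming the cvxpy modeling library is available): the nondifferentiable objective \min_\mathbf{x} \|A\mathbf{x} - \mathbf{b}\|_\infty becomes a linear program in (\mathbf{x}, t) after the epigraph transformation.

```python
import numpy as np
import cvxpy as cp

# Hypothetical Chebyshev-approximation instance:
#   min_x ||A x - b||_inf  =  min_x max_i |a_i^T x - b_i|
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))
b = rng.standard_normal(20)

x = cp.Variable(3)
t = cp.Variable()

# Epigraph form: minimize t with the max folded into linear constraints.
problem = cp.Problem(cp.Minimize(t),
                     [A @ x - b <= t, -(A @ x - b) <= t])
problem.solve()

# At the optimum, t equals the original objective max_i |a_i^T x - b_i|.
print(problem.value, np.max(np.abs(A @ x.value - b)))
```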

Theorem 8 (Monotone Objective Transformation)

Let \Phi:\mathbb{R}\to\mathbb{R} be continuous and strictly increasing on the values that f_0 takes over the feasible set \mathcal{X}. Then

\min_{\mathbf{x}\in\mathcal{X}} f_0(\mathbf{x}) \equiv \min_{\mathbf{x}\in\mathcal{X}} \Phi(f_0(\mathbf{x}))
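
A familiar instance (my own example, again assuming cvxpy): \Phi(z) = z^2 is continuous and strictly increasing for z \geq 0, so minimizing \|A\mathbf{x}-\mathbf{b}\|_2 and minimizing \|A\mathbf{x}-\mathbf{b}\|_2^2 yield the same optimizer.

```python
import numpy as np
import cvxpy as cp

# Hypothetical data: Phi(z) = z^2 is strictly increasing on z >= 0,
# so the norm and the squared norm share the same minimizer.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 4))
b = rng.standard_normal(30)

x1 = cp.Variable(4)
x2 = cp.Variable(4)
cp.Problem(cp.Minimize(cp.norm(A @ x1 - b, 2))).solve()
cp.Problem(cp.Minimize(cp.sum_squares(A @ x2 - b))).solve()

# The two optimizers coincide up to solver tolerance.
print(np.linalg.norm(x1.value - x2.value))
```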

Robust Optimization

For a “nominal” problem

\min_\mathbf{x} f_0(\mathbf{x}) \quad : \quad \forall i\in[1,m],\ f_i(\mathbf{x}) \leq 0,

uncertainty can enter in the data used to create f_0 and the f_i. It can also enter at decision time, when the \mathbf{x}^\star which solves the optimization cannot be implemented exactly. These uncertainties can create unstable solutions or degraded performance. To make our optimization more robust to uncertainty, we add a new variable \mathbf{u}\in\mathcal{U}.

Definition 27

For a nominal optimization problem \min_\mathbf{x} f_0(\mathbf{x}) subject to f_i(\mathbf{x}) \leq 0 for i\in[1,m], the robust counterpart is

\min_\mathbf{x} \max_{\mathbf{u}\in\mathcal{U}} f_0(\mathbf{x}, \mathbf{u}) \quad : \quad \forall i\in[1,m],\ \forall\mathbf{u}\in\mathcal{U},\ f_i(\mathbf{x}, \mathbf{u}) \leq 0
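
A minimal sketch (my own example, assuming a finite uncertainty set \mathcal{U} = \{\mathbf{u}_1,\dots,\mathbf{u}_K\} and the cvxpy library): the inner maximum is handled with one epigraph constraint per scenario, so the decision is chosen against the worst case rather than the nominal data alone.

```python
import numpy as np
import cvxpy as cp

# Hypothetical robust LP over a finite uncertainty set U = {u_1, ..., u_K}:
#   min_x  max_k (c + u_k)^T x   subject to   0 <= x <= 1
rng = np.random.default_rng(2)
n, K = 4, 10
c = rng.standard_normal(n)
U = 0.2 * rng.standard_normal((K, n))   # scenarios u_k for the uncertain cost

x = cp.Variable(n)
t = cp.Variable()                       # epigraph variable for the inner max

# The worst case over U is enforced by one constraint per scenario.
scenario_constraints = [(c + U[k]) @ x <= t for k in range(K)]
problem = cp.Problem(cp.Minimize(t), scenario_constraints + [x >= 0, x <= 1])
problem.solve()

# x hedges against the worst scenario, not just the nominal cost c.
print(x.value, problem.value)
```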
