$$p^\star = \min_x f_0(x) \quad \text{such that} \quad f_i(x) \le 0, \; i \in [1, m]$$
The vector $x \in \mathbb{R}^n$ is known as the decision variable.
The function $f_0 : \mathbb{R}^n \to \mathbb{R}$ is the objective.
The functions $f_i : \mathbb{R}^n \to \mathbb{R}$ are the constraints.
$p^\star$ is the optimal value, and the $x^\star$ which achieves the optimal value is called the optimizer.
The feasible set is
$$\mathcal{X} = \{x \in \mathbb{R}^n : f_i(x) \le 0,\ i \in [1, m]\}$$
A point $x$ is $\epsilon$-suboptimal if it is feasible and satisfies
$$p^\star \le f_0(x) \le p^\star + \epsilon$$
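As a concrete illustration, here is a minimal sketch in Python using cvxpy; the solver library, the quadratic objective, and the candidate point below are assumptions for illustration, not from the notes. It solves a small problem in the standard form above, recovers $p^\star$ and $x^\star$, and checks a candidate point for feasibility and $\epsilon$-suboptimality.

```python
# Minimal sketch with cvxpy; the objective/constraints below are illustrative.
import numpy as np
import cvxpy as cp

c = np.array([3.0, 2.0])
x = cp.Variable(2)                              # decision variable x in R^2
f0 = cp.sum_squares(x - c)                      # objective f_0(x)
constraints = [cp.sum(x) - 1 <= 0, -x <= 0]     # constraints f_i(x) <= 0

prob = cp.Problem(cp.Minimize(f0), constraints)
p_star = prob.solve()                           # optimal value p*
x_star = x.value                                # optimizer x*

# epsilon-suboptimality check for a (hypothetical) candidate point x_hat
eps = 1e-2
x_hat = np.array([0.99, 0.01])
is_feasible = (x_hat.sum() - 1 <= 1e-9) and np.all(-x_hat <= 1e-9)
f0_hat = np.sum((x_hat - c) ** 2)
print(is_feasible, f0_hat - p_star <= eps)      # feasible and eps-suboptimal?
```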
An optimization problem is strictly feasible if there exists an $x_0$ such that all constraints are strictly satisfied (i.e., the inequality constraints hold as strict inequalities and any equality constraints are satisfied).
$\min_x f_0(x)$ is equivalent to the problem with epigraphic constraints
$$\min_{x, t} \; t \quad : \quad f_0(x) \le t,$$
Theorem 7 works because, by minimizing $t$, we also limit how large $f_0(x)$ can be, since $f_0(x) \le t$; at the optimum, $f_0(x) = t$. The epigraph form is helpful when the constraint $f_0(x) \le t$ can be massaged further into constraints that are easier to deal with.
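A small sketch of the epigraph trick, again with cvxpy (an assumed choice) and an illustrative piecewise-linear objective $f_0(x) = \max_i (a_i^\top x + b_i)$: the direct problem and its epigraph form should return the same optimal value.

```python
# Epigraph sketch: min_x max_i(a_i^T x + b_i)  vs.  min_{x,t} t s.t. f_0(x) <= t.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
A, b = rng.standard_normal((5, 3)), rng.standard_normal(5)

# Direct form: min_x f_0(x) over a box
x1 = cp.Variable(3)
direct = cp.Problem(cp.Minimize(cp.max(A @ x1 + b)), [cp.norm(x1, "inf") <= 1])
direct.solve()

# Epigraph form: min_{x,t} t  subject to  f_0(x) <= t (here, componentwise)
x2, t = cp.Variable(3), cp.Variable()
epi = cp.Problem(cp.Minimize(t), [A @ x2 + b <= t, cp.norm(x2, "inf") <= 1])
epi.solve()

print(direct.value, epi.value)   # the two optimal values agree up to solver tolerance
```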
Let $\Phi : \mathbb{R} \to \mathbb{R}$ be a continuous and strictly increasing function on the values $f_0$ takes over the feasible set $\mathcal{X}$. Then
$$\min_{x \in \mathcal{X}} f_0(x) \equiv \min_{x \in \mathcal{X}} \Phi(f_0(x)),$$
in the sense that both problems have the same optimizers.
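A quick numerical check of this equivalence, under the same assumed cvxpy setup: with $\Phi(s) = s^2$ (strictly increasing for $s \ge 0$), minimizing $\|x - c\|_2$ and $\|x - c\|_2^2$ over the same feasible set returns the same optimizer, even though the optimal values differ.

```python
# Monotone-transform sketch: Phi(s) = s^2 on s >= 0, so argmin ||x-c|| == argmin ||x-c||^2.
import numpy as np
import cvxpy as cp

c = np.array([2.0, -1.0, 0.5])

def solve(objective_fn):
    x = cp.Variable(3)
    prob = cp.Problem(cp.Minimize(objective_fn(x)), [cp.sum(x) == 1, x >= 0])
    prob.solve()
    return x.value, prob.value

x_norm, v_norm = solve(lambda x: cp.norm(x - c, 2))    # minimize f_0(x)
x_sq, v_sq = solve(lambda x: cp.sum_squares(x - c))    # minimize Phi(f_0(x))

print(np.allclose(x_norm, x_sq, atol=1e-4))            # same optimizer x*
print(v_norm, v_sq)                                    # different optimal values
```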
In the nominal problem
$$\min_x f_0(x) \quad : \quad \forall i \in [1, m],\ f_i(x) \le 0,$$
uncertainty can enter through the data used to construct $f_0$ and the $f_i$. It can also enter at decision time, when the $x^\star$ that solves the optimization cannot be implemented exactly. These uncertainties can lead to unstable solutions or degraded performance. To make the optimization more robust to uncertainty, we add a new variable $u \in \mathcal{U}$.
For a nominal optimization problem $\min_x f_0(x)$ subject to $f_i(x) \le 0$ for $i \in [1, m]$, the robust counterpart is
$$\min_x \max_{u \in \mathcal{U}} f_0(x, u) \quad : \quad \forall i \in [1, m],\ \forall u \in \mathcal{U},\ f_i(x, u) \le 0.$$
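As a sketch of how a robust counterpart can become tractable, consider an assumed example (not from the notes): a single linear constraint $a^\top x \le b$ with box uncertainty $a \in [a_{\text{nom}} - \delta,\ a_{\text{nom}} + \delta]$. The worst case over the box is $a_{\text{nom}}^\top x + \delta^\top |x| \le b$, which is convex, so the inner maximization over $u$ can be eliminated before solving.

```python
# Robust counterpart sketch: box uncertainty in one linear constraint (illustrative data).
import numpy as np
import cvxpy as cp

a_nom = np.array([1.0, 2.0])
delta = np.array([0.2, 0.1])          # uncertainty half-widths (assumed)
b, c = 1.0, np.array([-1.0, -1.0])    # objective c^T x is taken as certain here

x = cp.Variable(2)
nominal = cp.Problem(cp.Minimize(c @ x), [a_nom @ x <= b, x >= 0])
robust = cp.Problem(cp.Minimize(c @ x),
                    [a_nom @ x + delta @ cp.abs(x) <= b, x >= 0])
nominal.solve(); robust.solve()

# The robust x* stays feasible for every a in the box, typically at the
# price of a worse (larger) optimal value than the nominal solution.
print(nominal.value, robust.value)
```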