# Fundamentals of Optimization

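The following definitions refer to an optimization problem in standard form, stated here for reference using the same notation as the rest of this section:

$p^\star = \min_\mathbf{x} f_0(\mathbf{x}) \quad : \quad \forall i\in[1,m],\ f_i(\mathbf{x}) \leq 0.$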
- The vector $\mathbf{x}\in\mathbb{R}^n$ is known as the **decision variable**.
- The function $f_0:\mathbb{R}^n\to\mathbb{R}$ is the **objective**.
- The functions $f_i:\mathbb{R}^n\to\mathbb{R}$ are the **constraints**.
- $p^\star$ is the **optimal value**, and the $\mathbf{x}^\star$ which achieves the optimal value is called the **optimizer**.
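As a concrete sketch of how these pieces map to code, here is a minimal example using the cvxpy modeling library; the particular problem (a least-squares objective with one linear constraint) is an assumed toy example, not one from the text.

```python
import numpy as np
import cvxpy as cp

n = 2
x = cp.Variable(n)  # decision variable x in R^n

# Objective f_0(x): squared distance to an assumed target point.
objective = cp.Minimize(cp.sum_squares(x - np.array([1.0, 2.0])))

# One constraint f_1(x) = x_1 + x_2 - 1 <= 0.
constraints = [cp.sum(x) - 1 <= 0]

problem = cp.Problem(objective, constraints)
p_star = problem.solve()  # optimal value p*

print("optimal value p* =", p_star)
print("optimizer x* =", x.value)
```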

Sometimes, an optimization problem in a particular formulation does not admit an easy solution. In this case, we can sometimes transform the problem into an easier one from which we can readily recover the solution to the original problem. In many cases, we can introduce additional “slack” variables and constraints to massage the problem into a form that is easier to analyze.
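For example (a standard reformulation, with $A$ and $\mathbf{b}$ assumed given), the nondifferentiable problem $\min_\mathbf{x} \|\mathbf{x}\|_1 : A\mathbf{x} = \mathbf{b}$ becomes a linear program after introducing slack variables $\mathbf{t}\in\mathbb{R}^n$:

$\min_{\mathbf{x},\mathbf{t}} \sum_{i=1}^n t_i \quad : \quad A\mathbf{x} = \mathbf{b},\ \ \forall i\in[1,n],\ -t_i \leq x_i \leq t_i,$

and the $\mathbf{x}$-component of any solution recovers an optimizer of the original problem.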

Theorem 7 works because, by minimizing $t$, we are also minimizing how large $f_0(\mathbf{x})$ can get, since $f_0(\mathbf{x}) \leq t$; so at the optimum, $f_0(\mathbf{x}) = t$. This can be helpful when $f_0(\mathbf{x}) \leq t$ can be massaged further into constraints that are easier to deal with.
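A minimal sketch of this equivalence in cvxpy (the objective and constraint here are assumed examples, not from the text): both formulations should return the same optimal value, with $f_0(\mathbf{x}^\star) = t^\star$ at the optimum.

```python
import cvxpy as cp

x = cp.Variable(2)
t = cp.Variable()
f0 = cp.sum_squares(x) + 3 * x[0]  # an assumed example objective f_0

# Original problem: minimize f_0(x) directly.
direct = cp.Problem(cp.Minimize(f0), [cp.sum(x) >= 1])
direct.solve()

# Epigraph form: minimize t subject to f_0(x) <= t.
epigraph = cp.Problem(cp.Minimize(t), [f0 <= t, cp.sum(x) >= 1])
epigraph.solve()

print(direct.value, epigraph.value)  # equal (up to solver tolerance)
```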

For a “nominal” problem

$\min_\mathbf{x} f_0(\mathbf{x}) \quad : \quad \forall i\in[1,m],\ f_i(\mathbf{x}) \leq 0,$

uncertainty can enter through the data used to construct $f_0$ and the $f_i$. It can also enter at decision time, when the $\mathbf{x}^\star$ that solves the optimization cannot be implemented exactly. These uncertainties can create unstable solutions or degraded performance. To make our optimization more robust to uncertainty, we add a new variable $\mathbf{u}\in\mathcal{U}$.

For a nominal optimization problem $\min_\mathbf{x} f_0(\mathbf{x})$ subject to $f_i(\mathbf{x}) \leq 0$ for $i\in[1,m]$, the robust counterpart is

$\min_\mathbf{x} \max_{\mathbf{u}\in\mathcal{U}} f_0(\mathbf{x}, \mathbf{u}) \quad : \quad \forall i\in[1,m],\ \forall\mathbf{u}\in\mathcal{U},\ f_i(\mathbf{x}, \mathbf{u}) \leq 0.$
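When $\mathcal{U}$ is a finite set of scenarios, the inner maximization can be handled with the same epigraph trick: minimize $t$ subject to $f_0(\mathbf{x},\mathbf{u}) \leq t$ for every $\mathbf{u}\in\mathcal{U}$, along with the robust constraints. A sketch in cvxpy, with an assumed linear objective $f_0(\mathbf{x},\mathbf{u}) = \mathbf{u}^\top\mathbf{x}$ and assumed scenario data:

```python
import numpy as np
import cvxpy as cp

# Assumed finite uncertainty set U (three scenarios for u).
U = [np.array([1.0, 0.5]), np.array([0.8, 1.2]), np.array([1.1, 0.9])]

x = cp.Variable(2)
t = cp.Variable()

constraints = [cp.sum(x) == 1, x >= 0]   # assumed nominal constraints on x
constraints += [u @ x <= t for u in U]   # epigraph: u^T x <= t for every u in U

robust = cp.Problem(cp.Minimize(t), constraints)
robust.solve()

print("worst-case optimal value:", robust.value)
print("robust optimizer x*:", x.value)
```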
