Linear Algebra

Definition 1

An affine set is one of the form
$$\mathcal{A}=\{ \mathbf{x}\in\mathcal{X}:\ \mathbf{x}=\mathbf{v}+\mathbf{x}_0,\ \mathbf{v}\in\mathcal{V}\}$$
where $\mathcal{V}$ is a subspace of a vector space $\mathcal{X}$ and $\mathbf{x}_0$ is a given point.

Notice that by Definition 1, a subspace is simply an affine set containing the origin. Also notice that the dimension of an affine set $\mathcal{A}$ is the same as the dimension of $\mathcal{V}$.

Norms

Definition 2

A norm on the vector space $\mathcal{X}$ is a function $\|\cdot\|:\mathcal{X}\rightarrow\mathbb{R}$ which satisfies:
  1. $\|\mathbf{x}\|\geq 0$ with equality if and only if $\mathbf{x}=\boldsymbol{0}$,
  2. $\|\mathbf{x}+\mathbf{y}\|\leq\|\mathbf{x}\|+\|\mathbf{y}\|$,
  3. $\|\alpha \mathbf{x}\| = |\alpha|\|\mathbf{x}\|$ for any scalar $\alpha$.

Definition 3

The $l_p$ norms are defined by
$$\|\mathbf{x}\|_p=\left( \sum_{k=1}^n|x_k|^p \right)^{\frac{1}{p}},\quad 1\leq p<\infty.$$
In the limit as $p\to\infty$,
$$\|\mathbf{x}\|_{\infty} = \max_k|x_k|.$$
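As a quick numerical check of Definition 3, here is a minimal numpy sketch (the helper name `lp_norm` is our own, not a library routine); note how the values approach $\max_k|x_k|$ as $p$ grows.

```python
import numpy as np

def lp_norm(x, p):
    """l_p norm of a vector for 1 <= p <= inf (Definition 3)."""
    if np.isinf(p):
        return np.max(np.abs(x))
    return np.sum(np.abs(x) ** p) ** (1.0 / p)

x = np.array([3.0, -4.0, 1.0])
for p in [1, 2, 10, 100, np.inf]:
    print(p, lp_norm(x, p))  # values approach max|x_k| = 4 as p grows
```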
Similar to vectors, matrices can also have norms.

Definition 4

A function $f: \mathbb{R}^{m\times n} \to \mathbb{R}$ is a matrix norm if
$$f(A) \geq 0, \qquad f(A) = 0 \Leftrightarrow A = 0, \qquad f(\alpha A) = |\alpha| f(A), \qquad f(A+B) \leq f(A) + f(B).$$

Definition 5

The Frobenius norm is the $l_2$ norm applied to all elements of the matrix:
$$\|A\|_F = \sqrt{\text{trace}(AA^T)} = \sqrt{\sum_{i=1}^m \sum_{j=1}^n |a_{ij}|^2}.$$
One useful way to characterize matrices is by measuring their “gain” relative to some $l_p$ norm.

Definition 6

The operator norm is defined as
$$\|A\|_p = \max_{\mathbf{u}\ne\boldsymbol{0}} \frac{\|A\mathbf{u}\|_p}{\|\mathbf{u}\|_p}.$$
When $p=2$, the norm is called the spectral norm because it relates to the largest eigenvalue of $A^TA$:
$$\|A\|_2 = \sqrt{\lambda_{max}(A^TA)}.$$
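The following numpy sketch (with an arbitrary test matrix) checks Definitions 5 and 6 against numpy's built-in `np.linalg.norm`, which computes both the Frobenius and spectral norms.

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])

# Frobenius norm (Definition 5): l_2 norm of all entries.
fro = np.sqrt(np.trace(A @ A.T))

# Spectral norm (Definition 6, p = 2): sqrt of the largest eigenvalue of A^T A.
spec = np.sqrt(np.max(np.linalg.eigvalsh(A.T @ A)))

print(np.isclose(fro, np.linalg.norm(A, 'fro')))  # True
print(np.isclose(spec, np.linalg.norm(A, 2)))     # True
```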

Inner Products

Definition 7

An inner product on a real vector space $\mathcal{X}$ is a function $\langle\cdot,\cdot\rangle:\mathcal{X}\times\mathcal{X}\to\mathbb{R}$ that is distributive, is commutative, and satisfies $\langle \mathbf{x}, \mathbf{x} \rangle \geq 0$ with $\langle \mathbf{x}, \mathbf{x} \rangle = 0 \Leftrightarrow \mathbf{x}=\boldsymbol{0}$.

Inner products induce a norm $\|\mathbf{x}\| = \sqrt{\langle \mathbf{x}, \mathbf{x} \rangle}$. In $\mathbb{R}^n$, the standard inner product is $\mathbf{x}^T\mathbf{y}$. The angle between two vectors is given by
$$\cos\theta = \frac{\mathbf{x}^T\mathbf{y}}{\|\mathbf{x}\|_2\|\mathbf{y}\|_2}.$$
In general, we can bound the absolute value of the standard inner product between two vectors.

Theorem 1 (Hölder Inequality)

$$|\mathbf{x}^T\mathbf{y}| \leq \sum_{k=1}^n |x_ky_k| \leq \|\mathbf{x}\|_p\|\mathbf{y}\|_q,\qquad p, q\geq 1 \text{ s.t. } p^{-1}+q^{-1}=1.$$
Notice that for $p=q=2$, Theorem 1 turns into the Cauchy-Schwarz Inequality ($|\mathbf{x}^T\mathbf{y}| \leq \|\mathbf{x}\|_2\|\mathbf{y}\|_2$).
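A numerical spot-check of Theorem 1 (the conjugate pairs $(p, q)$ below are our own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.standard_normal(5), rng.standard_normal(5)

def lp_norm(v, p):
    return np.max(np.abs(v)) if np.isinf(p) else np.sum(np.abs(v) ** p) ** (1 / p)

# Hölder: |x^T y| <= sum|x_k y_k| <= ||x||_p ||y||_q whenever 1/p + 1/q = 1.
for p, q in [(1, np.inf), (2, 2), (3, 1.5)]:
    lhs = abs(x @ y)
    middle = np.sum(np.abs(x * y))
    rhs = lp_norm(x, p) * lp_norm(y, q)
    print(lhs <= middle <= rhs)  # True for every conjugate pair
```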

Functions

We consider functions to be of the form $f:\mathbb{R}^n\rightarrow\mathbb{R}$. By contrast, a map is of the form $f:\mathbb{R}^n\rightarrow\mathbb{R}^m$. The components of the map $f$ are the scalar-valued functions $f_i$ that produce each component of the map.

Definition 8

The graph of a function $f$ is the set of input-output pairs that $f$ can attain:
$$\left\{ (\mathbf{x}, f(\mathbf{x}))\in \mathbb{R}^{n+1}:\ \mathbf{x}\in\mathbb{R}^n \right\}.$$

Definition 9

The epigraph of a function is the set of input-output pairs that $f$ can achieve and anything above:
$$\left\{ (\mathbf{x},t) \in \mathbb{R}^{n+1}:\ \mathbf{x}\in\mathbb{R}^{n},\ t\geq f(\mathbf{x}) \right\}.$$

Definition 10

The $t$-level set is the set of points that achieve exactly some value of $f$:
$$\{ \mathbf{x}\in\mathbb{R}^n:\ f(\mathbf{x})=t \}.$$

Definition 11

The $t$-sublevel set of $f$ is the set of points achieving at most a value $t$:
$$\{ \mathbf{x}\in\mathbb{R}^n:\ f(\mathbf{x})\leq t \}.$$

Definition 12

The half-spaces are the two regions of space which a hyperplane separates:
$$H_{-} = \{ \mathbf{x}: \mathbf{a}^T\mathbf{x}\leq b \} \qquad H_{+} = \{ \mathbf{x}: \mathbf{a}^T\mathbf{x} > b \}.$$

Definition 13

A polyhedron is the intersection of $m$ half-spaces given by $\mathbf{a}_i^T\mathbf{x}\leq b_i$ for $i\in[1,m]$. When a polyhedron is bounded, it is called a polytope.

Types of Functions

Theorem 2

A function is affine if and only if it can be expressed as
$$f(\mathbf{x}) = \mathbf{a}^T\mathbf{x}+b$$
for some unique pair $(\mathbf{a}, b)$.

An affine function is linear when $b=0$. A hyperplane is simply a level set of a linear function.

Theorem 3

Any quadratic function can be written as the sum of a quadratic term involving a symmetric matrix $H$ and an affine term:
$$q(\mathbf{x}) = \frac{1}{2}\mathbf{x}^TH\mathbf{x}+\mathbf{c}^T\mathbf{x} + d.$$
Another special class of functions is the polyhedral functions.

Definition 14

A function $f:\mathbb{R}^n\to\mathbb{R}$ is polyhedral if its epigraph is a polyhedron:
$$\text{epi } f = \left\{(\mathbf{x},t) \in \mathbb{R}^{n+1} :\ C \begin{bmatrix}\mathbf{x} \\ t \end{bmatrix} \leq \mathbf{d} \right\}.$$
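For example, a pointwise maximum of affine functions, $f(\mathbf{x}) = \max_{i=1,\ldots,m}\left(\mathbf{a}_i^T\mathbf{x} + b_i\right)$, is polyhedral: its epigraph is $\left\{(\mathbf{x},t)\in\mathbb{R}^{n+1}:\ \mathbf{a}_i^T\mathbf{x} - t \leq -b_i,\ i=1,\ldots,m\right\}$, an intersection of $m$ half-spaces.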

Vector Calculus

We can also do calculus with vector functions.

Definition 15

The gradient of a function at a point $\mathbf{x}$ where $f$ is differentiable is a column vector of first derivatives of $f$ with respect to the components of $\mathbf{x}$:
$$\nabla f(\mathbf{x}) = \begin{bmatrix} \frac{\partial f}{\partial x_1}\\ \vdots\\ \frac{\partial f}{\partial x_n} \end{bmatrix}$$
The gradient is perpendicular to the level sets of $f$ and points from a point $\mathbf{x}_0$ to higher values of the function. In other words, it is the direction of steepest increase. It is akin to the derivative of a 1D function.

Definition 16

The Hessian of a function $f$ at point $\mathbf{x}$ is a matrix of second derivatives:
$$H_{ij} = \frac{\partial^2 f}{\partial x_i \partial x_j}$$
The Hessian is akin to the second derivative of a 1D function. Note that the Hessian is a symmetric matrix whenever the second derivatives are continuous.
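To make Definitions 15 and 16 concrete, here is a minimal numpy sketch (the quadratic $q$ and its coefficients are an illustrative choice) that checks finite-difference estimates against the closed forms $\nabla q(\mathbf{x}) = H\mathbf{x} + \mathbf{c}$ and $\nabla^2 q(\mathbf{x}) = H$ for the quadratic of Theorem 3.

```python
import numpy as np

H = np.array([[2.0, 1.0], [1.0, 3.0]])  # symmetric, as in Theorem 3
c = np.array([1.0, -1.0])
q = lambda x: 0.5 * x @ H @ x + c @ x

x0, eps = np.array([0.5, -2.0]), 1e-4
I = np.eye(2)

# Central differences approximate each partial derivative (Definition 15).
grad_fd = np.array([(q(x0 + eps * e) - q(x0 - eps * e)) / (2 * eps) for e in I])
print(np.allclose(grad_fd, H @ x0 + c))  # True: the gradient is Hx + c

# Second differences approximate the Hessian entries (Definition 16).
hess_fd = np.array([[(q(x0 + eps*ei + eps*ej) - q(x0 + eps*ei - eps*ej)
                      - q(x0 - eps*ei + eps*ej) + q(x0 - eps*ei - eps*ej)) / (4 * eps**2)
                     for ej in I] for ei in I])
print(np.allclose(hess_fd, H, atol=1e-5))  # True, and symmetric
```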

Matrices

Matrices define a linear map between an input space and an output space. Any linear map $f: \mathbb{R}^n \to \mathbb{R}^m$ can be represented by a matrix.

Theorem 4 (Fundamental Theorem of Linear Algebra)

For any matrix $A\in\mathbb{R}^{m\times n}$,
$$\mathcal{N}(A) \oplus \mathcal{R}(A^T) = \mathbb{R}^n \qquad \mathcal{R}(A) \oplus \mathcal{N}(A^T) = \mathbb{R}^m.$$
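A small numpy illustration of Theorem 4 (the matrix and vector are arbitrary choices): projecting onto $\mathcal{R}(A^T)$ with the pseudoinverse splits any $\mathbf{x}\in\mathbb{R}^n$ into orthogonal row-space and null-space components.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])  # rank 2, so dim N(A) = 1

# pinv(A) @ A is the orthogonal projector onto the row space R(A^T).
x = np.array([1.0, -1.0, 2.0])
x_row = np.linalg.pinv(A) @ A @ x   # component in R(A^T)
x_null = x - x_row                  # remaining component lies in N(A)

print(np.allclose(A @ x_null, 0))     # True: x_null is in the null space
print(np.isclose(x_row @ x_null, 0))  # True: the two components are orthogonal
```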

Symmetric Matrices

Recall that a symmetric matrix is one where $A = A^T$.

Theorem 5 (Spectral Theorem)

Any symmetric matrix is orthogonally similar to a real diagonal matrix:
$$A = A^T \implies A = U \Lambda U^T = \sum_i \lambda_i \mathbf{u}_i\mathbf{u}_i^T,\qquad \|\mathbf{u}_i\| = 1, \qquad \mathbf{u}_i^T\mathbf{u}_j = 0 \ (i \ne j).$$
Let $\lambda_{min}(A)$ be the smallest eigenvalue of the symmetric matrix $A$ and $\lambda_{max}(A)$ be the largest eigenvalue.

Definition 17

The Rayleigh Quotient for $\mathbf{x} \ne \boldsymbol{0}$ is
$$\frac{\mathbf{x}^TA\mathbf{x}}{\|\mathbf{x}\|^2}.$$

Theorem 6

For any $\mathbf{x} \ne \boldsymbol{0}$,
$$\lambda_{min}(A) \leq \frac{\mathbf{x}^TA\mathbf{x}}{\|\mathbf{x}\|^2} \leq \lambda_{max}(A).$$
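A quick numerical check of Theorem 6 on a random symmetric matrix (the seed and size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2             # symmetrize to get a test matrix
lam = np.linalg.eigvalsh(A)   # eigenvalues in ascending order

x = rng.standard_normal(4)
rayleigh = (x @ A @ x) / (x @ x)
print(lam[0] <= rayleigh <= lam[-1])  # True: the Rayleigh quotient is bounded
```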
Two special types of symmetric matrices are those whose eigenvalues are all non-negative or all strictly positive.

Definition 18

A symmetric matrix is positive semi-definite if $\mathbf{x}^TA\mathbf{x} \geq 0$ for all $\mathbf{x}$, or equivalently, $\lambda_{min}(A) \geq 0$.

Definition 19

A symmetric matrix is positive definite if $\mathbf{x}^TA\mathbf{x} > 0$ for all $\mathbf{x} \ne \boldsymbol{0}$, or equivalently, $\lambda_{min}(A) > 0$.
These matrices are important because they often have very clear geometric structures. For example, an ellipsoid in multi-dimensional space can be defined as the set of points
$$\mathcal{E} = \{ \mathbf{x}\in\mathbb{R}^m : \ \mathbf{x}^T P^{-1} \mathbf{x} \leq 1 \}$$
where $P$ is a positive definite matrix. The eigenvectors of $P$ give the principal axes of this ellipsoid, and the $\sqrt{\lambda_i}$ are the semi-axis lengths.
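The following numpy sketch (with an arbitrary positive definite $P$) recovers the principal axes and semi-axis lengths of such an ellipsoid from the eigendecomposition of $P$.

```python
import numpy as np

# A positive definite P defining the ellipsoid {x : x^T P^{-1} x <= 1}.
P = np.array([[4.0, 1.0], [1.0, 2.0]])
lam, U = np.linalg.eigh(P)   # eigenvalues and orthonormal eigenvectors of P

print(np.all(lam > 0))       # True: P is positive definite
print(np.sqrt(lam))          # semi-axis lengths along the principal axes U[:, i]

# Points x = sqrt(lam_i) * u_i lie exactly on the boundary x^T P^{-1} x = 1.
for l, u in zip(lam, U.T):
    x = np.sqrt(l) * u
    print(np.isclose(x @ np.linalg.solve(P, x), 1.0))  # True
```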

QR Factorization

Similar to how the spectral theorem allows us to decompose symmetric matrices, QR factorization is another matrix decomposition technique, one that works for any general matrix.

Definition 20

The QR factorization of a matrix $A$ is $A = QR$ where $Q$ is an orthogonal matrix and $R$ is an upper triangular matrix.

An easy way to find the QR factorization of a matrix is to apply Gram-Schmidt to the columns of the matrix and express the result in matrix form. Suppose that our matrix $A$ is full rank (i.e., its columns $\mathbf{a}_i$ are linearly independent) and we have applied Gram-Schmidt to columns $\mathbf{a}_{1},\cdots,\mathbf{a}_{i-1}$ to get orthonormal vectors $\mathbf{q}_{1},\cdots,\mathbf{q}_{i-1}$. Continuing the procedure, the $i$th orthonormal vector $\mathbf{q}_i$ is
$$\mathbf{\tilde{q}}_i = \mathbf{a}_i - \sum_{k=1}^{i-1} (\mathbf{q}_k^T \mathbf{a}_i)\mathbf{q}_k \qquad \mathbf{q}_i = \frac{\mathbf{\tilde{q}}_i}{\|\mathbf{\tilde{q}}_i\|_2}.$$
If we re-arrange this to solve for $\mathbf{a}_i$, we see that
$$\mathbf{a}_i = \|\mathbf{\tilde{q}}_i\|_2 \mathbf{q}_i + \sum_{k=1}^{i-1} (\mathbf{q}_k^T \mathbf{a}_i)\mathbf{q}_k.$$
Putting this in matrix form, we can see that
$$\begin{bmatrix} | & | & & | \\ \mathbf{a}_1 & \mathbf{a}_2 & \cdots & \mathbf{a}_{n}\\ | & | & & | \\ \end{bmatrix} = \begin{bmatrix} | & | & & | \\ \mathbf{q}_1 & \mathbf{q}_2 & \cdots & \mathbf{q}_{n}\\ | & | & & | \\ \end{bmatrix} \begin{bmatrix} r_{11} & r_{12} & \cdots & r_{1n}\\ 0 & r_{22} & \cdots & r_{2n}\\ \vdots & \ddots & \ddots & \vdots\\ 0 & \cdots & 0 & r_{nn} \end{bmatrix} \qquad r_{ij} = \mathbf{q}_i^T\mathbf{a}_j,\quad r_{ii} = \|\mathbf{\tilde{q}}_i\|_2.$$
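Here is a minimal numpy implementation of this procedure (classical Gram-Schmidt, a sketch for intuition rather than a numerically robust routine; library functions such as `np.linalg.qr` use Householder reflections instead):

```python
import numpy as np

def gram_schmidt_qr(A):
    """QR of a full-column-rank matrix A via classical Gram-Schmidt."""
    m, n = A.shape
    Q, R = np.zeros((m, n)), np.zeros((n, n))
    for i in range(n):
        q = A[:, i].copy()
        for k in range(i):
            R[k, i] = Q[:, k] @ A[:, i]   # r_ki = q_k^T a_i
            q -= R[k, i] * Q[:, k]        # subtract the projection onto q_k
        R[i, i] = np.linalg.norm(q)       # r_ii = ||q_tilde_i||_2
        Q[:, i] = q / R[i, i]
    return Q, R

A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
Q, R = gram_schmidt_qr(A)
print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(2)))  # True True
```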

Singular Value Decomposition

Definition 21

A matrix $A\in\mathbb{R}^{m\times n}$ is a dyad if it can be written as $\mathbf{p}\mathbf{q}^T$.
A dyad is a rank-one matrix. It turns out that all matrices can be decomposed into a sum of dyads.

Definition 22

The Singular Value Decomposition of a matrix $A$ is
$$A = \sum_{i=1}^{r} \sigma_i \mathbf{u}_i\mathbf{v}_i^T$$
where $\sigma_i$ are the singular values of $A$ and $\mathbf{u}_i$ and $\mathbf{v}_i$ are the left and right singular vectors.
The singular values are ordered such that $\sigma_1 \geq \sigma_2 \geq \cdots$. The left singular vectors are the eigenvectors of $AA^T$ and the right singular vectors are the eigenvectors of $A^TA$. The singular values are $\sqrt{\lambda_i}$ where $\lambda_i$ are the eigenvalues of $A^TA$. Since $AA^T$ and $A^TA$ are symmetric, the $\mathbf{u}_i$ and the $\mathbf{v}_i$ are orthogonal. The number of non-zero singular values is equal to the rank of the matrix. We can write the SVD in matrix form as
$$A = \left[U_r\quad U_{m-r}\right]\text{diag}(\sigma_1,\cdots,\sigma_r,0,\cdots,0)\begin{bmatrix}V^T_r\\V^T_{n-r}\end{bmatrix}.$$
Writing the SVD in this form tells us that
  1. $V_{n-r}$ forms a basis for $\mathcal{N}(A)$,
  2. $U_{r}$ forms a basis for $\mathcal{R}(A)$.

The Frobenius norm and spectral norm are tightly related to the SVD:
$$\|A\|_F^2 = \sum_{i}\sigma_i^2 \qquad \|A\|_2^2 = \sigma_1^2.$$
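A short numpy check of these identities (the matrix is an arbitrary example with a zero row, so its rank is 2):

```python
import numpy as np

A = np.array([[3.0, 0.0], [4.0, 5.0], [0.0, 0.0]])
U, s, Vt = np.linalg.svd(A)   # singular values come back sorted, s[0] >= s[1]

print(np.sum(s > 1e-10))                                             # 2 = rank(A)
print(np.isclose(np.linalg.norm(A, 'fro'), np.sqrt(np.sum(s**2))))   # True
print(np.isclose(np.linalg.norm(A, 2), s[0]))                        # True
```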