Notice that by Definition 1, a subspace is simply an affine set containing the origin. Also notice that the dimension of an affine set is the same as the dimension of the subspace of which it is a translate.
In the limit as $p \to \infty$, the $\ell_p$ norm approaches the $\ell_\infty$ norm, $\|x\|_\infty = \max_i |x_i|$.
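As a quick numerical sketch (the vector here is an arbitrary example, not from the notes), we can watch the $\ell_p$ norm approach the $\ell_\infty$ norm as $p$ grows:

```python
import numpy as np

# For a fixed vector, the l_p norm decreases toward the l_inf norm
# max_i |x_i| as p grows.
x = np.array([1.0, -3.0, 2.0])
for p in [1, 2, 4, 10, 100]:
    print(p, np.linalg.norm(x, p))
print("inf", np.linalg.norm(x, np.inf))  # 3.0, the largest |x_i|
```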
Similar to vectors, matrices can also have norms. One useful way to characterize matrices is by measuring their "gain" relative to some vector norm: $\|A\|_p = \max_{x \neq 0} \frac{\|Ax\|_p}{\|x\|_p}$. When $p = 2$, the norm is called the spectral norm because it relates to the largest eigenvalue of $A^TA$: $\|A\|_2 = \sqrt{\lambda_{\max}(A^TA)}$.
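A short NumPy check of this relationship, on an arbitrary example matrix:

```python
import numpy as np

# The induced 2-norm (spectral norm) equals the square root of the
# largest eigenvalue of A^T A.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
spectral = np.linalg.norm(A, 2)                # max ||Ax||_2 / ||x||_2
lam_max = np.max(np.linalg.eigvalsh(A.T @ A))  # largest eigenvalue of A^T A
assert np.isclose(spectral, np.sqrt(lam_max))
```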
Inner products induce a norm: $\|x\| = \sqrt{\langle x, x \rangle}$. In $\mathbb{R}^n$, the standard inner product is $\langle x, y \rangle = x^Ty = \sum_{i=1}^n x_i y_i$. The angle between two vectors is given by $\cos\theta = \frac{x^Ty}{\|x\|_2 \|y\|_2}$.
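For instance, a small sketch computing the standard inner product and the angle between two example vectors (both vectors are arbitrary choices):

```python
import numpy as np

# Angle between x and y via cos(theta) = x^T y / (||x||_2 ||y||_2).
x, y = np.array([1.0, 0.0]), np.array([1.0, 1.0])
inner = x @ y
cos_theta = inner / (np.linalg.norm(x) * np.linalg.norm(y))
print(np.degrees(np.arccos(cos_theta)))  # 45 degrees
```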
In general, we can bound the absolute value of the standard inner product between two vectors: for $p, q \geq 1$ with $\frac{1}{p} + \frac{1}{q} = 1$, Hölder's inequality (Theorem 1) states that $|x^Ty| \leq \|x\|_p \|y\|_q$. Notice that for $p = q = 2$, Theorem 1 turns into the Cauchy-Schwarz inequality ($|x^Ty| \leq \|x\|_2 \|y\|_2$).
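A numerical sketch checking the bound on random vectors for a few conjugate pairs $(p, q)$, including the Cauchy-Schwarz case $p = q = 2$:

```python
import numpy as np

# |x^T y| <= ||x||_p ||y||_q whenever 1/p + 1/q = 1.
rng = np.random.default_rng(0)
x, y = rng.standard_normal(5), rng.standard_normal(5)
for p, q in [(1, np.inf), (2, 2), (3, 1.5)]:
    lhs = abs(x @ y)
    rhs = np.linalg.norm(x, p) * np.linalg.norm(y, q)
    assert lhs <= rhs + 1e-12  # small tolerance for float roundoff
```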
We consider functions to be of the form $f : \mathbb{R}^n \to \mathbb{R}$. By contrast, a map is of the form $f : \mathbb{R}^n \to \mathbb{R}^m$. The components of the map $f$ are the scalar-valued functions $f_i : \mathbb{R}^n \to \mathbb{R}$ that produce each component of the map.
A polyhedron is the intersection of finitely many halfspaces, i.e., a set of the form $\{x : Ax \leq b\}$. When a polyhedron is bounded, it is called a polytope.
An affine function $f(x) = a^Tx + b$ is linear when $b = 0$. A hyperplane is simply a level set of a linear function.
Another special class of functions is the class of polyhedral functions, whose epigraphs are polyhedra.
We can also do calculus with vector functions. The gradient of $f : \mathbb{R}^n \to \mathbb{R}$ at $x$ is the vector of partial derivatives $\nabla f(x) = \left(\frac{\partial f(x)}{\partial x_1}, \dots, \frac{\partial f(x)}{\partial x_n}\right)$. The gradient is perpendicular to the level sets of $f$ and points from a point $x_0$ toward higher values of the function. In other words, it is the direction of steepest increase. It is akin to the derivative of a 1D function.
The Hessian is the matrix of second partial derivatives, $\left[\nabla^2 f(x)\right]_{ij} = \frac{\partial^2 f(x)}{\partial x_i \partial x_j}$, and is akin to the second derivative of a 1D function. Note that the Hessian is a symmetric matrix.
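As an illustrative sketch (the quadratic $f$ and the test point are arbitrary choices, not from the notes): for $f(x) = \frac{1}{2}x^TAx$ with symmetric $A$, the gradient is $Ax$ and the Hessian is $A$, and we can verify the gradient against finite differences:

```python
import numpy as np

# f(x) = 0.5 x^T A x with symmetric A has gradient A x and Hessian A.
A = np.array([[2.0, 1.0], [1.0, 3.0]])  # symmetric example matrix
f = lambda x: 0.5 * x @ A @ x
x0 = np.array([1.0, -1.0])
grad = A @ x0                            # analytic gradient
eps = 1e-6
fd = np.array([(f(x0 + eps * e) - f(x0 - eps * e)) / (2 * eps)
               for e in np.eye(2)])      # central finite differences
assert np.allclose(grad, fd, atol=1e-5)
```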
Matrices define a linear map between an input space and an output space. Any linear map $f : \mathbb{R}^n \to \mathbb{R}^m$ can be represented by a matrix $A \in \mathbb{R}^{m \times n}$.
Recall that a symmetric matrix is one where $A = A^T$. Let $\lambda_{\min}(A)$ be the smallest eigenvalue of a symmetric matrix $A$ and $\lambda_{\max}(A)$ be the largest eigenvalue.
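One standard consequence of these definitions (not stated explicitly above, so treat this as a supplementary check): the Rayleigh quotient $\frac{x^TAx}{x^Tx}$ always lies between $\lambda_{\min}(A)$ and $\lambda_{\max}(A)$.

```python
import numpy as np

# For symmetric A, lambda_min <= (x^T A x)/(x^T x) <= lambda_max.
rng = np.random.default_rng(1)
A = np.array([[2.0, 1.0], [1.0, 3.0]])  # symmetric example matrix
lam = np.linalg.eigvalsh(A)             # eigenvalues in ascending order
for _ in range(5):
    x = rng.standard_normal(2)
    r = (x @ A @ x) / (x @ x)
    assert lam[0] - 1e-12 <= r <= lam[-1] + 1e-12
```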
Two special types of symmetric matrices are those with non-negative eigenvalues (positive semidefinite, written $A \succeq 0$) and those with strictly positive eigenvalues (positive definite, written $A \succ 0$).
These matrices are important because they often have very clear geometric structures. For example, an ellipsoid in multi-dimensional space can be defined as the set of points $\mathcal{E} = \{x \in \mathbb{R}^n : x^TP^{-1}x \leq 1\}$, where $P$ is a positive definite matrix. The eigenvectors of $P$ give the principal axes of this ellipsoid, and $\sqrt{\lambda_i(P)}$ are the semi-axis lengths.
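A small sketch recovering the axes of such an ellipsoid numerically; the matrix $P$ here is an arbitrary positive definite example:

```python
import numpy as np

# For the ellipsoid {x : x^T P^{-1} x <= 1}, the eigenvectors of P are
# the principal axes and sqrt(eigenvalues) are the semi-axis lengths.
P = np.array([[4.0, 1.0], [1.0, 2.0]])  # positive definite example
lam, V = np.linalg.eigh(P)              # lam > 0 since P is PD
semi_axes = np.sqrt(lam)                # semi-axis lengths
print(semi_axes)                        # lengths along each axis
print(V)                                # columns are the principal axes
```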
Similar to how the spectral theorem allows us to decompose symmetric matrices, QR factorization is another matrix decomposition technique, and it works for any general matrix: it writes $A$ as the product $A = QR$ of a matrix $Q$ with orthonormal columns and an upper-triangular matrix $R$.
An easy way to find the QR factorization of a matrix is to apply Gram-Schmidt to the columns of the matrix and express the result in matrix form. Suppose that our matrix $A = [a_1 \cdots a_n]$ is full rank (i.e., its columns $a_1, \dots, a_n$ are linearly independent) and we have applied Gram-Schmidt to columns $a_1, \dots, a_{i-1}$ to get orthonormal vectors $q_1, \dots, q_{i-1}$. Continuing the procedure, the $i$th orthogonal vector is

$$\tilde{q}_i = a_i - \sum_{j=1}^{i-1} (q_j^T a_i)\, q_j, \qquad q_i = \frac{\tilde{q}_i}{\|\tilde{q}_i\|_2}.$$

If we rearrange this to solve for $a_i$, we see that

$$a_i = \sum_{j=1}^{i-1} (q_j^T a_i)\, q_j + \|\tilde{q}_i\|_2\, q_i.$$

Putting this in matrix form, we can see that $A = QR$, where the columns of $Q$ are the orthonormal vectors $q_i$ and $R$ is upper triangular with $R_{ji} = q_j^T a_i$ for $j < i$ and $R_{ii} = \|\tilde{q}_i\|_2$.
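A minimal sketch of this procedure, assuming classical Gram-Schmidt exactly as described above (the test matrix is an arbitrary full-rank example):

```python
import numpy as np

# Classical Gram-Schmidt on the columns of a full-rank A gives A = QR
# with orthonormal columns in Q and upper-triangular R.
def gram_schmidt_qr(A):
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for i in range(n):
        v = A[:, i].copy()
        for j in range(i):
            R[j, i] = Q[:, j] @ A[:, i]  # projection coefficient q_j^T a_i
            v -= R[j, i] * Q[:, j]       # subtract the projection
        R[i, i] = np.linalg.norm(v)      # length of the residual
        Q[:, i] = v / R[i, i]            # normalize to get q_i
    return Q, R

A = np.array([[1.0, 1.0], [0.0, 1.0], [1.0, 0.0]])
Q, R = gram_schmidt_qr(A)
assert np.allclose(Q @ R, A)             # A = QR
assert np.allclose(Q.T @ Q, np.eye(2))   # columns of Q are orthonormal
```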
A dyad is a rank-one matrix, i.e., a matrix that can be written as an outer product $uv^T$. It turns out that all matrices can be decomposed into a sum of dyads; this is the singular value decomposition (SVD).
The singular values are ordered such that $\sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_r > 0$. The left singular vectors $u_i$ are the eigenvectors of $AA^T$ and the right singular vectors $v_i$ are the eigenvectors of $A^TA$. The squared singular values $\sigma_i^2$ are the eigenvalues of $A^TA$ (equivalently, of $AA^T$), and both $U$ and $V$ are orthogonal matrices. The number of non-zero singular values is equal to the rank of the matrix. We can write the SVD in matrix form as $A = U\Sigma V^T$.
Writing the SVD this way tells us that:
- 1. $u_1, \dots, u_r$ form a basis for the range of $A$.
- 2. $v_{r+1}, \dots, v_n$ form a basis for the nullspace of $A$.
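A short numerical sketch of the dyad decomposition, reconstructing an arbitrary example matrix from its rank-one terms $\sigma_i u_i v_i^T$:

```python
import numpy as np

# The SVD A = U Sigma V^T expresses A as a sum of rank-one dyads
# sigma_i * u_i v_i^T, one per non-zero singular value.
A = np.array([[3.0, 0.0, 1.0], [0.0, 2.0, 0.0]])
U, s, Vt = np.linalg.svd(A, full_matrices=False)
dyad_sum = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(len(s)))
assert np.allclose(dyad_sum, A)
```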
The Frobenius norm and spectral norm are tightly related to the SVD: $\|A\|_F = \sqrt{\sum_{i=1}^r \sigma_i^2}$ and $\|A\|_2 = \sigma_1$.
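Both identities are easy to check numerically on an example matrix:

```python
import numpy as np

# ||A||_F = sqrt(sum of squared singular values); ||A||_2 = sigma_1.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
s = np.linalg.svd(A, compute_uv=False)  # singular values, descending
assert np.isclose(np.linalg.norm(A, 'fro'), np.sqrt(np.sum(s**2)))
assert np.isclose(np.linalg.norm(A, 2), s[0])
```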