Linear Algebraic Optimization
Many optimization problems can be solved using the machinery of linear algebra. These problems have no inequality constraints and no non-Euclidean norms in the objective function.
The idea behind projection is to find the point in a set closest (with respect to a particular norm) to a given point. Given a point $x$ and a subspace $S$, the projection of $x$ onto $S$ is

$$\Pi_S(x) = \operatorname{argmin}_{y \in S} \|y - x\|_2.$$

It is necessary and sufficient for $y^\star$ to be optimal that $(x - y^\star) \perp S$. The same condition applies when projecting onto an affine set.
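As a concrete illustration (a minimal sketch with an arbitrary example subspace, not from the original notes), projecting $x$ onto the range of a matrix $A$ via least squares leaves a residual satisfying the orthogonality condition:

```python
import numpy as np

# Project x onto the subspace S = R(A) spanned by the columns of A.
# The condition (x - y*) ⊥ S is equivalent to the normal equations
# AᵀA w = Aᵀx for y* = A w, which least squares solves.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])            # columns span a 2D subspace of R^3
x = np.array([1.0, 2.0, 3.0])

w, *_ = np.linalg.lstsq(A, x, rcond=None)
y_star = A @ w                        # projection of x onto R(A)

# The residual is orthogonal to every column of A, hence to all of S.
print(np.allclose(A.T @ (x - y_star), 0))  # True
```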
There are several special cases of pseudo-inverses. For $A \in \mathbb{R}^{m \times n}$ with SVD $A = U \Sigma V^T$ (each case is checked numerically in the sketch below):

1. $A^\dagger = V \Sigma^\dagger U^T$ is the Moore-Penrose pseudo-inverse, where $\Sigma^\dagger$ inverts the nonzero singular values.
2. When $A$ is square and non-singular, $A^\dagger = A^{-1}$.
3. When $A$ is full column rank, $A^\dagger = (A^T A)^{-1} A^T$.
4. When $A$ is full row rank, $A^\dagger = A^T (A A^T)^{-1}$.
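A quick NumPy sanity check of these identities (the matrices are arbitrary random examples; `np.linalg.pinv` computes the Moore-Penrose pseudo-inverse via the SVD):

```python
import numpy as np

rng = np.random.default_rng(0)

# Full column rank (tall): A† = (AᵀA)⁻¹Aᵀ
A = rng.standard_normal((5, 3))
print(np.allclose(np.linalg.pinv(A), np.linalg.inv(A.T @ A) @ A.T))  # True

# Full row rank (wide): B† = Bᵀ(BBᵀ)⁻¹
B = rng.standard_normal((3, 5))
print(np.allclose(np.linalg.pinv(B), B.T @ np.linalg.inv(B @ B.T)))  # True

# Square and non-singular: C† = C⁻¹
C = rng.standard_normal((4, 4))
print(np.allclose(np.linalg.pinv(C), np.linalg.inv(C)))  # True
```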
The pseudo-inverses are useful because they can easily compute the projection of a vector onto a related subspace of $A$ (see the sketch after this list).

1. $\Pi_{\mathcal{R}(A)}(x) = A A^\dagger x$
2. $\Pi_{\mathcal{R}(A^T)}(x) = A^\dagger A x$
3. $\Pi_{\mathcal{N}(A)}(x) = (I - A^\dagger A) x$
4. $\Pi_{\mathcal{N}(A^T)}(x) = (I - A A^\dagger) x$
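A small sketch (arbitrary example matrix) of the second and third projectors, which split any $x$ into orthogonal row-space and null-space components:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 6))   # rank 4, so N(A) is 2-dimensional
A_pinv = np.linalg.pinv(A)
x = rng.standard_normal(6)

proj_row = A_pinv @ (A @ x)       # Π_{R(Aᵀ)}(x) = A†Ax
proj_null = x - proj_row          # Π_{N(A)}(x) = (I - A†A)x

print(np.allclose(proj_row + proj_null, x))   # the pieces reassemble x: True
print(np.isclose(proj_row @ proj_null, 0.0))  # and are orthogonal: True
print(np.allclose(A @ proj_null, 0))          # null-space part is killed by A: True
```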
The Low Rank Approximation problem is to approximate a matrix $A$ with a rank $k$ matrix $A_k$:

$$\min_{A_k} \|A - A_k\|_F^2 \quad \text{s.t.} \quad \operatorname{rank}(A_k) = k$$

The solution to the low rank approximation problem is simply the first $k$ terms of the SVD:

$$A_k = \sum_{i=1}^{k} \sigma_i u_i v_i^T$$
This is because the singular values give us a notion of how much of the Frobenius norm (total variance) each dyad explains, since $\|A\|_F^2 = \sum_i \sigma_i^2$.
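A minimal sketch of the truncated-SVD solution on an arbitrary random matrix; by the Eckart–Young theorem, the Frobenius error is exactly the energy in the discarded singular values:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((8, 6))
k = 2

U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # first k dyads σᵢuᵢvᵢᵀ

err = np.linalg.norm(A - A_k, "fro")
print(np.isclose(err, np.sqrt(np.sum(s[k:] ** 2))))  # True
```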
Suppose we had a matrix $X \in \mathbb{R}^{n \times m}$ containing $m$ data points in $\mathbb{R}^n$ (each data point is a column), and without loss of generality, assume this data is centered around 0 (i.e. $\sum_i x_i = 0$). The variance of this data along a particular unit direction $z$ is given by

$$\frac{1}{m} \sum_{i=1}^{m} (z^T x_i)^2 = \frac{1}{m} z^T X X^T z.$$

Principal Component Analysis is finding the directions $z$ such that the variance is maximized:

$$\max_{\|z\|_2 = 1} \frac{1}{m} z^T X X^T z.$$
The left singular vector corresponding to the largest singular value of the matrix $X$ is the optimizer of this problem, and the variance along this direction is $\frac{\sigma_1^2}{m}$. If we wanted to find subsequent directions of maximal variance, they are just the left singular vectors corresponding to the next largest singular values.
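A sketch of PCA under this setup, on synthetic data (columns of $X$ are centered points in $\mathbb{R}^3$):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 3, 500
X = rng.standard_normal((n, m)) * np.array([[5.0], [2.0], [0.5]])  # anisotropic spread
X = X - X.mean(axis=1, keepdims=True)                              # center the columns

U, s, _ = np.linalg.svd(X, full_matrices=False)
z1 = U[:, 0]                  # top principal direction
var_z1 = s[0] ** 2 / m        # σ₁²/m

# Matches the variance computed directly along z1.
print(np.isclose(var_z1, np.mean((z1 @ X) ** 2)))  # True
```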
Following from the Fundamental Theorem of Linear Algebra, if $Ax = b$ has a solution, then the set of solutions can be expressed as

$$S = \{x_0 + Nz\}$$

where $Ax_0 = b$ and the columns of $N$ form a basis for $\mathcal{N}(A)$. This means if we have a constrained optimization problem

$$\min_x f(x) \quad \text{s.t.} \quad Ax = b,$$

we can write an equivalent unconstrained problem

$$\min_z f(x_0 + Nz)$$

where $x_0$ is any particular solution to $Ax = b$.
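A minimal sketch of this elimination, assuming SciPy is available (`scipy.linalg.null_space` returns an orthonormal basis for the null space); the objective and constraint here are arbitrary illustrations:

```python
import numpy as np
from scipy.linalg import null_space
from scipy.optimize import minimize

A = np.array([[1.0, 1.0, 1.0]])              # one equality constraint in R^3
b = np.array([1.0])

x0 = np.linalg.lstsq(A, b, rcond=None)[0]    # particular solution, A x0 = b
N = null_space(A)                            # basis for N(A), shape (3, 2)

f = lambda x: np.sum((x - np.array([3.0, 0.0, 0.0])) ** 2)  # example objective

# Equivalent unconstrained problem over z: minimize f(x0 + N z).
res = minimize(lambda z: f(x0 + N @ z), np.zeros(N.shape[1]))
x_star = x0 + N @ res.x

print(np.allclose(A @ x_star, b))  # feasible by construction: True
```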