Hilbert Space Theory
Complex random variables form a Hilbert space with inner product $\langle X, Y \rangle = \mathbb{E}[XY^*]$. If we have random complex vectors, then we can use Hilbert space theory in a more efficient manner by looking at the matrix of inner products. For simplicity, we will call this matrix the "inner product" of two complex vectors.
The $ij$-th entry of the matrix is simply the scalar inner product $\langle X_i, Y_j \rangle = \mathbb{E}[X_i Y_j^*]$, where $X_i$ and $Y_j$ are the $i$th and $j$th entries of $X$ and $Y$ respectively. This means the matrix is equivalent to the cross-correlation $R_{XY} = \mathbb{E}[XY^*]$ between the two vectors. We can also specify the auto-correlation $R_X = \mathbb{E}[XX^*]$.
One reason why we can think of this matrix as the inner product is because it also satisfies the properties of inner products. In particular, it is

1. linear in its first argument: $\langle AX + BZ, Y \rangle = A\langle X, Y \rangle + B\langle Z, Y \rangle$,
2. conjugate symmetric: $\langle Y, X \rangle = \langle X, Y \rangle^*$,
3. non-negative: $\langle X, X \rangle \succeq 0$, with equality if and only if $X = 0$ almost surely.
Since we are thinking of the matrix as an inner product, we can also think of the norm as a matrix: $\|X\|^2 = \langle X, X \rangle = R_X$.
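As a sanity check, we can verify these properties numerically. The following sketch (a hypothetical example; the vectors $X$, $Y$, the map `B`, and the noise level are all made up, and sample averages stand in for the expectations) computes the matrix inner products and checks conjugate symmetry and positive semi-definiteness:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical jointly distributed complex random vectors:
# X is 2-dimensional and Y = B X + noise is 3-dimensional.
X = (rng.standard_normal((n, 2)) + 1j * rng.standard_normal((n, 2))) / np.sqrt(2)
B = np.array([[1.0, 2j], [0.5, -1.0], [1j, 0.0]])
noise = (rng.standard_normal((n, 3)) + 1j * rng.standard_normal((n, 3))) / np.sqrt(2)
Y = X @ B.T + 0.1 * noise

# Sample versions of the matrix inner products: the (i, j) entry of
# <X, Y> = E[X Y*] is the scalar inner product E[X_i conj(Y_j)].
R_XY = X.T @ Y.conj() / n   # cross-correlation, 2 x 3
R_X = X.T @ X.conj() / n    # auto-correlation, 2 x 2 (Hermitian, PSD)
R_YX = Y.T @ X.conj() / n

# Conjugate symmetry: <Y, X> = <X, Y>* holds exactly in the samples.
print(np.allclose(R_YX, R_XY.conj().T))  # True
```

Each row of `X` and `Y` is one sample of the random vector, so the Hermitian transpose products above are exactly the empirical correlation matrices.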
When thinking of inner products as matrices instead of scalars, we must rewrite the Hilbert Projection Theorem to use matrices as well. The best linear estimate $\hat{X}$ of $X$ given $Y$ still makes the error orthogonal to the observation, $\langle X - \hat{X}, Y \rangle = 0$. When we do a minimization over a matrix, we are minimizing it in a positive semi-definite sense, so for any other linear function $g$,

$$\mathbb{E}\left[(X - \hat{X})(X - \hat{X})^*\right] \preceq \mathbb{E}\left[(X - g(Y))(X - g(Y))^*\right].$$
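The matrix form of the projection theorem can be checked numerically. In this sketch (hypothetical data again; in the zero-mean case the best linear estimator is $\hat{X} = R_{XY} R_Y^{-1} Y$, which is an assumption consistent with the orthogonality condition above), we verify both the orthogonality of the error and the PSD-sense optimality against a perturbed linear map:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Hypothetical zero-mean complex data: estimate X (2-dim) from Y (3-dim).
X = (rng.standard_normal((n, 2)) + 1j * rng.standard_normal((n, 2))) / np.sqrt(2)
B = np.array([[1.0, 1j], [2.0, 0.0], [0.0, 1.0]])
Y = X @ B.T + 0.3 * (rng.standard_normal((n, 3)) + 1j * rng.standard_normal((n, 3)))

R_XY = X.T @ Y.conj() / n
R_Y = Y.T @ Y.conj() / n

# Best linear estimator (zero-mean case): X_hat = R_XY R_Y^{-1} Y, row-wise.
K = R_XY @ np.linalg.inv(R_Y)
X_hat = Y @ K.T

# Orthogonality: the matrix inner product <X - X_hat, Y> vanishes.
residual = X - X_hat
print(np.abs(residual.T @ Y.conj() / n).max())  # ~0 up to numerical error

def err(M):
    """Empirical error matrix E[(X - M Y)(X - M Y)*] for a linear map M."""
    R = X - Y @ M.T
    return R.T @ R.conj() / n

# PSD-sense optimality: any other linear map G has a larger error matrix
# in the positive semi-definite order, so err(G) - err(K) is PSD.
G = K + 0.1 * rng.standard_normal(K.shape)
print(np.linalg.eigvalsh(err(G) - err(K)).min() >= -1e-10)  # True
```

Because `K` is built from the same sample correlations used in the check, the orthogonality holds to machine precision rather than only asymptotically.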
Suppose we have jointly distributed random vectors $Y_1, Y_2, \ldots, Y_n$. Ideally, we would be able to "de-correlate" them so each new vector

$$E_i = Y_i - \hat{Y}_{i|i-1}, \qquad \hat{Y}_{i|i-1} = \text{the best linear estimate of } Y_i \text{ given } Y_1, \ldots, Y_{i-1},$$

called the $i$th innovation, captures the new information which is orthogonal to the previous random vectors in the sequence. Since vectors of a Hilbert space operate like vectors in $\mathbb{R}^n$, we can simply do Gram-Schmidt on the $Y_i$.
Innovations have two key properties.

1. They span the same subspace as the observations: $\operatorname{span}(E_1, \ldots, E_i) = \operatorname{span}(Y_1, \ldots, Y_i)$ for every $i$.
2. They are mutually orthogonal: $\langle E_i, E_j \rangle = 0$ for $i \neq j$.
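The Gram-Schmidt construction can be sketched directly on sample data. In this hypothetical example (the correlation structure of the observations is made up, and scalars are treated as 1-dimensional vectors), each innovation is the observation minus its best linear estimate from the past, and the resulting correlation matrix of the innovations comes out diagonal:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Hypothetical correlated scalar observations Y_1, Y_2, Y_3 (columns of Y).
Y = (rng.standard_normal((n, 3)) + 1j * rng.standard_normal((n, 3))) / np.sqrt(2)
Y[:, 1] += 0.8 * Y[:, 0]
Y[:, 2] += 0.5 * Y[:, 0] - 0.3j * Y[:, 1]

# Gram-Schmidt: E_i = Y_i - (best linear estimate of Y_i from Y_1..Y_{i-1}).
E = Y.copy()
for i in range(1, Y.shape[1]):
    past = Y[:, :i]
    R_past = past.T @ past.conj() / n        # <past, past>
    r = Y[:, i] @ past.conj() / n            # <Y_i, past>, shape (i,)
    coeffs = r @ np.linalg.inv(R_past)       # projection coefficients
    E[:, i] = Y[:, i] - past @ coeffs

# Mutual orthogonality: R_E = <E, E> should be (numerically) diagonal.
R_E = E.T @ E.conj() / n
off_diag = R_E - np.diag(np.diag(R_E))
print(np.abs(off_diag).max())  # ~0 up to numerical error
```

The span property also holds here by construction: each `E[:, i]` differs from `Y[:, i]` by a linear combination of earlier observations, so the change of basis is invertible.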
We can also write innovations in terms of a matrix where $E = AY$, stacking the $E_i$ into $E$ and the $Y_i$ into $Y$. Since each $E_i$ only depends on the previous $Y_1, \ldots, Y_i$, $A$ must be lower triangular, and because we need each $E_i, E_j$ to be mutually orthogonal, $R_E$ should be diagonal.
$R_E = \mathbb{E}[(AY)(AY)^*] = A R_Y A^*$, so if $R_Y \succ 0$, then we can use its unique LDL decomposition $R_Y = LDL^*$ (with $L$ unit lower triangular and $D$ diagonal) and choose $A = L^{-1}$, giving $R_E = L^{-1}(LDL^*)L^{-*} = D$, which is diagonal as required.
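A small numerical sketch of this last step (the particular Hermitian positive definite $R_Y$ below is made up, and the LDL factors are obtained from NumPy's Cholesky factorization by rescaling, since $R_Y = CC^*$ with $C = L\sqrt{D}$):

```python
import numpy as np

# Hypothetical positive definite auto-correlation matrix R_Y (Hermitian).
R_Y = np.array([[2.0, 0.5 + 0.5j, 0.1],
                [0.5 - 0.5j, 1.5, 0.2j],
                [0.1, -0.2j, 1.0]])

# LDL from Cholesky: R_Y = C C* with C = L sqrt(D), so column j of L is
# column j of C divided by its diagonal entry d_j, and D = diag(d_j^2).
C = np.linalg.cholesky(R_Y)
d = np.real(np.diag(C))   # the Cholesky diagonal is real and positive
L = C / d                 # unit lower triangular
D = np.diag(d ** 2)

# The innovations matrix is A = L^{-1}: then R_E = A R_Y A* = D, diagonal.
A = np.linalg.inv(L)
R_E = A @ R_Y @ A.conj().T
print(np.allclose(R_E, D))  # True
```

In practice one would apply `A` row by row rather than inverting `L` explicitly (solving the triangular system), but the explicit inverse makes the identity $A R_Y A^* = D$ easy to see.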