Linear Estimation

Affine Estimation

In affine estimation, we predict a random variable $Y$ from an observation $X$ using an estimator of the form $\hat{Y}(X) = aX + b$, choosing $a$ and $b$ to minimize the mean squared error $\mathbb{E}\big[(Y - \hat{Y}(X))^2\big]$. This is equivalent to the following orthogonality conditions:
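(a standard reconstruction in the scalar notation above: the error must have zero mean and be uncorrelated with the observation)

$$
\mathbb{E}\big[Y - \hat{Y}(X)\big] = 0, \qquad \mathbb{E}\big[\big(Y - \hat{Y}(X)\big)X\big] = 0
$$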

Solving gives us
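(the standard closed form, sketched in the same notation with $\mu_X = \mathbb{E}[X]$ and $\mu_Y = \mathbb{E}[Y]$)

$$
\hat{Y}(X) = \mu_Y + \frac{\operatorname{Cov}(X, Y)}{\operatorname{Var}(X)}\,(X - \mu_X)
$$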

If $X$ and $Y$ are jointly Gaussian, then the best affine estimator is also the minimum mean squared error estimator over all (possibly nonlinear) functions of $X$:
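(a standard identity for jointly Gaussian variables; notation as above)

$$
\mathbb{E}[Y \mid X] = \hat{Y}(X)
$$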

  1. Gaussian random variables are the hardest to predict: for other distributions, a nonlinear estimator can improve on the best affine one, but in the Gaussian case it cannot. This means that if affine estimation works well, we shouldn't try to find better nonlinear estimators.

Least Squares
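A sketch of the standard formulation this section builds on (the notation here, $y = Ax + \epsilon$ with $A \in \mathbb{R}^{n \times d}$ of full column rank, is an assumption): least squares picks the $\hat{x}$ minimizing the squared residual, and the minimizer solves the normal equations.

$$
\hat{x} = \operatorname*{argmin}_x \; \|y - Ax\|_2^2, \qquad A^\top A\,\hat{x} = A^\top y \;\;\Longrightarrow\;\; \hat{x} = (A^\top A)^{-1} A^\top y
$$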

Theorem 2 (Gauss Markov Theorem)
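A standard statement, given here as a reconstruction under the usual hypotheses: if $y = Ax + \epsilon$ with $\mathbb{E}[\epsilon] = 0$ and $\operatorname{Cov}(\epsilon) = \sigma^2 I$, then the least squares estimator $\hat{x} = (A^\top A)^{-1} A^\top y$ is the best linear unbiased estimator of $x$, i.e., for every linear unbiased estimator $\tilde{x} = By$,

$$
\operatorname{Cov}(\tilde{x}) \succeq \operatorname{Cov}(\hat{x}) = \sigma^2 (A^\top A)^{-1}
$$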

Recursive Least Squares

Applying Theorem 1 and solving the normal equation, we see
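(notation assumed here: measurements $y_i = a_i^\top x + \epsilon_i$ arrive one at a time, $A_n$ stacks the rows $a_1^\top, \dots, a_n^\top$, and $y_{1:n}$ stacks the corresponding measurements)

$$
\hat{x}_n = (A_n^\top A_n)^{-1} A_n^\top y_{1:n} = \left(\sum_{i=1}^n a_i a_i^\top\right)^{-1} \sum_{i=1}^n a_i y_i, \qquad P_n := (A_n^\top A_n)^{-1}
$$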

By applying the Sherman-Morrison-Woodbury identity, we can see that
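(a sketch: $P_{n+1}^{-1} = P_n^{-1} + a_{n+1} a_{n+1}^\top$ is a rank-one modification of $P_n^{-1}$, so the identity applies directly)

$$
P_{n+1} = \left(P_n^{-1} + a_{n+1} a_{n+1}^\top\right)^{-1} = P_n - \frac{P_n a_{n+1} a_{n+1}^\top P_n}{1 + a_{n+1}^\top P_n a_{n+1}},
$$

so each new inverse Gram matrix can be formed without recomputing an inverse from scratch.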

Theorem 3 (Recursive Least Squares Update)
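A standard statement of the update, under the notation sketched above (a reconstruction):

$$
\hat{x}_{n+1} = \hat{x}_n + P_{n+1}\, a_{n+1} \left(y_{n+1} - a_{n+1}^\top \hat{x}_n\right)
$$

In words: the new estimate corrects the old one in proportion to the innovation $y_{n+1} - a_{n+1}^\top \hat{x}_n$, the part of the new measurement the old estimate fails to explain.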

Applying the Sherman-Morrison-Woodbury identity again, this time to remove a rank-one term rather than add one,
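(with $P = (A^\top A)^{-1}$ for the full data set and $a_k$ the $k$th measurement vector; notation assumed)

$$
P_{-k} = \left(P^{-1} - a_k a_k^\top\right)^{-1} = P + \frac{P a_k a_k^\top P}{1 - a_k^\top P a_k}
$$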

Theorem 4 (Recursive Least Squares Downdate)

The best least squares estimate using all but the $k$th observation can be found by downdating the best least squares estimate computed from all of the data:
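(a standard form of the downdate, in the notation above, with $\hat{x}$ the all-data estimate)

$$
\hat{x}_{-k} = \hat{x} - P_{-k}\, a_k \left(y_k - a_k^\top \hat{x}\right)
$$

As a sanity check, here is a minimal numerical sketch (NumPy on synthetic data; all variable names are illustrative) that runs the rank-one update and downdate above and compares them against direct batch solves:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 3
A = rng.standard_normal((n, d))   # stacked measurement vectors a_i^T
y = rng.standard_normal(n)        # synthetic observations

# Initialize from the first d observations solved in batch form.
P = np.linalg.inv(A[:d].T @ A[:d])
x = P @ A[:d].T @ y[:d]

# RLS update: fold in the remaining observations one at a time.
for a, yi in zip(A[d:], y[d:]):
    Pa = P @ a
    P = P - np.outer(Pa, Pa) / (1.0 + a @ Pa)  # Sherman-Morrison update of P
    x = x + P @ a * (yi - a @ x)               # correct by the innovation
print(np.allclose(x, np.linalg.lstsq(A, y, rcond=None)[0]))  # True

# RLS downdate: remove the k-th observation from the full-data solution.
k = 7
Pa = P @ A[k]
P_minus = P + np.outer(Pa, Pa) / (1.0 - A[k] @ Pa)
x_minus = x - P_minus @ A[k] * (y[k] - A[k] @ x)
x_ref = np.linalg.lstsq(np.delete(A, k, 0), np.delete(y, k), rcond=None)[0]
print(np.allclose(x_minus, x_ref))  # True
```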
