Berkeley Notes
EE128

Cayley-Hamilton


Last updated 3 years ago


Theorem 16

Every square matrix $A$ satisfies its own characteristic polynomial. (The theorem holds for all square matrices; the proof sketched below covers the case where $A$ is diagonalizable.)

$$\Delta(A) = 0$$

$$\Delta(\lambda) = |\lambda I - A| = \lambda^n + \sum_{i=0}^{n-1} c_i \lambda^i$$

In the case where $A$ is diagonalizable (i.e., $A = P\Lambda P^{-1}$),

$$\Delta(A) = P\left[ \Lambda^n + \sum_{i=0}^{n-1} c_i \Lambda^i \right]P^{-1}.$$

$\Lambda^n + \sum_{i=0}^{n-1} c_i \Lambda^i$ is itself a diagonal matrix whose $j$th diagonal entry is

$$\lambda_j^n + \sum_{i=0}^{n-1} c_i \lambda_j^i = 0$$

since $\lambda_j$ is a root of the characteristic polynomial. Thus $\Delta(A) = P \cdot 0 \cdot P^{-1} = 0$, and

$$-A^n = \sum_{i=0}^{n-1} c_i A^i. \qquad (20)$$
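Equation 20 is easy to check numerically: compute the characteristic polynomial's coefficients and evaluate $\Delta(A)$. A minimal sketch in NumPy (the matrix `A` here is an arbitrary example, not one from the notes):

```python
import numpy as np

# An arbitrary 2x2 test matrix; any square matrix works.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
n = A.shape[0]

# Coefficients of the characteristic polynomial |lambda*I - A|,
# ordered [1, c_{n-1}, ..., c_1, c_0] in the notes' notation.
coeffs = np.poly(A)

# Evaluate Delta(A) = A^n + sum_i c_i A^i via Horner's rule on matrices.
Delta_A = np.zeros_like(A)
for c in coeffs:
    Delta_A = Delta_A @ A + c * np.eye(n)

print(np.allclose(Delta_A, 0))  # Delta(A) is the zero matrix
```

Up to floating-point roundoff, `Delta_A` comes out as the zero matrix, as the theorem predicts.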

This also gives us a new way to find $e^{At}$ because, by its Taylor series expansion,

$$e^{At} = \sum_{k=0}^{\infty} \frac{t^k}{k!} A^k.$$

By Equation 20, every power $A^k$ with $k \geq n$ can be expressed in terms of the lower powers $A^i$ for $i \in [0, n)$, by repeatedly writing $A^k = A^{k-n} A^n$ and substituting.

Theorem 17

$$e^{At} = \sum_{i=0}^{n-1} \alpha_i(t) A^i$$

for some functions $\alpha_i$ which are solutions to the equations

$$e^{\lambda_j t} = \sum_{i=0}^{n-1} \alpha_i(t) \lambda_j^i, \qquad j = 1, \ldots, n.$$
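Theorem 17 turns the matrix exponential into a small linear solve: stack the $n$ equations above into a Vandermonde system, solve for the $\alpha_i(t)$, and sum the powers of $A$. A sketch assuming distinct eigenvalues (so the Vandermonde matrix is invertible); SciPy's `expm` is used only to check the result, and the matrix `A` is again an arbitrary example:

```python
import numpy as np
from scipy.linalg import expm

# Arbitrary test matrix with distinct eigenvalues (-1 and -2).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
n = A.shape[0]
t = 0.5

lam = np.linalg.eigvals(A)               # eigenvalues lambda_j
V = np.vander(lam, n, increasing=True)   # V[j, i] = lambda_j^i
# Solve e^{lambda_j t} = sum_i alpha_i(t) lambda_j^i for the alpha_i(t).
alpha = np.linalg.solve(V, np.exp(lam * t))

# Assemble e^{At} = sum_i alpha_i(t) A^i.
eAt = sum(a * np.linalg.matrix_power(A, i) for i, a in enumerate(alpha))

print(np.allclose(eAt, expm(A * t)))  # agrees with the direct matrix exponential
```

With repeated eigenvalues the Vandermonde system is singular, and the repeated rows must be replaced by derivative conditions $\frac{d}{d\lambda} e^{\lambda t} = \frac{d}{d\lambda} \sum_i \alpha_i(t)\lambda^i$ evaluated at the repeated eigenvalue; that case is outside the scope of this sketch.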