Berkeley Notes
  • Introduction
  • EE120
    • Introduction to Signals and Systems
    • The Fourier Series
    • The Fourier Transform
    • Generalized transforms
    • Linear Time-Invariant Systems
    • Feedback Control
    • Sampling
    • Appendix
  • EE123
    • The DFT
    • Spectral Analysis
    • Sampling
    • Filtering
  • EECS126
    • Introduction to Probability
    • Random Variables and their Distributions
    • Concentration
    • Information Theory
    • Random Processes
    • Random Graphs
    • Statistical Inference
    • Estimation
  • EECS127
    • Linear Algebra
    • Fundamentals of Optimization
    • Linear Algebraic Optimization
    • Convex Optimization
    • Duality
  • EE128
    • Introduction to Control
    • Modeling Systems
    • System Performance
    • Design Tools
    • Cascade Compensation
    • State-Space Control
    • Digital Control Systems
    • Cayley-Hamilton
  • EECS225A
    • Hilbert Space Theory
    • Linear Estimation
    • Discrete Time Random Processes
    • Filtering
  • EE222
    • Real Analysis
    • Differential Geometry
    • Nonlinear System Dynamics
    • Stability of Nonlinear Systems
    • Nonlinear Feedback Control
On this page
  • Lyapunov Functions
  • Quadratic Lyapunov Functions
  • Sum-of-Squares Lyapunov Functions
  • Proving Stability
  • Indirect Method of Lyapunov
  • Proving Instability
  • Region of Attraction


Stability of Nonlinear Systems



The equilibria of a system can tell us a great deal about its stability. For nonlinear systems, stability is a property of individual equilibrium points, and to be stable is to stay near, or converge to, the equilibrium.

Definition 36

An equilibrium point $\boldsymbol{x}_e\in\mathbb{R}^n$ is a stable equilibrium point in the sense of Lyapunov if and only if $\forall \epsilon > 0, \exists \delta(t_0, \epsilon)$ such that

$$\forall t \geq t_0,\ \|\boldsymbol{x}_0 - \boldsymbol{x}_e\| < \delta(t_0, \epsilon) \implies \|\boldsymbol{x}(t) - \boldsymbol{x}_e\| < \epsilon$$

Lyapunov stability essentially says that a small deviation of the initial condition from the equilibrium means the resulting trajectory of the system stays close to the equilibrium. Notice that this definition is nearly identical to Theorem 10. That means stability of an equilibrium point is the same as saying the map from the initial condition to the resulting solution of the system is continuous at the equilibrium point.
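As a quick numerical illustration (not a proof), here is a minimal sketch, assuming numpy and scipy, that simulates an undamped oscillator, whose origin is a stable equilibrium, from several initial conditions on a small $\delta$-sphere and records the largest excursion of the trajectories.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Undamped oscillator: x1' = x2, x2' = -x1; the origin is a stable equilibrium
def f(t, x):
    return [x[1], -x[0]]

rng = np.random.default_rng(0)
delta = 0.1
worst = 0.0
for _ in range(20):
    x0 = rng.normal(size=2)
    x0 *= delta / np.linalg.norm(x0)          # start on the delta-sphere
    sol = solve_ivp(f, (0.0, 50.0), x0, max_step=0.01)
    worst = max(worst, np.linalg.norm(sol.y, axis=0).max())
print(worst)   # stays on the order of delta, consistent with Definition 36
```

For this system the norm of the state is conserved, so the excursion never exceeds $\delta$, and any $\delta \leq \epsilon$ works in Definition 36.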

Definition 37

An equilibrium point $\boldsymbol{x}_e\in\mathbb{R}^n$ is a uniformly stable equilibrium point in the sense of Lyapunov if and only if $\forall \epsilon > 0, \exists \delta(\epsilon)$ such that

$$\forall t \geq t_0,\ \|\boldsymbol{x}_0 - \boldsymbol{x}_e\| < \delta(\epsilon) \implies \|\boldsymbol{x}(t) - \boldsymbol{x}_e\| < \epsilon$$

Uniform stability means that the $\delta$ can be chosen independently of the time at which the system starts. Neither stability nor uniform stability implies convergence to the equilibrium point; they only guarantee the solution stays within a particular norm ball. Stricter notions of stability add this idea in.

Definition 38

An equilibrium point $\boldsymbol{x}_e$ is attractive if $\forall t_0 > 0,\ \exists c(t_0)$ such that

$$\boldsymbol{x}(t_0) \in B_c(\boldsymbol{x}_e) \implies \lim_{t\to\infty} \|\boldsymbol{x}(t, t_0, \boldsymbol{x}_0) - \boldsymbol{x}_e\| = 0$$

Attractive equilibria guarantee that trajectories beginning from initial conditions inside a ball will converge to the equilibrium. However, attractivity does not imply stability, since the trajectory could go arbitrarily far from the equilibrium so long as it eventually returns.

Definition 39

An equilibrium point $\boldsymbol{x}_e$ is asymptotically stable if $\boldsymbol{x}_e$ is stable in the sense of Lyapunov and attractive.

Asymptotic stability fixes the problem of attractivity where trajectories could go far from the equilibrium, and it fixes the problem with stability where the trajectory may not converge to equilibrium. It means that trajectories starting in a ball around equilibrium will converge to equilibrium without leaving that ball. Because the constant for attractivity may depend on time, defining uniform asymptotic stability requires some modifications to the idea of attractivity.

Definition 40

Definition 41

Just as we can define stability, we can also define instability.

Definition 42

Lyapunov Functions

In order to prove different types of stability, we will construct functions which have particular properties around equilibrium points of the system. The properties of these functions help determine what type of stability the equilibrium point has.

Definition 43

Definition 44

Definition 45

LPDF functions are locally “energy-like” in the sense that the equilibrium point is assigned the lowest “energy” value, and the larger the deviation from the equilibrium, the higher the value of the “energy”.

Definition 46

Definition 47

Decrescence means that for a ball around the equilibrium, we can upper bound the energy.

Quadratic Lyapunov Functions

Definition 48

A quadratic Lyapunov function is of the form

Theorem 14

Sum-of-Squares Lyapunov Functions

Definition 49

Theorem 15

A polynomial is SOS if and only if it can be written as a quadratic form $p(\boldsymbol{x}) = z^\top(\boldsymbol{x})Qz(\boldsymbol{x})$ in a vector of monomials $z(\boldsymbol{x})$ with $Q \succeq 0$.

Proving Stability

Theorem 16

Theorem 17

Theorem 18

Theorem 19

Theorem 20

The results of Theorem 16, Theorem 17, Theorem 18, Theorem 19, Theorem 20 are summarized in Table 1.

Theorem 21 (LaSalle's Invariance Principle)

Indirect Method of Lyapunov

Definition 50

The state transition matrix is useful in determining properties of the system.

Theorem 22 (Lyapunov Lemma)

Theorem 23 (Taussky Lemma)

The Lyapunov Lemma has extensions to the time-varying case.

Theorem 24 (Time-Varying Lyapunov Lemma)

It turns out that uniform asymptotic stability of the linearization of a system corresponds to uniform asymptotic stability of the nonlinear system.

Theorem 25 (Indirect Theorem of Lyapunov)

Proving Instability

Theorem 26

Region of Attraction

For asymptotically stable and exponentially stable equilibria, it is natural to ask which initial conditions will cause trajectories to converge to the equilibrium.

Definition 51

An equilibrium point $\boldsymbol{x}_e$ is uniformly asymptotically stable if $\boldsymbol{x}_e$ is uniformly stable in the sense of Lyapunov, and $\exists c$ and $\gamma:\mathbb{R}_+\times\mathbb{R}^n\to\mathbb{R}_+$ such that

$$\forall \boldsymbol{x}_0\in B_c(\boldsymbol{x}_e),\ \lim_{\tau\to\infty}\gamma(\tau, \boldsymbol{x}_0) = 0, \qquad \forall t\geq t_0,\ \|\boldsymbol{x}(t, t_0, \boldsymbol{x}_0) - \boldsymbol{x}_e\| \leq \gamma(t-t_0, \boldsymbol{x}_0)$$

The existence of the $\gamma$ function guarantees that the rate of convergence to the equilibrium does not depend on $t_0$, since $\gamma$ is independent of $t_0$. Suppose that $\gamma$ is an exponential function. Then solutions to the system converge to the equilibrium exponentially fast.

An equilibrium point $\boldsymbol{x}_e$ is locally exponentially stable if $\exists h, m, \alpha$ such that

$$\forall \boldsymbol{x}_0\in B_h(\boldsymbol{x}_e),\ \|\boldsymbol{x}(t, t_0, \boldsymbol{x}_0) - \boldsymbol{x}_e\| \leq me^{-\alpha(t - t_0)}\|\boldsymbol{x}_0 - \boldsymbol{x}_e\|$$

These are all local definitions because they only need to hold for $\boldsymbol{x}_0$ inside a ball around the equilibrium. If they hold $\forall \boldsymbol{x}_0\in\mathbb{R}^n$, then they become global properties.
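The exponential bound gives an explicit settling-time estimate. A minimal sketch, assuming numpy and hypothetical values of $m$, $\alpha$, and $\|\boldsymbol{x}_0 - \boldsymbol{x}_e\|$: solving $me^{-\alpha(t-t_0)}\|\boldsymbol{x}_0-\boldsymbol{x}_e\| \leq \mathrm{tol}$ for $t - t_0$ bounds how long the trajectory can take to enter a tolerance ball around the equilibrium.

```python
import numpy as np

# Hypothetical exponential stability certificate (m, alpha) and initial distance
m, alpha = 2.0, 0.5
dist0 = 0.3                      # ||x0 - xe||
tol = 1e-3                       # target distance from the equilibrium

# Smallest t - t0 for which m * exp(-alpha*(t - t0)) * dist0 <= tol
T = np.log(m * dist0 / tol) / alpha
print(T)                         # ~12.8 time units for these numbers
```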

An equilibrium point $\boldsymbol{x}_e$ is unstable in the sense of Lyapunov if $\exists \epsilon > 0$ such that $\forall \delta > 0$,

$$\exists \boldsymbol{x}_0\in B_\delta(\boldsymbol{x}_e),\ \exists T\geq t_0,\quad \boldsymbol{x}(T, t_0, \boldsymbol{x}_0) \not\in B_\epsilon(\boldsymbol{x}_e)$$

Instability means that there is some $\epsilon$-ball such that, no matter how small a $\delta$-ball we take around the equilibrium, at least one initial condition in the $\delta$-ball produces a trajectory that leaves the $\epsilon$-ball.

A class $\mathcal{K}$ function is a function $\alpha: \mathbb{R}_+ \to \mathbb{R}_+$ such that $\alpha(0) = 0$ and $\alpha(s)$ is strictly monotonically increasing in $s$.

A subset of the class $\mathcal{K}$ functions grows unbounded as the argument approaches infinity.

A class $\mathcal{KR}$ function is a class $\mathcal{K}$ function $\alpha$ where $\lim_{s\to\infty}\alpha(s) = \infty$.

Class $\mathcal{KR}$ functions are “radially unbounded”. We can use class $\mathcal{K}$ and class $\mathcal{KR}$ functions to bound “energy-like” functions called Lyapunov functions.

A function $V(\boldsymbol{x}, t): \mathbb{R}^n \times \mathbb{R}_+ \to \mathbb{R}$ is locally positive definite (LPDF) on a set $G\subset \mathbb{R}^n$ containing $\boldsymbol{x}_e$ if $\exists \alpha \in \mathcal{K}$ such that

$$\forall \boldsymbol{x}\in G,\ V(\boldsymbol{x}, t) \geq \alpha(\|\boldsymbol{x} - \boldsymbol{x}_e\|)$$

A function $V(\boldsymbol{x}, t): \mathbb{R}^n \times \mathbb{R}_+ \to \mathbb{R}$ is positive definite (PDF) if $\exists \alpha \in \mathcal{KR}$ such that

$$\forall \boldsymbol{x}\in\mathbb{R}^n,\ V(\boldsymbol{x}, t) \geq \alpha(\|\boldsymbol{x} - \boldsymbol{x}_e\|)$$

Positive definite functions act like “energy functions” everywhere in $\mathbb{R}^n$.

A function $V(\boldsymbol{x}, t): \mathbb{R}^n \times \mathbb{R}_+ \to \mathbb{R}$ is decrescent if $\exists h > 0$ and $\beta \in \mathcal{K}$ such that

$$\forall \boldsymbol{x}\in B_h(\boldsymbol{x}_e),\ V(\boldsymbol{x}, t) \leq \beta(\|\boldsymbol{x} - \boldsymbol{x}_e\|)$$

Note that we can assume $\boldsymbol{x}_e = 0$ without loss of generality for Definition 45, Definition 46, and Definition 47, since for a given system we can always define a change of variables that shifts the equilibrium point to the origin.

$$V(\boldsymbol{x}) = \boldsymbol{x}^\top P \boldsymbol{x},\quad P \succ 0$$

Quadratic Lyapunov functions are one of the simplest types of Lyapunov functions. Their level sets are ellipses whose major axis lies along the eigenvector corresponding to $\lambda_{min}(P)$ and whose minor axis lies along the eigenvector corresponding to $\lambda_{max}(P)$.

Consider the sublevel set $\Omega_c = \{ \boldsymbol{x} \mid V(\boldsymbol{x}) \leq c \}$. Then $r_*$ is the radius of the largest ball contained inside $\Omega_c$, and $r^*$ is the radius of the smallest ball containing $\Omega_c$.

$$r_* = \sqrt{\frac{c}{\lambda_{max}(P)}}, \qquad r^* = \sqrt{\frac{c}{\lambda_{min}(P)}}$$
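A minimal numerical sketch of these radii, assuming numpy and a hypothetical $P$:

```python
import numpy as np

P = np.array([[2.0, 0.5],
              [0.5, 1.0]])                # hypothetical P > 0
c = 1.0
lam = np.linalg.eigvalsh(P)               # eigenvalues in ascending order
r_inner = np.sqrt(c / lam[-1])            # r_*: largest ball inside Omega_c
r_outer = np.sqrt(c / lam[0])             # r^*: smallest ball containing Omega_c
print(r_inner, r_outer)
```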

A polynomial $p(\boldsymbol{x})$ is sum-of-squares (SOS) if $\exists g_1,\cdots,g_r$ such that

$$p(\boldsymbol{x}) = \sum_{i=1}^r g_i^2(\boldsymbol{x})$$

SOS polynomials have the nice property that they are always non-negative, since they are a sum of squares. Since any polynomial can be written in the quadratic form $p(\boldsymbol{x}) = z^\top(\boldsymbol{x}) Q z(\boldsymbol{x})$ where $z$ is a vector of monomials, the properties of $Q$ can tell us whether $p$ is SOS or not.

$$p(\boldsymbol{x}) = z^\top(\boldsymbol{x}) Q z(\boldsymbol{x}), \quad Q \succeq 0$$

Note that $Q$ is not necessarily unique, and if we construct a linear operator which maps $Q$ to $p$, then this linear operator will have a null space. Mathematically, consider

$$\mathcal{L}(Q)(\boldsymbol{x}) = z^\top(\boldsymbol{x})Qz(\boldsymbol{x}).$$

This linear operator has a null space spanned by matrices $N_j$ satisfying $z^\top(\boldsymbol{x})N_jz(\boldsymbol{x}) = 0$. Given a matrix $Q_0 \succeq 0$ such that $p(\boldsymbol{x}) = z^\top(\boldsymbol{x})Q_0z(\boldsymbol{x})$ (i.e., $p$ is SOS), it is also true that

$$p(\boldsymbol{x}) = z^\top(\boldsymbol{x})\left( Q_0 + \sum_{j} \lambda_j N_j \right) z(\boldsymbol{x}).$$

SOS polynomials are helpful in finding Lyapunov functions because we can use SOS programming to find SOS polynomials which satisfy desired properties. For example, if we want $V(\boldsymbol{x})$ to be PDF, then one constraint in our SOS program will be that

$$V(\boldsymbol{x}) - \epsilon \boldsymbol{x}^\top \boldsymbol{x} \text{ is SOS for some } \epsilon > 0.$$
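To make the Gram-matrix characterization concrete, here is a minimal sketch, assuming cvxpy with an SDP-capable solver (e.g., SCS), that checks whether the standard example $p(x,y) = 2x^4 + 2x^3y - x^2y^2 + 5y^4$ is SOS by searching for a positive semidefinite $Q$ in the monomial basis $z = [x^2,\ xy,\ y^2]$. The coefficient-matching constraints below were worked out by hand for this particular $p$.

```python
import cvxpy as cp

# Gram matrix for the monomial basis z = [x^2, x*y, y^2]
Q = cp.Variable((3, 3), symmetric=True)
constraints = [
    Q >> 0,                       # Q must be positive semidefinite
    Q[0, 0] == 2,                 # coefficient of x^4
    2 * Q[0, 1] == 2,             # coefficient of x^3 y
    2 * Q[0, 2] + Q[1, 1] == -1,  # coefficient of x^2 y^2
    2 * Q[1, 2] == 0,             # coefficient of x y^3
    Q[2, 2] == 5,                 # coefficient of y^4
]
prob = cp.Problem(cp.Minimize(0), constraints)  # pure feasibility problem
prob.solve()
print(prob.status)  # "optimal" means a PSD Gram matrix exists, so p is SOS
```

The search over $Q$ implicitly ranges over the family $Q_0 + \sum_j \lambda_j N_j$ described above.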

To prove the stability of an equilibrium point for a given nonlinear system, we construct a Lyapunov function and determine stability from the properties of the Lyapunov functions which we can find. Given properties of $V$ and $\frac{dV}{dt}$, we can use the Lyapunov stability theorems to prove the stability of equilibria.

If $\exists V(\boldsymbol{x}, t)$ such that $V$ is LPDF and $-\frac{dV}{dt} \geq 0$ locally, then $\boldsymbol{x}_e$ is stable in the sense of Lyapunov.

If $\exists V(\boldsymbol{x}, t)$ such that $V$ is LPDF and decrescent, and $-\frac{dV}{dt} \geq 0$ locally, then $\boldsymbol{x}_e$ is uniformly stable in the sense of Lyapunov.

If $\exists V(\boldsymbol{x}, t)$ such that $V$ is LPDF and decrescent, and $-\frac{dV}{dt}$ is LPDF, then $\boldsymbol{x}_e$ is uniformly asymptotically stable in the sense of Lyapunov.

If $\exists V(\boldsymbol{x}, t)$ such that $V$ is PDF and decrescent, and $-\frac{dV}{dt}$ is PDF, then $\boldsymbol{x}_e$ is globally uniformly asymptotically stable in the sense of Lyapunov.

If $\exists V(\boldsymbol{x}, t)$ and $h, \alpha > 0$ such that $V$ is LPDF and decrescent, $-\frac{dV}{dt}$ is LPDF, and

$$\forall \boldsymbol{x}\in B_h(\boldsymbol{x}_e),\ \left\lvert\frac{dV}{dt}\right\rvert \leq \alpha \|\boldsymbol{x}-\boldsymbol{x}_e\|,$$

then $\boldsymbol{x}_e$ is exponentially stable.

Going down the rows of Table 1 leads to increasingly stricter forms of stability. Decrescence adds uniformity to the stability, while $-\frac{dV}{dt}$ being LPDF adds asymptotic convergence. However, these conditions are only sufficient, meaning that if we cannot find a suitable $V$, that does not mean the equilibrium point is not stable.
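As an illustration of checking these hypotheses, here is a small symbolic sketch, assuming sympy and a hypothetical system, that computes $\frac{dV}{dt}$ along trajectories for a candidate quadratic $V$:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
# Hypothetical system: x1' = -x1 + x2, x2' = -x1 - x2**3
f1, f2 = -x1 + x2, -x1 - x2**3
V = x1**2 + x2**2                              # candidate Lyapunov function (PDF, decrescent)
Vdot = sp.expand(sp.diff(V, x1) * f1 + sp.diff(V, x2) * f2)
print(Vdot)                                    # -2*x1**2 - 2*x2**4
```

For this example $-\frac{dV}{dt} = 2x_1^2 + 2x_2^4$ is positive definite, so with $V$ PDF and decrescent, Theorem 19 gives global asymptotic stability of the origin.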

One very common case where it can be difficult to find an appropriate Lyapunov function is proving asymptotic stability, since it can be hard to find a $V$ such that $-\frac{dV}{dt}$ is LPDF. In the case of autonomous systems, we can still prove asymptotic stability without such a $V$.

Consider a smooth function $V:\mathbb{R}^n\to\mathbb{R}$ with bounded sublevel sets $\Omega_c = \left\{\boldsymbol{x} \mid V(\boldsymbol{x}) \leq c \right\}$ such that $\frac{dV}{dt} \leq 0$ for all $\boldsymbol{x}\in \Omega_c$. Define $S = \left\{\boldsymbol{x}\in\Omega_c \mid \frac{dV}{dt} = 0\right\}$ and let $M$ be the largest invariant set contained in $S$. Then

$$\forall \boldsymbol{x}_0\in \Omega_c,\ \boldsymbol{x}(t, t_0, \boldsymbol{x}_0) \to M \text{ as } t\to \infty.$$

LaSalle’s theorem helps prove general convergence to an invariant set. Since $V$ is non-increasing in the sublevel set $\Omega_c$, trajectories starting in $\Omega_c$ must approach $S$. Eventually they reach the set $M$ inside $S$ and stay there. Thus if the set $M$ is only the equilibrium point, or a set of equilibrium points, then we can show that the system trajectories asymptotically converge to this equilibrium or set of equilibria. Moreover, if $V(\boldsymbol{x})$ is PDF and $\frac{dV}{dt} \leq 0$ for all $\boldsymbol{x}\in\mathbb{R}^n$, then we can show global asymptotic stability as well.

LaSalle’s theorem can be generalized to non-autonomous systems as well, but it is slightly more complicated since the set $S$ may change over time.
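A classic use of LaSalle’s theorem is the damped pendulum with its mechanical energy as $V$, where $\frac{dV}{dt} = -bx_2^2$ is only negative semidefinite. Below is a minimal simulation sketch, assuming numpy/scipy and hypothetical parameter values, showing the energy decreasing and the trajectory approaching the origin, consistent with $M$ being the equilibrium:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Damped pendulum: x1' = x2, x2' = -sin(x1) - b*x2, with V = 0.5*x2^2 + 1 - cos(x1).
# dV/dt = -b*x2^2 <= 0, so LaSalle applies with S = {x : x2 = 0}; the largest
# invariant set in S inside a small sublevel set is just the origin.
b = 0.5
def f(t, x):
    return [x[1], -np.sin(x[0]) - b * x[1]]

sol = solve_ivp(f, (0.0, 60.0), [1.0, 0.0], rtol=1e-9, atol=1e-12, max_step=0.05)
V = 0.5 * sol.y[1]**2 + 1.0 - np.cos(sol.y[0])
print(V[0], V[-1])       # V decreases along the trajectory
print(sol.y[:, -1])      # the state approaches the equilibrium (0, 0)
```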

It turns out that we can also prove the stability of systems by looking at the linearization around the equilibrium. Without loss of generality, suppose $\boldsymbol{x}_e = 0$. The linearization at the equilibrium is given by

$$\frac{d\boldsymbol{x}}{dt} = f(\boldsymbol{x},t) = f(0, t) + \frac{\partial f}{\partial \boldsymbol{x}}\bigg\rvert_{\boldsymbol{x} = 0}\boldsymbol{x} + f_1(\boldsymbol{x},t) \approx A(t)\boldsymbol{x}.$$

The function $f_1(\boldsymbol{x}, t)$ collects the higher-order terms of the linearization. In general, the linearization is a time-varying linear system. Consider the time-varying linear system

$$\frac{d\boldsymbol{x}}{dt} = A(t)\boldsymbol{x},\quad \boldsymbol{x}(t_0) = \boldsymbol{x}_0.$$

The state transition matrix $\Phi(t, t_0)$ of a time-varying linear system is the matrix satisfying

$$\boldsymbol{x}(t) = \Phi(t, t_0)\boldsymbol{x}_0,\quad \frac{d\Phi}{dt} = A(t)\Phi(t, t_0),\quad \Phi(t_0, t_0) = I$$

$\sup_{t\geq t_0} \|\Phi(t, t_0)\| = m(t_0) < \infty \implies$ the system is stable at the origin at $t_0$.

$\sup_{t_0\geq 0}\sup_{t\geq t_0} \|\Phi(t, t_0)\| = m < \infty \implies$ the system is uniformly stable at the origin.

$\lim_{t\to\infty}\|\Phi(t, t_0)\| = 0 \implies$ the system is asymptotically stable.

$\forall t_0, \epsilon>0, \exists T$ such that $\forall t\geq t_0 + T,\ \|\Phi(t, t_0)\| < \epsilon \implies$ the system is uniformly asymptotically stable.

$\|\Phi(t, t_0)\| \leq Me^{-\lambda(t-t_0)} \implies$ the system is exponentially stable.
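These conditions can be explored numerically by integrating $\frac{d\Phi}{dt} = A(t)\Phi$ from $\Phi(t_0, t_0) = I$. A minimal sketch, assuming numpy/scipy and a hypothetical $A(t)$:

```python
import numpy as np
from scipy.integrate import solve_ivp

def A(t):
    # hypothetical time-varying system matrix
    return np.array([[0.0, 1.0], [-2.0, -1.0 - 0.5 * np.sin(t)]])

def phi_dot(t, phi_flat):
    return (A(t) @ phi_flat.reshape(2, 2)).ravel()

t0 = 0.0
t_eval = np.linspace(t0, 30.0, 301)
sol = solve_ivp(phi_dot, (t0, 30.0), np.eye(2).ravel(), t_eval=t_eval,
                rtol=1e-8, atol=1e-10)
norms = [np.linalg.norm(sol.y[:, k].reshape(2, 2), 2) for k in range(len(t_eval))]
print(max(norms))   # bounded over t is consistent with stability at the origin
print(norms[-1])    # decay toward 0 is consistent with asymptotic stability
```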

If the system is time-invariant, then it is stable so long as the eigenvalues of $A$ are in the open left half of the complex plane. In fact, we can use $A$ to construct positive definite matrices.

For a matrix $A\in \mathbb{R}^{n\times n}$, its eigenvalues $\lambda_i$ satisfy $\text{Re}(\lambda_i) < 0$ if and only if $\forall Q \succ 0$, there exists a solution $P\succ 0$ to the equation

$$A^TP + PA = -Q.$$
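A minimal numerical sketch of the Lyapunov Lemma, assuming scipy and a hypothetical Hurwitz matrix $A$. Note that scipy's `solve_continuous_lyapunov(a, q)` solves $aX + Xa^\top = q$, so we pass $A^\top$ and $-Q$:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])     # hypothetical matrix with eigenvalues -1, -2
Q = np.eye(2)

# Solve A^T P + P A = -Q
P = solve_continuous_lyapunov(A.T, -Q)
print(np.linalg.eigvalsh(P))     # all positive, so P > 0 as the lemma predicts
print(A.T @ P + P @ A + Q)       # residual is (numerically) zero
```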

In general, we can use the Lyapunov equation to count how many eigenvalues of $A$ are stable.

For $A\in\mathbb{R}^{n\times n}$ and a given $Q \succ 0$, if $A$ has no eigenvalues on the $j\omega$ axis, then the solution $P$ to $A^TP + PA = -Q$ has as many positive eigenvalues as $A$ has eigenvalues in the open left half of the complex plane.

If $A(\cdot)$ is bounded and, for some $Q(t) \succeq \alpha I$, the solution $P(t)$ to $A(t)^TP(t) + P(t)A(t) = -Q(t)$ is bounded, then the origin is an asymptotically stable equilibrium point.

For a nonlinear system whose higher-order terms of the linearization are given by $f_1(\boldsymbol{x},t)$, if

$$\lim_{\|\boldsymbol{x}\|\to 0}\sup_{t\geq 0} \frac{\|f_1(\boldsymbol{x},t)\|}{\|\boldsymbol{x}\|} = 0$$

and if $\boldsymbol{x}_e$ is a uniformly asymptotically stable equilibrium point of $\frac{d\boldsymbol{z}}{dt}=A(t)\boldsymbol{z}$, where $A(t)$ is the Jacobian of $f$ at $\boldsymbol{x}_e$, then $\boldsymbol{x}_e$ is a uniformly asymptotically stable equilibrium point of $\frac{d\boldsymbol{x}}{dt} = f(\boldsymbol{x},t)$.
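In the time-invariant special case, the indirect method reduces to checking the eigenvalues of the Jacobian at the equilibrium. A minimal sketch, assuming numpy and a hypothetical damped pendulum:

```python
import numpy as np

# Hypothetical damped pendulum: x1' = x2, x2' = -sin(x1) - 0.5*x2, equilibrium xe = (0, 0)
def f(x):
    return np.array([x[1], -np.sin(x[0]) - 0.5 * x[1]])

# Jacobian of f evaluated at xe, written out by hand
A = np.array([[0.0, 1.0],
              [-np.cos(0.0), -0.5]])
eigs = np.linalg.eigvals(A)
print(eigs)                     # complex pair with real part -0.25
print(np.all(eigs.real < 0))    # True: the linearization is exponentially stable,
                                # so xe is locally asymptotically stable
```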

An equilibrium point $\boldsymbol{x}_e$ is unstable in the sense of Lyapunov if $\exists V(\boldsymbol{x},t)$ which is decrescent, its Lie derivative $\frac{dV}{dt}$ is LPDF, $V(\boldsymbol{x}_e, t) = 0$, and there exist points $\boldsymbol{x}_0$ arbitrarily close to $\boldsymbol{x}_e$ such that $V(\boldsymbol{x}_0, t_0) > 0$.

If $\boldsymbol{x}_e$ is an equilibrium point of a time-invariant system $\frac{d\boldsymbol{x}}{dt} = f(\boldsymbol{x})$, then the region of attraction of $\boldsymbol{x}_e$ is

$$\mathcal{R}_A(\boldsymbol{x}_e) = \left\{ \boldsymbol{x}_0 \in \mathbb{R}^n \mid \lim_{t\to\infty} \boldsymbol{x}(t, t_0, \boldsymbol{x}_0) = \boldsymbol{x}_e \right\}$$

Suppose that we have a Lyapunov function $V(\boldsymbol{x})$ and a region $D$ such that $V(\boldsymbol{x}) > 0$ and $\frac{dV}{dt} < 0$ in $D$ (away from $\boldsymbol{x}_e$). Define a sublevel set $\Omega_c$ of the Lyapunov function which is a subset of $D$. We know that if $\boldsymbol{x}_0\in\Omega_c$, then the trajectory will stay inside $\Omega_c$ and converge to the equilibrium point. Thus we can use the largest $\Omega_c$ that is compact and contained in $D$ as an estimate of the region of attraction.

When we have a quadratic Lyapunov function, we can take $D$ to be the largest ball on which the conditions on $V$ hold, and the largest $\Omega_c$ contained inside $D$ is then the estimate of the region of attraction.
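Concretely, by Theorem 14 the largest sublevel set $\Omega_c$ contained in a ball of radius $r$ around the equilibrium is obtained with $c = \lambda_{min}(P)\,r^2$. A minimal sketch, assuming numpy and hypothetical values of $P$ and $r$ (with $\frac{dV}{dt} < 0$ already verified on that ball):

```python
import numpy as np

P = np.array([[2.0, 0.5],
              [0.5, 1.0]])                  # hypothetical P > 0 defining V(x) = x^T P x
r = 0.8                                     # radius of the ball D where dV/dt < 0
c = np.linalg.eigvalsh(P)[0] * r**2         # largest c with Omega_c contained in B_r
print(c)
# Omega_c = {x : x^T P x <= c} is the estimated region of attraction
```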

We can find even better approximations of the region of attraction using SOS programming. Suppose we have a $V$ which we used to prove asymptotic stability. If there exists an $s$ which satisfies the following SOS program, then the sublevel set $\Omega_c$ is an estimate of the region of attraction.

$$\begin{aligned} \max_{c, s} &\quad c\\ \text{s.t.} &\quad s(\boldsymbol{x}) \text{ is SOS,}\\ &\quad -\left(\frac{dV}{dt} + \epsilon \boldsymbol{x}^\top \boldsymbol{x}\right) + s(\boldsymbol{x})(c - V(\boldsymbol{x})) \text{ is SOS.}\end{aligned}$$

Table 1: Summary of Lyapunov Stability Theorems