Nonlinear System Dynamics

Consider the nonlinear system

$$\frac{d\boldsymbol{x}}{dt} = f(\boldsymbol{x}, t).$$

$f$ is a vector field which potentially changes with time and governs how the system evolves.

Definition 30

The system is autonomous if $f(\boldsymbol{x}, t)$ is not explicitly dependent on time $t$.

Definition 31

A point $\boldsymbol{x}_0$ is an equilibrium point at time $t_0$ if

$$\forall t \geq t_0, \quad f(\boldsymbol{x}_0, t) = 0.$$
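For example (a standard illustration, not from the original notes), the undamped pendulum with state $\boldsymbol{x} = (x_1, x_2)$ (angle and angular velocity),

$$\frac{dx_1}{dt} = x_2, \qquad \frac{dx_2}{dt} = -\sin x_1,$$

is an autonomous nonlinear system, and its equilibrium points are exactly the points $(k\pi, 0)$ for $k \in \mathbb{Z}$, since these are the points where both components of $f$ vanish.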

Consider a single trajectory $\phi(t, t_0, \boldsymbol{x}_0)$, the state at time $t$ of the solution starting from $\boldsymbol{x}_0$ at time $t_0$.

Definition 32

A set $S$ is said to be the $\omega$-limit set of $\phi$ if

$$\forall \boldsymbol{y}\in S,\ \exists t_n\to \infty \text{ such that } \lim_{n\to\infty}\phi(t_n, t_0, \boldsymbol{x}_0) = \boldsymbol{y}.$$

Whereas linear systems converge to a single point if they converge at all, nonlinear systems can converge to a set of points. Thus the $\omega$-limit set essentially generalizes the idea of a limit.
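For example (an illustration of my own), the system written in polar coordinates as

$$\frac{dr}{dt} = r(1 - r^2), \qquad \frac{d\theta}{dt} = 1$$

drives every trajectory with $r(0) > 0$ toward the unit circle, so the $\omega$-limit set of any such trajectory is the entire circle $\{r = 1\}$ rather than a single point. The unit circle is also an invariant set and a closed orbit in the sense of the next two definitions.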

Definition 33

A set $M\subset \mathbb{R}^n$ is said to be invariant if

$$\forall t\geq t_0,\ \boldsymbol{y}\in M \implies \phi(t, t_0, \boldsymbol{y}) \in M.$$

An invariant set is one which a trajectory of the system will never leave once it enters the set. Just like linear systems, nonlinear systems can also have periodic solutions.

Definition 34

A closed orbit $\gamma$ is a trajectory of the system such that $\gamma(0) = \gamma(T)$ for some finite $T > 0$.

Solutions to Nonlinear Systems

Consider the nonlinear system

$$\frac{d\boldsymbol{x}}{dt} = f(\boldsymbol{x}, t),\qquad \boldsymbol{x}(t_0) = \boldsymbol{x}_0\in \mathbb{R}^n.$$

Definition 35

A function $\boldsymbol{\Phi}(t)$ is a solution to $\frac{d\boldsymbol{x}}{dt} = f(\boldsymbol{x}, t),\ \boldsymbol{x}(t_0) = \boldsymbol{x}_0$ on the closed interval $[t_0, t_1]$ if $\boldsymbol{\Phi}(t)$ is defined on $[t_0, t_1]$, $\frac{d\boldsymbol{\Phi}}{dt} = f(\boldsymbol{\Phi}(t), t)$ for all $t\in[t_0, t_1]$, and $\boldsymbol{\Phi}(t_0) = \boldsymbol{x}_0$.

We say that $\boldsymbol{\Phi}(t)$ is a solution in the sense of Carathéodory if

$$\boldsymbol{\Phi}(t) = \boldsymbol{x}_0 + \int_{t_0}^t f(\boldsymbol{\Phi}(\tau), \tau)\,d\tau.$$

Because the system is nonlinear, it could potentially have no solution, one solution, or many solutions. These solutions could exist locally, or they could exist for all time. We might also want to know when there is a solution which depends continuously on the initial conditions.

Theorem 7 (Local Existence and Uniqueness)

Given $\frac{d\boldsymbol{x}}{dt} = f(\boldsymbol{x}, t),\ \boldsymbol{x}(t_0) = \boldsymbol{x}_0\in\mathbb{R}^n$ where $f$ is piecewise continuous in $t$, suppose $\exists T>t_0$ such that $\forall t\in [t_0, T]$, $f$ is $L$-Lipschitz continuous in $\boldsymbol{x}$. Then $\exists \delta > 0$ such that a solution exists and is unique $\forall t\in [t_0, t_0 + \delta]$.

Theorem 7 can be proved using the Contraction Mapping Theorem (Theorem 2) by finding $\delta$ such that the function $P:C_n[t_0, t_0+\delta] \to C_n[t_0, t_0+\delta]$ given by

$$P(\boldsymbol{\Phi})(t) = \boldsymbol{x}_0 + \int_{t_0}^{t} f(\boldsymbol{\Phi}(\tau), \tau)\,d\tau$$

is a contraction under the norm $\|\boldsymbol{\Phi}\|_\infty = \sup_{t_0\leq t \leq t_0+\delta} \|\boldsymbol{\Phi}(t)\|$.
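As a rough numerical illustration of this construction (the scalar example $\dot{x} = -x^2$, $x(0) = 1$ is my own choice, not from the notes), the sketch below applies a discretized version of the map $P$ repeatedly until the iterates stop changing; the resulting fixed point approximates the exact solution $x(t) = 1/(1+t)$.

```python
import numpy as np

# Minimal sketch of Picard iteration: repeatedly apply
#   P(phi)(t) = x0 + integral_{t0}^{t} f(phi(tau), tau) dtau
# on a grid over [t0, t0 + delta] for the scalar system x' = -x^2, x(0) = 1.

def f(x, t):
    return -x**2

t0, x0, delta = 0.0, 1.0, 0.5
ts = np.linspace(t0, t0 + delta, 201)

phi = np.full_like(ts, x0)              # initial guess: the constant function x0
for _ in range(20):
    integrand = f(phi, ts)
    # cumulative trapezoidal approximation of the integral from t0 to each t
    integral = np.concatenate(([0.0], np.cumsum(
        0.5 * (integrand[1:] + integrand[:-1]) * np.diff(ts))))
    phi_next = x0 + integral
    if np.max(np.abs(phi_next - phi)) < 1e-10:   # iterates have converged
        break
    phi = phi_next

# the fixed point of P approximates the exact solution x(t) = 1 / (1 + t)
print(np.max(np.abs(phi - 1.0 / (1.0 + ts))))    # small residual
```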

Theorem 8 (Global Existence and Uniqueness)

Suppose $f(\boldsymbol{x}, t)$ is piecewise continuous in $t$ and $\forall T\in [t_0, \infty)$, $\exists L_T < \infty$ such that $\|f(\boldsymbol{x}, t) - f(\boldsymbol{y}, t)\| \leq L_T\|\boldsymbol{x} - \boldsymbol{y}\|$ for all $\boldsymbol{x}, \boldsymbol{y} \in \mathbb{R}^n$ and all $t\in[t_0, T]$. Then the nonlinear system has exactly one solution on $[t_0, T]$.
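To see why the global Lipschitz condition matters, consider the scalar system $\dot{x} = x^2$ with $x(0) = 1$. Here $f(x) = x^2$ is locally but not globally Lipschitz, and the unique local solution $x(t) = \frac{1}{1-t}$ escapes to infinity as $t \to 1$, so no solution exists on any interval containing $t = 1$.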

Once we know that solutions to a nonlinear system exist, we can sometimes bound them.

Theorem 9 (Bellman-Gronwall Lemma)

Suppose $\lambda\in\mathbb{R}$ is a constant and $\mu:[a,b]\to\mathbb{R}$ is continuous and non-negative. Then for a continuous function $y:[a, b]\to\mathbb{R}$,

$$y(t) \leq \lambda + \int_a^t \mu(\tau)y(\tau)\,d\tau \implies y(t) \leq \lambda \exp\left(\int_a^t\mu(\tau)\,d\tau\right).$$
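For instance, if $\|f(\boldsymbol{x}, t)\| \leq L\|\boldsymbol{x}\|$ (as happens when $f$ is $L$-Lipschitz in $\boldsymbol{x}$ and $f(0, t) = 0$), then the Carathéodory form gives $\|\boldsymbol{x}(t)\| \leq \|\boldsymbol{x}_0\| + \int_{t_0}^t L\|\boldsymbol{x}(\tau)\|\,d\tau$, and applying the lemma with $\lambda = \|\boldsymbol{x}_0\|$ and $\mu \equiv L$ yields the exponential bound $\|\boldsymbol{x}(t)\| \leq \|\boldsymbol{x}_0\|e^{L(t - t_0)}$.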

Another thing we might want to do is understand how the nonlinear system reacts to changes in the initial condition.

Theorem 10

Suppose the system $\frac{d\boldsymbol{x}}{dt} = f(\boldsymbol{x}, t),\ \boldsymbol{x}(t_0) = \boldsymbol{x}_0$ satisfies the conditions for global existence and uniqueness (Theorem 8). Fix $T\in[t_0, \infty)$ and suppose $\boldsymbol{x}(\cdot)$ and $\boldsymbol{z}(\cdot)$ are two solutions satisfying $\frac{d\boldsymbol{x}}{dt} = f(\boldsymbol{x}(t), t),\ \boldsymbol{x}(t_0) = \boldsymbol{x}_0$ and $\frac{d\boldsymbol{z}}{dt} = f(\boldsymbol{z}(t), t),\ \boldsymbol{z}(t_0)=\boldsymbol{z}_0$. Then $\forall \epsilon > 0, \exists \delta > 0$ such that

$$\|\boldsymbol{x}_0 - \boldsymbol{z}_0\| < \delta \implies \|\boldsymbol{x} - \boldsymbol{z}\|_{\infty} < \epsilon,$$ where $\|\cdot\|_\infty$ denotes the sup norm over $[t_0, T]$.

Theorem 10 is best understood by defining a function $\Psi:\mathbb{R}^n \to C_n[t_0, T]$ where $\Psi(\boldsymbol{x}_0)$ is the solution to the system given the initial condition $\boldsymbol{x}_0$. If the conditions of Theorem 10 are satisfied, then the function $\Psi$ will be continuous.
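A sketch of where the continuity comes from (a standard Bellman-Gronwall argument): subtracting the Carathéodory forms of the two solutions gives $\|\boldsymbol{x}(t) - \boldsymbol{z}(t)\| \leq \|\boldsymbol{x}_0 - \boldsymbol{z}_0\| + \int_{t_0}^t L_T\|\boldsymbol{x}(\tau) - \boldsymbol{z}(\tau)\|\,d\tau$, so Theorem 9 yields $\|\boldsymbol{x}(t) - \boldsymbol{z}(t)\| \leq \|\boldsymbol{x}_0 - \boldsymbol{z}_0\|e^{L_T(T - t_0)}$ on $[t_0, T]$. Choosing $\delta = \epsilon e^{-L_T(T - t_0)}$ then gives the claim.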

Planar Dynamical Systems

Planar dynamical systems are those with 2 state variables. Suppose we linearize the autonomous system $\frac{d\boldsymbol{x}}{dt} = f(\boldsymbol{x})$ at an equilibrium point $\boldsymbol{x}_0$.

$$\frac{d\boldsymbol{x}}{dt} = \frac{\partial f}{\partial \boldsymbol{x}} \bigg\lvert_{\boldsymbol{x}_0}\boldsymbol{x}$$

Depending on the eigenvalues of the Jacobian $\frac{\partial f}{\partial \boldsymbol{x}}$, we get several cases for how this linear system behaves (here $\boldsymbol{x}$ denotes the deviation from the equilibrium $\boldsymbol{x}_0$). We'll let $z_1$ and $z_2$ be coordinates with respect to the eigenbasis (or real Jordan basis) of the phase space. A short code sketch after the list classifies these cases from the eigenvalues.

  1. The eigenvalues are real and distinct, yielding solutions $z_1 = z_1(0)e^{\lambda_1 t},\ z_2 = z_2(0)e^{\lambda_2 t}$. If we eliminate the time variable, we can plot the trajectories of the system.

    $$\frac{z_1}{z_1(0)} = \left(\frac{z_2}{z_2(0)}\right)^{\frac{\lambda_1}{\lambda_2}}$$

    1. When $\lambda_1, \lambda_2 < 0$, all trajectories converge to the origin, so we call this a stable node.

    2. When $\lambda_1, \lambda_2 > 0$, all trajectories diverge from the origin, so we call this an unstable node.

    3. When $\lambda_1 < 0 < \lambda_2$, the trajectories will converge to the origin along the axis corresponding to $\lambda_1$ and diverge along the axis corresponding to $\lambda_2$, so we call this a saddle point.

  2. There is a single repeated eigenvalue $\lambda$ with only one eigenvector. As before, we can eliminate the time variable and plot the trajectories on the $z_1$, $z_2$ axes.

    1. When $\lambda < 0$, the trajectories will converge to the origin, so we call it an improper stable node.

    2. When $\lambda > 0$, the trajectories will diverge from the origin, so we call it an improper unstable node.

  3. When there is a complex pair of eigenvalues $\alpha \pm j\beta$, the linear system will have oscillatory behavior. The real Jordan form of $\frac{\partial f}{\partial \boldsymbol{x}}$ will look like

    $$\frac{\partial f}{\partial \boldsymbol{x}} = \begin{bmatrix} \alpha & \beta \\ -\beta & \alpha \end{bmatrix}.$$

    The parameter $\beta$ will determine the direction of the trajectories (clockwise if positive).

    1. When $\alpha < 0$, the trajectories will spiral towards the origin, so we call it a stable focus.

    2. When $\alpha = 0$, the trajectories will remain at a constant radius from the origin, so we call it a center.

    3. When $\alpha > 0$, the trajectories will spiral away from the origin, so we call it an unstable focus.
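The following Python sketch (my own, not from the notes) classifies an equilibrium of a planar linearization from the eigenvalues of its Jacobian; it does not attempt to distinguish improper nodes from ordinary ones.

```python
import numpy as np

# Rough classification of the equilibrium of a planar linearization
# dx/dt = A x from the eigenvalues of the Jacobian A.

def classify(A):
    eigs = np.linalg.eigvals(A)
    re, im = eigs.real, eigs.imag
    if np.any(np.abs(im) > 1e-12):            # complex pair alpha +/- j*beta
        if np.all(re < 0):
            return "stable focus"
        if np.all(re > 0):
            return "unstable focus"
        return "center"
    if np.all(re < 0):
        return "stable node"
    if np.all(re > 0):
        return "unstable node"
    if re.min() < 0 < re.max():
        return "saddle point"
    return "degenerate (zero eigenvalue)"

print(classify(np.array([[0.0, 1.0], [-1.0, -0.5]])))  # stable focus
print(classify(np.array([[1.0, 0.0], [0.0, -2.0]])))   # saddle point
```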

It turns out that understanding the linear dynamics at equilibrium points can be helpful in understanding the nonlinear dynamics near equilibrium points.

Theorem 11 (Hartman-Grobman Theorem)

If the linearization of a planar dynamical system $\frac{d\boldsymbol{x}}{dt} = f(\boldsymbol{x})$ at an equilibrium point $\boldsymbol{x}_0$ has no zero or purely imaginary eigenvalues, then there exists a homeomorphism $h$ from a neighborhood $U$ of $\boldsymbol{x}_0$ into $\mathbb{R}^2$ which maps trajectories of the nonlinear system onto trajectories of the linearization and satisfies $h(\boldsymbol{x}_0) = 0$. Moreover, the homeomorphism can be chosen to preserve the parameterization by time.

Theorem 11 essentially says that the linear dynamics predict the nonlinear dynamics around equilibria, but only within a neighborhood of the equilibrium point. Outside of this neighborhood, the linearization may be very wrong.
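Returning to the pendulum example from earlier, the Jacobian at the equilibrium $(\pi, 0)$ is $\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$ with eigenvalues $\pm 1$, so Theorem 11 applies and the nonlinear flow near $(\pi, 0)$ looks like a saddle. At $(0, 0)$ the Jacobian is $\begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}$ with purely imaginary eigenvalues $\pm j$, so the theorem gives no conclusion there.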

Suppose that we have a simply connected region $D$ (meaning $D$ has no holes: every closed curve in $D$ can be continuously contracted to a point within $D$) and we want to know if it contains a closed orbit.

Theorem 12 (Bendixson's Theorem)

If $\text{div}(f)$ is not identically zero in any sub-region of $D$ and does not change sign in $D$, then $D$ contains no closed orbits.
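For example, for the damped pendulum $\dot{x}_1 = x_2,\ \dot{x}_2 = -\sin x_1 - cx_2$ with damping $c > 0$, we have $\text{div}(f) = \frac{\partial x_2}{\partial x_1} + \frac{\partial(-\sin x_1 - cx_2)}{\partial x_2} = -c$, which is nonzero everywhere and never changes sign, so no simply connected region of the plane contains a closed orbit.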

Theorem 12 lets us rule out closed orbits in regions of $\mathbb{R}^2$. If instead we have a compact, positively invariant region, the next theorem lets us conclude that it contains a closed orbit.

Theorem 13 (Poincaré-Bendixson Theorem)

If $M$ is a compact, positively invariant set for the flow $\phi_t(\boldsymbol{x})$ and $M$ contains no equilibrium points, then $M$ contains a limit cycle.
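As an illustration, take the polar-coordinate system from the $\omega$-limit set example above, $\dot{r} = r(1-r^2),\ \dot{\theta} = 1$. The annulus $M = \{1/2 \leq r \leq 2\}$ is compact and positively invariant (since $\dot{r} > 0$ on $r = 1/2$ and $\dot{r} < 0$ on $r = 2$), and it contains no equilibrium points, so Theorem 13 guarantees a closed orbit inside $M$; indeed the unit circle is one.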
