Berkeley Notes
Modeling Systems


Last updated 3 years ago


Systems are most easily modeled using systems of linear constant coefficient differential equations. They can be represented either as a set of state-space equations or as a transfer function in the Laplace domain.

Electrical and Mechanical Systems

Electrical Systems

In electrical systems, there are three basic components: resistors, capacitors, and inductors. See Table 1 for their Laplace domain relationships. Around any closed loop, $\sum V = 0$ by Kirchhoff's Voltage Law, and at any electrical node, $\sum I_{in} = \sum I_{out}$ by Kirchhoff's Current Law.

Mechanical Systems

In mechanical systems, there are also three basic components: dampers, springs, and masses, along with their rotational counterparts. See Table 1 for their Laplace domain relationships. At a massless node, $\sum F = 0$ by Newton's 2nd law. Because dampers and springs are considered massless, the forces at the two ends of a damper or spring must be equal. Rotational systems can also contain a gear train; rotational impedances are reflected through a gear train by multiplying by $\frac{N^2_{dest}}{N^2_{source}}$.

Electro-Mechanical Equivalence

It turns out that electrical and mechanical systems are analogous to each other. In other words, given an electrical system, we can convert it into a mechanical system and vice versa. Capacitors act like springs as energy storage, resistors act like dampers which dissipate energy, and inductors act like inertial masses which resist movement. These are clear from their force/voltage differential equations (in the Laplace domain) in Table 1. Under these analogies, forces are like voltages, currents are like velocities, and charge is like position.
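To illustrate the analogy, the sketch below (with arbitrary, made-up component values) compares the step response of a mass-spring-damper with that of a series RLC circuit driven by a voltage source, taking charge as the output; under the mapping $L \leftrightarrow m$, $R \leftrightarrow b$, $1/C \leftrightarrow k$ the two responses coincide.

```python
import numpy as np
from scipy.signal import lti, step

# Mass-spring-damper: m x'' + b x' + k x = F  ->  X(s)/F(s) = 1/(m s^2 + b s + k)
m, b, k = 1.0, 0.5, 2.0
mech = lti([1.0], [m, b, k])

# Series RLC with charge q as output: L q'' + R q' + q/C = v
# ->  Q(s)/V(s) = 1/(L s^2 + R s + 1/C)
L, R, C = m, b, 1.0 / k          # analogy: L <-> m, R <-> b, 1/C <-> k
elec = lti([1.0], [L, R, 1.0 / C])

t = np.linspace(0, 20, 500)
_, x = step(mech, T=t)           # position response to a step force
_, q = step(elec, T=t)           # charge response to a step voltage
print(np.allclose(x, q))         # True: identical dynamics
```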

Linearization

Because non-linear systems often have dynamics which are complicated to analyze, a standard trick to make them simpler is to linearize them.

Definition 5

Using Definition 5, we can see that around our operating point, Equation 1 holds.

State-Space Equations

Definition 6

System variables are variables which depend on either the input or the system's internal state.

Definition 7

We can easily go from State-Space Equations to a transfer function via the Unilateral Laplace transform. Taking the Laplace transform of both sides of Equation 2 and Equation 3,

Phase Variable Form

We can also derive state space equations from their transfer functions. First, we assume that the transfer function comes from the LCCDE

meaning our transfer function will be of the form

Using this intermediary variable, we can now let

Converting this back to the time-domain,

When we do control in State-Space Control, this makes it easier to place the system poles where we want them to be.

Time Domain Solution

Notice that

Combining these two equations, we see that

Notice that Equation 7 is broken into two pieces.

Definition 8

The zero-input response is how the system will behave when no input is supplied.

Definition 9

Controllability

Definition 10

By the Cayley-Hamilton Theorem (see Cayley-Hamilton),

Definition 11

The controllability matrix is

Theorem 1

Observability

Definition 12

Definition 13

The observability matrix is

A theorem analogous to Theorem 1 exists for observability.

Theorem 2

Time Delays

Linearization is when a nonlinear system $f(\mathbf{x})$ is approximated by the first two terms of its Taylor series about a particular operating point.

$$f(\mathbf{x}_0 + \delta \mathbf{x}) \approx f(\mathbf{x}_0) + \nabla_x f\big|_{\mathbf{x}_0}\,\delta\mathbf{x}$$

$$f(\mathbf{x}) - f(\mathbf{x}_0) = \delta f(\mathbf{x}) \approx \nabla_x f\big|_{\mathbf{x}_0}\,\delta\mathbf{x} \qquad (1)$$

Equation 1 will hold so long as $\delta\mathbf{x}$ is small enough to be within the linear regime (i.e., where the Taylor series expansion is a good approximation). If $f$ is a multi-variable function, then Equation 1 becomes

$$\delta f(\mathbf{x}, \mathbf{u}, \dots) \approx \nabla_x f\big|_{\mathbf{x}_0}\,\delta \mathbf{x} + \nabla_u f\big|_{\mathbf{u}_0}\,\delta \mathbf{u} + \cdots$$
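As a concrete illustration (not from the notes), the sketch below linearizes a damped pendulum about its upright equilibrium by forming central-difference Jacobians; all constants and names are made up.

```python
import numpy as np

# Nonlinear pendulum: theta'' = -(g/l) sin(theta) - (b/m) theta' + u/(m l^2)
# State x = [theta, theta']; linearize about the upright point x0 = [pi, 0], u0 = 0.
g, l, b, m = 9.81, 1.0, 0.1, 1.0

def f(x, u):
    return np.array([x[1],
                     -(g / l) * np.sin(x[0]) - (b / m) * x[1] + u / (m * l**2)])

x0, u0 = np.array([np.pi, 0.0]), 0.0
eps = 1e-6

# Numerical Jacobians A = df/dx and B = df/du, evaluated at (x0, u0)
A = np.column_stack([(f(x0 + eps * e, u0) - f(x0 - eps * e, u0)) / (2 * eps)
                     for e in np.eye(2)])
B = (f(x0, u0 + eps) - f(x0, u0 - eps)) / (2 * eps)

print(A)  # ~ [[0, 1], [g/l, -b/m]]  (cos(pi) = -1 flips the sign of g/l)
print(B)  # ~ [0, 1/(m l^2)]
```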

The state variables of a system are the smallest set of linearly independent system variables that can uniquely determine all the other system variables for all $t > 0$.

One can think of the state variables $\mathbf{x}$ as capturing the internal dynamics of the system. The dynamics are described by the matrices $A$ (the state-evolution matrix) and $B$ (the input matrix)

$$\frac{d\mathbf{x}}{dt} = A\mathbf{x} + B\mathbf{u}$$

where $\mathbf{u}$ is the input to the system. Sometimes the states are not directly observable; instead, the sensor in Figure 2 only provides a linear combination of the states determined by the output matrix $C$ and the feedforward matrix $D$. Together, Equation 2 and Equation 3 are the state-space equations of the system.

$$\begin{aligned} \frac{d\mathbf{x}}{dt} &= A\mathbf{x} + B\mathbf{u} \qquad (2)\\ \mathbf{y} &= C\mathbf{x} + D\mathbf{u} \qquad (3)\end{aligned}$$

$$\begin{aligned} s\mathbf{X}(s) - \mathbf{x}(0^-) &= A\mathbf{X}(s) + B\mathbf{U}(s)\\ &\implies \mathbf{X}(s) = (sI-A)^{-1}B\mathbf{U}(s) + (sI-A)^{-1}\mathbf{x}(0^-)\\ \mathbf{Y}(s) &= C\mathbf{X}(s) + D\mathbf{U}(s)\\ &\implies \mathbf{Y}(s) = \left(C(sI-A)^{-1}B+D\right)\mathbf{U}(s) + C(sI-A)^{-1}\mathbf{x}(0^-).\end{aligned}$$

If the system is Single-Input, Single-Output (SISO) and the initial condition is $\mathbf{x}(0^-) = \boldsymbol{0}$, then

$$H(s) = \frac{Y(s)}{U(s)} = C(sI-A)^{-1}B+D. \qquad (4)$$

Equation 4 makes it very clear that the poles of the system are eigenvalues of the $A$ matrix, since the denominator of $C(sI-A)^{-1}B$ is $\det(sI-A)$ (barring pole-zero cancellations).
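A quick numerical check of Equation 4 on a made-up SISO system, using `scipy.signal.ss2tf` to get the transfer-function coefficients and confirming that the poles match the eigenvalues of $A$:

```python
import numpy as np
from scipy.signal import ss2tf

# Hypothetical SISO system (companion form, char. poly s^2 + 3s + 2)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

# Equation 4 evaluated directly at one complex frequency s
s = 1.0 + 2.0j
H = C @ np.linalg.inv(s * np.eye(2) - A) @ B + D

num, den = ss2tf(A, B, C, D)                  # numerator/denominator coefficients
H_tf = np.polyval(num[0], s) / np.polyval(den, s)
print(np.isclose(H[0, 0], H_tf))              # True

# Poles of H(s) are the eigenvalues of A: both are {-1, -2}
print(np.sort(np.roots(den)), np.sort(np.linalg.eigvals(A)))
```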

$$\sum_{k=0}^{N} a_k \frac{d^{k}y}{dt^{k}} = \sum_{k=0}^{N} b_k \frac{d^{k}u}{dt^{k}},$$

$$H(s) = \frac{Y(s)}{U(s)} = \frac{\sum_{k=0}^{N} b_k s^k}{\sum_{k=0}^{N} a_k s^k} = \frac{\sum_{k=0}^{N} \frac{b_k}{a_N}s^k}{s^N + \sum_{k=0}^{N-1} \frac{a_k}{a_N}s^k}.$$

It is possible that there exists $M < N$ such that $b_k = 0$ for all $k \geq M$; in other words, the numerator can have fewer terms than the denominator. We now introduce an intermediary variable $X$ so that

$$\frac{Y(s)}{U(s)} = \frac{Y(s)}{X(s)}\frac{X(s)}{U(s)}.$$

$$Y(s) = \sum_{k=0}^{N} \frac{b_k}{a_N} s^k X(s) \qquad X(s) = \frac{U(s)}{s^N + \sum_{k=0}^{N-1}\frac{a_k}{a_N}s^k}.$$

$$y(t) = \sum_{k=0}^{N} \frac{b_k}{a_N} \frac{d^{k}x}{dt^{k}} \qquad \frac{d^{N}x}{dt^{N}} = u(t) - \sum_{k=0}^{N-1} \frac{a_k}{a_N} \frac{d^{k}x}{dt^{k}}.$$

We can now choose our state variables to be the derivatives $x, \frac{dx}{dt}, \cdots, \frac{d^{N-1}x}{dt^{N-1}}$, giving us the state-evolution equation

$$\frac{d}{dt} \begin{bmatrix} x\\ \frac{dx}{dt}\\ \vdots \\ \frac{d^{N-2}x}{dt^{N-2}} \\ \frac{d^{N-1}x}{dt^{N-1}} \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 & \ldots & 0\\ 0 & 0 & 1 & \ldots & 0\\ \vdots & & \ddots & \ddots & \vdots\\ 0 & 0 & \ldots & 0 & 1\\ -\frac{a_0}{a_N} & -\frac{a_1}{a_N} & \ldots & -\frac{a_{N-2}}{a_N} & -\frac{a_{N-1}}{a_N} \end{bmatrix} \begin{bmatrix} x\\ \frac{dx}{dt}\\ \vdots \\ \frac{d^{N-2}x}{dt^{N-2}} \\ \frac{d^{N-1}x}{dt^{N-1}} \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix} u(t). \qquad (5)$$

Applying the state variables to $y(t)$,

$$\begin{aligned} y(t) &= \frac{b_N}{a_N}\left( u(t) - \sum_{k=0}^{N-1}\frac{a_k}{a_N} \frac{d^{k}x}{dt^{k}} \right) + \sum_{k=0}^{N-1} \frac{b_k}{a_N} \frac{d^{k}x}{dt^{k}}\\ &= \frac{b_N}{a_N}u(t) + \sum_{k=0}^{N-1} \left(\frac{b_k}{a_N} - \frac{b_N a_k}{a_N^2}\right) \frac{d^{k}x}{dt^{k}}\\ &= \frac{1}{a_N}\begin{bmatrix} b_0 - \frac{b_N a_0}{a_N} & b_1 - \frac{b_N a_1}{a_N} & \ldots & b_{N-1} - \frac{b_N a_{N-1}}{a_N} \end{bmatrix} \mathbf{x} + \frac{b_N}{a_N}u(t). \qquad (6)\end{aligned}$$

Together, Equation 5 and Equation 6 are known as Phase Variable Form. Notice that the characteristic polynomial of the $A$ matrix when it is in phase variable form is

$$\Delta(s) = s^N + \sum_{i=0}^{N-1}\frac{a_i}{a_N}s^i.$$
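A sketch of building the phase-variable $A$ matrix of Equation 5 from denominator coefficients and confirming that its eigenvalues are the roots of the characteristic polynomial; the coefficients below are made up.

```python
import numpy as np

# Denominator coefficients a_0..a_N of a_N s^N + ... + a_1 s + a_0
# (made-up example: s^3 + 6 s^2 + 11 s + 6 = (s+1)(s+2)(s+3))
a = np.array([6.0, 11.0, 6.0, 1.0])
N = len(a) - 1

# Phase-variable (companion) form from Equation 5:
# ones on the superdiagonal, -a_k/a_N along the bottom row
A = np.zeros((N, N))
A[:-1, 1:] = np.eye(N - 1)
A[-1, :] = -a[:-1] / a[-1]

print(np.sort(np.linalg.eigvals(A)))  # the roots -3, -2, -1
```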

For transfer functions, the time domain solution for a particular input is given by $\mathcal{L}^{-1}\left\{ H(s) U(s) \right\}$. How do we do the same for state-space equations? Equation 2 is an inhomogeneous, first-order vector ordinary differential equation. If it were a scalar homogeneous ODE, then we know the solution would be $x(t)=x(0)e^{at}$, so for our vector case, let us first define

$$e^{At} = \sum_{k=0}^{\infty} \frac{1}{k!} (At)^k$$

using the Taylor series expansion. With this definition, we can solve Equation 2 using integrating factors. If we let $e^{-At}$ be our integrating factor, then multiplying it on both sides of Equation 2 gives

$$e^{-At}\frac{d\mathbf{x}}{dt} = e^{-At}A\mathbf{x} + e^{-At}B\mathbf{u}.$$

$$\frac{d}{dt}\left[ e^{-At}\mathbf{x} \right] = e^{-At}\frac{d\mathbf{x}}{dt} - A e^{-At}\mathbf{x}.$$

$$\frac{d}{dt}\left[ e^{-At}\mathbf{x} \right] = e^{-At}B\mathbf{u}.$$

Integrating both sides from $0$ to $t$,

$$\begin{aligned} e^{-At}\mathbf{x}(t) - \mathbf{x}(0) &= \int_{0}^{t}e^{-A\tau}B\mathbf{u}(\tau)\,d\tau\\ \therefore \mathbf{x}(t) &= e^{At}\mathbf{x}(0) + \int_{0}^{t}e^{A(t-\tau)}B\mathbf{u}(\tau)\,d\tau \qquad (7)\end{aligned}$$

$$\mathbf{x}(t) = e^{At}\mathbf{x}(0)$$

The zero-state response is how the system responds to an input when its initial state is $\mathbf{x}(0) = \boldsymbol{0}$. It is the convolution of the input with the matrix impulse response $e^{At}B$ (taken to be $\boldsymbol{0}$ for $t < 0$).

$$\mathbf{x}(t) = \int_{0}^{t}e^{A(t-\tau)}B\mathbf{u}(\tau)\,d\tau$$
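Equation 7 can be sanity-checked numerically with `scipy.linalg.expm`. For a unit-step input and invertible $A$, the zero-state integral has the closed form $A^{-1}(e^{At}-I)B$, which a simple trapezoid-rule quadrature should reproduce. The system below is made up.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical system and initial state
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
x0 = np.array([[1.0], [0.0]])
t = 2.0

# Equation 7 with u(tau) = 1: zero-input piece e^{At} x(0) plus the
# zero-state convolution integral, evaluated by the trapezoid rule.
taus = np.linspace(0.0, t, 2001)
h = taus[1] - taus[0]
vals = np.stack([(expm(A * (t - tau)) @ B).ravel() for tau in taus])
integral = h * (vals[0] / 2 + vals[1:-1].sum(axis=0) + vals[-1] / 2)
x = expm(A * t) @ x0 + integral.reshape(-1, 1)

# Closed form of the step-input integral for invertible A
x_closed = expm(A * t) @ x0 + np.linalg.inv(A) @ (expm(A * t) - np.eye(2)) @ B
print(np.allclose(x, x_closed, atol=1e-5))  # True
```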

A system is controllable if, from any initial state $\mathbf{x}_0$, we can reach any final state $\mathbf{x}_f$ in finite time with no constraints on the input $\mathbf{u}$.

Let us assume that we have a controllable system and we want to reach the state $\boldsymbol{0}$ from $\mathbf{x}_0$, reaching it at time $t_f$. Then using Equation 7,

$$-\mathbf{x}_0 = \int_0^{t_f} e^{-A\tau}B\mathbf{u}(\tau)\,d\tau.$$

Since the Cayley–Hamilton theorem lets us write $e^{-A\tau} = \sum_{j=0}^{n-1}\alpha_j(\tau)A^j$,

$$\begin{aligned} -\mathbf{x}_0 &= \sum_{j=0}^{n-1}A^jB\int_0^{t_f}\alpha_j(\tau)\mathbf{u}(\tau)\,d\tau\\ \therefore -\mathbf{x}_0 &= \begin{bmatrix} B & AB & A^2B & \ldots & A^{n-1}B \end{bmatrix} \begin{bmatrix} \mathbf{c}_0 \\ \mathbf{c}_1 \\ \vdots \\ \mathbf{c}_{n-1} \end{bmatrix} \quad \text{where } \mathbf{c}_j = \int_0^{t_f} \alpha_j(\tau)\mathbf{u}(\tau)\,d\tau.\end{aligned}$$

$$\mathcal{C} = \begin{bmatrix} B & AB & A^2B & \ldots & A^{n-1}B \end{bmatrix}.$$

Notice that if $\mathcal{C}$ is invertible, then we can find the $\mathbf{c}$ which recovers $-\mathbf{x}_0$, but if it is not invertible, then we may not be able to do this.

If $\mathcal{C}$ is invertible, then the system is controllable.
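A sketch of this rank test on a made-up system:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
n = A.shape[0]

# Controllability matrix [B  AB  ...  A^{n-1}B]
ctrb = np.hstack([np.linalg.matrix_power(A, j) @ B for j in range(n)])
print(np.linalg.matrix_rank(ctrb) == n)  # True: this system is controllable
```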

A system is observable if for any initial state $\mathbf{x}_0$, we can determine $\mathbf{x}_0$ from $u(t)$ and $y(t)$ over a finite time interval.

$$\mathcal{O} = \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{bmatrix}.$$

If $\mathcal{O}$ is invertible, then the system is observable.
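The analogous rank check for observability, again on a made-up system:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])
n = A.shape[0]

# Observability matrix stacks C, CA, ..., CA^{n-1}
obsv = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
print(np.linalg.matrix_rank(obsv) == n)  # True: the state can be recovered
```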

Sometimes systems have a time delay in them. This is equivalent to placing a system before the plant with impulse response $\delta(t-T)$, since $x(t)*\delta(t-T) = x(t-T)$. In the Laplace domain, this is the same as the transfer function $e^{-sT}$, as shown in Figure 3.
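A small discrete-time sanity check (made-up signal and delay) that convolving with a shifted impulse delays the signal:

```python
import numpy as np

fs = 100                     # samples per second
T = 0.25                     # delay in seconds
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * t)

d = np.zeros_like(t)
d[int(T * fs)] = 1.0         # unit impulse at t = T

# x(t) * delta(t - T) = x(t - T)
y = np.convolve(x, d)[: len(t)]

shift = int(T * fs)
print(np.allclose(y[shift:], x[:-shift]))  # True: y is x delayed by T
```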

Table 1: Electro-mechanical equations and their analogies.
Figure 3: System with time delay