System Performance


Definition 14

The step response of a system $H(s)$ is how it responds to a step input $u(t)$:

$$y(t) = \mathcal{L}^{-1}\left\{ \frac{H(s)}{s} \right\}$$
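As a quick illustration of Definition 14, the sketch below (a minimal example using a hypothetical plant $H(s) = \frac{1}{s+1}$, chosen only for illustration) computes a step response symbolically with SymPy.

```python
import sympy as sp

s, t = sp.symbols('s t')

# Hypothetical example plant: H(s) = 1 / (s + 1)
H = 1 / (s + 1)

# Step response: inverse Laplace transform of H(s)/s (Definition 14)
y = sp.inverse_laplace_transform(H / s, s, t)
print(sp.simplify(y))   # expect 1 - exp(-t), times a Heaviside step
```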

First Order Systems

Definition 15

A first order system is one with a transfer function of the form

$$H(s) = \frac{s+\alpha}{s+\beta}.$$

After applying partial fraction decomposition, the step response has the form

$$Au(t) + Be^{-\beta t}u(t).$$

Thus, the larger $\beta$ is (i.e., the deeper in the left half plane the pole is), the faster the system will “settle”.
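To see the effect of $\beta$ concretely, here is a minimal NumPy sketch (the specific $\alpha$, $\beta$ values are made up for illustration). For $H(s)=\frac{s+\alpha}{s+\beta}$, partial fractions of $H(s)/s$ give $A=\alpha/\beta$ and $B=1-\alpha/\beta$, so we can evaluate the step response in closed form and estimate the 2% settling time for a few pole locations.

```python
import numpy as np

def first_order_step(alpha, beta, t):
    """Step response of H(s) = (s + alpha) / (s + beta)."""
    A = alpha / beta          # final value
    B = 1.0 - A               # transient coefficient
    return A + B * np.exp(-beta * t)

t = np.linspace(0, 12, 6001)
alpha = 2.0                       # made-up zero location
for beta in (0.5, 1.0, 5.0):      # deeper pole => faster settling
    y = first_order_step(alpha, beta, t)
    y_final = alpha / beta
    # last time the response is outside the 2% band around its final value
    outside = np.abs(y - y_final) > 0.02 * abs(y_final)
    Ts = t[outside][-1] if outside.any() else 0.0
    print(f"beta = {beta:4.1f}  ->  2% settling time ~ {Ts:5.2f} s")
```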

Second Order Systems

Definition 16

Second order systems are those with a transfer function of the form

$$H(s) = \frac{\omega_n^2}{s^2+2\zeta\omega_n s+\omega_n^2}.$$

$\omega_n$ is known as the natural frequency, and $\zeta$ is known as the damping factor.

Notice that the poles of the second order system are

$$s = \frac{-2\zeta\omega_n \pm \sqrt{4\zeta^2\omega_n^2-4\omega_n^2}}{2} = -\zeta\omega_n \pm \omega_n\sqrt{\zeta^2 - 1}.$$

There are four cases of interest based on $\zeta$.

  1. Undamped: when $\zeta=0$, the poles are $s = \pm j\omega_n$. Because they are purely imaginary, the step response is purely oscillatory.

     $$Y(s) = \frac{1}{s}\frac{\omega_n^2}{s^2+\omega_n^2} \leftrightarrow y(t) = u(t) - \cos(\omega_n t)u(t)$$

  2. Underdamped: when $\zeta\in(0, 1)$, the poles are $s = -\zeta\omega_n \pm j\omega_n\sqrt{1-\zeta^2}$. They are complex and in the left half plane, so the step response is an exponentially decaying sinusoid. We define the damped frequency $\omega_d = \omega_n\sqrt{1-\zeta^2}$ so that the poles become $s=-\zeta\omega_n \pm j\omega_d$. Notice that $\omega_d < \omega_n$. Computing the time response of the system,

     $$y(t) = \left[ 1 - \frac{e^{-\zeta\omega_n t}}{\sqrt{1-\zeta^2}}\cos\left(\omega_d t - \arctan\left( \frac{\zeta}{\sqrt{1-\zeta^2}} \right)\right)\right]u(t)$$

  3. Critically Damped: when $\zeta=1$, both poles are at $s=-\omega_n$. The poles are both real, so the step response has no overshoot.

  4. Overdamped: when $\zeta>1$, the poles are $s = -\zeta\omega_n \pm \omega_n\sqrt{\zeta^2-1}$. Both poles are real, so the step response looks like that of a first order system: slow and primarily governed by the slowest pole.
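A quick way to see these four regimes side by side is to simulate the step response for several damping ratios. The sketch below is a minimal example using `scipy.signal.step` with a made-up natural frequency; it is only meant to illustrate the qualitative behavior described above.

```python
import numpy as np
from scipy import signal

wn = 2.0                                  # made-up natural frequency (rad/s)
t = np.linspace(0, 10, 2000)

# zeta = 0 (undamped), 0.2 (underdamped), 1 (critically damped), 2 (overdamped)
for zeta in (0.0, 0.2, 1.0, 2.0):
    num = [wn**2]
    den = [1.0, 2.0 * zeta * wn, wn**2]
    _, y = signal.step((num, den), T=t)
    print(f"zeta = {zeta:3.1f}: peak value = {y.max():.3f}, final sample = {y[-1]:.3f}")
```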

The Underdamped Case

If we analyze the underdamped case further, we can first look at the derivative of its step response.

$$\begin{aligned} sY(s) &= \frac{\omega_n^2}{s^2+2\zeta\omega_n s+\omega_n^2} = \frac{\omega_n^2}{\omega_d} \frac{\omega_d}{(s+\zeta\omega_n)^2+\omega_d^2}\\ \therefore \frac{dy}{dt} &= \frac{\omega_n^2}{\omega_d}e^{-\zeta\omega_n t}\sin(\omega_d t)u(t) \qquad (8)\end{aligned}$$

Definition 17

The Time to Peak ($T_p$) of a system is how long the step response takes to reach its largest value.

Using Equation 8, we see that the derivative is first equal to 0 when $t = \frac{\pi}{\omega_d}$.

$$\therefore T_p = \frac{\pi}{\omega_d}$$

Definition 18

The Percent Overshoot ($\%O.S$) of a system is by how much the step response overshoots its final value.

The peak occurs at $t = \frac{\pi}{\omega_d}$, so

$$\%O.S = e^{-\zeta\omega_n \frac{\pi}{\omega_d}} = e^{\frac{-\zeta\pi}{\sqrt{1-\zeta^2}}}.$$

Definition 19

The Settling Time ($T_s$) of a system is how long it takes for the step response to stay within 2% of its final value.

$$\begin{aligned} |y(T_s) - 1| < 0.02 &\implies \frac{e^{-\zeta\omega_n T_s}}{\sqrt{1-\zeta^2}} = 0.02\\ \therefore T_s &= -\frac{1}{\zeta\omega_n} \ln\left(0.02 \sqrt{1-\zeta^2}\right)\end{aligned}$$
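These three formulas are easy to sanity-check numerically. The sketch below (a minimal example with made-up $\zeta$, $\omega_n$ values) evaluates the closed-form underdamped step response given above on a dense grid and compares the measured peak time and overshoot against $T_p = \pi/\omega_d$ and $\%O.S = e^{-\zeta\pi/\sqrt{1-\zeta^2}}$.

```python
import numpy as np

zeta, wn = 0.3, 4.0                       # made-up example values
wd = wn * np.sqrt(1 - zeta**2)            # damped frequency

# Closed-form underdamped step response (see the underdamped case above)
t = np.linspace(0, 10, 20001)
phi = np.arctan(zeta / np.sqrt(1 - zeta**2))
y = 1 - np.exp(-zeta * wn * t) / np.sqrt(1 - zeta**2) * np.cos(wd * t - phi)

# Metrics measured from the simulated response
Tp_measured = t[np.argmax(y)]
OS_measured = y.max() - 1.0

# Metrics from the formulas above
Tp_formula = np.pi / wd
OS_formula = np.exp(-zeta * np.pi / np.sqrt(1 - zeta**2))
Ts_formula = -np.log(0.02 * np.sqrt(1 - zeta**2)) / (zeta * wn)

print(f"T_p: measured {Tp_measured:.4f}, formula {Tp_formula:.4f}")
print(f"%OS: measured {OS_measured:.4f}, formula {OS_formula:.4f}")
print(f"T_s (formula): {Ts_formula:.4f} s")
```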

Since our poles are complex, we can represent them in their polar form $re^{j\theta}$.

$$\begin{aligned} r^2 &= \omega_d^2 + \zeta^2\omega_n^2 = \omega_n^2(1-\zeta^2)+\zeta^2\omega_n^2 = \omega_n^2\\ \cos(\pi-\theta) &= \frac{-\zeta\omega_n}{\omega_n} = -\zeta\end{aligned}$$

What this tells us is that if we search along the vector at angle $\pi-\theta$, we get a constant $\zeta$.

Additional Poles and Zeros of a Second Order System

Suppose we added an additional pole to the second order system so its transfer function was instead

$$H(s) = \frac{bc}{(s+c)(s^2+as+b)}.$$

Then its step response will be

$$\begin{aligned} Y(s) &= \frac{1}{s}+\frac{D}{s+c}+\frac{Bs+C}{s^2+as+b}\\ B &= \frac{c(a-c)}{c^2+b-ca}\quad C = \frac{c(a^2-ac-b)}{c^2+b-ca} \quad D = \frac{-b}{c^2-ac+b}.\end{aligned}$$

Notice that

$$\lim_{c\to\infty} D = 0 \quad \lim_{c\to\infty} B = -1 \quad \lim_{c\to\infty} C = -a.$$

In other words, as the additional pole moves to infinity, the system acts more and more like a second order system. As a rule of thumb, if $Re\{c\}\geq 5\,Re\{a\}$, then the system will approximate a second order system. Because of this property, we can often decompose complex systems into a series of first and second order systems.
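The rule of thumb above is easy to check by simulation. This minimal sketch (with made-up $a$, $b$ values) compares the third order step response of $\frac{bc}{(s+c)(s^2+as+b)}$ against the underlying second order system for a few locations of the extra pole $c$.

```python
import numpy as np
from scipy import signal

a, b = 2.0, 25.0                           # made-up second-order coefficients
t = np.linspace(0, 6, 3000)

# Reference second-order step response: b / (s^2 + a s + b)
_, y2 = signal.step(([b], [1.0, a, b]), T=t)

for c in (2.0, 10.0, 50.0):                # extra pole at s = -c
    den3 = np.polymul([1.0, c], [1.0, a, b])
    _, y3 = signal.step(([b * c], den3), T=t)
    err = np.max(np.abs(y3 - y2))
    print(f"c = {c:5.1f}: max |difference from 2nd-order response| = {err:.3f}")
```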

If we instead add an additional zero to the second order system so its transfer function looks like

$$H(s) = \frac{s+a}{s^2+2\zeta\omega_n s+\omega_n^2},$$

then the Laplace transform of its step response will look like

$$sY(s) + aY(s), \qquad \text{where } Y(s) = \frac{1}{s}\frac{1}{s^2+2\zeta\omega_n s+\omega_n^2}.$$

Thus if $a$ is small, then the effect of the zero is similar to introducing a derivative into the system, whereas if $a$ is large, then the impact of the zero is primarily to scale the step response. One useful property of zeros is that if a zero occurs close enough to a pole, then they will “cancel” each other out and that pole will have a much smaller effect on the step response.
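To illustrate the “added derivative” interpretation, this minimal sketch (made-up $\zeta$, $\omega_n$, and zero locations) simulates $\frac{s+a}{s^2+2\zeta\omega_n s+\omega_n^2}$ and reconstructs the same response as $\dot y_0(t) + a\,y_0(t)$, where $y_0$ is the step response of the zero-free system $\frac{1}{s^2+2\zeta\omega_n s+\omega_n^2}$.

```python
import numpy as np
from scipy import signal

zeta, wn = 0.4, 3.0                         # made-up example values
den = [1.0, 2.0 * zeta * wn, wn**2]
t = np.linspace(0, 8, 4000)

# Step response of the zero-free system 1 / (s^2 + 2*zeta*wn*s + wn^2)
_, y0 = signal.step(([1.0], den), T=t)
dy0 = np.gradient(y0, t)                    # numerical derivative of y0

for a in (0.5, 20.0):                       # nearby zero vs. far-away zero
    _, y = signal.step(([1.0, a], den), T=t)
    recon = dy0 + a * y0                    # derivative term + scaled response
    print(f"a = {a:4.1f}: max |y - (dy0 + a*y0)| = {np.max(np.abs(y - recon)):.2e}")
```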

Stability

Recall Equation 7, which told us the time-domain solution to the state-space equations was

$$\mathbf{x}(t) = e^{At}\mathbf{x}(0) + \int_{0}^{t}e^{A(t-\tau)}B\mathbf{u}(\tau)d\tau.$$

Definition 20

A system is bounded-input, bounded-output (BIBO) stable if $\exists K_u, K_x < \infty$ such that $|\mathbf{u}(t)| < K_{u} \implies |\mathbf{x}(t)| < K_x$.

Following from Definition 20 and Equation 7, this means that

$$\lim_{t\to\infty}\mathbf{x}(t) = \boldsymbol{0}.$$

If instead $\lim_{t\to\infty}\mathbf{x}(t) = \infty$, then the system is unstable.

Theorem 3

If all poles are in the left half plane and the number of zeros is less than or equal to the number of poles, then the system is BIBO stable.

Definition 21

A system is called marginally stable if the zero-input response does not converge to $\boldsymbol{0}$ but remains bounded.

Theorem 4

A system is marginally stable if there is exactly one pole at $s=0$ or a pair of poles at $s=\pm j\omega_0$.

In all other cases, the system will be unstable.
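Following Theorems 3 and 4, stability can be classified directly from the roots of the denominator. The sketch below is a minimal classifier that only implements the pole tests stated above (with a small numerical tolerance), applied to a few made-up denominators.

```python
import numpy as np

def classify_poles(den, tol=1e-6):
    """Classify stability from the denominator polynomial, per Theorems 3 and 4."""
    poles = np.roots(den)
    re, im = poles.real, poles.imag
    if np.all(re < -tol):
        return "BIBO stable: all poles in the open left half plane"
    if np.any(re > tol):
        return "unstable: a pole in the right half plane"
    # remaining case: some poles on the imaginary axis, none in the RHP
    axis = np.round(im[np.abs(re) <= tol], 6)
    _, counts = np.unique(axis, return_counts=True)
    if np.all(counts == 1):
        return "marginally stable: simple pole(s) on the imaginary axis"
    return "unstable: repeated poles on the imaginary axis"

# Made-up example denominators
examples = {
    "s^2 + 3s + 2": [1, 3, 2],          # poles at -1, -2
    "s(s + 1)":     [1, 1, 0],          # single pole at 0
    "s^2 + 4":      [1, 0, 4],          # pair at +/- 2j
    "s^2(s + 1)":   [1, 1, 0, 0],       # double pole at 0
    "s - 1":        [1, -1],            # pole at +1
}
for name, den in examples.items():
    print(f"{name:12s} -> {classify_poles(den)}")
```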

Steady State Error

Consider the unity feedback loop depicted in Figure 4, where we put a system $G(s)$ in unity feedback to control it. We want to understand what its steady state error will be in response to different inputs.

Theorem 5

The final value theorem says that for a function whose unilateral Laplace transform has all poles in the left half plane,

$$\lim_{t\to\infty}x(t) = \lim_{s\to0} sX(s).$$

Using this fact, we see that for the unity feedback system, the error between the reference $R(s)$ and the output is

$$E(s) = \frac{R(s)}{1+G(s)}.$$

Using these, we can define the static error constants.

Definition 22

$$K_p = \lim_{s\to0}G(s) \qquad (9)$$

The position constant determines how well a system can track a unit step. For a step input $R(s) = \frac{1}{s}$,

$$\lim_{t\to\infty} e(t) = \lim_{s\to0} s \frac{1}{s} \frac{1}{1+G(s)} = \frac{1}{1+K_p}$$

Definition 23

$$K_v = \lim_{s\to0}sG(s) \qquad (10)$$

The velocity constant determines how well a system can track a ramp. For a ramp input $R(s) = \frac{1}{s^2}$,

$$\lim_{t\to\infty} e(t) = \lim_{s\to0} s \frac{1}{s^2} \frac{1}{1+G(s)} = \frac{1}{K_v}$$

Definition 24

$$K_a = \lim_{s\to0}s^2G(s) \qquad (11)$$

The acceleration constant determines how well a system can track a parabola. For a parabolic input $R(s) = \frac{1}{s^3}$,

$$\lim_{t\to\infty} e(t) = \lim_{s\to0} s \frac{1}{s^3} \frac{1}{1+G(s)} = \frac{1}{K_a}$$

Notice that large static error constants mean a smaller error. Another observation we can make is that if a system has $n$ poles at $s=0$, it can perfectly track an input whose Laplace transform is $\frac{1}{s^{n-k}}$ for $k\in[0, n-1]$. We give $n$ a formal name.

Definition 25

The system type is the number of poles at $s=0$.
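As a concrete illustration of the error constants, the following SymPy sketch uses a hypothetical type-1 plant (chosen only for illustration) and computes $K_p$, $K_v$, $K_a$ and the corresponding steady state errors from Equations 9, 10, and 11.

```python
import sympy as sp

s = sp.symbols('s')

# Hypothetical type-1 plant: one pole at s = 0
G = 10 * (s + 2) / (s * (s + 5))

Kp = sp.limit(G, s, 0)            # Equation 9
Kv = sp.limit(s * G, s, 0)        # Equation 10
Ka = sp.limit(s**2 * G, s, 0)     # Equation 11

print("Kp =", Kp, "-> step error     =", 1 / (1 + Kp))     # 0: tracks a step exactly
print("Kv =", Kv, "-> ramp error     =", 1 / Kv)           # finite ramp error
print("Ka =", Ka, "-> parabola error =", "infinite" if Ka == 0 else 1 / Ka)
```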

The discussion of system type also brings another observation.

Definition 26

The internal model principle is that if the system in the feedback loop has a model of the input we want to track, then it can track it exactly.

If instead we have a state-space system, then assuming the system is stable,

$$\lim_{t\to\infty}\frac{d\mathbf{x}}{dt} = \boldsymbol{0} \implies \lim_{t\to\infty}\mathbf{x} = \mathbf{x}_{ss}.$$

Applying this to the state-space equations for a unit step input,

$$\frac{d\mathbf{x}}{dt} = \boldsymbol{0} = A\mathbf{x}_{ss} + B\cdot 1 \implies \mathbf{x}_{ss} = -A^{-1}B \qquad (12)$$

Looking at the error between the reference and the output in the 1D input case,

$$e(t) = r(t) - y(t) = 1 - C\mathbf{x}_{ss} = 1 + CA^{-1}B.$$
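Here is a minimal NumPy check of Equation 12 and the error expression above, using a made-up stable $(A, B, C, D)$ realization; the same steady state is confirmed by simulating the step response with SciPy and looking at its final value.

```python
import numpy as np
from scipy import signal

# Made-up stable single-input, single-output realization
A = np.array([[0.0, 1.0],
              [-4.0, -2.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[3.0, 0.0]])
D = np.array([[0.0]])

# Steady state for a unit step input (Equation 12) and the resulting error
x_ss = -np.linalg.solve(A, B)               # -A^{-1} B
e_ss = 1.0 + (C @ np.linalg.solve(A, B))    # 1 + C A^{-1} B

# Cross-check by simulating the step response and looking at its final value
t = np.linspace(0, 20, 2000)
_, y = signal.step((A, B, C, D), T=t)
y_final = float(np.squeeze(y)[-1])
print("x_ss =", x_ss.ravel(), " e_ss =", float(e_ss), " 1 - y(final) =", 1 - y_final)
```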

Margins

Definition 27

The frequency response of the system determines how it scales pure frequencies. It is equivalent to the Laplace transform evaluated on the imaginary axis.

If we take a complex exponential and pass it into a causal LTI system with impulse response $g(t)$, then

$$y(t) = e^{j\omega t} * g(t) = \int_{-\infty}^{\infty}g(\tau)e^{j\omega(t-\tau)}d\tau = e^{j\omega t} \int_{0}^{\infty}g(\tau)e^{-j\omega \tau}d\tau.$$

This shows us that $e^{j\omega t}$ is an eigenfunction of the system, and its eigenvalue is the frequency response

$$G(j\omega) = \int_0^{\infty}g(\tau)e^{-j\omega\tau}d\tau \qquad (13)$$

Suppose we put a linear system $G(s)$ in negative feedback. We know that if $\angle G(j\omega) = (2k+1)\pi$ for some $k\in\mathbb{Z}$, then the output of the plant will be $-|G(j\omega)|e^{j\omega t}$. If $|G(j\omega)| \geq 1$, then this signal feeds back into the error term, where it is multiplied by $|G(j\omega)|$ again and again; because $|G(j\omega)|\geq1$, it never decays, and the system is unstable.

Definition 28

The gain margin $G_m$ is the change in the open loop gain required to make the closed loop system unstable.

Definition 29

The phase margin $\phi_m$ is the change in the open loop phase required to make the closed loop system unstable.

We can imagine the gain and phase margin like placing a “virtual box” before the plant as shown in Figure 6.

The characteristic polynomial of the closed loop transfer function is

$$1 + G_m e^{-j\phi_m}G(s) = 0.$$

At the gain margin frequency $\omega_{gm}$,

$$|G_m||G(j\omega_{gm})| = 1 \implies |G_m| = \frac{1}{|G(j\omega_{gm})|},$$

where the gain margin frequency satisfies $\angle G(j\omega_{gm}) = (2k+1)\pi$ for $k\in\mathbb{Z}$. Likewise, at the phase margin frequency $\omega_{pm}$,

$$1 + e^{-j\phi_m}G(j\omega_{pm}) = 0 \implies -\phi_m + \angle G(j\omega_{pm}) = (2k+1)\pi,$$

where the phase margin frequency satisfies $|G(j\omega_{pm})| = 1$.

Notice that if there is a time delay of $T$ in the system, the phase margin frequency remains unchanged since the magnitude response is the same, but the margins themselves will change because the new phase is

$$\angle G(j\omega) - \omega T.$$
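To make the margin definitions concrete, the sketch below evaluates a made-up open-loop transfer function $G(s)$ on a frequency grid with NumPy and reads off the gain margin (at the frequency where the phase crosses $-180°$) and the phase margin (at the frequency where $|G| = 1$). It is only a rough numerical sketch; dedicated control toolboxes compute these margins directly.

```python
import numpy as np

# Made-up open-loop transfer function: G(s) = 20 / ((s + 1)(s + 2)(s + 5))
num = np.array([20.0])
den = np.polymul(np.polymul([1.0, 1.0], [1.0, 2.0]), [1.0, 5.0])

w = np.logspace(-2, 2, 20000)                   # frequency grid (rad/s)
G = np.polyval(num, 1j * w) / np.polyval(den, 1j * w)
mag = np.abs(G)
phase = np.unwrap(np.angle(G))                  # radians, starts near 0

# Gain margin: evaluated where the phase crosses -180 degrees
i_pc = np.argmin(np.abs(phase + np.pi))
gain_margin_db = 20 * np.log10(1.0 / mag[i_pc])

# Phase margin: evaluated where |G| crosses 1 (0 dB)
i_gc = np.argmin(np.abs(mag - 1.0))
phase_margin_deg = 180.0 + np.degrees(phase[i_gc])

print(f"phase crossover ~ {w[i_pc]:.3f} rad/s, gain margin  ~ {gain_margin_db:.2f} dB")
print(f"gain crossover  ~ {w[i_gc]:.3f} rad/s, phase margin ~ {phase_margin_deg:.2f} deg")
```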


Figure 4: Unity Feedback Loop
Figure 5: Frequency Response
Figure 6: The Gain and Phase Margin virtual system