Berkeley Notes


  1. EE120

Feedback Control


Last updated 3 years ago


In feedback control, we have some physical system $H_p(s)$, called the plant, which we would like to control using a system of our choice $H_c(s)$ so that the output follows some reference signal $r(t)$.

Notice how the output signal is subtracted from the reference signal, and we use the difference (the error) to determine the input we pass into the plant. Looking at the overall transfer function of the system, we see that

$$\begin{aligned} Y(s) &= (R(s)-Y(s))H_c(s)H_p(s)\\ (1+H_c(s)H_p(s))Y(s) &= H_c(s)H_p(s)R(s)\\ H(s) = \frac{Y(s)}{R(s)} &= \frac{H_c(s)H_p(s)}{1+H_c(s)H_p(s)}\end{aligned}$$

Depending on what controller we use for $H_c(s)$, we can shape this transfer function to be what we want.
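As a quick numerical check of this formula (a sketch with a hypothetical first-order plant $H_p(s)=\frac{1}{s+1}$ and constant controller $H_c(s)=4$, not an example from the notes), the closed loop works out to $H(s)=\frac{4}{s+5}$:

```python
# Hypothetical example: plant Hp(s) = 1/(s+1), controller Hc(s) = K0 = 4.
# Then H(s) = Hc*Hp / (1 + Hc*Hp) = 4/(s+5).
K0 = 4.0

def Hp(s):
    return 1.0 / (s + 1.0)

def H_closed(s):
    L = K0 * Hp(s)          # loop gain Hc(s)Hp(s)
    return L / (1.0 + L)

# The feedback formula agrees with the simplified form 4/(s+5) at a test point,
# and the DC gain H(0) = 4/5 is pushed toward 1 by the loop gain.
print(H_closed(2.0), 4.0 / (2.0 + 5.0))  # both 4/7
print(H_closed(0.0))                     # 0.8
```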

Types of Control

Constant Gain Control

When $H_c(s) = K_0$, this is known as constant gain control.

$$H(s) = \frac{K_0H_p(s)}{1+K_0H_p(s)}$$

The poles of this system are the solutions of $1+K_0H_p(s)=0$.
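For instance (a hypothetical plant, not one from the notes), with $H_p(s)=\frac{1}{s(s+2)}$ the condition $1+K_0H_p(s)=0$ becomes $s^2+2s+K_0=0$, and the poles can be found numerically:

```python
import numpy as np

# Hypothetical plant Hp(s) = 1/(s(s+2)) under constant gain control:
# 1 + K0/(s(s+2)) = 0  =>  s^2 + 2s + K0 = 0.
K0 = 5.0
poles = np.roots([1.0, 2.0, K0])
print(poles)  # -1 + 2j and -1 - 2j

# Both poles lie in the left half-plane, so this gain gives a stable loop.
print(all(p.real < 0 for p in poles))  # True
```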

Lead Control

Lead controllers are of the form

$$H_c(s) = K_0\frac{s-\beta}{s-\alpha}$$

Their closed-loop poles are the solutions of

$$1 + K_0\frac{s-\beta}{s-\alpha}H_p(s) = 0$$

Integral Control

Integral controllers are of the form

$$H_c(s) = \frac{K_0}{s}$$

Their closed-loop poles are the solutions of

$$1 + \frac{K_0}{s}H_p(s) = 0$$

Root Locus Analysis

For all forms of control, we need to choose a gain which places our poles where we want. Root Locus Analysis (RLA) is the technique which helps us determine how the poles move as $K_0$ sweeps from $0$ to $\infty$. Assuming we only have a single gain to choose, the poles of the new transfer function will be the roots of

$$1 + K_0H(s)$$

where $H(s)$ is the transfer function that appears in the denominator (for example, in constant gain control $H(s) = H_p(s)$, but for lead control $H(s) = \frac{s-\beta}{s-\alpha}H_p(s)$).

Definition 39

The root locus is the set of all points $s_0\in \mathbb{C}$ such that $\exists K_0>0$ with $1+K_0H(s_0)=0$.

This definition implies that $H(s_0)=-\frac{1}{K_0}$ for some $K_0>0$, meaning the root locus is the set of all points where $\angle H(s_0)=-180°$. The first step of RLA is to factor the numerator and denominator of $H(s)$:

$$H(s) = \frac{\prod_{i=0}^{m}(s-\beta_i)}{\prod_{i=0}^{n}(s-\alpha_i)}$$

As $K_0\rightarrow 0$, $|H(s_0)|=\frac{1}{K_0}\rightarrow \infty$, so the root locus begins at the open-loop poles. As $K_0\rightarrow \infty$, $|H(s_0)|=\frac{1}{K_0}\rightarrow 0$, so the root locus ends at the open-loop zeros. However, if $m < n$ (i.e. there are more poles than zeros), not all of the poles can converge to a zero. Instead, $n-m$ branches will approach $\infty$ along asymptotes centered at

$$\frac{\sum_{i=0}^{n}\alpha_i-\sum_{i=0}^{m}\beta_i}{n-m}$$

and angles of

$$\frac{180° + (i-1)\cdot 360°}{n-m}, \quad i = 1, \ldots, n-m$$

The final rule of RLA is that the parts of the real line to the left of an odd number of real poles and zeros are on the root locus. RLA tells us that we have to be careful when choosing our gain $K_0$ because we could inadvertently cause instability. In particular, high gain will cause instability if

  • $H(s)$ has zeros in the right half of the plane, or

  • $n-m \ge 3$, in which case the asymptotes will cross the imaginary axis.
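These rules can be checked numerically on a hypothetical loop with three poles and no zeros, $H(s)=\frac{1}{s(s+1)(s+2)}$ (an assumed example, not from the notes): since $n-m=3$, the asymptotes cross the imaginary axis and high gain destabilizes the loop.

```python
import numpy as np

# Hypothetical H(s) = 1/(s(s+1)(s+2)): the characteristic equation
# 1 + K0*H(s) = 0 gives s^3 + 3s^2 + 2s + K0 = 0.
def closed_loop_poles(K0):
    return np.roots([1.0, 3.0, 2.0, K0])

# As K0 -> 0 the locus starts at the open-loop poles 0, -1, -2.
print(np.sort(closed_loop_poles(1e-9).real))

# With n - m = 3 a branch eventually crosses into the right half-plane;
# for this polynomial the crossing happens near K0 = 6 (the Routh-Hurwitz limit).
print(max(closed_loop_poles(5.9).real) < 0)  # still stable
print(max(closed_loop_poles(6.1).real) > 0)  # now unstable
```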

Feedback Controller Design

When we design systems to use in feedback control, there are certain properties we want beyond basic ones like stability. Because signals can be thought of as a series of step signals, when analyzing these properties we will assume $r(t)=u(t)$.

Steady State Tracking Accuracy

Definition 40

A system has steady-state tracking accuracy if the difference between the reference and the output signals tends to $0$ as $t\rightarrow \infty$:

$$e_{ss} := \lim_{t\rightarrow\infty}e(t)=0$$

A useful theorem which can help us evaluate this limit is the final value theorem.

Theorem 17 (Final Value Theorem)

$$\lim_{t\rightarrow\infty}e(t) = \lim_{s\rightarrow0}sE(s)$$

as long as the limit exists and $e(t)=0$ for $t<0$.
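As a sanity check of the theorem (on a hypothetical signal, not one from the notes), take $f(t) = (1-e^{-t})u(t)$, whose transform is $F(s)=\frac{1}{s}-\frac{1}{s+1}$; both sides of the theorem give $1$:

```python
# f(t) = (1 - e^{-t})u(t)  =>  F(s) = 1/s - 1/(s+1), and lim_{t->inf} f(t) = 1.
# Approximate lim_{s->0} s*F(s) by evaluating at a very small s.
s = 1e-9
sF = s * (1.0 / s - 1.0 / (s + 1.0))
print(sF)  # ~1.0, matching the time-domain limit
```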

Looking at the relationship between $E(s)$ and $R(s)$ (with $R(s)=\frac{1}{s}$ for a step reference), we see that

$$\frac{E(s)}{R(s)} = \frac{1}{1+H_c(s)H_p(s)} \implies E(s) = \frac{\frac{1}{s}}{1+H_c(s)H_p(s)}$$

Thus

$$e_{ss}=\lim_{s\rightarrow0}\frac{1}{1+H_c(s)H_p(s)}$$

Thus as long as $H_c(s)H_p(s)$ has at least one pole at $s=0$, then $e_{ss}=0$. Notice that integral control gives us a pole at $s=0$, so an integral controller is guaranteed to be steady-state accurate.
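To see the difference numerically (a sketch with a hypothetical plant $H_p(s)=\frac{1}{s+1}$ and an assumed gain $K_0=10$), compare a constant gain controller with an integral controller by evaluating the limit at a very small $s$:

```python
# Hypothetical plant Hp(s) = 1/(s+1); e_ss = lim_{s->0} 1/(1 + Hc(s)Hp(s)),
# approximated by plugging in a very small s.
s = 1e-9
Hp = 1.0 / (s + 1.0)
K0 = 10.0

e_const = 1.0 / (1.0 + K0 * Hp)            # Hc = K0: e_ss = 1/11, nonzero
e_integral = 1.0 / (1.0 + (K0 / s) * Hp)   # Hc = K0/s: pole at 0 drives e_ss to 0
print(e_const, e_integral)
```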

Disturbance Rejection

Sometimes the output of our controller can be disturbed before it goes into the plant. Ideally, our system should be robust to these disturbances.

$$\begin{aligned} Y(s) &= H_p(s)\left[H_c(s)(R(s)-Y(s))+D(s)\right]\\ Y(s) &= \frac{H_c(s)H_p(s)}{1+H_c(s)H_p(s)}R(s) + \frac{H_p(s)}{1+H_c(s)H_p(s)}D(s)\end{aligned}$$

The system will reject disturbances if the term $\frac{H_p(s)}{1+H_c(s)H_p(s)}D(s)$ is close to $0$ in the steady state. Assuming that $d(t) = u(t)$, we see that

$$\delta_{ss} = \lim_{s\rightarrow0}s\frac{H_p(s)}{1+H_c(s)H_p(s)}\frac{1}{s}=\lim_{s\rightarrow0}\frac{H_p(s)}{1+H_c(s)H_p(s)}$$

Thus as long as $H_c$ has a pole at $0$, the system will reject disturbances. Notice that integral control guarantees disturbance rejection as well.
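The same kind of numeric sketch (again with a hypothetical plant $H_p(s)=\frac{1}{s+1}$ and assumed gain $K_0=10$) shows the disturbance term vanishing only with the integral controller:

```python
# Hypothetical plant Hp(s) = 1/(s+1); delta_ss = lim_{s->0} Hp/(1 + Hc*Hp),
# approximated at a very small s.
s = 1e-9
Hp = 1.0 / (s + 1.0)
K0 = 10.0

d_const = Hp / (1.0 + K0 * Hp)             # Hc = K0: the disturbance leaks through
d_integral = Hp / (1.0 + (K0 / s) * Hp)    # Hc = K0/s: rejected in steady state
print(d_const, d_integral)
```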

Noise Insensitivity

In real systems, our measurement of the output $y(t)$ is not always 100% accurate. One way to model this is to add a noise term to the output. Looking at the relationship between the noise and the output signal, we see

$$H(s) = \frac{-H_c(s)H_p(s)}{1+H_c(s)H_p(s)}$$

In order to reject this noise, we want this term to be close to $0$, so ideally $|H_c(s)H_p(s)| \ll 1$ as $s\rightarrow 0$. However, this conflicts with our desire for $H_c(s)H_p(s)$ to have a pole at $0$ to guarantee steady-state tracking. Thus it is difficult to make a controller that is both accurate and robust to noise. However, because noise is usually a high-frequency signal and the reference is a low-frequency signal, we can mitigate this by choosing $H_c(s)H_p(s)$ to be a low-pass filter.
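For example (hypothetical numbers, not from the notes), an integral controller with a first-order plant gives a loop gain $H_c(s)H_p(s)=\frac{K_0}{s(s+1)}$ that is already low-pass: large at low frequencies for tracking, small at high frequencies where noise lives.

```python
# Hypothetical loop gain Hc(s)Hp(s) = K0/(s(s+1)) evaluated on the
# imaginary axis s = jw.
K0 = 10.0

def loop_gain(w):
    s = 1j * w
    return abs(K0 / (s * (s + 1.0)))

print(loop_gain(0.01))   # large at low frequency: good tracking
print(loop_gain(100.0))  # small at high frequency: good noise rejection
```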