
Linear Time-Invariant Systems


Definition 33

LTI systems are ones which are both linear and time-invariant.

Impulse Response of LTI systems

LTI systems are special because their output can be determined entirely by the impulse response $h[n]$.

The Discrete Case

We can think of the original signal $x[n]$ in terms of the impulse function.

$$x[n] = x[0]\delta[n] + x[1]\delta[n-1] + \dots = \sum_{k=-\infty}^{\infty} x[k]\delta[n-k]$$

This signal will be transformed in some way to get the output $y[n]$. Since the LTI system applies a functional $F$ which is linear and time-invariant,

$$y[n] = F\left(\sum_{k=-\infty}^{\infty} x[k]\delta[n-k]\right) = \sum_{k=-\infty}^{\infty} x[k]F(\delta[n-k]) = \sum_{k=-\infty}^{\infty} x[k]h[n-k]$$

Notice this operation is the convolution between the input and the impulse response.
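As a quick numerical sanity check, here is a minimal sketch of this superposition argument in Python/NumPy (the particular $x$ and $h$ below are arbitrary examples, not from the text): building $y[n]$ by adding up shifted, scaled copies of $h$ gives the same result as a direct convolution.

```python
import numpy as np

# Arbitrary example input and impulse response (finite length for simplicity)
x = np.array([1.0, 2.0, 0.0, -1.0])   # x[n] for n = 0..3
h = np.array([0.5, 0.25, 0.125])      # h[n] for n = 0..2

# Superposition: y[n] = sum_k x[k] h[n-k], built by adding shifted, scaled copies of h
y_superposition = np.zeros(len(x) + len(h) - 1)
for k, xk in enumerate(x):
    y_superposition[k:k + len(h)] += xk * h

# Same output via a direct convolution
y_convolution = np.convolve(x, h)

print(np.allclose(y_superposition, y_convolution))   # True
```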

The Continuous Case

We can approximate the function by breaking it into intervals of length $\Delta$.

$$x(t) \approx \sum_{k=-\infty}^{\infty} x(k\Delta)\delta_{\Delta}(t-k\Delta)\Delta$$

$$x(t) = \lim_{\Delta \to 0} \sum_{k=-\infty}^{\infty} x(k\Delta)\delta_{\Delta}(t-k\Delta)\Delta$$

After applying the LTI system to it,

Notice this operation is the convolution between the input and the impulse response.
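The limiting argument can also be checked numerically: sample the signals on a grid of spacing $\Delta$ and replace the integral with a Riemann sum, which is just a discrete convolution scaled by $\Delta$. A rough sketch, with my own example signals (not from the text), assuming NumPy:

```python
import numpy as np

delta = 1e-3                       # grid spacing, playing the role of Delta
t = np.arange(0, 10, delta)

x = np.exp(-t)                     # example input x(t) = e^{-t} u(t)
h = np.where(t < 1.0, 1.0, 0.0)    # example impulse response: unit pulse of width 1

# y(t) = integral of x(tau) h(t - tau) dtau ~= sum_k x(k Delta) h(t - k Delta) Delta
y = np.convolve(x, h)[:len(t)] * delta

# Closed form for this pair: y(t) = 1 - e^{-t} for t < 1, and (e - 1) e^{-t} afterwards
y_exact = np.where(t < 1.0, 1.0 - np.exp(-t), (np.e - 1.0) * np.exp(-t))
print(np.max(np.abs(y - y_exact)))   # small error, on the order of delta
```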

Determining Properties of an LTI system

Because an LTI system is determined entirely by its impulse response, we can determine its properties from the impulse response.

Causality

Theorem 9

◻

Memory

Theorem 10

Stability

Theorem 11


Frequency Response and Transfer Functions

Definition 34

The frequency response of a system is the output when passed a purely oscillatory signal

If we pass a complex exponential into an LTI system, the output signal is the same signal but scaled. In other words, it is an eigenfunction of LTI systems.

The integral is a constant, and the original function is unchanged. The same analysis can be done in the discrete case.

We give these constant terms a special name called the transfer function.

Definition 35

Notice: The frequency response is the Fourier transform of the impulse response! This means the Fourier transform takes us from the impulse response of the system to the frequency response. There is no reason to limit ourselves to the Fourier domain, though.

Definition 36

The transfer function is merely the Laplace Transform of the impulse response. In many ways, this can be more useful than the frequency response.

Stability of transfer functions

Recall that an LTI system is stable if the impulse response is absolutely integrable. We can determine this from the transfer function.

Theorem 12

Theorem 13

Bode Plots

Because transfer functions, and hence the frequency response, can be quite complicated, we need an easy way to visualize how a system responds to different frequencies.

Definition 37

The log-log scale not only allows us to determine the behavior of the system over a large range of frequencies, but it also lets us easily figure out what the plot looks like because it converts the frequency response into approximately piecewise-linear components.

To see why, let's write our transfer function in polar form.

If we take the log of this, we get

Thus we can see that poles and zeros have little effect on the magnitude and phase at frequencies more than a decade away from them. Let's try constructing the Bode plot for an example transfer function.

Special LTI Systems

Linear Constant Coefficient Difference/Differential Equations

Definition 38

A linear constant coefficient difference (or differential) equation describes a system of one of the following forms

Theorem 14

Theorem 15

Systems of the form

are causal, FIR LTI systems and their impulse response is

Theorem 16

Given a constant coefficient difference/differential equation, the transfer function is

Proof. The Continuous Case

Taking the Laplace Transform,

The Discrete Case

Taking the Z Transform

◻

State Space Equations

When we have an LCCDE of the form

Important: This is valid in Discrete Time as well!

In general, state-space equations are useful because they allow us to find transfer functions of complex systems very easily.

  1. Label the output of delay (discrete) or differentiation (continuous) blocks as the state variables.

  2. Write the state equations using inputs and delays/derivatives. Express each as a weighted sum of states and inputs.

  3. Use the formula above to find the transfer function.

Second Order Systems

Most of the time, higher-order systems have only two dominant poles. Accordingly, they can be approximated by second order systems (i.e., systems with two poles). One way to write the transfer function of such a system is

Using common Laplace transform pairs, this corresponds to

There are several key features of this step response:

Using the step response, we can calculate some of these values.

$$y(t) = \int_{-\infty}^{\infty} x(\tau)h(t-\tau)\,d\tau$$

An LTI system is causal when $h[n] = 0, \forall n < 0$.

Proof. Assume $h[n] = 0, \forall n < 0$. Then

$$y[n] = (x*h)[n] = \sum_{k=-\infty}^{\infty} x[n-k]h[k] = \sum_{k=0}^{\infty} x[n-k]h[k]$$

Notice that this sum only involves $x[n-k]$ for $k \ge 0$, i.e. the present and past values of the input, so the output does not depend on future inputs.

An LTI system is memoryless if $h[n] = 0, \forall n \ne 0$.

Memoryless means that the system doesn't depend on past values, so its impulse response should just be a scaled version of $\delta$.

A system is stable if and only if $\sum_{n=-\infty}^{\infty} |h[n]|$ converges.

Proof. 1. Assume $|x[n]| \le B_x$; we show $|y[n]| \le D$ for some bound $D$.

$$|y[n]| = \left|\sum_{k=-\infty}^{\infty} x[n-k]h[k]\right| \le \sum_{k} |x[n-k]h[k]| = \sum_{k} |x[n-k]|\,|h[k]| \le B_x \sum_{k} |h[k]|$$

This means that as long as $\sum_{k} |h[k]|$ converges, $y[n]$ will be bounded.

2. Assume $\sum_{n} |h[n]|$ does not converge; we show that the system is unstable. Choose the bounded input $x[n] = \mathrm{sgn}\{h[-n]\}$, so

$$y[n] = \sum_{k} x[n-k]h[k]$$

$$y[0] = \sum_{k} x[-k]h[k] = \sum_{k} |h[k]|$$

And this is unbounded, so $y[n]$ is unbounded. ◻
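For intuition, here is a small numerical illustration (my own example, not from the notes): for $h[n] = a^n u[n]$ the absolute sum is a geometric series, so the system is BIBO stable exactly when $|a| < 1$.

```python
import numpy as np

def partial_absolute_sum(a, n_terms=200):
    """Partial sum of sum_{n>=0} |a|^n for the impulse response h[n] = a^n u[n]."""
    n = np.arange(n_terms)
    return np.sum(np.abs(a) ** n)

print(partial_absolute_sum(0.9))   # ~10 = 1/(1 - 0.9): converges, so the system is stable
print(partial_absolute_sum(1.1))   # keeps growing as n_terms increases: not absolutely summable
```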

$$y(t) = \int_{-\infty}^{\infty} e^{s(t-\tau)}h(\tau)\,d\tau = e^{st}\int_{-\infty}^{\infty} e^{-s\tau}h(\tau)\,d\tau$$

$$y[n] = \sum_{k=-\infty}^{\infty} z^{n-k}h[k] = z^n \sum_{k=-\infty}^{\infty} z^{-k}h[k]$$

The frequency response of an LTI system $H(j\omega)$ is how the system scales a pure tone of frequency $\omega$.

$$H(\omega) := \int_{-\infty}^{\infty} h(\tau)e^{-j\omega\tau}\,d\tau, \qquad H(\omega) := \sum_{k=-\infty}^{\infty} h[k]e^{-j\omega k}$$
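A small sketch (assuming SciPy; the 3-tap impulse response and test frequency are arbitrary choices) that evaluates $H(\omega)$ from this formula and checks the eigenfunction property: feeding $e^{j\omega n}$ into the system returns the same exponential scaled by $H(\omega)$, once the initial transient has passed.

```python
import numpy as np
from scipy import signal

h = np.array([0.5, 0.3, 0.2])      # arbitrary example FIR impulse response
omega0 = 0.4 * np.pi               # test frequency in radians/sample

# H(omega) = sum_k h[k] e^{-j omega k}, evaluated at omega0
_, H = signal.freqz(h, worN=[omega0])
H0 = H[0]

# Pass the pure tone x[n] = e^{j omega0 n} through the system
n = np.arange(200)
x = np.exp(1j * omega0 * n)
y = signal.lfilter(h, [1.0], x)

# Away from the start-up transient, y[n] = H(omega0) x[n]
print(np.allclose(y[10:], H0 * x[10:]))   # True
```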

The transfer function of an LTI system $H(s)$ is how the system responds to complex exponentials.

A causal continuous LTI system is stable iff all poles of $H(s)$ have negative real parts.

The proof of this theorem stems from some facts about the Laplace transform. If the system is causal, then the ROC is the half-plane to the right of the rightmost pole. When this ROC includes the imaginary axis, the Fourier transform is well defined, and this happens exactly when $h(t)$ is absolutely integrable. Applying the same logic to the discrete case,

A causal discrete LTI system is stable iff all poles of $H(z)$ lie within the unit circle.

This is because we know the ROC extends outward from the outermost pole for causal systems, and for the Fourier transform to exist (which requires $h[n]$ to be absolutely summable), the ROC must contain the unit circle.
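A minimal sketch of this check in code (coefficient vectors are arbitrary examples): for a causal discrete system $\sum_k a_k y[n-k] = \sum_k b_k x[n-k]$, the poles are the roots of the polynomial $a_0 z^N + \dots + a_N$, and stability requires them all to lie inside the unit circle.

```python
import numpy as np

def is_stable_discrete(a):
    """For a causal system sum_k a[k] y[n-k] = sum_k b[k] x[n-k], the poles are the
    roots of a[0] z^N + a[1] z^(N-1) + ... + a[N]; stable iff all lie inside the unit circle."""
    poles = np.roots(a)
    return bool(np.all(np.abs(poles) < 1.0))

print(is_stable_discrete([1.0, -0.5]))         # pole at z = 0.5        -> True
print(is_stable_discrete([1.0, -1.5, 0.56]))   # poles at z = 0.7, 0.8  -> True
print(is_stable_discrete([1.0, -2.0]))         # pole at z = 2          -> False
```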

A Bode Plot is a straight-line approximation plot of $|H(j\omega)|$ and $\angle H(j\omega)$ on a log-log scale.

$$H(j\omega) = K \frac{(j\omega)^{N_{z0}}}{(j\omega)^{N_{p0}}} \frac{\prod_{i=0}^{n}\left(1+\frac{j\omega}{\omega_{zi}}\right)}{\prod_{k=0}^{m}\left(1+\frac{j\omega}{\omega_{pk}}\right)} = K\,\omega^{N_{z0}-N_{p0}}\,e^{j\frac{\pi}{2}(N_{z0}-N_{p0})} \frac{\prod_{i=0}^{n} r_{zi}}{\prod_{k=0}^{m} r_{pk}}\, e^{j\left(\sum_{i=0}^{n} z_i - \sum_{k=0}^{m} p_k\right)}$$

Each $r$ is the magnitude of a factor $1 + \frac{j\omega}{\omega_n}$ where $\omega_n$ is either a zero or a pole, and the $z_i, p_k$ are the phases of each factor. By writing $H(j\omega)$ this way, it is clear that

$$|H(\omega)| = K\,\omega^{N_{z0}-N_{p0}} \frac{\prod_{i=0}^{n} r_{zi}}{\prod_{k=0}^{m} r_{pk}}$$

$$\log(|H(\omega)|) = \log(K) + (N_{z0}-N_{p0})\log(\omega) + \sum_{i=0}^{n}\log(r_{zi}) - \sum_{k=0}^{m}\log(r_{pk})$$

For Bode plots, we use the decibel scale, meaning we will multiply this value by 20 when constructing our plot. The exponential form of $H(j\omega)$ tells us that

$$\angle H(j\omega) = \frac{\pi}{2}(N_{z0}-N_{p0}) + \left(\sum_{i=0}^{n} z_i - \sum_{k=0}^{m} p_k\right)$$

Next, we should verify that we can approximate these equations as linear on a log-log scale. Take the example transfer function $H(j\omega) = \frac{1}{1+\frac{j\omega}{\omega_p}} = \frac{1}{r_p}e^{-j\theta_p}$.

$$\begin{array}{cccc} \text{if } \omega = \omega_p & H(j\omega) = \frac{1}{1+j} & r_p = \sqrt{2} & \theta_p = \frac{\pi}{4}\\ \text{if } \omega = 10\omega_p & H(j\omega) = \frac{1}{1+10j} & r_p \approx 10 & \theta_p \approx \frac{\pi}{2}\\ \text{if } \omega = 0.1\omega_p & H(j\omega) = \frac{1}{1+0.1j} & r_p \approx 1 & \theta_p \approx 0 \end{array}$$

For the magnitude plot, since there are no poles or zeros at $\omega = 0$, we draw a straight line until the pole kicks in at $\omega = \omega_p$, at which point the slope of the line becomes $-1$ (i.e. $-20$ dB/decade on the decibel scale). For the phase plot, we apply the same logic, except the pole kicks in at $\frac{\omega_p}{10}$ (to see why, look above to see how at $\omega = \omega_p$, the phase is already $-\frac{\pi}{4}$). We can apply this same logic to more complicated transfer functions too. Let's take

$$H(j\omega) = 10^9 \frac{1+\frac{j\omega}{10^9}}{(j\omega)\left(1+\frac{j\omega}{10^7}\right)}$$

Notice we have a zero at $10^9$, poles at $\omega = 0$ and $10^7$, and an overall gain of $10^9$. With this information, we can sketch the magnitude and phase plots.

The pole at 0 kicks in immediately, causing the decreasing magnitude and starting the phase at $-\frac{\pi}{2}$. The second pole at $10^7$ will kick in next, followed by the zero at $10^9$.
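The hand sketch can be compared against an exact plot. Here is a sketch using SciPy's bode routine for this example transfer function (treating $j\omega$ as the Laplace variable $s$):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal

# H(s) = 1e9 (1 + s/1e9) / (s (1 + s/1e7)) = (s + 1e9) / (1e-7 s^2 + s)
system = signal.TransferFunction([1.0, 1e9], [1e-7, 1.0, 0.0])

w = np.logspace(0, 12, 1000)               # frequencies spanning all the break points (rad/s)
w, mag, phase = signal.bode(system, w=w)   # mag in dB, phase in degrees

fig, (ax_mag, ax_phase) = plt.subplots(2, 1, sharex=True)
ax_mag.semilogx(w, mag)
ax_mag.set_ylabel("|H| (dB)")
ax_phase.semilogx(w, phase)
ax_phase.set_ylabel("phase (degrees)")
ax_phase.set_xlabel("omega (rad/s)")
plt.show()
```

The exact curves show the $-20$ dB/decade roll-off from the pole at the origin, the extra break at $10^7$, and the flattening caused by the zero at $10^9$, matching the straight-line reasoning above.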

$$\text{Discrete: } \sum_{k=0}^{N} a_k y[n-k] = \sum_{k=0}^{M} b_k x[n-k]$$

$$\text{Continuous: } \sum_{k=0}^{N} a_k \frac{d^k y}{dt^k} = \sum_{k=0}^{M} b_k \frac{d^k x}{dt^k}$$

Systems described by a linear constant coefficient difference equation are causal LTI iff $a_0 \ne 0$ and the system is initially at rest ($y[n] = 0$ for $n < n_0$, where $n_0$ is the first instant at which $x[n] \ne 0$).

Notice that if $a_1 = \dots = a_N = 0$, then the system has a finite impulse response because the output depends only on the current input and the previous $M$ inputs, so the response to an impulse eventually dies out. It turns out that all causal FIR systems can be written as a linear constant coefficient difference equation.

$$y[n] = \sum_{k=0}^{M} b_k x[n-k]$$

$$h[n] = \sum_{k=0}^{M} b_k \delta[n-k]$$
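A tiny check of this fact (arbitrary $b_k$, assuming SciPy): driving the feed-forward difference equation with a unit impulse returns exactly the $b_k$ coefficients followed by zeros.

```python
import numpy as np
from scipy import signal

b = [0.2, 0.5, 0.3]          # arbitrary b_k; a_0 = 1 and a_1..a_N = 0 (no feedback)
impulse = np.zeros(8)
impulse[0] = 1.0

# Impulse response of y[n] = sum_k b_k x[n-k]
h = signal.lfilter(b, [1.0], impulse)
print(h)   # [0.2 0.5 0.3 0.  0.  0.  0.  0. ] -- finite impulse response equal to the b_k
```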

$$H(s) = \frac{Y(s)}{X(s)} = \frac{\sum_{k=0}^{M} b_k s^k}{\sum_{k=0}^{N} a_k s^k} \text{ [Continuous Case]}$$

$$H(z) = \frac{Y(z)}{X(z)} = \frac{\sum_{k=0}^{M} b_k z^{-k}}{\sum_{k=0}^{N} a_k z^{-k}} \text{ [Discrete Case]}$$

$$\sum_{k=0}^{N} a_k \frac{d^k y}{dt^k} = \sum_{k=0}^{M} b_k \frac{d^k x}{dt^k}$$

$$\sum_{k=0}^{N} a_k s^k Y(s) = \sum_{k=0}^{M} b_k s^k X(s)$$

$$\frac{Y(s)}{X(s)} = \frac{\sum_{k=0}^{M} b_k s^k}{\sum_{k=0}^{N} a_k s^k}$$

$$y(t) = (h*x)(t) \leftrightarrow Y(s) = H(s)X(s)$$

$$\therefore H(s) = \frac{Y(s)}{X(s)} = \frac{\sum_{k=0}^{M} b_k s^k}{\sum_{k=0}^{N} a_k s^k}$$

$$\sum_{k=0}^{N} a_k y[n-k] = \sum_{k=0}^{M} b_k x[n-k]$$

$$\sum_{k=0}^{N} a_k z^{-k} Y(z) = \sum_{k=0}^{M} b_k z^{-k} X(z)$$

$$H(z) = \frac{\sum_{k=0}^{M} b_k z^{-k}}{\sum_{k=0}^{N} a_k z^{-k}}$$
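As a sketch of how this is used in practice (arbitrary coefficients, assuming SciPy): the same $a_k$ and $b_k$ vectors that define the difference equation define $H(z)$, and evaluating the ratio on the unit circle matches SciPy's freqz.

```python
import numpy as np
from scipy import signal

b = np.array([1.0, 0.5])     # arbitrary numerator coefficients b_k
a = np.array([1.0, -0.9])    # arbitrary denominator coefficients a_k (y[n] - 0.9 y[n-1] = ...)

w, H_scipy = signal.freqz(b, a, worN=512)

# Evaluate H(z) = (sum_k b_k z^{-k}) / (sum_k a_k z^{-k}) on the unit circle z = e^{jw}
z = np.exp(1j * w)
H_manual = (b[0] + b[1] * z**-1) / (a[0] + a[1] * z**-1)

print(np.allclose(H_scipy, H_manual))   # True
```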

$$\sum_{i=0}^{N} a_i \frac{d^i y}{dt^i} = b_0 x(t)$$

we can represent the system in state-space form, where we keep track of a state vector $\vec{z}(t) \in \mathbb{R}^N$.

$$\begin{aligned} \frac{d}{dt}\vec{z}(t) &= A\vec{z}(t) + Bx(t)\\ y(t) &= C\vec{z}(t) + Dx(t)\end{aligned}$$

The matrices $A, B, C, D$ describe the dynamics of the system. If we want to find the transfer function of the system, we can use the Laplace transform.

$$\begin{aligned} s\vec{Z}(s) &= A\vec{Z}(s) + BX(s) \implies \vec{Z}(s) = (sI-A)^{-1}BX(s)\\ \therefore Y(s) &= C(sI-A)^{-1}BX(s) + DX(s)\\ \therefore H(s) &= C(sI-A)^{-1}B + D\end{aligned}$$

Notice that the poles of the transfer function are simply the eigenvalues of $A$. This is because if $s$ is an eigenvalue of $A$, then $sI - A$ is singular, so $(sI-A)^{-1}$ does not exist and the transfer function blows up there, just as it does at a pole.
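A short sketch (with made-up $A, B, C, D$) verifying both facts numerically: ss2tf recovers $H(s) = C(sI-A)^{-1}B + D$, and the poles of that transfer function are the eigenvalues of $A$.

```python
import numpy as np
from scipy import signal

# Arbitrary example state-space model
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

num, den = signal.ss2tf(A, B, C, D)      # H(s) as a ratio of polynomials in s
poles = np.roots(den)
eigenvalues = np.linalg.eigvals(A)
print(np.allclose(np.sort(poles), np.sort(eigenvalues)))   # True: poles are the eigenvalues of A

# Evaluate H(s) = C (sI - A)^{-1} B + D at one point and compare against num/den
s0 = 1.0 + 2.0j
H_formula = (C @ np.linalg.inv(s0 * np.eye(2) - A) @ B + D).item()
H_ratio = np.polyval(num.ravel(), s0) / np.polyval(den, s0)
print(np.isclose(H_formula, H_ratio))    # True
```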

Write $y[n]$ in terms of $x[n]$ and the state variables.

$$H(s) = \frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}.$$

The parameter $\zeta$ is known as the damping ratio, and the parameter $\omega_n$ is known as the natural frequency. This parameterization is useful because it gives us insight into how the system will behave. First, notice that when $\zeta \in [0, 1)$, we get two complex poles. Suppose we want to find the impulse response of this system. Because the poles are complex, we can write them in the form

$$-\overbrace{\omega_n\cos\theta}^{\sigma} \pm j\overbrace{\omega_n\sin\theta}^{\omega_d}$$

where $\zeta = \cos\theta$. These new parameters allow us to rewrite our transfer function as

$$H(s) = \frac{\omega_n^2}{(s+\sigma)^2 + \omega_d^2}$$

$$h(t) = \frac{\omega_n^2}{\omega_d} e^{-\sigma t}\sin(\omega_d t)u(t)$$

This is a damped sinusoid. Notice how $\zeta$, which is related to $\sigma$, controls the exponential and therefore the damping factor. If we find the step response of the second order system, we get

$$y(t) = \left[1 - e^{-\sigma t}\left(\cos\omega_d t + \frac{\sigma}{\omega_d}\sin\omega_d t\right)\right]u(t).$$

  • Rise Time ($t_r$): time to go from 10% to 90% of the steady-state value.

  • Peak Overshoot ($M_p$): $\frac{\text{peak} - \text{steady}}{\text{steady}}$

  • Peaking Time ($t_p$): time to reach the peak overshoot.

  • Settling Time ($t_s$): time after which the step response stays within 1% of the steady-state value.

$$\begin{array}{ccc} t_p = \frac{\pi}{\omega_d} & M_p = y\left(\frac{\pi}{\omega_d}\right) - 1 = e^{-\sigma\frac{\pi}{\omega_d}} & t_s = \frac{\ln(0.01)}{-\sigma} \end{array}$$
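These formulas can be sanity-checked by simulating the step response; here is a sketch assuming SciPy, with $\zeta = 0.5$ and $\omega_n = 10$ rad/s chosen arbitrarily:

```python
import numpy as np
from scipy import signal

zeta, omega_n = 0.5, 10.0                 # arbitrary damping ratio and natural frequency
sigma = zeta * omega_n
omega_d = omega_n * np.sqrt(1.0 - zeta**2)

# H(s) = omega_n^2 / (s^2 + 2 zeta omega_n s + omega_n^2)
system = signal.TransferFunction([omega_n**2], [1.0, 2.0 * zeta * omega_n, omega_n**2])
t, y = signal.step(system, T=np.linspace(0.0, 3.0, 20001))

overshoot = y.max() - 1.0                               # M_p (steady-state value is 1)
peak_time = t[np.argmax(y)]                             # t_p
outside = np.where(np.abs(y - 1.0) > 0.01)[0]
settling_time = t[outside[-1] + 1]                      # first time after which |y - 1| stays within 0.01

print(overshoot,     np.exp(-sigma * np.pi / omega_d))  # ~ e^{-sigma pi / omega_d}
print(peak_time,     np.pi / omega_d)                   # ~ pi / omega_d
print(settling_time, np.log(0.01) / -sigma)             # ~ ln(0.01) / (-sigma)
```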