Linear Time-Invariant Systems

Definition 33

LTI systems are ones which are both linear and time-invariant.

Impulse Response of LTI systems

LTI systems are special because their output can be determined entirely by the impulse response $h[n]$.

The Discrete Case

We can think of the original signal $x[n]$ in terms of the impulse function.

$$x[n] = x[0]\delta[n] + x[1]\delta[n-1] + \dots = \sum_{k=-\infty}^{\infty} x[k]\delta[n-k]$$

This signal will be transformed in some way to get the output $y[n]$. Since the LTI system applies a functional $F$ that is linear and time-invariant,

$$y[n] = F\left(\sum_{k=-\infty}^{\infty} x[k]\delta[n-k]\right) = \sum_{k=-\infty}^{\infty} x[k]F(\delta[n-k]) = \sum_{k=-\infty}^{\infty} x[k]h[n-k]$$

Notice this operation is the convolution between the input and the impulse response.
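To make the convolution sum concrete, here is a minimal Python sketch (using NumPy, with made-up finite-length signals `x` and `h`) that builds $y[n]$ by adding up shifted, scaled copies of the impulse response and checks the result against `np.convolve`.

```python
import numpy as np

def lti_output(x, h):
    """Compute y[n] = sum_k x[k] h[n-k] directly from the convolution sum.

    x and h are finite-length arrays, assumed zero outside their support.
    """
    y = np.zeros(len(x) + len(h) - 1)
    for k, xk in enumerate(x):       # each input sample x[k] ...
        y[k:k + len(h)] += xk * h    # ... contributes a scaled, shifted copy of h
    return y

x = np.array([1.0, 2.0, 3.0])        # hypothetical input
h = np.array([1.0, 0.5, 0.25])       # hypothetical impulse response
assert np.allclose(lti_output(x, h), np.convolve(x, h))
```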

The Continuous Case

We can approximate the function by breaking it into intervals of length $\Delta$.

$$x(t) \approx \sum_{k=-\infty}^{\infty} x(k\Delta)\,\delta_{\Delta}(t-k\Delta)\,\Delta$$

$$x(t) = \lim_{\Delta \rightarrow 0}\sum_{k=-\infty}^{\infty} x(k\Delta)\,\delta_{\Delta}(t-k\Delta)\,\Delta$$

After applying the LTI system to it and taking the limit, the sum becomes an integral:

$$y(t) = \int_{-\infty}^{\infty} x(\tau)h(t-\tau)\,d\tau$$

Notice this operation is the convolution between the input and the impulse response.
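As a sanity check on the limiting argument, here is a small Python sketch that approximates the convolution integral by the Riemann sum above. The signals $x(t) = e^{-t}u(t)$ and $h(t) = e^{-2t}u(t)$ are arbitrary choices whose convolution has the closed form $e^{-t} - e^{-2t}$.

```python
import numpy as np

delta = 1e-3                            # step size (Delta); shrink it for a better approximation
t = np.arange(0, 10, delta)

x = np.exp(-t)                          # hypothetical input x(t) = e^{-t} u(t)
h = np.exp(-2 * t)                      # hypothetical impulse response h(t) = e^{-2t} u(t)

y = np.convolve(x, h)[:len(t)] * delta  # Riemann sum approximation of the convolution integral
y_exact = np.exp(-t) - np.exp(-2 * t)   # closed-form (x*h)(t) for these signals

print(np.max(np.abs(y - y_exact)))      # error shrinks as delta -> 0
```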

Determining Properties of an LTI system

Because an LTI system is determined entirely by its impulse response, we can determine its properties from the impulse response.

Causality

Theorem 9

An LTI system is causal when $h[n] = 0,\ \forall n < 0$

Proof. Assume $h[n] = 0,\ \forall n < 0$.

$$y[n] = (x*h)[n] = \sum_{k=-\infty}^{\infty} x[n-k]h[k] = \sum_{k=0}^{\infty} x[n-k]h[k]$$

Notice that $y[n]$ depends only on $x[n-k]$ for $k \ge 0$, that is, on input values at or before time $n$, so the system is causal.
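For instance, the unit delay and the unit advance illustrate the two cases:

$$h[n] = \delta[n-1] \implies y[n] = x[n-1] \ \text{(causal)}, \qquad h[n] = \delta[n+1] \implies y[n] = x[n+1] \ \text{(not causal, since } h[-1] \ne 0\text{)}$$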

Memory

Theorem 10

An LTI system is memoryless if $h[n] = 0,\ \forall n \ne 0$

Memoryless means that the output depends only on the current input value (not on past or future values), so the impulse response should just be a scaled version of $\delta$.

Stability

Theorem 11

An LTI system is stable iff $\sum_{n=-\infty}^{\infty}|h[n]|$ converges.

Proof. 1. Assume the input is bounded, $|x[n]| \le B_x$; we show $|y[n]| \le D$ where $D$ is some bound.

$$|y[n]| = \left|\sum_{k=-\infty}^{\infty} x[n-k]h[k]\right| \le \sum_{k} |x[n-k]h[k]| = \sum_{k} |x[n-k]|\,|h[k]| \le B_x\sum_{k} |h[k]|$$

This means that as long as $\sum_{k} |h[k]|$ converges, $y[n]$ will be bounded.

2. Assume $\sum_{n} |h[n]|$ does not converge. We show that the system is unstable. Choose the bounded input $x[n] = \mathrm{sgn}\{h[-n]\}$.

$$y[n] = \sum_{k} x[n-k]h[k]$$

so

$$y[0] = \sum_{k} x[-k]h[k] = \sum_{k} |h[k]|$$

And this is unbounded, so $y[n]$ is unbounded. ◻
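For instance, consider the impulse response $h[n] = a^n u[n]$:

$$\sum_{n=-\infty}^{\infty}|h[n]| = \sum_{n=0}^{\infty}|a|^n = \frac{1}{1-|a|} \ \text{ if } |a| < 1,$$

so the system is stable when $|a| < 1$ and unstable when $|a| \ge 1$, since the sum diverges.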

Frequency Response and Transfer Functions

Definition 34

The frequency response of a system is the output when passed a purely oscillatory signal

If we pass a complex exponential into an LTI system, the output is the same signal, only scaled. In other words, complex exponentials are eigenfunctions of LTI systems.

$$y(t) = \int_{-\infty}^{\infty} e^{s(t-\tau)}h(\tau)\,d\tau = e^{st}\int_{-\infty}^{\infty} e^{-s\tau}h(\tau)\,d\tau$$

The integral is a constant (it does not depend on $t$), so the output is the original exponential scaled by that constant. The same analysis can be done in the discrete case.

$$y[n] = \sum_{k=-\infty}^{\infty} z^{n-k}h[k] = z^n \sum_{k=-\infty}^{\infty} z^{-k}h[k]$$

We give these constant terms a special name called the transfer function.

Definition 35

The frequency response of an LTI system $H(j\omega)$ is how the system scales a pure tone of frequency $\omega$.

$$H(\omega) := \int_{-\infty}^{\infty} h(\tau)e^{-j\omega\tau}\,d\tau \ \text{(continuous)}, \qquad H(\omega) := \sum_{k=-\infty}^{\infty} h[k]e^{-j\omega k} \ \text{(discrete)}$$

Notice: the frequency response is the Fourier Transform of the impulse response! In other words, the Fourier Transform takes us from the impulse response of a system to its frequency response. There is no reason to limit ourselves to the Fourier domain, though.

Definition 36

The transfer function of an LTI system $H(s)$ is how the system responds to complex exponentials.

The transfer function is merely the Laplace Transform of the impulse response. In many ways, this can be more useful than the frequency response.
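For instance, for the impulse response $h(t) = e^{-at}u(t)$ with $a > 0$,

$$H(s) = \int_{0}^{\infty}e^{-a\tau}e^{-s\tau}\,d\tau = \frac{1}{s+a}, \qquad H(j\omega) = \frac{1}{j\omega + a}.$$

The single pole sits at $s = -a$, and $|H(j\omega)| \to 0$ as $\omega \to \infty$, so this is a low-pass system.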

Stability of transfer functions

Recall that an LTI system is stable if the impulse response is absolutely integrable. We can determine this from the transfer function.

Theorem 12

A causal continuous LTI system is stable iff all poles of $H(s)$ have negative real parts.

The proof of this theorem stems from some facts about the Laplace Transform. If the system is causal, then the ROC is the half-plane to the right of the rightmost pole. When this ROC includes the imaginary axis, the Fourier Transform is well defined, and this only happens when $h(t)$ is absolutely integrable. Applying the same logic to the discrete case,

Theorem 13

A causal discrete LTI system is stable iff all poles of $H(z)$ lie within the unit circle.

This is because for causal systems the ROC extends outward from the outermost pole, and for the Fourier Transform to exist (equivalently, for $h[n]$ to be absolutely summable), the ROC must contain the unit circle.
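For instance, a causal system with $H(z) = \frac{1}{1 - az^{-1}}$ has a single pole at $z = a$ and impulse response $h[n] = a^n u[n]$; consistent with the earlier stability example, it is stable exactly when $|a| < 1$, i.e., when the pole lies inside the unit circle.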

Bode Plots

Because transfer functions, and hence frequency responses, can be quite complex, we need an easy way to visualize how a system responds to different frequencies.

Definition 37

A Bode Plot is a straight-line approximation plot of $|H(j\omega)|$ and $\angle H(j\omega)$ on a log-log scale.

The log-log scale not only lets us capture the behavior of a system over a large range of frequencies, but also makes the plot easy to sketch because it converts the frequency response into piecewise-linear components.

To see why, let's write our transfer function in polar form.

$$H(j\omega) = K \frac{(j\omega)^{N_{z0}}}{(j\omega)^{N_{p0}}}\frac{\prod_{i=0}^{n}\left(1+\frac{j\omega}{\omega_{zi}}\right)}{\prod_{k=0}^{m}\left(1+\frac{j\omega}{\omega_{pk}}\right)} = K\,\omega^{N_{z0}-N_{p0}}\,e^{j\frac{\pi}{2}(N_{z0}-N_{p0})} \frac{\prod_{i=0}^{n} r_{zi}}{\prod_{k=0}^{m} r_{pk}}\, e^{j\left(\sum_{i=0}^{n} z_i - \sum_{k=0}^{m} p_k\right)}$$

Each $r$ is the magnitude of a factor $1 + \frac{j\omega}{\omega_n}$, where $\omega_n$ is either a zero or a pole, and the $z_i, p_k$ are the phases of each factor. By writing $H(j\omega)$ this way, it is clear that

$$|H(\omega)| = K\,\omega^{N_{z0}-N_{p0}} \frac{\prod_{i=0}^{n} r_{zi}}{\prod_{k=0}^{m} r_{pk}}$$

If we take the log of this, we get

$$\log(|H(\omega)|) = \log(K) + (N_{z0}-N_{p0})\log(\omega) + \sum_{i=0}^{n}\log(r_{zi}) - \sum_{k=0}^{m}\log(r_{pk})$$

For Bode plots, we use the decibel scale, meaning we will multiply this value by 20 when constructing our plot. The exponential form of $H(j\omega)$ tells us that

$$\angle H(j\omega) = \frac{\pi}{2}(N_{z0}-N_{p0}) + \left(\sum_{i=0}^{n} z_i - \sum_{k=0}^{m} p_k\right)$$

Next, we should verify that we can approximate these equations as piecewise linear on a log-log scale. Take the example transfer function $H(j\omega) = \frac{1}{1+\frac{j\omega}{\omega_p}} = \frac{1}{r_p}e^{-j\theta_p}$.

$$\begin{array}{cccc} \text{if } \omega = \omega_p & H(j\omega) = \frac{1}{1+j} & r_p = \sqrt{2} & \theta_p = \frac{\pi}{4}\\ \text{if } \omega = 10\omega_p & H(j\omega) = \frac{1}{1+10j} & r_p \approx 10 & \theta_p \approx \frac{\pi}{2}\\ \text{if } \omega = 0.1\omega_p & H(j\omega) = \frac{1}{1+0.1j} & r_p \approx 1 & \theta_p \approx 0 \end{array}$$

Thus we can see that, decades away from a pole or zero, that factor contributes either essentially nothing (below its corner frequency) or a simple straight-line trend (above it). Let's try constructing the Bode plot for this transfer function.

For the magnitude plot, since there are no poles or zeros at $\omega = 0$, we draw a flat line until the pole kicks in at $\omega = \omega_p$, at which point the slope of the line becomes $-1$ (i.e., $-20$ dB per decade). For the phase plot, we apply the same logic, except the pole kicks in at $\frac{\omega_p}{10}$ (to see why, note from the table above that at $\omega = \omega_p$ the phase is already $-\frac{\pi}{4}$). We can apply this same logic to more complicated transfer functions too. Let's take

$$H(j\omega) = 10^9 \frac{1+\frac{j\omega}{10^9}}{(j\omega)\left(1+\frac{j\omega}{10^7}\right)}$$

Notice we have a zero at $10^9$, a pole at $10^7$, and a pole at $\omega = 0$. With this information, we can sketch the plots as follows.

The pole at 0 kicks in immediately, causing the decreasing magnitude and starting the phase at $-\frac{\pi}{2}$. The second pole at $10^7$ will kick in next, followed by the zero at $10^9$.
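To compare the straight-line sketch with the exact curves, here is a minimal Python sketch using `scipy.signal.bode`. The transfer function above is rewritten as a ratio of polynomials in $s$, and the frequency grid is an arbitrary choice.

```python
import numpy as np
from scipy import signal

# H(s) = 1e9 (1 + s/1e9) / (s (1 + s/1e7)); clearing denominators gives
# numerator 1e7 s + 1e16 and denominator s^2 + 1e7 s.
num = [1e7, 1e16]
den = [1, 1e7, 0]
system = signal.lti(num, den)

w = np.logspace(0, 11, 500)              # frequencies from 1 to 1e11 rad/s
w, mag_db, phase_deg = signal.bode(system, w=w)

# The exact magnitude should hug the straight-line approximation:
# -20 dB/decade from the pole at 0, -40 dB/decade past 1e7, -20 dB/decade past 1e9.
print(mag_db[0], phase_deg[0])           # near DC: large gain, phase near -90 degrees
```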

Special LTI Systems

Linear Constant Coefficient Difference/Differential Equations

Definition 38

A linear constant coefficient difference (or differential) equation describes a system of one of the following forms

$$\text{Discrete: } \sum_{k=0}^{N} a_k y[n-k] = \sum_{k=0}^{M} b_k x[n-k]$$

$$\text{Continuous: } \sum_{k=0}^{N} a_k\frac{d^ky}{dt^k} = \sum_{k=0}^{M} b_k\frac{d^kx}{dt^k}$$

Theorem 14

Systems described by a linear constant coefficient difference equation are causal LTI iff $a_0 \ne 0$ and the system is initially at rest ($y[n] = 0$ for $n < n_0$, where $n_0$ is the first instant at which $x[n] \ne 0$)

Notice that if $a_1 = \dots = a_N = 0$, then the system will have a finite impulse response because the response to an impulse eventually dies out. It turns out that all causal FIR systems can be written as a linear constant coefficient difference equation.

Theorem 15

Systems of the form

$$y[n] = \sum_{k=0}^{M} b_k x[n-k]$$

are causal, FIR LTI systems and their impulse response is

$$h[n] = \sum_{k=0}^{M} b_k \delta[n-k]$$
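For instance, the three-point moving average

$$y[n] = \tfrac{1}{3}x[n] + \tfrac{1}{3}x[n-1] + \tfrac{1}{3}x[n-2]$$

is a causal FIR LTI system with impulse response $h[n] = \tfrac{1}{3}(\delta[n] + \delta[n-1] + \delta[n-2])$, which is nonzero only for $n \in \{0, 1, 2\}$.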

Theorem 16

Given a constant coefficient difference/differential equation, the transfer function is

$$H(s) = \frac{Y(s)}{X(s)} = \frac{\sum_{k=0}^{M} b_k s^k}{\sum_{k=0}^{N} a_k s^k}\ \text{[Continuous Case]}$$

$$H(z) = \frac{Y(z)}{X(z)} = \frac{\sum_{k=0}^{M} b_k z^{-k}}{\sum_{k=0}^{N} a_k z^{-k}}\ \text{[Discrete Case]}$$

Proof. The Continuous Case

$$\sum_{k=0}^{N} a_k\frac{d^ky}{dt^k} = \sum_{k=0}^{M} b_k\frac{d^kx}{dt^k}$$

Taking the Laplace Transform,

$$\sum_{k=0}^{N} a_k s^k Y(s) = \sum_{k=0}^{M} b_k s^k X(s)$$

$$\frac{Y(s)}{X(s)} = \frac{\sum_{k=0}^{M} b_k s^k}{\sum_{k=0}^{N} a_k s^k}$$

$$y(t) = (h*x)(t) \leftrightarrow Y(s) = H(s)X(s)$$

$$\therefore H(s) = \frac{Y(s)}{X(s)} = \frac{\sum_{k=0}^{M} b_k s^k}{\sum_{k=0}^{N} a_k s^k}$$

The Discrete Case

$$\sum_{k=0}^{N} a_k y[n-k] = \sum_{k=0}^{M} b_k x[n-k]$$

Taking the Z Transform

$$\sum_{k=0}^{N} a_k z^{-k}Y(z) = \sum_{k=0}^{M} b_k z^{-k}X(z)$$

$$H(z) = \frac{\sum_{k=0}^{M} b_k z^{-k}}{\sum_{k=0}^{N} a_k z^{-k}}$$
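For instance, the difference equation $y[n] - \tfrac{1}{2}y[n-1] = x[n]$ has $a_0 = 1$, $a_1 = -\tfrac{1}{2}$, and $b_0 = 1$, so

$$H(z) = \frac{1}{1 - \tfrac{1}{2}z^{-1}},$$

which has a single pole at $z = \tfrac{1}{2}$. The pole lies inside the unit circle, so the causal system is stable.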

State Space Equations

When we have an LCCDE of the form

$$\sum_{i=0}^{N} a_i\frac{d^iy}{dt^i} = b_0 x(t)$$

we can represent the system in state space form, where we keep track of a state vector $\vec{z}(t)\in\mathbb{R}^N$.

$$\begin{aligned} \frac{d}{dt}\vec{z}(t) &= A\vec{z}(t)+Bx(t)\\ y(t) &= C\vec{z}(t)+Dx(t)\end{aligned}$$

The matrices $A, B, C, D$ describe the dynamics of the system. If we want to find the transfer function of the system, we can use the Laplace transform.

$$\begin{aligned} s\vec{Z}(s) &= A\vec{Z}(s)+BX(s) \implies \vec{Z}(s) = (sI-A)^{-1}BX(s)\\ \therefore Y(s) &= C(sI-A)^{-1}BX(s)+DX(s)\\ \therefore H(s) &= C(sI-A)^{-1}B+D\end{aligned}$$

Notice that the poles of the transfer function are simply the eigenvalues of $A$. This is because if $s$ is an eigenvalue of $A$, then $(sI-A)$ is not invertible, so the transfer function is undefined there, just as it is at a pole.

Important: This is valid in Discrete Time as well!
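As a quick numerical check that the poles of $H(s) = C(sI-A)^{-1}B + D$ are the eigenvalues of $A$, here is a minimal Python sketch with an arbitrary example system, using `scipy.signal.ss2tf` to form the transfer function from the state-space matrices.

```python
import numpy as np
from scipy import signal

# Arbitrary single-input, single-output example system
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

num, den = signal.ss2tf(A, B, C, D)   # H(s) = num(s) / den(s)
poles = np.roots(den)                 # roots of the denominator polynomial
eigs = np.linalg.eigvals(A)           # eigenvalues of A

print(np.sort(poles), np.sort(eigs))  # both give {-1, -2}
```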

In general, state-space equations are useful because they allow us to find transfer functions of complex systems very easily.

  1. Label the output of delay (discrete) or differentiation (continuous) blocks as the state variables.

  2. Write the state equations using inputs and delays/derivatives. Express each as a weighted sum of states and inputs.

  3. Write $y[n]$ in terms of $x[n]$ and the state variables

  4. Use the formula above to find the transfer function.

Second Order Systems

Most of the time, higher order systems only have 2 dominant poles. Accordingly, they can be approximated by second order systems (i.e., systems with two poles). One way to write the transfer function of such a system is

$$H(s)=\frac{\omega_n^2}{s^2+2\zeta\omega_n s+\omega_n^2}.$$

The parameter $\zeta$ is known as the damping ratio, and the parameter $\omega_n$ is known as the natural frequency. This parameterization is useful because it gives us insight into how the system will behave. First, notice that when $\zeta \in [0, 1)$, we will get two complex poles. Suppose we want to find the impulse response of this system. Because the poles are complex, we can write them in the form

$$-\overbrace{\omega_n\cos\theta}^{\sigma} \pm j\overbrace{\omega_n\sin\theta}^{\omega_d}$$

where $\zeta=\cos\theta$. These new parameters allow us to rewrite our transfer function as

$$H(s) = \frac{\omega_n^2}{(s+\sigma)^2+\omega_d^2}$$

Using common Laplace transform pairs, this corresponds to

$$h(t) = \frac{\omega_n^2}{\omega_d}e^{-\sigma t}\sin(\omega_d t)u(t)$$

This is a damped sinusoid. Notice how $\zeta$, which is related to $\sigma$, controls the exponential and therefore the damping factor. If we find the step response of the second order system, we will get

$$y(t) = \left[1-e^{-\sigma t}\left(\cos\omega_d t+\frac{\sigma}{\omega_d}\sin\omega_d t\right)\right]u(t).$$

There are several key features of this step response:

  • Rise Time ($t_r$): Time to go from 10% to 90% of the steady state value

  • Peak Overshoot ($M_p$): $\frac{\text{peak} - \text{steady}}{\text{steady}}$

  • Peaking Time ($t_p$): Time to peak overshoot

  • Settling Time ($t_s$): Time after which the step response stays within 1% of steady state.

Using the step response, we can calculate some of these values.

$$\begin{array}{ccc} t_p=\frac{\pi}{\omega_d} & M_p=y(t_p)-1=e^{-\sigma\frac{\pi}{\omega_d}} & t_s = \frac{\ln(0.01)}{-\sigma} \end{array}$$
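For instance, with $\zeta = 0.5$ and $\omega_n = 100\ \mathrm{rad/s}$, we have $\sigma = \zeta\omega_n = 50$ and $\omega_d = \omega_n\sqrt{1-\zeta^2} \approx 86.6\ \mathrm{rad/s}$, so

$$t_p = \frac{\pi}{\omega_d} \approx 36\ \mathrm{ms}, \qquad M_p = e^{-\sigma\frac{\pi}{\omega_d}} \approx 0.16, \qquad t_s = \frac{\ln(0.01)}{-\sigma} \approx 92\ \mathrm{ms}.$$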
