Sampling


Continuous Time

Sampling a continuous-time signal means representing it as a sequence of points measured at regular intervals $T$. Notice that if we were to take a signal $x(t)$ and multiply it by an impulse train, we would get a series of impulses equal to $x(t)$ at the sampling points and $0$ everywhere else. We can call this signal $x_p(t)$.

$$p(t) = \sum_{k=-\infty}^{\infty}{\delta(t-kT)}$$

$$x_p(t) = x(t)p(t) = \sum_{k=-\infty}^{\infty}{x(t)\delta(t-kT)}$$

In the Fourier Domain,

$$\begin{aligned}
X_p(\omega) &= \frac{1}{2\pi}X(\omega)*P(\omega)\\
P(\omega) &= \frac{2\pi}{T}\sum_{k=-\infty}^{\infty}{\delta(\omega-k\omega_s)}\\
\therefore X_p(\omega) &= \frac{1}{2\pi}\int_{-\infty}^{\infty}{X(\theta)P(\omega-\theta)\,d\theta} = \frac{1}{T}\sum_{k=-\infty}^{\infty}{X(\omega-k\omega_s)}
\end{aligned}$$

What this tells us is that the Fourier Transform of our sampled signal is a series of copies of $X(\omega)$, each centered at $k\omega_s$, where $\omega_s = \frac{2\pi}{T}$ is the sampling frequency. For example, suppose our original signal is band-limited by $\omega_M$, meaning $X(\omega) = 0$ for $|\omega| > \omega_M$.

There are two major cases: $\omega_s > 2\omega_M$ and $\omega_s < 2\omega_M$.

Case One: $\omega_s > 2\omega_M$

When $\omega_s > 2\omega_M$, the shifted copies of the original $X(\omega)$ do not overlap with each other or with the original copy. If we wanted to recover the original signal, we could simply apply a low pass filter to isolate the unshifted copy of $X(\omega)$ and then take the inverse Fourier Transform.

Case Two: $\omega_s < 2\omega_M$

Notice how in this case, the shifted copies overlap with the original $X(\omega)$. This means that in our sampled signal, the higher frequency information bleeds into the lower frequency information. This phenomenon is known as aliasing. When aliasing occurs, we cannot simply apply a low pass filter to isolate the unshifted copy of $X(\omega)$.
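
As a concrete illustration (a minimal numerical sketch; the frequencies are chosen purely for demonstration), a sinusoid sampled below its Nyquist rate produces exactly the same samples as a lower-frequency sinusoid:

```python
import numpy as np

# Aliasing demo: a 7 Hz cosine sampled at fs = 10 Hz (below its Nyquist
# rate of 14 Hz) yields the same samples as a 3 Hz cosine, because
# 7 Hz folds down to |7 - 10| = 3 Hz.
fs = 10.0               # sampling rate in Hz
t = np.arange(50) / fs  # sampling instants t = nT with T = 1/fs

x_high = np.cos(2 * np.pi * 7 * t)  # violates the Nyquist condition
x_low = np.cos(2 * np.pi * 3 * t)   # satisfies the Nyquist condition

print(np.allclose(x_high, x_low))   # True: the two are aliases
```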

When $\omega_s = 2\omega_M$, our ability to reconstruct the original signal depends on the shape of its Fourier Transform. As long as the values $X_p(k\omega_M)$ agree with $X(\omega_M)$ and $X(-\omega_M)$ where the copies meet, we can still apply an LPF to isolate the original $X(\omega)$ and take its inverse Fourier Transform.

Remember that an ideal low pass filter is a rectangle in the frequency domain and a sinc in the time domain. Thus if we let

$$X_r(\omega) = X_p(\omega)\cdot \begin{cases} T & \text{if } |\omega| < \frac{\omega_s}{2}\\ 0 & \text{else} \end{cases}$$

then our reconstructed signal will be

$$x_r(t) = x_p(t)*\text{sinc}\left(\frac{t}{T}\right) = \sum_{n=-\infty}^{\infty}{x(nT)\,\text{sinc}\left(\frac{t-nT}{T}\right)}.$$

This is why we call reconstructing a signal from its samples "sinc interpolation." This leads us to formulate the Nyquist Theorem.

Theorem 18 (CT Nyquist Theorem)

Suppose a continuous-time signal $x$ is band-limited by $\omega_M$ and we sample it at a rate $\omega_s > 2\omega_M$. Then the signal $x_r(t)$ reconstructed by sinc interpolation is exactly $x(t)$.
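
To make the theorem concrete, here is a minimal numerical sketch of sinc interpolation (all parameter values are illustrative): we sample a 1 Hz cosine above its Nyquist rate and reconstruct it on a dense time grid using the formula above, truncating the infinite sum to a finite window.

```python
import numpy as np

# Sinc interpolation sketch: reconstruct a band-limited signal (a 1 Hz
# cosine) from samples taken at fs = 10 Hz > 2 * 1 Hz.
T = 0.1                                # sampling period
n = np.arange(-100, 101)               # finite window of sample indices
x_n = np.cos(2 * np.pi * 1.0 * n * T)  # samples x(nT)

t = np.linspace(-1, 1, 500)            # dense reconstruction grid
# np.sinc(u) = sin(pi*u)/(pi*u), the normalized sinc used in the formula
x_r = np.sum(x_n[:, None] * np.sinc((t[None, :] - n[:, None] * T) / T), axis=0)

err = np.max(np.abs(x_r - np.cos(2 * np.pi * 1.0 * t)))
print(err)  # small; nonzero only because the infinite sum was truncated
```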

Discrete Time

Sampling in discrete time is very much the same as sampling in continuous time. Using a sampling period of $N$, we construct a new signal by taking an impulse train and multiplying it elementwise with the original signal.

$$\begin{aligned}
p[n] &= \sum_{k=-\infty}^{\infty}{\delta[n-kN]}\\
x_p[n] &= x[n]p[n] = \sum_{k=-\infty}^{\infty}{x[kN]\delta[n-kN]}\\
X_p(\omega) &= \frac{1}{N}\sum_{k=0}^{N-1}{X(\omega-k\omega_s)}
\end{aligned}$$

Our indices only go from $0$ to $N-1$ in the Fourier Domain because the DTFT is $2\pi$-periodic (here $\omega_s = \frac{2\pi}{N}$), so we can only shift a particular number of times before we start to get repeated copies. This is the impulse train sampled signal; it has $0$'s at the unsampled locations. If we want to, we can simply remove those zeros and get a downsampled signal

$$x_d[n] = x[nN]$$

Like in continuous time, the reconstructed signal is recovered via sinc interpolation.

$$x_r[n] = \sum_{k=-\infty}^{\infty}{x[kN]\,\text{sinc}\left(\frac{n-kN}{N}\right)}$$

The Nyquist Theorem in DT will tell us when this works.

Theorem 19 (DT Nyquist Theorem)

Suppose a discrete-time signal $x$ is band-limited by $\omega_M < \frac{\pi}{N}$ and we sample it with period $N$, so that $\omega_s = \frac{2\pi}{N} > 2\omega_M$. Then the signal $x_r[n]$ reconstructed by sinc interpolation is exactly $x[n]$.

Thus as long as the Nyquist Theorem holds, we can take a downsampled signal and upsample it (i.e., reconstruct the missing pieces) by expanding it by a factor of $N$, inserting $0$'s as padding between the samples, and then applying sinc interpolation, as sketched below.
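
Here is a minimal sketch of that round trip (the signal and parameters are illustrative): a DT signal band-limited below $\frac{\pi}{N}$ is downsampled by $N$ and then rebuilt on the full index grid by sinc interpolation.

```python
import numpy as np

# Downsample/upsample round trip under the DT Nyquist theorem.
N = 4
n = np.arange(-200, 201)  # full DT index grid
k = np.arange(-50, 51)    # downsampled indices (k*N spans n's range)

# np.sinc(n/8) is band-limited to |omega| < pi/8, which is below pi/N = pi/4.
x = np.sinc(n / 8.0)
x_d = np.sinc(k * N / 8.0)  # downsampled signal x_d[k] = x[kN]

# Upsample by sinc interpolation: x_r[n] = sum_k x[kN] * sinc((n - kN)/N)
x_r = np.sum(x_d[:, None] * np.sinc((n[None, :] - k[:, None] * N) / N), axis=0)

print(np.max(np.abs(x_r - x)))  # small; error comes only from truncation
```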

Sampling as a System

Notice that we have two ways of representing our sampled signal. We can either write it as a discrete time signal $x_d[n] = x(nT)$, or we can write it as an impulse train $x_p(t)=\sum_{n=-\infty}^{\infty}{x(nT)\delta(t-nT)}$. Based on their Fourier Transforms,

$$\begin{aligned}
X_d(\Omega) &= \sum_{n=-\infty}^{\infty}{x(nT)e^{-j\Omega n}}\\
X_p(\omega) &= \sum_{n=-\infty}^{\infty}{x(nT)e^{-j\omega nT}}
\end{aligned}$$

Thus if we let $\Omega=\omega T$, we see that these two representations of a signal have the same Fourier Transform and thus contain the same information. This means that for band-limited continuous signals, we can convert them to discrete time via sampling, use a computer to apply an LTI system, and convert the result back to a CT output.
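
As a quick sanity check of the correspondence $\Omega = \omega T$, consider sampling a pure sinusoid:

$$x(t) = \cos(\omega_1 t) \quad\Longrightarrow\quad x_d[n] = x(nT) = \cos(\omega_1 T n) = \cos(\Omega_1 n), \qquad \Omega_1 = \omega_1 T.$$

The CT frequency $\omega_1$ lands at the DT frequency $\Omega_1 = \omega_1 T$, and the edge of the DT band $\Omega = \pi$ corresponds to $\omega = \frac{\pi}{T} = \frac{\omega_s}{2}$, exactly the Nyquist limit.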

We must be careful though: as long as the DT system we apply is LTI, the overall CT system will be linear, but it will not necessarily be time-invariant, because sampling inherently depends on the signal's timing.

$$\begin{aligned}
Y_d(\Omega) &= H_d(\Omega)X_d(\Omega) = H_d(\Omega)X_p\left(\frac{\Omega}{T}\right)\\
Y_p(\omega) &= Y_d(\omega T) = H_d(\omega T)X_p(\omega)\\
Y(\omega) &= \begin{cases} T\,Y_p(\omega) & |\omega| < \frac{\omega_s}{2}\\ 0 & |\omega| \ge \frac{\omega_s}{2} \end{cases} = \begin{cases} T\,H_d(\omega T)X_p(\omega) & |\omega| < \frac{\omega_s}{2}\\ 0 & |\omega| \ge \frac{\omega_s}{2} \end{cases}
\end{aligned}$$

Assuming that the Nyquist theorem holds,

$$\begin{aligned}
X_p(\omega) &= \frac{1}{T}X(\omega) \quad \text{for } |\omega| < \frac{\omega_s}{2}\\
\therefore Y(\omega) &= \begin{cases} H_d(\omega T)X(\omega) & |\omega| < \frac{\omega_s}{2}\\ 0 & |\omega| \ge \frac{\omega_s}{2} \end{cases}\\
\therefore H_{\text{system}}(\omega) &= \begin{cases} H_d(\omega T) & |\omega| < \frac{\omega_s}{2}\\ 0 & |\omega| \ge \frac{\omega_s}{2} \end{cases}
\end{aligned}$$

This shows us that as long as the Nyquist theorem holds, we can process continuous signals with a discrete time LTI system and the overall system will still be LTI.
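
A minimal end-to-end sketch of this pipeline (all names and parameters are illustrative): take $H_d(\Omega) = e^{-j\Omega}$, a one-sample delay, which by $H_{\text{system}}(\omega) = H_d(\omega T) = e^{-j\omega T}$ should act on band-limited inputs like a continuous-time delay of $T$ seconds.

```python
import numpy as np

# CT processing with a DT LTI system: sample, delay by one sample,
# reconstruct, and compare against the ideal CT delay of T seconds.
T = 0.05                               # fs = 20 Hz
n = np.arange(-200, 201)
x_d = np.cos(2 * np.pi * 3.0 * n * T)  # samples of a 3 Hz cosine (< fs/2)

y_d = np.concatenate(([0.0], x_d[:-1]))  # DT system: y_d[n] = x_d[n-1]

# Reconstruct y(t) by sinc interpolation on a dense grid
t = np.linspace(-2, 2, 400)
y = np.sum(y_d[:, None] * np.sinc((t[None, :] - n[:, None] * T) / T), axis=0)

# Should be close to x(t - T); residual comes from window truncation
print(np.max(np.abs(y - np.cos(2 * np.pi * 3.0 * (t - T)))))
```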