Sampling


Ideal Sampling

In order to work with continuous signals on a computer, we need to sample them, i.e., record the signal's value at particular points in time. During uniform sampling, we take samples at a fixed sampling period $T$, so $x[n] = x_c(nT)$ (where $x_c$ is our continuous signal). This is done by passing the signal through an Analog-to-Digital Converter (ADC). From there we can do discrete-time processing and reconstruct our signal by passing it through a Digital-to-Analog Converter (DAC) with reconstruction period $T_r$.

We mathematically model sampling as multiplication by an impulse train. Notice that if we were to take a signal $x(t)$ and multiply it by an impulse train, then we would get a series of impulses equal to $x(t)$ at the sampling points and $0$ everywhere else. We can call this signal $x_p(t)$.

$$p(t) = \sum_{k=-\infty}^{\infty}\delta(t-kT)$$

$$x_p(t) = x(t)p(t) = \sum_{k=-\infty}^{\infty}x(t)\delta(t-kT)$$

In the Fourier Domain,

$$\begin{aligned} X_p(j\Omega) &= \frac{1}{2\pi}X(j\Omega)*P(j\Omega)\\ P(j\Omega) &= \frac{2\pi}{T}\sum_{k=-\infty}^{\infty}\delta(\Omega-k\Omega_s)\\ \therefore X_p(j\Omega) &= \frac{1}{2\pi}\int_{-\infty}^{\infty}X(j\theta)P(j(\Omega-\theta))\,d\theta = \frac{1}{T}\sum_{k=-\infty}^{\infty}X(j(\Omega-k\Omega_s))\end{aligned}$$

What this tells us is that the Fourier Transform of our sampled signal is a series of copies of $X(j\Omega)$, each centered at $k\Omega_s$ where $\Omega_s = \frac{2\pi}{T}$. This is a good model because we can equivalently write the CTFT of the impulse-train-sampled signal as

$$X_p(j\Omega) = \int_{-\infty}^{\infty}\sum_{k=-\infty}^{\infty}x(t)\delta(t-kT)e^{-j\Omega t}\,dt = \sum_{k=-\infty}^{\infty}x(kT)e^{-jkT\Omega}.$$

Notice that this is just the DTFT of $x[n] = x(nT)$ if we set $\omega = \Omega T$.

$$X(e^{j\omega}) = \sum_{n=-\infty}^{\infty}x(nT)e^{-j\omega n} = X_p(j\Omega)\Big|_{\Omega=\frac{\omega}{T}} = \frac{1}{T}\sum_{k=-\infty}^{\infty}X\left(j\left(\frac{\omega}{T}-k\frac{2\pi}{T}\right)\right)$$

This means that the DTFT of our signal is just a series of shifted copies of the original spectrum, with the frequency axis scaled so that $\Omega_s \rightarrow 2\pi$.
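Since the spectral copies are spaced $\Omega_s$ apart, two continuous-time sinusoids whose frequencies differ by exactly $\Omega_s$ produce identical samples. A minimal NumPy sketch (the rate and frequencies are arbitrary choices for illustration):

```python
import numpy as np

T = 0.01                          # sampling period (100 Hz rate), arbitrary
Omega_s = 2 * np.pi / T           # sampling rate in rad/s
n = np.arange(50)                 # sample indices

Omega_0 = 2 * np.pi * 10          # a 10 Hz sinusoid
x1 = np.cos(Omega_0 * n * T)
x2 = np.cos((Omega_0 + Omega_s) * n * T)   # shifted by one spectral-copy spacing

# Identical samples: the copy at k = 1 lands exactly on the baseband copy.
assert np.allclose(x1, x2)
```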

Nyquist Theorem

To analyze this further, we will stay in continuous time. Let's say that our original signal has a Fourier Transform like the one shown in Figure 7. Notice the signal is band-limited by $\Omega_M$.

There are two major cases: $\Omega_s > 2\Omega_M$ and $\Omega_s < 2\Omega_M$.

Case One: $\Omega_s > 2\Omega_M$

As shown in Figure 8, the shifted copies of the original $X(j\Omega)$ (shown in blue) do not overlap with each other or with the original copy. If we wanted to recover the original signal, we could simply apply a low pass filter to isolate the unshifted copy of $X(j\Omega)$ and then take the inverse Fourier Transform.

Case Two: $\Omega_s < 2\Omega_M$

Notice how in Figure 9, the shifted copies overlap with the original $X(j\Omega)$. This means that in our sampled signal, the higher frequency information has bled into the lower frequency information. This phenomenon is known as aliasing. When aliasing occurs, we cannot simply apply a low pass filter to isolate the unshifted copy of $X(j\Omega)$.

When $\Omega_s = 2\Omega_M$ exactly, our ability to reconstruct the original signal depends on the shape of its Fourier Transform at the band edge: the copies now touch at $\pm\Omega_M$. As long as the values of $X_p(j\Omega)$ there still agree with $X(j\Omega_M)$ and $X(-j\Omega_M)$, then we can apply an LPF because we can isolate the original $X(j\Omega)$ and take its inverse Fourier Transform. Remember that an ideal low pass filter is a rectangle in the frequency domain and a $\text{sinc}$ in the time domain. Thus if we let

$$X_r(j\Omega) = X_p(j\Omega)\cdot \begin{cases} T & |\Omega| < \frac{\Omega_s}{2}\\ 0 & \text{else} \end{cases}$$

then our reconstructed signal will be

$$x_r(t) = x_p(t)*\text{sinc}\left(\frac{t}{T}\right) = \sum_{n=-\infty}^{\infty}x(nT)\,\text{sinc}\left(\frac{t-nT}{T}\right).$$

This is why we call reconstructing a signal from its samples "sinc interpolation." This leads us to formulate the Nyquist Theorem.

Theorem 4 (Nyquist Theorem)

Suppose a continuous signal $x$ is bandlimited with maximum frequency $\Omega_M$ and we sample it at a rate $\Omega_s > 2\Omega_M$. Then the signal $x_r(t)$ reconstructed by sinc interpolation is exactly $x(t)$.
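A minimal sketch of sinc interpolation in NumPy; the test signal (a 3 Hz cosine sampled at 100 Hz) is an arbitrary choice, and the reconstruction is only approximate because we can only sum finitely many terms:

```python
import numpy as np

def sinc_interp(x, T, t):
    """Evaluate x_r(t) = sum_n x[n] sinc((t - nT) / T) at the times in t."""
    n = np.arange(len(x))
    # (len(t), len(n)) matrix of sinc((t - nT) / T), then sum over n.
    return np.sinc((t[:, None] - n[None, :] * T) / T) @ x

T = 0.01                                   # 100 Hz sampling, well above 2 * 3 Hz
n = np.arange(200)
x = np.cos(2 * np.pi * 3 * n * T)          # bandlimited test signal

t = np.linspace(0.5, 1.5, 1000)            # interior times, away from the edges
err = sinc_interp(x, T, t) - np.cos(2 * np.pi * 3 * t)
print(np.max(np.abs(err)))                 # small; shrinks as the window grows
```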

Discrete Time Processing of a Continuous Time Signal

As long as the DT system we apply is LTI, the overall CT system will be linear too, but it will not necessarily be time invariant because sampling inherently depends on the signal's timing. To find the overall CT transfer function (with $\omega = \Omega T$) of a system like the one depicted in Figure 6, we write

$$\begin{aligned} Y_d(e^{j\omega}) &= H_d(e^{j\omega})X_d(e^{j\omega}) = H_d(e^{j\omega})X_p\left(j\frac{\omega}{T}\right)\\ Y_p(j\Omega) &= Y_d(e^{j\Omega T}) = H_d(e^{j\Omega T})X_p(j\Omega)\\ Y(j\Omega) &= \begin{cases} T & |\Omega| < \frac{\Omega_s}{2}\\ 0 & |\Omega| \ge \frac{\Omega_s}{2} \end{cases} \cdot Y_p(j\Omega) = \begin{cases} TH_d(e^{j\Omega T})X_p(j\Omega) & |\Omega| < \frac{\Omega_s}{2}\\ 0 & |\Omega| \ge \frac{\Omega_s}{2} \end{cases}\end{aligned}$$

Assuming the Nyquist criterion holds,

$$\begin{aligned} X_p(j\Omega) &= \frac{1}{T}X(j\Omega)\\ \therefore Y(j\Omega) &= \begin{cases} H_d(e^{j\Omega T})X(j\Omega) & |\Omega| < \frac{\Omega_s}{2}\\ 0 & |\Omega| \ge \frac{\Omega_s}{2} \end{cases}\\ \therefore H_{\text{system}}(j\Omega) &= \begin{cases} H_d(e^{j\Omega T}) & |\Omega| < \frac{\Omega_s}{2}\\ 0 & |\Omega| \ge \frac{\Omega_s}{2} \end{cases}\end{aligned}$$

This shows that as long as the Nyquist theorem holds, we can process continuous signals with a discrete-time LTI system and the overall result is still LTI.

Continuous Time Processing of Discrete Time Signals

While not done in practice, it can be useful to model a discrete-time transfer function in terms of continuous-time processing (e.g., a half-sample delay).

Similar to the analysis of DT processing of a CT signal, we can write the discrete transfer function in terms of the continuous one. Our continuous signal will be bandlimited after reconstruction:

$$X(j\Omega) = \begin{cases} T X_d(e^{j\omega})\big|_{\omega=\Omega T} & |\Omega| \le \frac{\Omega_s}{2}\\ 0 & \text{else} \end{cases}$$

This means our processed signal $Y(j\Omega) = H(j\Omega)X(j\Omega)$ is also bandlimited, so we can say that

$$Y_d(e^{j\omega}) = H(j\Omega)\big|_{\Omega=\frac{\omega}{T}}X_d(e^{j\omega})$$
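For instance, a half-sample delay is the CT system $H(j\Omega) = e^{-j\Omega T/2}$, which gives $H_d(e^{j\omega}) = e^{-j\omega/2}$ and the impulse response $h_d[n] = \text{sinc}(n - \tfrac{1}{2})$. A hedged NumPy sketch using a truncated version of this ideal filter (the length and test frequency are arbitrary):

```python
import numpy as np

N = 64
n = np.arange(-N, N + 1)
h = np.sinc(n - 0.5)                 # truncated ideal half-sample-delay filter

m = np.arange(256)
x = np.cos(0.2 * np.pi * m)          # bandlimited DT test signal
y = np.convolve(x, h, mode="same")   # approximately x delayed by half a sample

expected = np.cos(0.2 * np.pi * (m - 0.5))
print(np.max(np.abs(y[N:-N] - expected[N:-N])))   # small truncation error
```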

Downsampling

When we downsample a signal by a factor of $M$, we create a new signal $y[n] = x[nM]$ by keeping every $M$th sample. Conceptually, this is like reconstructing the continuous signal and then re-sampling it with the larger sampling period $MT$, where $T$ was the original sampling period. If $x_c$ is the original continuous-time signal and $x_d$ is the sampled signal, then the downsampled signal $y[n]$ will be

$$y[n] = x[nM] = x_c(nMT) \implies Y(e^{j\omega}) = \frac{1}{MT}\sum_{k=-\infty}^{\infty}X_c\left(j\left(\frac{\omega}{MT}-k\frac{2\pi}{MT}\right)\right).$$

If we re-index and let $k = Mp + m$ for $m \in [0, M-1]$, $p \in \mathbb{Z}$, then

$$Y(e^{j\omega}) = \frac{1}{M}\sum_{m=0}^{M-1}X_d\left(e^{j\frac{\omega-2\pi m}{M}}\right).$$

What this means is that to obtain the new DTFT, we scale the frequency axis so that $\frac{\pi}{M} \rightarrow \pi$. To prevent aliasing when this happens, we include an LPF with cutoff $\frac{\pi}{M}$ before the downsampling step.
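A sketch of the filter-then-downsample pipeline with SciPy (the filter length and factor are arbitrary choices); `scipy.signal.decimate` packages the same idea:

```python
import numpy as np
from scipy import signal

M = 4                                   # downsampling factor
x = np.random.randn(4096)               # wideband test signal

# Anti-aliasing LPF with cutoff pi/M (firwin's cutoff is in units of Nyquist),
# applied before keeping every Mth sample.
h = signal.firwin(129, cutoff=1.0 / M)
y = signal.lfilter(h, 1.0, x)[::M]

# decimate() bundles the same filter-then-subsample pattern (it uses
# zero-phase filtering by default, so the outputs differ slightly).
y_ref = signal.decimate(x, M, ftype="fir")
```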

Upsampling

When we upsample a signal by a factor of $L$, we are interpolating between samples. Conceptually, this means we are reconstructing the original continuous-time signal and re-sampling it at a faster rate than before. First we place zeros between the samples, effectively expanding our signal:

$$x_e[n] = \begin{cases} x\left[\frac{n}{L}\right] & n = 0, \pm L, \pm 2L, \ldots\\ 0 & \text{else} \end{cases}$$

$$X_e(e^{j\omega}) = \sum_{n=-\infty}^{\infty}x_e[n]e^{-j\omega n} = \sum_{m=-\infty}^{\infty}x[m]e^{-j\omega mL} = X\left(e^{j\omega L}\right)$$

Then we interpolate by convolving with a $\text{sinc}$:

$$y[n] = x_e[n]*\text{sinc}\left(\frac{n}{L}\right) = \sum_{k=-\infty}^{\infty}x[k]\,\text{sinc}\left(\frac{n-kL}{L}\right)$$

In the frequency domain, this looks like compressing the frequency axis so that $\pi \rightarrow \frac{\pi}{L}$ and then applying a low pass filter.

The filter's gain of $L$ scales the spectrum so that it is identical to what we would have gotten by sampling the continuous signal with period $\frac{T}{L}$.
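A sketch of the expand-then-interpolate pipeline in NumPy/SciPy, with an FIR low pass filter standing in for the ideal sinc (filter length, factor, and test frequency are arbitrary):

```python
import numpy as np
from scipy import signal

L = 3
n = np.arange(200)
x = np.cos(0.4 * np.pi * n)             # DT test signal

xe = np.zeros(L * len(x))               # expand: L - 1 zeros between samples
xe[::L] = x

h = L * signal.firwin(129, cutoff=1.0 / L)   # LPF with cutoff pi/L and gain L
y = signal.lfilter(h, 1.0, xe)

# Interior samples match the frequency-compressed cosine (64 = filter delay).
expected = np.cos(0.4 * np.pi * (np.arange(len(y)) - 64) / L)
print(np.max(np.abs(y[128:-128] - expected[128:-128])))      # small
```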

Multi-Rate Signal Processing

To resample a signal to a new sampling period $T' = \frac{MT}{L}$, where $T$ is the original sampling period, we upsample by $L$ and then downsample by $M$.

Notice that we only need one LPF, with cutoff $\min\left(\frac{\pi}{L}, \frac{\pi}{M}\right)$ and gain $L$, to take care of both anti-aliasing and interpolation.
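A sketch of the full resampling chain, with `scipy.signal.resample_poly` as the packaged equivalent (the factors are arbitrary, and the two outputs differ slightly because the library chooses its own filter and compensates for its delay):

```python
import numpy as np
from scipy import signal

L, M = 3, 2                                  # new period T' = M * T / L
x = np.random.randn(1000)

xe = np.zeros(L * len(x))                    # upsample by L
xe[::L] = x
h = L * signal.firwin(257, cutoff=1.0 / max(L, M))   # one LPF: cutoff min(pi/L, pi/M)
y = signal.lfilter(h, 1.0, xe)[::M]          # downsample by M

y_ref = signal.resample_poly(x, up=L, down=M)    # same pipeline, polyphase form
```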

Exchanging Filter Order During Resampling

Notice that resampling for a very small rate change wastes a lot of computation. For example, resampling with $T' = 1.01T$ would upsample by 100 and then throw away most of those samples when we downsample by 101. Thus it would be useful to exchange the order of operations when resampling to save computation.

During upsampling, we convolve our filter with the many zeros introduced by the expansion. Convolution with zeros is unnecessary work, so we could instead convolve with a compressed version of the filter before expanding. The results will be the same as long as $H(z^{\frac{1}{L}})$ is a rational function.

During downsampling, we do a convolution and then throw away most of the results. It would be much more efficient to compute only the quantities we keep. This is accomplished by downsampling first and then convolving. Just like before, the results are only the same if $H(z^{\frac{1}{M}})$ is a rational function.
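The downsampling identity is easy to check numerically: filtering with the expanded filter $H(z^M)$ and then downsampling by $M$ gives exactly the same output as downsampling first and then filtering with $H(z)$. A sketch (filter and factor chosen arbitrarily):

```python
import numpy as np
from scipy import signal

M = 3
h = signal.firwin(31, 0.4)              # some filter H(z)
hM = np.zeros(M * (len(h) - 1) + 1)     # expanded filter H(z^M)
hM[::M] = h

x = np.random.randn(500)

y1 = signal.lfilter(hM, 1.0, x)[::M]    # filter with H(z^M), then downsample
y2 = signal.lfilter(h, 1.0, x[::M])     # downsample, then filter with H(z)
assert np.allclose(y1, y2)
```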

Polyphase Decomposition

The problem with interchanging filters is that it is not always possible: most filters are not compressible. However, we can still get the efficiency gains of interchanging filter orders by taking a polyphase decomposition of our filter. First notice that $h[n]$ can always be written as a sum of compressible filters, where $h_k$ keeps only the samples of $h$ at indices congruent to $k$ modulo $M$:

$$h[n] = \sum_{k=0}^{M-1}h_k[n-k]$$

This means that if we let $e_k[n] = h_k[nM]$, we can utilize the linearity of convolution to build a bank of filters.

Now each of our filters is compressible, so we can switch the order of downsampling and filtering while maintaining the same output.

Now, for any filter, we compute only what we need, so the result is correct and efficiently obtained.
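A sketch of polyphase decimation in NumPy/SciPy, using the branch filters $e_k[p] = h[Mp + k]$ and branch inputs $x_k[n] = x[nM - k]$; it matches the direct filter-then-downsample output exactly:

```python
import numpy as np
from scipy import signal

def polyphase_decimate(x, h, M):
    """Compute (h * x)[::M] with M polyphase branches, never computing
    filter outputs that would be thrown away by the downsampler."""
    assert len(x) % M == 0                  # keep the bookkeeping simple
    n_out = len(x) // M
    y = np.zeros(n_out)
    for k in range(M):
        e_k = h[k::M]                       # e_k[p] = h[M p + k]
        if k == 0:
            x_k = x[::M]                    # x_0[n] = x[nM]
        else:
            # x_k[n] = x[nM - k]; causal, so x_k[0] = x[-k] = 0.
            x_k = np.concatenate(([0.0], x[M - k::M]))[:n_out]
        y += signal.lfilter(e_k, 1.0, x_k)
    return y

M = 4
h = signal.firwin(64, 1.0 / M)
x = np.random.randn(1000)                   # length divisible by M

assert np.allclose(polyphase_decimate(x, h, M), signal.lfilter(h, 1.0, x)[::M])
```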

Practical Sampling (ADC)

Unfortunately, ideal analog-to-digital conversion is not possible for a variety of reasons. First, not all signals are bandlimited (or there may be noise outside the signal bandwidth). Moreover, computers only have finite precision, so we cannot represent the full range of values that a continuous signal might take on with a finite number of bits per sample. The solution to the first issue is to include an "anti-aliasing" filter before the sampler. The solution to the second issue is to quantize.

However, sharp analog filters are difficult to implement in practice. We could make the anti-aliasing filter's transition band wider, but that would let in noise and interference; if we instead keep the cutoff frequency the same, the non-ideal filter will distort part of the signal band. A better solution is to do the sharp filtering in discrete time, where we have more control: we sample above the Nyquist rate, filter digitally, and then downsample to the required rate.

Quantization

If we have a dynamic range of $X_m$ (i.e., $2X_m$ is the length of the range of values we can represent), then the step between quantized values is $\Delta = \frac{X_m}{2^B}$, assuming we represent our data as $(B+1)$-bit 2's complement numbers (a sign bit plus $B$ bits). We model the error caused by quantization as additive noise: our quantized signal $\hat{x}[n]$ is described by

$$\hat{x}[n] = x[n] + e[n], \qquad -\frac{\Delta}{2} \le e[n] \le \frac{\Delta}{2}$$

We do this under the following assumptions:

  1. $e[n]$ is produced by a stationary random process

  2. $e[n]$ is not correlated with $x[n]$

  3. $e[n]$ is white noise ($e[n]$ is not correlated with $e[m]$ for $n \ne m$)

  4. $e[n] \sim U\left[-\frac{\Delta}{2}, \frac{\Delta}{2}\right]$

For rapidly changing signals with small $\Delta$, these assumptions hold, and they are useful for modeling quantization error. Since $\Delta = 2^{-B}X_m$,

$$\sigma^2_e = \frac{\Delta^2}{12} = \frac{2^{-2B}X_m^2}{12}$$

This means our Signal to Noise Ratio for quantization is

$$SNR_Q = 10\log_{10}\left(\frac{\sigma_x^2}{\sigma_e^2}\right) = 6.02B + 10.8 - 20\log_{10}\left(\frac{X_m}{\sigma_x}\right)$$

What this tells us is that every bit we add gives a 6dB improvement. It also tells us that we need to adapt the range of quantization to the RMS amplitude of the signal, so there is a tradeoff between clipping and quantization noise. When we oversample our signal by a factor of $M$, we can further limit the effects of quantization noise because the noise power is spread out over more frequencies and the LPF eliminates the noise outside the signal bandwidth. This makes $\frac{\sigma_e^2}{M}$ the new noise variance, so we can modify the $SNR_Q$ equation:

$$SNR_Q = 6.02B + 10.8 - 20\log_{10}\left(\frac{X_m}{\sigma_x}\right) + 10\log_{10}M.$$

This shows that doubling $M$ yields a 3dB improvement (equivalent to 0.5 more bits).
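A quick simulation to check the $SNR_Q$ formula, assuming a unit-variance Gaussian input so that clipping is negligible for the chosen $X_m$ (the bit depth and range are arbitrary):

```python
import numpy as np

B = 12                                    # bits (excluding the sign bit)
Xm = 4.0                                  # quantizer covers [-Xm, Xm)
delta = Xm / 2 ** B                       # Delta = Xm * 2^-B

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 1_000_000)       # sigma_x = 1, well inside the range

xq = delta * np.round(x / delta)          # uniform quantizer (clipping ignored)
e = xq - x                                # quantization error, |e| <= Delta / 2

snr_measured = 10 * np.log10(np.var(x) / np.var(e))
snr_predicted = 6.02 * B + 10.8 - 20 * np.log10(Xm / 1.0)
print(snr_measured, snr_predicted)        # both approximately 71 dB
```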

Practical Reconstruction (DAC)

In the ideal case, we reconstruct signals by converting the samples to impulses and then convolving with a sinc. However, impulses require lots of power to generate, and sincs are infinitely long, so it is impractical to build an analog system that does this. Instead, we use an interpolation like a Zero-Order Hold (ZOH), which converts the samples to rectangular pulses, and then apply a reconstruction filter.

$$X_r(j\Omega) = H_r(j\Omega)\overbrace{Te^{-j\Omega\frac{T}{2}}\text{sinc}\left(\frac{\Omega}{\Omega_s}\right)}^{\text{Zero-Order Hold } H_0(j\Omega)}\overbrace{\frac{1}{T}\sum_{k=-\infty}^{\infty}X(j(\Omega-k\Omega_s))}^{\text{Sampled Signal}}$$

We design $H_r(j\Omega)$ such that $H_r(j\Omega)H_0(j\Omega)$ is approximately an ideal LPF.
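The droop that $H_r$ must compensate follows from the ZOH magnitude $|H_0(j\Omega)| = T\,|\text{sinc}(\Omega/\Omega_s)|$; a small sketch (units normalized so $T = 1$):

```python
import numpy as np

T = 1.0
Omega_s = 2 * np.pi / T
Omega = np.linspace(1e-9, Omega_s / 2, 6)     # frequencies up to the band edge

H0 = T * np.abs(np.sinc(Omega / Omega_s))     # np.sinc(x) = sin(pi x) / (pi x)
print(20 * np.log10(H0 / T))                  # droop in dB; ~ -3.9 dB at Omega_s/2
```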

Figure 6: Uniform sampling system
Figure 7: Example of the spectrum of a bandlimited signal
Figure 8: When $\Omega_s > 2\Omega_M$
Figure 9: When $\Omega_s < 2\Omega_M$
Figure 10: Continuous-time processing of a discrete-time signal
Figure 11: Downsampling
Figure 12: Upsampling operation
Resampling
Figure 13: Interchanging an upsampling operation
Figure 14: Interchanging a downsampling operation
Figure 15: Example of decomposing a filter (M = 2)
Figure 16: A filter bank
Figure 17: Filter bank with the downsampling done first
Figure 18: Sampling with quantization
Figure 19: A practical sampling system with quantization and anti-aliasing
Figure 20: Practical reconstruction