This is why we call reconstructing a signal from its samples "sinc interpolation." This leads us to formulate the Nyquist Theorem.
Theorem 4 (Nyquist Theorem)
Discrete Time Processing of a Continuous Time Signal
Assuming that the Nyquist criterion holds,
This shows us that as long as the Nyquist theorem holds, we can process continuous time signals with a discrete time LTI system and still have the overall result be LTI.
Continuous Time Processing of Discrete Time Signals
While rarely done in practice, it can be useful to model a discrete time transfer function in terms of continuous time processing (e.g., a half-sample delay).
Similar to the analysis of DT processing of a CT signal, we can write the discrete transfer function in terms of the continuous one. Our continuous signal will be bandlimited after reconstruction.
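For example, an ideal half-sample delay corresponds to resampling the reconstructed signal at $t = nT - \frac{T}{2}$, which gives the DT impulse response $h[n] = \mathrm{sinc}(n - \frac{1}{2})$. Here is a minimal numpy sketch; the window length and test tone are arbitrary choices, and truncating the sinc makes the result only approximate:

```python
import numpy as np

# Ideal half-sample delay: conceptually reconstruct the bandlimited CT
# signal and resample it at t = nT - T/2. The resulting DT impulse
# response is a shifted sinc, h[n] = sinc(n - 1/2), truncated here.
n = np.arange(-32, 33)
h = np.sinc(n - 0.5)                        # np.sinc(x) = sin(pi*x)/(pi*x)

t = np.arange(256)
x = np.cos(0.2 * np.pi * t)                 # bandlimited test signal
y = np.convolve(x, h, mode="same")          # approximately x[n - 1/2]

y_exact = np.cos(0.2 * np.pi * (t - 0.5))   # analytically delayed cosine
print(np.max(np.abs(y - y_exact)[40:-40]))  # small truncation error
```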
Downsampling
Upsampling
When we upsample a signal by a factor of L, we are interpolating between samples. Conceptually, this means we are reconstructing the original continuous time signal and resampling it at a faster rate than before. First we place zeros in between samples, effectively expanding our signal.
Multi-Rate Signal Processing
Notice that we only need one LPF, with gain L and cutoff $\min\left(\frac{\pi}{L}, \frac{\pi}{M}\right)$, to take care of both anti-aliasing and interpolation.
Exchanging Filter Order During Resampling
Polyphase Decomposition
Now each of our filters is compressible, so we can switch the order of downsampling and filtering while maintaining the same output.
Now for any filter, we can compute only what we need, so the result is correct and efficiently obtained.
Practical Sampling (ADC)
Unfortunately, ideal analog to digital conversion is not possible for a variety of reasons. The first is that not all signals are bandlimited (or there may be noise outside of the bandwidth). Moreover, computers only have finite precision, so we cannot represent the full range of values that a continuous signal might take on with a finite number of bits per sample. The solution to the first issue is to include an “anti-aliasing” filter before the sampler. The solution to the second issue is to quantize.
However, sharp analog filters are difficult to implement in practice. We could widen the anti-aliasing filter, but that would let extra noise and interference through; if we instead keep the cutoff at the signal bandwidth, the filter's non-ideal transition band will alter part of the signal. A better solution is to do the sharp filtering in discrete time, where we have more control: we sample above the Nyquist rate with a gentler analog filter and then downsample to the required rate.
Quantization
We do this under the following assumptions:
This means our Signal to Noise Ratio for quantization is
Practical Reconstruction (DAC)
In the ideal case, we reconstruct signals by converting the samples to impulses and then convolving with a sinc. However, impulses require lots of power to generate, and sincs are infinitely long, so it is impractical to design an analog system that does this. Instead, we use an interpolation like a Zero-Order Hold to convert samples to pulses and then apply a reconstruction filter.
In order to work with continuous signals using a computer, we need to sample them. This means recording the value at particular points in time. During uniform sampling, we take samples with a fixed sampling period $T$, so $x[n] = x_c(nT)$ (where $x_c$ is our continuous signal). This is done by passing the signal through an Analog-to-Digital converter. From there we can do discrete time processing and reconstruct our signal by passing it through a Digital-to-Analog converter with reconstruction period $T_r$.
We mathematically model sampling as multiplication by an impulse train. Notice that if we were to take a signal x(t) and multiply it by an impulse train, then we would get a series of impulses equal to x(t) at the sampling points and 0 everywhere else. We can call this signal xp(t).
What this tells us is that the Fourier Transform of our sampled signal is a series of copies of $X(j\Omega)$, each centered at $k\Omega_s$ where $\Omega_s = \frac{2\pi}{T}$. This is a good model because we can equivalently write the CTFT of the impulse-train-sampled signal as
This means that the DTFT of our signal is just a bunch of shifted copies, and the frequency axis is scaled so $\Omega_s \to 2\pi$.
To analyze this further, we will stay in continuous time. Let's say that our original signal has the following Fourier Transform. Notice the signal is bandlimited to $\Omega_M$.
There are two major cases: $\Omega_s > 2\Omega_M$ and $\Omega_s < 2\Omega_M$.
Case One: $\Omega_s > 2\Omega_M$
As shown in Figure 8, the shifted copies of the original X(jΩ) (shown in blue) do not overlap with each other or with the original copy. If we wanted to recover the original signal, we could simply apply a low pass filter to isolate the unshifted copy of X(jΩ) and then take the inverse Fourier Transform.
Case Two: $\Omega_s < 2\Omega_M$
Notice how in Figure 9, the shifted copies overlap with the original $X(j\Omega)$. This means that in our sampled signal, the higher frequency information bleeds into the lower frequency information. This phenomenon is known as aliasing. When aliasing occurs, we cannot simply apply a low pass filter to isolate the unshifted copy of $X(j\Omega)$.
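A concrete example of aliasing: a 70 Hz cosine sampled at 100 Hz produces exactly the same samples as a 30 Hz cosine. The frequencies and rates here are arbitrary illustrative choices:

```python
import numpy as np

# A 70 Hz cosine sampled at fs = 100 Hz violates Nyquist (fs < 2 * 70 Hz),
# so its samples are indistinguishable from those of a 100 - 70 = 30 Hz
# cosine: the shifted spectral copy lands on 30 Hz.
fs = 100.0
n = np.arange(64)
x_fast = np.cos(2 * np.pi * 70 * n / fs)   # undersampled 70 Hz tone
x_alias = np.cos(2 * np.pi * 30 * n / fs)  # its 30 Hz alias

print(np.allclose(x_fast, x_alias))        # True: identical samples
```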
When $\Omega_s = 2\Omega_M$, our ability to reconstruct the original signal depends on the shape of its Fourier Transform. The shifted copies now touch at $\pm\Omega_M$, so as long as $X_p(j\Omega)$ at those points still agrees with $X(j\Omega_M)$ and $X(-j\Omega_M)$ (up to the $\frac{1}{T}$ scaling), we can apply an LPF because we can isolate the original $X(j\Omega)$ and take its inverse Fourier Transform.
Remember that an ideal low pass filter is a rectangle in the frequency domain and a sinc in the time domain. Thus if we let
$$X_r(j\Omega) = X_p(j\Omega)\cdot\begin{cases} T & |\Omega| < \frac{\Omega_s}{2} \\ 0 & \text{else} \end{cases}$$
$$x_r(t) = x_p(t) * \mathrm{sinc}\!\left(\frac{t}{T}\right) = \sum_{n=-\infty}^{\infty} x(nT)\,\mathrm{sinc}\!\left(\frac{t-nT}{T}\right).$$
Suppose a continuous signal $x(t)$ is bandlimited to $\Omega_M$ and we sample it at a rate $\Omega_s > 2\Omega_M$. Then the signal $x_r(t)$ reconstructed by sinc interpolation is exactly $x(t)$.
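As a sanity check of the theorem, here is a short numpy sketch of sinc interpolation; the sampling period, tone frequency, and window size are arbitrary choices, and the finite window makes the reconstruction only approximately exact:

```python
import numpy as np

# Sinc interpolation: x_r(t) = sum_n x(nT) * sinc((t - nT) / T).
T = 0.01                                   # sampling period; fs/2 = 50 Hz
n = np.arange(-200, 201)
f0 = 15.0                                  # tone well below fs/2
x_samples = np.cos(2 * np.pi * f0 * n * T)

t = np.linspace(-0.5, 0.5, 1001)           # dense "continuous" time grid
# Rows index time points, columns index samples; sum over the samples.
x_rec = np.sum(x_samples * np.sinc((t[:, None] - n * T) / T), axis=1)

print(np.max(np.abs(x_rec - np.cos(2 * np.pi * f0 * t))))  # near zero
```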
As long as the DT system we apply is LTI, the overall CT system will be linear too, but it will not necessarily be time invariant because sampling inherently depends on the signal's timing. Suppose we want to find the overall CT transfer function (with $\omega = \Omega T$) of a system like the one depicted in Figure 6.
This means our reconstructed signal $Y(j\Omega) = H(j\Omega)X(j\Omega)$ is also bandlimited, so we can say that
$$Y_d(e^{j\omega}) = H(j\Omega)\Big|_{\Omega=\frac{\omega}{T}}\,X(e^{j\omega})$$
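For instance, for the CT half-sample delay $H(j\Omega) = e^{-j\Omega T/2}$, this relation predicts an equivalent DT response of $e^{-j\omega/2}$. A quick numerical check using scipy, with the window length and frequency grid chosen arbitrarily:

```python
import numpy as np
from scipy import signal

# The CT half-sample delay H(jW) = exp(-j*W*T/2) should induce the DT
# response exp(-j*w/2) at w = W*T. Compare against the truncated
# shifted-sinc filter that implements the delay in discrete time.
n = np.arange(-64, 65)
h_d = np.sinc(n - 0.5)                      # DT equivalent of the CT delay

omega = np.linspace(0.1, np.pi - 0.1, 256)  # stay away from the band edge
_, H_d = signal.freqz(h_d, worN=omega)
H_d *= np.exp(1j * omega * 64)              # undo the shift from starting n at -64

H_ct = np.exp(-1j * omega / 2)              # H(jW) evaluated at W = w/T
print(np.max(np.abs(H_d - H_ct)))           # small truncation error
```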
When we downsample a signal by a factor of M, we create a new signal y[n] = x[nM] by taking every Mth sample. What this means conceptually is that we are reconstructing the continuous signal and then resampling it with a longer period MT, where T was the original sampling period. If $x_c$ is the original continuous time signal and $x_d$ is the sampled signal, then the downsampled signal y[n] will be
What this means is that to obtain the new DTFT, we stretch the frequency axis so $\frac{\pi}{M} \to \pi$. To prevent aliasing when this happens, we include an LPF with cutoff $\frac{\pi}{M}$ before the downsampling step.
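A minimal numpy/scipy sketch of this filter-then-downsample chain; the filter length, cutoff, and test signal are illustrative choices:

```python
import numpy as np
from scipy import signal

# Downsample by M with an anti-aliasing LPF (cutoff pi/M) in front.
M = 4
n = np.arange(1000)
# The 0.8*pi component lies above pi/M and would alias without the LPF.
x = np.cos(0.1 * np.pi * n) + 0.5 * np.cos(0.8 * np.pi * n)

h = signal.firwin(101, 1.0 / M)        # cutoff normalized so 1.0 = pi rad/sample
x_filt = signal.lfilter(h, 1.0, x)     # remove content above pi/M
y = x_filt[::M]                        # y[n] = x_filt[nM]
```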
In the frequency domain, upsampling looks like compressing the frequency axis so $\pi \to \frac{\pi}{L}$ and then applying a low pass filter with cutoff $\frac{\pi}{L}$.
The gain of L is used to scale the spectrum so it is identical to what we would get if we had sampled the continuous signal with period $\frac{T}{L}$.
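A matching sketch of upsampling, zero insertion followed by an LPF with gain L and cutoff $\frac{\pi}{L}$; again, the filter length and test tone are illustrative:

```python
import numpy as np
from scipy import signal

# Upsample by L: insert L - 1 zeros between samples, then interpolate
# with an LPF of gain L and cutoff pi/L.
L = 3
x = np.cos(0.4 * np.pi * np.arange(200))

x_expanded = np.zeros(L * len(x))
x_expanded[::L] = x                    # zero insertion (expansion)

h = L * signal.firwin(121, 1.0 / L)    # gain L, cutoff pi/L
y = signal.lfilter(h, 1.0, x_expanded) # interpolated signal, period T/L
```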
In order to resample a signal to a new period $T' = \frac{M}{L}T$, where T is the original sampling period, we can upsample and then downsample our signal.
Notice that resampling with a very small change wastes a lot of computation. For example, resampling with $T' = 1.01T$ would upsample by 100 and then throw away most of those samples when downsampling by 101. Thus it is useful to exchange the order of operations when resampling to save computation.
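In code, this whole chain (including the single LPF noted earlier) is available as a polyphase routine. A sketch using scipy.signal.resample_poly, with L and M chosen arbitrarily:

```python
import numpy as np
from scipy import signal

# Resample by L/M: upsample by L, apply one LPF with gain L and cutoff
# min(pi/L, pi/M), then downsample by M. resample_poly implements this
# chain efficiently in polyphase form.
L, M = 3, 2                                 # new period T' = (M/L) * T
x = np.cos(0.1 * np.pi * np.arange(600))

y = signal.resample_poly(x, up=L, down=M)
print(len(x), len(y))                       # output length scales by L/M
```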
During upsampling, we convolve our filter with the many zeros introduced by the expansion. Convolving with zeros is unnecessary work, so instead we could convolve with a compressed version of the filter before expanding. The results will be the same as long as $H(z^{1/L})$ is a rational function (i.e., only every Lth tap of the filter is nonzero).
During downsampling, we do a convolution and then throw away most of our results. It would be much more efficient to instead compute only the quantities we need. This is accomplished by downsampling first and then convolving. Just like before, the results are only going to be the same if $H(z^{1/M})$ is a rational function.
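A quick numerical check of this identity for the downsampling case, with a random signal and filter and M chosen arbitrarily:

```python
import numpy as np

# Noble identity check: filtering with H(z^M) then downsampling by M
# equals downsampling by M then filtering with H(z).
M = 3
rng = np.random.default_rng(0)
x = rng.standard_normal(300)
h = rng.standard_normal(8)                 # the "compressed" filter H(z)

h_expanded = np.zeros(M * len(h) - (M - 1))
h_expanded[::M] = h                        # H(z^M): M - 1 zeros between taps

slow = np.convolve(x, h_expanded)[::M]     # filter at full rate, then discard
fast = np.convolve(x[::M], h)              # discard first, then filter
print(np.allclose(slow, fast[:len(slow)])) # True
```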
The problem with interchanging filters is that it is not always possible: most filters are not compressible. However, we can get around this issue, and still get the efficiency gains of interchanging filter order, by taking a polyphase decomposition of our filters. First, notice that h[n] can be written as a sum of compressible filters.
$$h[n] = \sum_{k=0}^{M-1} h_k[n-k]$$
This means that if we let $e_k[n] = h_k[nM]$, we can utilize the linearity of convolution to build a bank of filters.
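A sketch of the resulting polyphase decimator (random data, with the filter length a multiple of M for simplicity): each branch filters one phase of the input with a short subfilter $e_k$, and the branch outputs sum to the same result as filtering at the full rate and then discarding samples:

```python
import numpy as np

# Polyphase decimation by M: subfilters e_k[n] = h[nM + k] act on input
# phases x_k[n] = x[nM - k]; the branch outputs sum to the decimated
# output without ever computing the samples we would throw away.
M = 3
rng = np.random.default_rng(1)
x = rng.standard_normal(300)
h = rng.standard_normal(12)                # length a multiple of M

reference = np.convolve(x, h)[::M]         # wasteful: filter, then discard

y = np.zeros(len(reference))
for k in range(M):
    e_k = h[k::M]                                   # e_k[n] = h[nM + k]
    x_k = np.concatenate([np.zeros(k), x])[::M]     # x_k[n] = x[nM - k]
    branch = np.convolve(x_k, e_k)[:len(y)]
    y[:len(branch)] += branch

print(np.allclose(y, reference))           # True
```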
If we have a dynamic range of $X_m$ (i.e., $2X_m$ is the length of the range of values we can represent), then our step between quantized values is $\Delta = \frac{X_m}{2^B}$, assuming we are representing our data as 2's complement numbers with B bits. We model the error caused by quantization as additive noise. Our quantized signal $\hat{x}[n]$ is described by
$$\hat{x}[n] = x[n] + e[n], \qquad -\frac{\Delta}{2} \le e[n] \le \frac{\Delta}{2}$$
e[n] is produced by a stationary random process
e[n] is not correlated with x[n]
e[n] is white noise (e[n] is not correlated with e[m] for n ≠ m)
$e[n] \sim U\left[-\frac{\Delta}{2}, \frac{\Delta}{2}\right]$
For rapidly changing signals and small $\Delta$, these assumptions hold, and they are useful in modeling quantization error. Since $\Delta = 2^{-B}X_m$,
$$\sigma_e^2 = \frac{\Delta^2}{12} = \frac{2^{-2B}X_m^2}{12}$$
$$\mathrm{SNR}_Q = 10\log\left(\frac{\sigma_x^2}{\sigma_e^2}\right) = 6.02B + 10.8 - 20\log\left(\frac{X_m}{\sigma_x}\right)$$
What this tells us is that every new bit we add gives us about 6 dB of improvement. It also tells us that we need to adapt the range of quantization to the RMS amplitude of the signal, which means there is a tradeoff between clipping and quantization noise. When we oversample our signal, we can further limit the effects of quantization noise because the noise power is spread out over more frequencies and the LPF eliminates the noise outside the signal bandwidth. This makes $\frac{\sigma_e^2}{M}$ the new noise variance (if we oversample by M). Thus we can modify the $\mathrm{SNR}_Q$ equation:
$$\mathrm{SNR}_Q = 6.02B + 10.8 - 20\log\left(\frac{X_m}{\sigma_x}\right) + 10\log M.$$
This shows that doubling M yields a 3dB improvement (equivalent to 0.5 more bits).
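A simulation sketch that checks the $\mathrm{SNR}_Q$ formula for a Gaussian input; the bit depth B, range $X_m$, and $\sigma_x = X_m/4$ are arbitrary choices that keep clipping negligible:

```python
import numpy as np

# Check SNR_Q = 6.02B + 10.8 - 20*log10(X_m / sigma_x) by simulating a
# uniform (rounding) quantizer with step Delta = X_m / 2^B.
rng = np.random.default_rng(0)
B, X_m = 12, 1.0
Delta = X_m / 2**B

sigma_x = X_m / 4                          # ~4 sigma headroom: clipping negligible
x = rng.normal(0.0, sigma_x, 1_000_000)
x = np.clip(x, -X_m, X_m - Delta)          # stay in the representable range

x_hat = Delta * np.round(x / Delta)        # quantize
e = x_hat - x                              # quantization error

snr_measured = 10 * np.log10(np.var(x) / np.var(e))
snr_model = 6.02 * B + 10.8 - 20 * np.log10(X_m / sigma_x)
print(snr_measured, snr_model)             # agree closely (~71 dB here)
```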
$$X_r(j\Omega) = H_r(j\Omega)\,\underbrace{T e^{-j\Omega\frac{T}{2}}\,\mathrm{sinc}\!\left(\frac{\Omega}{\Omega_s}\right)}_{\text{Zero-Order Hold}}\,\underbrace{\frac{1}{T}\sum_{k=-\infty}^{\infty} X\!\left(j(\Omega - k\Omega_s)\right)}_{\text{Sampled Signal}}$$
We design $H_r(j\Omega)$ such that $H_r(j\Omega)H_0(j\Omega)$ is approximately an ideal LPF, where $H_0(j\Omega)$ is the Zero-Order Hold response above.
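For a sense of scale, the ZOH droop $\mathrm{sinc}(\Omega/\Omega_s)$ reaches about $-3.9$ dB at the band edge $\frac{\Omega_s}{2}$, which is what $H_r$ must compensate. A tiny numpy sketch, with a 48 kHz rate chosen as an arbitrary example:

```python
import numpy as np

# The ZOH multiplies the spectrum by T * exp(-j*Omega*T/2) * sinc(Omega/Omega_s),
# so the reconstruction filter H_r must low-pass AND compensate this droop.
T = 1.0 / 48000                            # e.g., an audio DAC period
Omega_s = 2 * np.pi / T

Omega = np.linspace(0, Omega_s / 2, 6)     # up to the band edge Omega_s/2
droop = np.sinc(Omega / Omega_s)           # np.sinc(x) = sin(pi*x)/(pi*x)
print(20 * np.log10(droop))                # 0 dB at DC, about -3.9 dB at the edge
```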