Sampling
In order to work with continuous signals using a computer, we need to sample them. This means recording the value at particular points in time. During uniform sampling, we take samples at an even sampling period $T$, so $x[n] = x_c(nT)$ (where $x_c(t)$ is our continuous signal). This is done by passing the signal through an Analog-to-Digital converter. From there we can do discrete-time processing and reconstruct our signal by passing it through a Digital-to-Analog converter with reconstruction period $T$.
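As a minimal sketch of uniform sampling in Python (the signal, frequency, and rate here are arbitrary choices for illustration):

```python
import numpy as np

# Example "continuous" signal: a 50 Hz cosine (bandlimited, so Nyquist needs fs > 100 Hz)
f0 = 50.0
x_c = lambda t: np.cos(2 * np.pi * f0 * t)

fs = 400.0          # sampling rate in Hz, chosen well above 2*f0
T = 1.0 / fs        # uniform sampling period
n = np.arange(0, 40)

# Uniform sampling: x[n] = x_c(nT)
x = x_c(n * T)
print(x[:5])
```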
In the Fourier domain, the ideal reconstruction filter is a low pass filter with gain $T$ and cutoff $\frac{\pi}{T}$, whose impulse response is $h_r(t) = \operatorname{sinc}\!\left(\frac{t}{T}\right)$, so our reconstructed signal will be
$$x_r(t) = \sum_{n=-\infty}^{\infty} x[n]\,\operatorname{sinc}\!\left(\frac{t - nT}{T}\right).$$
This is why we call reconstructing a signal from its samples "sinc interpolation." This leads us to formulate the Nyquist Theorem.
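A quick numerical sketch of sinc interpolation (the test signal, truncation length, and the helper name `sinc_interp` are arbitrary choices for this example): reconstruct a bandlimited cosine from its samples and check the value at an off-grid time.

```python
import numpy as np

f0, fs = 50.0, 400.0
T = 1.0 / fs
n = np.arange(-200, 200)             # truncate the (ideally infinite) sum
x = np.cos(2 * np.pi * f0 * n * T)   # samples x[n] = x_c(nT)

def sinc_interp(t):
    # x_r(t) = sum_n x[n] * sinc((t - nT)/T); np.sinc(u) = sin(pi u)/(pi u)
    return np.sum(x * np.sinc((t - n * T) / T))

t0 = 0.0123                          # an off-grid time instant
print(sinc_interp(t0), np.cos(2 * np.pi * f0 * t0))  # nearly equal
```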
Assuming that the Nyquist criterion holds, the overall system behaves like a continuous-time LTI filter with effective frequency response $H_{\text{eff}}(j\Omega) = H_d\!\left(e^{j\Omega T}\right)$ for $|\Omega| < \frac{\pi}{T}$ and $0$ otherwise.
This shows us that as long as the Nyquist theorem holds, we can process continuous signals with a discrete-time LTI system and still have the result be LTI.
While not something we would build in practice, it can be useful to model a discrete-time transfer function in terms of continuous-time processing (e.g., a half-sample delay).
Similar to the analysis of DT processing of a CT signal, we can write the discrete transfer function in terms of the continuous one, $H_d\!\left(e^{j\omega}\right) = H_c\!\left(j\frac{\omega}{T}\right)$ for $|\omega| < \pi$, since our continuous signal will be bandlimited after reconstruction.
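As a sketch of the half-sample-delay example (the filter length and test signal are arbitrary for illustration), the ideal CT delay of $\frac{T}{2}$ corresponds to the DT impulse response $h[n] = \operatorname{sinc}\!\left(n - \frac{1}{2}\right)$, which we can truncate to get a realizable FIR approximation:

```python
import numpy as np

# Ideal half-sample delay: h[n] = sinc(n - 1/2); truncate the infinitely long sinc.
N = 64                                   # arbitrary half-length of the truncated filter
n = np.arange(-N, N + 1)
h = np.sinc(n - 0.5)

x = np.cos(2 * np.pi * 0.05 * np.arange(0, 300))
y = np.convolve(x, h)[N:N + len(x)]      # compensate for the array offset of the taps

# Away from the edges, y[n] approximates x_c((n - 1/2)T) = cos(2*pi*0.05*(n - 0.5)).
print(y[150], np.cos(2 * np.pi * 0.05 * (150 - 0.5)))
```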
When we upsample a signal by a factor of $L$, we are interpolating between samples. Conceptually, this means we are reconstructing the original continuous-time signal and resampling it at a rate $L$ times faster than before. First we place $L-1$ zeros between samples, effectively expanding our signal.
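A minimal upsampling sketch (signal, factor, and filter length are arbitrary choices): expand by inserting $L-1$ zeros, then interpolate with a low-pass filter of cutoff $\frac{\pi}{L}$ and gain $L$.

```python
import numpy as np
from scipy import signal

L = 4
n = np.arange(0, 200)
x = np.cos(2 * np.pi * 0.05 * n)

# Expansion: place L-1 zeros between samples.
x_e = np.zeros(L * len(x))
x_e[::L] = x

# Interpolation: LPF with cutoff pi/L (normalized cutoff 1/L) and gain L.
h = L * signal.firwin(numtaps=81, cutoff=1.0 / L)
x_up = signal.lfilter(h, 1.0, x_e)

print(len(x), len(x_up))   # 200 -> 800 samples (plus a group delay of 40 samples)
```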
Notice that we only need one LPF to take care of both anti-aliasing and interpolation.
Now each of our filters is compressible, so we can switch the order of downsampling and filtering while maintaining the same output.
Now for any filter, we can compute only what we need, so the result is correct and efficiently obtained.
Unfortunately, ideal analog-to-digital conversion is not possible for a variety of reasons. The first is that not all signals are bandlimited (or there may be noise outside of the bandwidth). Moreover, computers only have finite precision, so we cannot represent the full range of values that a continuous signal might take on with a finite number of bits per sample. The solution to the first issue is to include an "anti-aliasing" filter before the sampler. The solution to the second issue is to quantize.
However, sharp analog filters are difficult to implement in practice. To deal with this, we could make the anti-aliasing filter wider, but this would let in noise and interference; if we instead keep the cutoff frequency the same, we could distort part of the signal because our filter is not ideal. A better solution is to do the filtering in discrete time, where we have more control: we sample above the Nyquist rate and then downsample to the required rate.
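A rough sketch of this oversample-then-decimate strategy (rates, filter length, and the test tones are arbitrary assumptions for the example):

```python
import numpy as np
from scipy import signal

# Dense samples standing in for the "analog" input x_c(t).
fs_target = 1_000.0                  # rate we ultimately want (Hz)
M = 8                                # oversampling factor
fs_adc = M * fs_target               # the ADC runs faster, relaxing the analog filter
t = np.arange(0, 1.0, 1 / fs_adc)
x_over = np.cos(2 * np.pi * 100.0 * t) + 0.1 * np.cos(2 * np.pi * 3_000.0 * t)

# Sharp *digital* anti-aliasing filter with cutoff pi/M, then downsample by M.
# The 3 kHz component (which would alias at the 1 kHz rate) is removed digitally.
h = signal.firwin(numtaps=255, cutoff=1.0 / M)
x_target = signal.lfilter(h, 1.0, x_over)[::M]

print(len(x_over), len(x_target))    # 8000 -> 1000 samples at the target rate
```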
We do this under the following assumptions:
This means our Signal-to-Noise Ratio for quantization is
$$\mathrm{SNR} = 10\log_{10}\!\left(\frac{\sigma_x^2}{\sigma_e^2}\right) = 6.02B + 10.8 - 20\log_{10}\!\left(\frac{R}{\sigma_x}\right)\ \text{dB}.$$
In the ideal case, we reconstruct signals by converting them to impulses and then convolving with a sinc. However, impulses require lots of power to generate, and sincs are infinitely long, so it is impractical to design an analog system to do this. Instead, we use an interpolation like a Zero-Order Hold, which converts each sample to a pulse held for one sampling period, and then filter with a reconstruction filter.
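A small sketch of the Zero-Order Hold step (the grid density and signal are arbitrary choices); a reconstruction filter would then smooth the resulting staircase:

```python
import numpy as np

fs = 400.0
n = np.arange(0, 40)
x = np.cos(2 * np.pi * 50.0 * n / fs)    # the DT samples to convert back

# Zero-Order Hold: hold each sample value for one period T.
# Use a fine grid of 16 points per sample period to stand in for continuous time.
upsample = 16
x_zoh = np.repeat(x, upsample)           # staircase approximation of x_c(t)

# A reconstruction (smoothing) filter would then remove the spectral images and
# compensate for the ZOH's sinc-shaped frequency response.
print(x_zoh[:8])
```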
We mathematically model sampling as multiplication by an impulse train $p(t) = \sum_{n=-\infty}^{\infty}\delta(t - nT)$. Notice that if we were to take a signal $x_c(t)$ and multiply it by an impulse train, then we would get a series of impulses with areas equal to $x_c(nT)$ at the sampling points and $0$ everywhere else. We can call this signal $x_p(t) = x_c(t)\,p(t)$.
What this tells us is that the Fourier Transform of our sampled signal is a series of copies of $X_c(j\Omega)$ (scaled by $\frac{1}{T}$), each centered at $k\Omega_s$ where $\Omega_s = \frac{2\pi}{T}$. This is a good model because we can equivalently write the CTFT of the impulse-train-sampled signal as
$$X_p(j\Omega) = \sum_{n=-\infty}^{\infty} x_c(nT)\,e^{-j\Omega T n}.$$
Notice that this is just the DTFT of $x[n] = x_c(nT)$ if we set $\omega = \Omega T$.
This means that the DTFT of our signal is just a bunch of shifted copies of $X_c$, and the frequency axis is scaled so that $\Omega = \frac{\pi}{T}$ maps to $\omega = \pi$.
To analyze this further, we will stay in continuous time. Let's say that our original signal has a Fourier Transform $X_c(j\Omega)$ that is band-limited by $\Omega_M$ (i.e., $X_c(j\Omega) = 0$ for $|\Omega| > \Omega_M$).
There are two major cases: $\Omega_s > 2\Omega_M$ and $\Omega_s < 2\Omega_M$.
Case One: $\Omega_s > 2\Omega_M$.
As shown in Figure 8, the shifted copies of the original $X_c(j\Omega)$ (shown in blue) do not overlap with each other or with the original copy. If we wanted to recover the original signal, we could simply apply a low pass filter to isolate the unshifted copy of $X_c(j\Omega)$ and then take the inverse Fourier Transform.
Case Two: $\Omega_s < 2\Omega_M$.
Notice how in Figure 9, the shifted copies overlap with the original $X_c(j\Omega)$. This means that in our sampled signal, the higher frequency information is bleeding into the lower frequency information. This phenomenon is known as aliasing. When aliasing occurs, we cannot simply apply a low pass filter to isolate the unshifted copy of $X_c(j\Omega)$.
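A tiny numerical illustration of aliasing (the frequencies are arbitrary choices): a tone above half the sampling rate produces exactly the same samples as a lower-frequency tone.

```python
import numpy as np

fs = 100.0                     # sampling rate (Hz)
n = np.arange(0, 16)

# A 70 Hz cosine sampled at 100 Hz aliases: 70 Hz folds down to |70 - 100| = 30 Hz.
x_high = np.cos(2 * np.pi * 70.0 * n / fs)
x_low  = np.cos(2 * np.pi * 30.0 * n / fs)

print(np.allclose(x_high, x_low))   # True: the samples are indistinguishable
```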
When $\Omega_s = 2\Omega_M$, our ability to reconstruct the original signal depends on the shape of its Fourier Transform. As long as $X_c(j\Omega)$ is equal to $0$ at $\Omega_M$ and $-\Omega_M$, then we can apply an LPF, because we can isolate the original $X_c(j\Omega)$ and take its inverse Fourier Transform. Remember that an ideal low pass filter is a rectangle in the frequency domain and a sinc in the time domain. Thus, if we use an ideal LPF with gain $T$ and cutoff $\frac{\Omega_s}{2}$ as the reconstruction filter, we recover the original signal.
Suppose a continuous signal $x_c(t)$ is bandlimited by $\Omega_M$ and we sample it at a rate of $\Omega_s = \frac{2\pi}{T} > 2\Omega_M$; then the signal reconstructed by sinc interpolation is exactly $x_c(t)$.
As long as the DT system we apply is LTI, the overall CT system will be linear too, but it will not necessarily be time invariant because sampling inherently depends on the signal's timing. If we want to find the overall CT transfer function $H_{\text{eff}}(j\Omega)$ of a system like the one depicted in Figure 6, we can trace the spectrum through the C/D converter, the DT filter, and the D/C converter.
This means our reconstructed signal is also bandlimited, so we can say that $X_r(j\Omega) = 0$ for $|\Omega| \geq \frac{\pi}{T}$.
When we downsample a signal by a factor of $M$, we create a new signal by taking every $M$th sample. What this means conceptually is that we are reconstructing the continuous signal and then sampling it at a slower rate with period $MT$, where $T$ was the original sampling period. If $x_c(t)$ is the original continuous-time signal and $x[n] = x_c(nT)$ is the sampled signal, then the downsampled signal will be
$$x_d[n] = x[nM] = x_c(nMT).$$
If we re-index and let $r = i + kM$ for $0 \le i \le M-1$ and $-\infty < k < \infty$, we get
$$X_d\!\left(e^{j\omega}\right) = \frac{1}{M}\sum_{i=0}^{M-1} X\!\left(e^{j\frac{\omega - 2\pi i}{M}}\right).$$
What this means is that to obtain the new DTFT, we need to scale (stretch) the frequency axis so that the old frequency $\frac{\pi}{M}$ maps to $\pi$. To prevent aliasing when this happens, we include an LPF with cutoff $\frac{\pi}{M}$ before the downsampling step.
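A minimal decimation sketch (the two tones, factor, and filter length are arbitrary): low-pass filter with cutoff $\frac{\pi}{M}$, then keep every $M$th sample.

```python
import numpy as np
from scipy import signal

M = 4                                   # downsampling factor
n = np.arange(0, 4000)
x = np.cos(2 * np.pi * 0.05 * n) + 0.5 * np.cos(2 * np.pi * 0.4 * n)

# Anti-aliasing LPF with cutoff pi/M (normalized cutoff 1/M in firwin's convention),
# so the 0.4 cycles/sample component, which would alias, is removed before decimation.
h = signal.firwin(numtaps=101, cutoff=1.0 / M)
x_filt = signal.lfilter(h, 1.0, x)

x_down = x_filt[::M]                    # keep every Mth sample
print(len(x), len(x_down))
```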
Then we interpolate by convolving with a sinc, i.e., the impulse response $h[n] = L\,\operatorname{sinc}\!\left(\frac{n}{L}\right)$ of an ideal LPF with cutoff $\frac{\pi}{L}$ and gain $L$.
In the frequency domain, this looks like compressing the frequency axis so that $X_e\!\left(e^{j\omega}\right) = X\!\left(e^{j\omega L}\right)$ and then applying a low pass filter.
The gain of $L$ is used to scale the spectrum so it is identical to what we would have gotten if we had sampled the continuous signal with period $\frac{T}{L}$.
In order to resample a signal to a new period $T' = \frac{M}{L}T$, where $T$ is the original sampling period, we can upsample by $L$ and then downsample by $M$.
Notice that resampling with a very small change wastes a lot of computation. For example, resampling with a factor like $\frac{100}{101}$ would upsample by 100 and then throw away most of those samples when we downsample. Thus it is useful to exchange the order of operations when resampling to save computation.
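For instance, a sketch using SciPy's polyphase resampler (the signal and the 100/101 ratio are arbitrary choices), which avoids explicitly expanding by 100, filtering at the high rate, and discarding samples:

```python
import numpy as np
from scipy import signal

n = np.arange(0, 10_000)
x = np.cos(2 * np.pi * 0.01 * n)

# Rational resampling by L/M = 100/101 with an efficient polyphase implementation.
y = signal.resample_poly(x, up=100, down=101)
print(len(x), len(y))   # ~10000 -> ~9901 samples
```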
During upsampling, we convolve our filter with a bunch of zeros caused by the expansion. Convolution with 0's is unnecessary, so instead we could convolve with a compressed version of the filter before expanding. Notice the results will be the same as long as $H(z)$ is a function of $z^L$, i.e., $H(z) = G\!\left(z^L\right)$ for some filter $G$.
During downsampling, we do a convolution and then throw away most of our results. It would be much more efficient to instead compute only the quantities we need. This is accomplished by downsampling first and then convolving. Just like before, the results are only going to be the same if $H(z)$ is a function of $z^M$, i.e., $H(z) = G\!\left(z^M\right)$.
The problem with interchanging filters is that it is not always possible. Most filters are not compressible. However, we can get around this issue and still get the efficiency gains of interchanging filter orders by taking a polyphase decomposition of our filters. First notice that any $H(z)$ can be written as a sum of delayed compressible filters:
$$H(z) = \sum_{k=0}^{M-1} z^{-k} E_k\!\left(z^M\right).$$
This means if we let $e_k[n] = h[nM + k]$, we can utilize the linearity of convolution to build a bank of filters.
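A minimal sketch of polyphase decimation (the filter, factor, and signal are arbitrary assumptions): each branch filters a delayed-and-downsampled input stream with one polyphase component $e_k[n] = h[nM + k]$, and summing the branches matches the naive filter-then-downsample result.

```python
import numpy as np
from scipy import signal

M = 3
x = np.random.randn(3000)
h = signal.firwin(numtaps=30, cutoff=1.0 / M)   # anti-aliasing LPF, length a multiple of M

# Naive decimation: filter at the high rate, then discard M-1 of every M outputs.
y_naive = signal.lfilter(h, 1.0, x)[::M]

# Polyphase decimation: filter each downsampled stream with e_k and sum the branches.
y_poly = np.zeros_like(y_naive)
for k in range(M):
    e_k = h[k::M]                                          # k-th polyphase component
    x_k = np.concatenate((np.zeros(k), x))[:len(x)][::M]   # x delayed by k, then downsampled
    y_poly += signal.lfilter(e_k, 1.0, x_k)[:len(y_naive)]

print(np.allclose(y_naive, y_poly))     # True
```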
If we have a dynamic range of $R$ (i.e., $R$ is the length of the range of values we can represent), then our step between quantized values is $\Delta = \frac{R}{2^B}$, assuming we are representing our data as 2's complement numbers with $B$ bits. We model the error caused by quantization as additive noise. Our quantized signal is described by
$$\hat{x}[n] = x[n] + e[n].$$
$e[n]$ is produced by a stationary random process
$e[n]$ is not correlated with $x[n]$
$e[n]$ is white noise ($e[n]$ is not correlated with $e[m]$ for $m \neq n$)
For rapidly changing signals with small $\Delta$, these assumptions hold, and they are useful in modeling quantization error. Since $e[n]$ is uniformly distributed over $\left(-\frac{\Delta}{2}, \frac{\Delta}{2}\right]$, its variance is $\sigma_e^2 = \frac{\Delta^2}{12}$.
What this tells us is that every new bit we add gives us 6dB of improvement. It also tells us that we need to adapt the range of quantization to the RMS amplitude of the signal. This means there is a tradeoff between clipping and quantization noise. When we oversample our signal, we can further limit the effects of quantization noise because this noise will be spread out over more frequencies and the LPF will eliminate noise outside the signal bandwidth. This makes the new noise variance $\frac{\sigma_e^2}{M}$ (if we oversample by a factor of $M$). Thus we can modify the SNR equation to
$$\mathrm{SNR} = 6.02B + 10.8 - 20\log_{10}\!\left(\frac{R}{\sigma_x}\right) + 10\log_{10}(M)\ \text{dB}.$$
This shows that doubling $M$ yields a 3dB improvement (equivalent to 0.5 more bits).
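As a quick numerical check of the 6dB-per-bit rule (the quantizer, signal level, and bit depths below are arbitrary assumptions for the simulation):

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(x, B, R=2.0):
    # Uniform quantizer with B bits over a dynamic range of length R.
    delta = R / 2**B
    return delta * np.round(x / delta)

# A zero-mean test signal kept well inside the range to avoid clipping.
x = 0.25 * rng.standard_normal(200_000)

for B in (8, 9, 10, 11, 12):
    e = quantize(x, B) - x
    snr_db = 10 * np.log10(np.var(x) / np.var(e))
    print(B, round(snr_db, 1))   # SNR grows by roughly 6 dB per extra bit
```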
We design the reconstruction filter $H_r(j\Omega)$ such that the cascade $H_{\text{ZOH}}(j\Omega)\,H_r(j\Omega)$ is approximately an ideal LPF.