We now filter the signal $x(t)$ with the sampled impulse response $h(nT)$:

$y(t) = \sum_{n=0}^{\infty} h(nT) \, x(t - nT)$ (105)
This sum is a direct recipe for how to filter the signal $x(t)$. We only need the impulse response of the filter $h(t)$ and we can set up a digital filter (see Fig. 16). Of course, in practice this impulse response cannot run till infinity but only for a limited number of samples. These are often called "taps". So, for example, a filter with $M$ samples of the impulse response has $M$ "taps". This in turn requires a delay line which can hold $M$ samples:

$y(t) = \sum_{n=0}^{M-1} h(nT) \, x(t - nT)$
This is the formula of a Finite Impulse Response (FIR) filter where we have sampled an analogue impulse response at time intervals of $T$. However, usually the impulse response of the filter is directly derived in the digital domain, where the argument of $h$ represents just the sample index and the sampling interval is implicitly the inverse of the sampling rate. For that reason one needs to distinguish between:

$y(n) = \sum_{k=0}^{M-1} h(kT) \, x(n - k)$ (107)

$y(n) = \sum_{k=0}^{M-1} h_d(k) \, x(n - k)$ (108)
Eq. 107 is the filter where the impulse response is based on an analogue filter, and Eq. 108 is based on an impulse response originating from a digital filter design (see sections 6.7.6 and 6.7.7) where the frequencies are normalised frequencies between $0$ and $0.5$, i.e. fractions of the sampling rate. From now on we will always refer to the "digital" one (Eq. 108) and the subscript will be omitted (Eq. 109 & Fig. 17):

$y(n) = \sum_{k=0}^{M-1} h(k) \, x(n - k)$ (109)
In the time domain the FIR filter then performs the following operation:

$y(n) = \sum_{k=0}^{\infty} h(k) \, x(n - k)$ (110)
Remember that we have derived the FIR filters from the analogue domain by filtering a signal $x(t)$ (Eq. 105). The time domain representation of this is a discrete and causal convolution operation (using the sampled Laplace transform). However, the problem is that this runs to infinite time (see Eq. 110) and would require an infinite number of delay steps and thus infinite time to complete. So the FIR filter at best performs an approximation of the convolution operation, and there are serious problems with the finite number of taps $M$, which requires a technique called windowing that moves it even further away from a convolution operation. On the other hand, if the impulse response is strictly limited in time, one can use this property, for example for matched filters. However, the bottom line is that sweeping statements such as "FIR filters perform a convolution" are generally wrong if nothing is known about the impulse response.
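To make this concrete, here is a minimal sketch (with a made-up three-tap impulse response) showing that when the impulse response is strictly limited in time and fits completely into the taps, the FIR filter output is exactly the convolution:

import numpy as np

# hypothetical short impulse response and test signal
h = np.array([0.25, 0.5, 0.25])
x = np.random.randn(1000)

# FIR filtering: the truncated, causal convolution sum
y_fir = np.zeros(len(x))
for n in range(len(x)):
    for k in range(len(h)):
        if n - k >= 0:
            y_fir[n] += h[k] * x[n - k]

# identical to the first len(x) samples of the full convolution
y_conv = np.convolve(h, x)[:len(x)]
print(np.allclose(y_fir, y_conv))  # True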
float filter(float value) {
    // shift the delay line by one sample
    for (int i = taps - 1; i > 0; i--) {
        bufferFIR[i] = bufferFIR[i - 1];
    }
    // store the new input sample at the start of the delay line
    bufferFIR[0] = value;
    // calculate the output as the weighted sum over the delay line
    float output = 0;
    for (int i = 0; i < taps; i++) {
        output += bufferFIR[i] * coeffFIR[i];
    }
    return output;
}
import numpy as np

class FIR_filter:
    def __init__(self, _coefficients):
        self.ntaps = len(_coefficients)
        self.coefficients = _coefficients
        self.buffer = np.zeros(self.ntaps)

    def filter(self, v):
        self.buffer = np.roll(self.buffer, 1)
        self.buffer[0] = v
        return np.inner(self.buffer, self.coefficients)

which again processes the signal sample by sample. It uses the numpy "roll" command to shift the samples and then the inner product to calculate the weighted sum between the buffer and the coefficients.
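A minimal usage sketch of this class (the coefficients here are just a made-up three-tap moving average):

import numpy as np

fir = FIR_filter(np.array([1/3, 1/3, 1/3]))  # hypothetical coefficients
x = np.random.randn(1000)                    # some test signal
y = np.zeros(len(x))
for i in range(len(x)):
    y[i] = fir.filter(x[i])                  # sample by sample, as in a realtime system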
If one wants to filter a whole array one can use SciPy's lfilter command:
import scipy.signal as signal

y = signal.lfilter(h, 1, x)

This filters the signal x with the impulse response h. Note that this operation processes the whole array at once and is thus acausal.
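If the filtering has to stay causal and realtime-capable (block by block), lfilter can carry its internal state from one block to the next through its zi argument. A sketch, assuming h and x as above and an arbitrary block size:

import numpy as np
import scipy.signal as signal

zi = np.zeros(len(h) - 1)                 # internal state of the delay line
y = np.zeros(len(x))
blocksize = 64                            # arbitrary block size
for s in range(0, len(x), blocksize):
    block = x[s:s + blocksize]
    y[s:s + len(block)], zi = signal.lfilter(h, 1, block, zi=zi)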
Fig. 18 shows a fixed point FIR filter. The input is an integer variable with $I$ bits, the accumulator is an integer variable with $A$ bits, and the output as well (usually the same as the input in terms of bit width).
In contrast to a floating point FIR filter we need to scale up the coefficients so that they use the full integer range, to avoid quantisation errors. For example, if the coefficients of $h(n)$ range from $-1$ to $+1$ and we have signed 16 bit integers, then the scaling factor is $2^{15}-1$.
However, the accumulator $A$ which collects the data needs to have more bits, because it receives input values at $I$ bits precision and these are multiplied by the coefficients which have been scaled up by $2^{15}-1$, i.e. about 15 bits. If we have $M$ taps then the summation adds at most another $\log_2(M)$ bits. The total number of bits we need in the accumulator in the worst case is:

$A = I + 15 + \log_2(M)$
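A worked example under the assumptions above (16 bit signed input samples, coefficients scaled up to 16 bit signed, and a hypothetical tap count of 100):

import numpy as np

I = 16                             # input precision in bits (signed)
C = 15                             # coefficients scaled by 2**15 - 1, i.e. about 15 bits
M = 100                            # hypothetical number of taps
extra = int(np.ceil(np.log2(M)))   # worst case growth when summing M products: 7 bits
A = I + C + extra
print(A)                           # 38 bits, so a 64 bit accumulator is safe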
A constant group delay can be achieved by restricting ourselves to the transfer function:

$H(\omega) = B(\omega) \, e^{-i \omega \tau}$ (114)
where $B(\omega)$ is real and the phase is only defined by the exponential. The phase of this transfer function is then trivially $\phi(\omega) = -\omega\tau$, and the group delay, the (negative) derivative of this term, is the constant $\tau$. Eq. 114 now imposes restrictions on the impulse response of the filter. To show this we use the inverse Fourier transform of Eq. 114 to get the impulse response:

$h(t) = \int B(\omega) \, e^{-i \omega \tau} \, e^{i \omega t} \, d\omega = b(t - \tau)$

where $b(t)$ is the inverse Fourier transform of $B(\omega)$.
The factor $e^{-i\omega\tau}$ restricts the values of the impulse response: because $h(t)$ must be real and $B(\omega)$ is real, $b(t)$ is symmetric about zero, and thus $h(t) = b(t-\tau)$ is symmetric about the delay $\tau$. This gives us the final FIR design rule:
$h(n) = \pm h(M - 1 - n)$

i.e. the impulse response of an FIR filter with $M$ taps must be symmetric (or antisymmetric) about its midpoint, which delays the signal by $\tau = (M-1)/2$ samples.
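A quick numerical check of this rule, using an arbitrary symmetric impulse response and SciPy's group_delay function:

import numpy as np
import scipy.signal as signal

h = np.array([1.0, 2.0, 4.0, 2.0, 1.0])    # arbitrary, but symmetric: h(n) = h(M-1-n)
w, gd = signal.group_delay((h, 1))
print(np.allclose(gd, (len(h) - 1) / 2))   # constant group delay of (M-1)/2 = 2 samples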
The standard technique is to multiply the coefficients with a window function $w(n)$ which becomes, and stays, zero at a certain coefficient so that Eq. 106 need not run to infinity:

$h_w(n) = h(n) \, w(n)$
Note that the higher the stopband damping a window achieves, the wider the transition from pass- to stopband. This can be seen when comparing the Blackman window with the Hamming window (Fig. 19). For the lowpass filter the two results look quite similar. However, for a bandstop filter the wider transition width might actually lead to very poor stopband damping. In such a case a Hamming window might be a better choice.
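A sketch comparing the two windows on the same hypothetical lowpass design (it uses the analytic lowpass impulse response which is derived below):

import numpy as np

M = 100
fc = 0.1                                   # hypothetical normalised cutoff
n = np.arange(M) - (M - 1) / 2             # sample indices, symmetric about the midpoint
h = 2 * fc * np.sinc(2 * fc * n)           # ideal lowpass: sin(2 pi fc n)/(pi n)
H_ham = np.abs(np.fft.rfft(h * np.hamming(M), 4096))
H_black = np.abs(np.fft.rfft(h * np.blackman(M), 4096))
f = np.fft.rfftfreq(4096)                  # normalised frequency axis from 0 to 0.5
stop = f > 0.17                            # a region well inside the stopband
print(20 * np.log10(H_ham[stop].max()))    # Hamming: less stopband damping ...
print(20 * np.log10(H_black[stop].max()))  # ... Blackman: more damping, wider transition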
import numpy as np

f_resp = np.ones(100)
# note we need to add "+1" to the end because the end of
# the range is not included.
f_resp[4:6+1] = 0
f_resp[94:96+1] = 0
hc = np.fft.ifft(f_resp)
h = np.real(hc)
# the impulse response is from index 0 to index 49 on the left
# and on the right hand side from index 50 to index 99
h_shift = np.zeros(100)
h_shift[0:50] = h[50:100]
h_shift[50:100] = h[0:50]
h_wind = h_shift * np.hamming(100)

To get a nice symmetric impulse response we need to shift the inverse by 50 samples, i.e. half of the buffer.
We demonstrate this with a lowpass filter whose ideal frequency response is one below the normalised cutoff frequency $f_c$ and zero above it:

$H(f) = \begin{cases} 1 & |f| \leq f_c \\ 0 & \text{otherwise} \end{cases}$
Use the inverse Fourier transform to get the impulse response:
$h(n) = \int_{-1/2}^{+1/2} H(f) \, e^{i 2 \pi n f} \, df$ (125)

$= \int_{-f_c}^{+f_c} e^{i 2 \pi n f} \, df$ (126)

$= \left[ \frac{1}{i 2 \pi n} \, e^{i 2 \pi n f} \right]_{-f_c}^{+f_c}$ (127)

$= \frac{\sin(2 \pi f_c n)}{\pi n}$ (128)
For $n = 0$ Eq. 128 cannot be evaluated directly (zero divided by zero), so we calculate $h(0)$ separately:

$h(0) = \int_{-f_c}^{+f_c} df$ (129)

$= 2 f_c$ (130)
Highpass, bandstop and bandpass can be calculated in exactly the same way and lead to the following ideal filter characteristics (all frequencies normalised to the sampling rate):

Lowpass: $h(n) = \frac{\sin(2 \pi f_c n)}{\pi n}$, $h(0) = 2 f_c$

Highpass: $h(n) = -\frac{\sin(2 \pi f_c n)}{\pi n}$, $h(0) = 1 - 2 f_c$

Bandpass: $h(n) = \frac{\sin(2 \pi f_2 n) - \sin(2 \pi f_1 n)}{\pi n}$, $h(0) = 2 (f_2 - f_1)$

Bandstop: $h(n) = \frac{\sin(2 \pi f_1 n) - \sin(2 \pi f_2 n)}{\pi n}$, $h(0) = 1 - 2 (f_2 - f_1)$
Here is example code for a bandstop filter which fills the array h with the analytically calculated impulse response:
import numpy as np

# fs is the sampling rate
f1 = 45.0/fs
f2 = 55.0/fs
n = np.arange(-200, 200)
h = (1/(n*np.pi))*(np.sin(f1*2*np.pi*n)-np.sin(f2*2*np.pi*n))
h[200] = 1-(f2*2*np.pi-f1*2*np.pi)/np.pi
h = h * np.hamming(400)

After the impulse response has been calculated it is windowed so that the cutoff is smooth. This is not very elegant as it first causes a division by zero (at $n = 0$, which is index 200) and then fixes that coefficient afterwards. Do it better!
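One possible cleaner version of this exercise: np.sinc, defined as sin(pi x)/(pi x), is one at x = 0, so no division by zero occurs and no coefficient needs to be patched afterwards:

import numpy as np

fs = 1000.0                     # assumed sampling rate
f1 = 45.0 / fs
f2 = 55.0 / fs
n = np.arange(-200, 200)
# sin(2*pi*f*n)/(pi*n) == 2*f*np.sinc(2*f*n), also valid at n = 0
h = 2*f1*np.sinc(2*f1*n) - 2*f2*np.sinc(2*f2*n)
h[n == 0] += 1                  # the delta which turns the bandpass into a bandstop
h = h * np.hamming(len(n))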
from scipy.signal import firwin

h = firwin(m, 2*f)

generates a lowpass FIR filter with m taps at the normalised frequency f. Note the factor of two, because in SciPy the normalisation is not to the sampling rate but to the Nyquist frequency. To be compatible with the maths here, the easiest approach is to multiply the frequency by two. With this command one can also design highpass, bandstop and bandpass filters. Type help(firwin), which provides examples for all filter types and shows which parameters need to be set.
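For example, the 45-55 Hz bandstop from above could be designed like this (two cutoffs with firwin's default pass_zero=True give a bandstop; note that the number of taps must be odd here):

from scipy.signal import firwin

fs = 1000.0                        # assumed sampling rate
f1 = 45.0 / fs
f2 = 55.0 / fs
h = firwin(401, [2 * f1, 2 * f2])  # again the factor of two for the Nyquist normalisation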
How can one build a correlator with a filter? Recall the definition of filtering:
$y(n) = \sum_{k=0}^{M-1} h(k) \, x(n-k)$ (138)

The filter slides the time reversed impulse response over the signal. Thus, to correlate the signal $x(n)$ with a template $r(n)$ we can simply load the FIR filter with the time reversed template:

$h(k) = r(M-1-k)$ (139)

which turns the FIR filter into a correlator, delayed by $M-1$ samples.
How to improve the matching process? Square the output of the filter!
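A minimal sketch of such a matched filter (the template and the signal are made up): the FIR coefficients are the time reversed template, and the squared output peaks where the template is hidden in the signal:

import numpy as np

template = np.sin(2 * np.pi * 0.1 * np.arange(50))  # made-up template r(n)
h = template[::-1]                                  # time reversed template as FIR coefficients
x = np.random.randn(2000)                           # noise ...
x[1000:1050] += template                            # ... with the template hidden at sample 1000
y = np.convolve(h, x)                               # FIR filtering now correlates x with r
detection = y ** 2                                  # squaring improves the matching
print(np.argmax(detection))                         # peaks around sample 1049 (= 1000 + 50 - 1)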
With the desired signal $d(n)$ and the FIR filter output $y(n)$ the error is:

$e(n) = d(n) - y(n)$ (142)

We need to derive how the error $e(n)$ can change the FIR filter coefficients so that the error is actually minimised. This can be expressed as a so called "gradient descent" which minimises the squared error, because both a positive and a negative error are equally bad:

$\Delta h_m(n) = -\mu \frac{\partial e^2(n)}{\partial h_m}$ (143)
where $\Delta h_m(n)$ is the change of the coefficient $h_m$ at every time step: $h_m(n+1) = h_m(n) + \Delta h_m(n)$. $\mu$ defines how quickly the coefficients change at every time step and is called the "learning rate". Note here the change in notation of the FIR filter coefficients $h_m(n)$, which are changing much more slowly than the sampled signals $x(n)$ and $y(n)$ and can still be seen as constant for the realtime filtering operation. Thus, we have the sample index $n$ for the sample by sample processing as before, and the index $m$ for the very slowly changing FIR filter coefficients. To gain some intuition why Eq. 143 minimises the error we look at what happens if we increase the FIR coefficient $h_m$ a little bit, for example by the small amount $\epsilon$: $h_m \rightarrow h_m + \epsilon$. Then we can observe two cases:

1. The squared error $e^2$ increases: the gradient $\partial e^2 / \partial h_m$ is positive and Eq. 143 decreases the coefficient $h_m$ again.

2. The squared error $e^2$ decreases: the gradient is negative and Eq. 143 increases the coefficient $h_m$ further.

In both cases the coefficient $h_m$ is changed in the direction which reduces the squared error.
Eq. 143 now needs to be turned into a form which can directly run in software, by solving its partial derivative after inserting Eqs. 142 and 110:

$\Delta h_m(n) = -\mu \frac{\partial e^2(n)}{\partial h_m} = -2 \mu \, e(n) \frac{\partial e(n)}{\partial h_m} = 2 \mu \, e(n) \, x(n-m)$

where the constant factor of two can be absorbed into the learning rate $\mu$, so that the update becomes $h_m(n+1) = h_m(n) + \mu \, e(n) \, x(n-m)$.
Now we have the recipe for an adaptive filter where the FIR filter minimises its own error $e(n)$ in a negative feedback loop, generating an output $y(n)$ which is as close as possible to the desired signal $d(n)$. The signal $y(n)$ is often called "the remover" as it cancels out the signal $d(n)$. However, not everything in $d(n)$ is cancelled out, but only what is correlated with the input $x(n)$. This means that the error signal $e(n)$ will not be zero but will contain any signal components which are not correlated with $x(n)$. This means that the error signal $e(n)$ is actually also the cleaned up output signal of the adaptive filter.
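A sketch of this in code, extending the FIR_filter class from above with an LMS update (the learning rate is an arbitrary choice):

import numpy as np

class FIR_filter_LMS(FIR_filter):
    def lms(self, error, mu=0.01):
        # gradient descent step: h_m(n+1) = h_m(n) + mu * e(n) * x(n-m);
        # self.buffer[m] holds x(n-m) after a call to filter()
        self.coefficients = self.coefficients + mu * error * self.buffer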
Imagine you want to remove 50 Hz mains interference from a signal:
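A sketch of how this could look with the adaptive filter above (all parameter values are made up): the reference input $x(n)$ is a pure 50 Hz sine, the desired input $d(n)$ is the contaminated signal, and the cleaned up output is the error $e(n)$:

import numpy as np

fs = 1000.0                                    # assumed sampling rate
t = np.arange(0, 10, 1 / fs)
wanted = np.sin(2 * np.pi * 5 * t)             # stand-in for the useful signal
d = wanted + np.sin(2 * np.pi * 50 * t + 1.0)  # desired input: signal plus 50 Hz mains
x = np.sin(2 * np.pi * 50 * t)                 # reference input: pure 50 Hz

fir = FIR_filter_LMS(np.zeros(100))
e = np.zeros(len(t))
for i in range(len(t)):
    y = fir.filter(x[i])                       # the "remover"
    e[i] = d[i] - y                            # the error doubles as the cleaned up output
    fir.lms(e[i], mu=0.01)                     # adapt the coefficients
# after convergence e contains the 5 Hz signal with the 50 Hz component removed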