
Filtering: filter design, use, caveats

Goals:

  • Familiarize yourself with basic filtering concepts: frequency and phase response, difference equation, roll-off, ripple
  • Learn to use MATLAB's filter design and visualization tools
  • Understand the tradeoffs inherent in filtering and use this knowledge to select the appropriate filter for a particular application

Exercise:

  • Write a function detectSWR() that can reliably identify sharp wave-ripple complexes in a hippocampal LFP using filtering

Resources:

  • (background) Leis Section 3.10 (difference equation)
  • (background) Leis Chapter 8 (filtering; note that this mentions some concepts that are beyond the scope of this course, so skim over this)

Introductory remarks

In the previous module we saw that any signal can be decomposed into a sum of sinusoids, described by a series of magnitude and phase coefficients of a harmonic series. The Fourier transform and the algorithm that performs it can be thought of as mapping the signal from the time domain into the frequency domain. The inverse Fourier transform does the opposite.

This raises the possibility of manipulating the signal in the frequency domain, for instance by removing or amplifying certain frequencies, and then reconstructing the signal. This is an intuitive way to think about filtering, defined as an operation or process that removes or attenuates certain features from a signal. This page has a nice graphical illustration of Fourier filtering.
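
As a toy sketch of this idea (the signal and cutoff below are chosen arbitrarily for illustration; as discussed later, this is not how filtering is done in practice):

Fs = 500; tvec = 0:1/Fs:1-1/Fs;
x = sin(2*pi*10*tvec) + sin(2*pi*60*tvec); % 10 Hz plus 60 Hz component
 
xf = fft(x); % map to the frequency domain
f = (0:length(x)-1)*Fs/length(x); % frequency axis for the FFT bins
xf(f > 40 & f < Fs-40) = 0; % zero out everything above 40 Hz (and its mirror image)
x_filtered = real(ifft(xf)); % reconstruct: only the 10 Hz component remains
 
plot(tvec,x,tvec,x_filtered); legend({'original','Fourier filtered'});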

Filtering is of central importance in neuroscience, both as an analysis tool (especially when dealing with continuously sampled data) and as a model for operations performed by neural circuits at multiple levels. For instance, the classical receptive fields of V1 neurons can be thought of as filters operating on visual input, and the characteristic time course of postsynaptic potentials imposes limits on how fast signals can be transmitted. Here, we focus on some basic data analysis applications.

Step-by-step

Filtering is a complex topic in signal processing, with a huge literature, much current research, and many mathematical derivations beyond the scope of this course. However, it is important to be familiar with some of the fundamentals so you can better place descriptions you encounter in the literature, as well as be aware of issues relating to your own analyses. Thus, we will begin with a brief conceptual overview.

Basic concepts in filtering

The difference equation

Removing specific frequencies in the Fourier domain is an intuitive way to think about filtering, but this is not how filters are generally implemented. (There are several reasons for this, such as the difficulty of filtering in real time as samples are coming in.) Instead, digital filtering is typically accomplished with a difference equation of the form:

a(0)*y(n) = b(0)*x(n) + b(1)*x(n-1) + ... + b(nb)*x(n-nb) - a(1)*y(n-1) - a(2)*y(n-2) - ... - a(na)*y(n-na)

This equation describes how to compute sample n of the filtered signal y from the original signal x. In general, the value of y(n) can be a function of an arbitrary number of past samples (n-1, n-2, etc.) of the original signal x, as well as of y itself.

The coefficients b and a fully define the filter. Notice that a filter with only b coefficients depends only on the original signal x; that is, there are no feedback terms taken from the filtered signal y. The a coefficients describe the feedback components. This distinction is the basis for the commonly used terms “FIR” (Finite Impulse Response; only b components) and “IIR” (Infinite Impulse Response; also has a components) to describe a filter. The behavior of IIR filters can be much more complex because of this feedback.

Another useful term to be aware of is the order of a filter: this simply refers to the maximum number of samples back in time the filter looks, or equivalently, the largest index of its a and b coefficients. a(0) and b(0) always need to be defined (a(0) is the implicit coefficient in front of y(n) on the left side, often omitted because it is generally 1), along with at least one a or b coefficient for each sample back, up to the maximum.
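
As a minimal illustration of these terms (the coefficient values below are arbitrary examples, not from any particular filter design), here is a first-order FIR filter and a first-order IIR filter run through filter():

% first-order FIR filter: y(n) = 0.5*x(n) + 0.5*x(n-1) -- output depends only on x
b_fir = [0.5 0.5]; a_fir = 1;
 
% first-order IIR filter: y(n) = 0.1*x(n) + 0.9*y(n-1) -- note the feedback term on y
b_iir = 0.1; a_iir = [1 -0.9];
 
x = randn(1,100); % some test input
y_fir = filter(b_fir,a_fir,x);
y_iir = filter(b_iir,a_iir,x);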

(Technical note: the difference equation can be related to the frequency domain through the Z-transform; it is not necessary to understand this, but if you are curious, the Leis book has an explanation.)

An example using filter()

An example of a simple filtering operation is to compute a running average. Looking at the difference equation above, we see we can accomplish this using b coefficients only. How many coefficients we use will determine the size of the window (in samples) that we average over; the magnitude of the coefficients should be set so that we in fact get the mean, and not for instance the sum.

So, to compute the running mean over four samples, we want:

y(n) = 1/4*x(n) + 1/4*x(n-1) + 1/4*x(n-2) + 1/4*x(n-3)

Using filter(), we would simply do

a = 1; % a_0 is the (hidden) coefficient on the left side, in front of y(n)
b = [1/4 1/4 1/4 1/4]; % four b's of 1/4 each so we get the mean
 
y = filter(b,a,x); % x is the original signal, y the filtered version

Of course, this won't work because we don't have our input signal x defined yet. As a quick illustration (following an example from the MATLAB documentation), we can do:

load count.dat;
x = count(:,1);
 
t = 1:length(x);
plot(t,x,'-.',t,y,'-'), grid on
legend('Original','Filtered','Location','northwest')

Combining the above pieces should give:

The filtered signal looks roughly as expected – a nicely smoothed version of the original – but a few things are worth noting. Look at the first sample of the filtered signal: it equals x(1)/4, which means that any values of x which we did not have were assumed to be zero. This is an example of an “edge effect”; in general we don't really think the signal we didn't sample is truly zero, but this is filter()'s implicit, default assumption.

Another property of this filtered signal is that it is phase-shifted to the right; this of course arises because our y(n) is based only on past samples, not on the future. This is a key issue for neuroscience data analysis and we will return to it below.

Common filters and their applications

Computing a running mean has its uses, but for neural data we are typically interested in other applications of filtering. The canonical filter types, illustrated below, are the lowpass filter (pass low frequencies, suppress high frequencies), highpass filter (the reverse), bandpass filter (only pass frequencies within a certain range) and notch filter (suppress frequencies within a narrow range):

Note that these illustrations are in the frequency domain: some frequencies are unchanged (0 dB; recall that the decibel is a common unit of signal power), while others are attenuated. The -3dB point is a common reference and corresponds to a 50% reduction in signal power (recall that dB is a log scale).
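
As a quick sanity check of the dB scale:

10*log10(0.5) % halving signal power corresponds to 10*log10(0.5), i.e. about -3 dB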

How do we know what values of b and a will accomplish these filtering operations? Can the optimal b and a for a given application be found? These are questions about filter design, a field for which MATLAB (in particular the Signal Processing Toolbox) provides many useful tools.

As may be expected from your experience with windowing and spectral leakage in the previous module, there is no ideal filter that completely rejects all unwanted frequencies while leaving all desired frequencies intact. Rather, different filters make different tradeoffs in terms of their frequency and phase response.

For instance, a Butterworth filter has a maximally flat frequency response in the passband. However, its rolloff (the transition from passband to stopband) is not as steep as that of Chebyshev filters. In turn, Chebyshev filters exhibit ripple (some distortion) in either the passband or the stopband and thus are not as flat as Butterworth filters. Many other filters, with different properties, exist. Thus, it becomes important to select the right filter for your application. To help with this process, MATLAB has a useful tool that provides the key properties of a given filter at a glance.

Frequency response of some common filter types

Let's start with designing a basic Butterworth bandpass filter. The help for butter() says:

>> help butter
 butter Butterworth digital and analog filter design.
   [B,A] = butter(N,Wn) designs an Nth order lowpass digital
   Butterworth filter and returns the filter coefficients in length 
   N+1 vectors B (numerator) and A (denominator). The coefficients 
   are listed in descending powers of z. The cutoff frequency 
   Wn must be 0.0 < Wn < 1.0, with 1.0 corresponding to 
   half the sample rate.
 
   If Wn is a two-element vector, Wn = [W1 W2], butter returns an 
   order 2N bandpass filter with passband  W1 < W < W2.

So, butter() computes for us the coefficients a and b of the difference equation. The “descending powers of z” referred to correspond to samples further in the past: z^0 is the current sample, z^-1 the previous one, et cetera. This is Z-transform stuff you don't need to understand; for now it's sufficient to know that butter() returns the a and b coefficients in the order we expect. Likewise, the numerator and denominator refer to where a and b would end up if we wrote out the “transfer function” for the filter; no need to worry about this either.

Note that we need to specify the cutoff frequencies as numbers between 0 and 1, where 1 corresponds to the Nyquist frequency of our data. So we cannot directly say we want the cutoff to be e.g. 250Hz; we have to normalize by Fs/2.
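
For example, assuming a sampling rate of 500 Hz (as used in the exercise below):

Fs = 500; % sampling frequency in Hz
Wn = [50 100] * 2 / Fs; % i.e. [50 100] / (Fs/2) -- normalized cutoffs of [0.2 0.4]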

☛ Generate a 10-second long white noise signal, sampled at 500Hz. Filter it using a bandpass Butterworth filter between 50 and 100 Hz of order 4. Plot the Welch spectrum, in dB, of the original and filtered signal, using a 512-sample Hanning window. Evaluate the FFT over 2^14 points.

Your code should look something like:

% set up time axis
Fs = ...
tvec = ...
 
% generate white noise
x = rand(...)
 
% get PSD
[Porig,Forig] = pwelch(x, ...)
 
% design filter
W1 = ...
W2 = ...
[b,a] = butter(4,[W1 W2]);
y = filter(...)
 
% get PSD
[Pfilt,Ffilt] = pwelch(y, ...)
 
% plot the resulting PSDs
subplot(121)
plot(... 10*log10(..));
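
One possible way to fill in this skeleton is sketched below; the exercise specifies the window (512-sample Hanning) and FFT length (2^14), but the overlap and plotting details here are our own choices:

Fs = 500; dt = 1/Fs;
tvec = 0:dt:10-dt; % 10 seconds of data
 
x = rand(size(tvec)); % white noise
 
[Porig,Forig] = pwelch(x,hanning(512),256,2^14,Fs); % 256-sample overlap is our choice
 
W1 = 50 * 2 / Fs; W2 = 100 * 2 / Fs; % normalized cutoffs
[b,a] = butter(4,[W1 W2]);
y = filter(b,a,x);
 
[Pfilt,Ffilt] = pwelch(y,hanning(512),256,2^14,Fs);
 
subplot(121); plot(Forig,10*log10(Porig)); title('original');
subplot(122); plot(Ffilt,10*log10(Pfilt)); title('filtered');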

When done correctly, the resulting PSDs should be similar to these:

As you can see, our filter is doing something, but it's also clear that it's not perfect. Frequencies outside the passband still get passed to some degree, and if you look carefully (grid on can help), you can see that frequencies in the passband but close to the rolloff frequencies are slightly attenuated already.

Of course, in general we are not interested in white noise, but it is a useful testbed to gauge the properties of a filter. Because we know white noise has a flat frequency spectrum, we can see at a glance which frequencies are attenuated after filtering.

One way to improve the filter is to ask MATLAB to suggest an appropriate filter order for us:

Wp = [ 50 100] * 2 / Fs; % passband - between 50 and 100 Hz
Ws = [ 45 105] * 2 / Fs; % stopband
[N,Wn] = buttord( Wp, Ws, 3, 20); % determine filter parameters
[b2,a2] = butter(N,Wn); % builds filter

The buttord() function takes the filter specifications and returns a suggested filter order (N) and new frequency cutoffs Wn to feed to butter(). The way we specify this is to say that we require a minimum level of attenuation in the stopband (in this case, 20dB) and we are willing to tolerate a certain amount of distortion (“ripple”) in the passband (in this case, 3dB).

As it turns out, for this filter, buttord() suggests order 15! This is quite a difference from the 4th order filter we implemented above. Let's see how our new filter compares. Happily, we don't need to keep filtering white noise in order to see the frequency response of a filter, because MATLAB provides a nice tool:

fvtool(b,a,b2,a2)

You should get:

Notice how our new filter (in green) is much more effective than the previous one. It has sharper roll-off and better attenuation in the stopband. The units on the frequency axis are fractions of Fs/2, so 0.2 corresponds to 50Hz as expected.

☛ What happens if you get greedy and try to have a stopband of [48 102] as an input to buttord()?
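
One way to find out (the variable names below are ours):

Ws_greedy = [48 102] * 2 / Fs; % narrower transition bands
[N_greedy,Wn_greedy] = buttord(Wp,Ws_greedy,3,20);
N_greedy % compare to the order suggested previously
[b3,a3] = butter(N_greedy,Wn_greedy);
fvtool(b2,a2,b3,a3) % inspect the frequency response -- does it still look reasonable?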

Let's try a different filter, a Chebyshev Type I. With this one, we can be greedy:

Wp = [ 50 100] * 2 / Fs; 
Ws = [ 48 102] * 2 / Fs;
[N,Wn] = cheb1ord( Wp, Ws, 3, 20); 
[b_c1,a_c1] = cheby1(N,0.5,Wn);
fvtool(b2,a2,b_c1,a_c1)

Note that we use the same workflow of having MATLAB suggest a filter order and passband based on our specifications. The cheby1() function needs one additional input argument compared to butter(); this relates to the “ripple” that is visible in the frequency response:

As you can see, our Chebyshev filter (in green) has a sharper rolloff, but at a cost: the frequency response in our passband is now no longer flat, and has a “ripple” instead. There is also a Chebyshev Type II filter, which has a flat passband response but a ripple in the stopband, but its rolloff tends to be less sharp so is less commonly used.
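
If you are curious, a Type II design can be compared against the Type I filter using the analogous cheb2ord() and cheby2() functions (the 20 passed to cheby2() specifies the stopband attenuation in dB):

[N2,Wn2] = cheb2ord(Wp,Ws,3,20);
[b_c2,a_c2] = cheby2(N2,20,Wn2);
fvtool(b_c1,a_c1,b_c2,a_c2)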

Phase responses and filtfilt()

Let's apply our filter to a more realistic signal:

Fs = 500; dt = 1./Fs;
t = [0 10];
tvec = t(1):dt:t(2)-dt;
 
s1 = sin(2*pi*80*tvec+pi/6);
s2 = sin(2*pi*40*tvec);
s = s1 + s2;
 
sf = filter(b_c1,a_c1,s);
 
plot(tvec,s,'k',tvec,sf,'r--'); hold on;
legend({'original','filtered'});
xlim([0 0.2]);

The result:

The filter was effective in removing the lower-frequency (40Hz) component, with only the 80Hz oscillation remaining. However, the phase of the signal has clearly changed also, by what appears like 180 degrees – the faster-oscillation peaks in the original trace now are closely aligned with the troughs in the filtered trace. As was the case in our first filtering example (the moving average filter above), the filtered signal appears delayed relative to the original.

Clearly, such phase shifts can be devastating for the analysis of neural data. If features of an LFP are delayed because of filtering, this may obscure relationships between the LFP and behavioral or neural events. In addition, any analysis that relies on knowing the phase of an LFP, such as theta phase precession or cross-frequency coupling, will be affected as well. In general, the phase response is an important characteristic of any filter, and indeed fvtool can display it.

☛ Run fvtool again on the Butterworth and Chebyshev filters above, and now select the Phase Response button in the top left of the window.
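
If you prefer a command-line check, the delay introduced by a filter can also be plotted with grpdelay(), which shows the group delay (in samples) as a function of frequency:

grpdelay(b_c1,a_c1,512,Fs) % group delay of the Chebyshev filter, frequency axis in Hz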

Notice that the phase response is not constant but in fact depends on the input frequency. This makes it very difficult to correct for such phase shifts. For neural data, where even small phase shifts can be problematic, we therefore take an alternative approach: we filter the signal forwards and backwards, such that the net phase response is zero: no phase shift! This is accomplished using the filtfilt() function:

sf = filtfilt(b_c1,a_c1,s);
 
plot(tvec,s,'k',tvec,sf,'r--'); hold on;
legend({'original','filtered'});
xlim([0 0.2]);

☛ Verify that there is no longer any detectable phase shift.

So far so good, but what are the consequences for the frequency response of doing this?

%% compare freq responses
Fs = 500; dt = 1./Fs;
t = [0 10];
tvec = t(1):dt:t(2)-dt;
 
x = rand(size(tvec)); % white noise input
[P,F] = pwelch(x,hanning(512),256,2^14,Fs);
 
y1 = filter(b_c1,a_c1,x);
[P1,F1] = pwelch(y1,hanning(512),256,2^14,Fs);
 
y2 = filtfilt(b_c1,a_c1,x);
[P2,F2] = pwelch(y2,hanning(512),256,2^14,Fs);
 
plot(F,10*log10(P),F,10*log10(P1),F,10*log10(P2));
legend({'original','filter','filtfilt'});

This gives:

As is often the case, the output from filtfilt() actually has a steeper rolloff than that from filter(). This is because we are effectively filtering twice, an effect that can be approximated by increasing the order of the filter (if you were to filter only once). filtfilt() tends to be more robust, but it is always a good idea to check your filter on white noise if you have not used it before.

Some typical neuroscience applications

Removing 60Hz line noise in the data

Let's try to design a notch filter to remove 60Hz line noise, using the familiar method:

[b,a] = butter(10, [59 61] * 2 / Fs, 'stop');
fvtool(b,a);

As you can see, the frequency response doesn't look good.

☛ Try some different filter orders and see if you can get the desired notch shape (i.e. attenuation at 60Hz, no attenuation everywhere else).

This issue is similar to what we encountered when trying to get a sharper rolloff for our bandpass filter. In that case we fixed things by going to a Chebyshev filter. Another method is the following:

[z,p,k] = butter(10, [59 61] * 2 / Fs, 'stop'); % note, we ask for 3 outputs instead of 2
[sos,g] = zp2sos(z,p,k); % convert to SOS format
h = dfilt.df2sos(sos,g); % create filter object
fvtool(h);

Now we have a good looking notch filter. This so-called “second-order section” format is more numerically precise than the standard difference equation [b,a] format. When dealing with higher order filters it can make a difference!

☛ Test this nice notch filter on white noise using filtfilt().
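
A minimal sketch of what such a test could look like (in recent MATLAB versions, filtfilt() can take the SOS matrix and gain directly):

Fs = 500;
x = rand(1,10*Fs); % 10 s of white noise
y = filtfilt(sos,g,x); % zero-phase filtering with the SOS notch filter
 
[P,F] = pwelch(x,hanning(512),256,2^14,Fs);
[Pn,Fn] = pwelch(y,hanning(512),256,2^14,Fs);
plot(F,10*log10(P),Fn,10*log10(Pn)); legend({'original','notch filtered'});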

Detecting movement artifacts

Movement artifacts arising from EMG (electrical activity generated by muscles) are common when recording neural signals from behaving subjects. Ideally, these are removed by correct referencing, but this is not always possible. Chewing artifacts, generated when a rat consumes food pellets, can be particularly pernicious, as can eyeblinks when recording scalp EEG.

As an example:

%% cd to R016-2012-10-08 folder first
cfg = [];
cfg.fc = {'R016-2012-10-08-CSC02b.ncs'};
csc = LoadCSC(cfg);
 
cscR = restrict(csc,1270,1272);
plot(cscR.tvec,cscR.data)

This is a piece of LFP recorded as this rat was eating. Note the characteristic rhythmic pattern of high-frequency oscillation events that occur approximately 4 times each second:

We could try to remove these events from the signal and then pretend they were never there in further analysis, but a more conservative approach is simply to detect them and store the corresponding times so that we can exclude them from subsequent analysis.

Let's take a guess at a frequency band that may be able to detect these events:

Fs = cscR.cfg.hdr{1}.SamplingFrequency;
Wp = [ 180 220] * 2 / Fs;
Ws = [ 178 222] * 2 / Fs;
[N,Wn] = cheb1ord( Wp, Ws, 3, 20); % determine filter parameters
[b_c1,a_c1] = cheby1(N,0.5,Wn); % builds filter
 
%fvtool(b_c1,a_c1); % remember to check your filter!
 
y = filtfilt(b_c1,a_c1,cscR.data);
plot(cscR.tvec,cscR.data,'b',cscR.tvec,y,'r');

It looks like this filter is picking up something from the chewing events. But we are not interested in a chewing-band signal per se; we want to use it to detect the presence of chewing events. Thus, we can convert the oscillating signal into an unsigned quantity, signal power:

chew_power = y.^2;

Plot this instantaneous signal power and notice this is a pretty variable quantity. This is where our moving average filter comes in handy, but we can also use medfilt1(), a median filter which is a bit more robust to outliers:

chew_power = y.^2;
chew_power_filtered = medfilt1(chew_power,101); % filter window is specified in samples, so this is ~50ms
[ax,h1,h2] = plotyy(cscR.tvec,cscR.data,cscR.tvec,chew_power_filtered);

The resulting envelope is well-behaved and matches the chewing events nicely:


The filtered chewband power can now be used for a simple thresholding operation to decide if a chewing event is present or not. By changing the filter properties, the same approach can be used to obtain a time series of any frequency band of interest, to examine for instance the relationship between running speed and theta power, or between gamma events and reward anticipation.
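
A bare-bones sketch of such a thresholding step is shown below; the threshold value is a guess that would need to be tuned, and the vandermeerlab TSDtoIV() function used in the exercise below implements this kind of detection more carefully:

th = 4 * median(chew_power_filtered); % example threshold; tune to your data
above = chew_power_filtered > th; % logical vector: is the threshold exceeded?
d = diff([0; above(:); 0]); % find threshold crossings
chew_start = cscR.tvec(find(d == 1)); % upward crossings: candidate event onsets
chew_end = cscR.tvec(find(d == -1) - 1); % downward crossings: candidate event offsets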

This approach illustrates that often we don't really want to replace the original signal with a filtered version. Rather, the same original signal is used to generate multiple different filtered signals, used together or separately to explore distinct analysis questions and to make include/exclude decisions.

Exercise

Recall the plots you made in Module 3. The piece of data you zoomed in on contained sharp wave-ripple complexes (SWRs) recorded from the dorsal CA1 area of the hippocampus, manifest as brief (~200ms) high-frequency oscillations in the LFP (the “ripple”) which often ride on top of a slower deflection (the “sharp wave”), like this:

These LFP events are associated with the synchronous activation of many cells, which is often structured to form “replay”, the sequential activation of place cells corresponding to a coherent spatial trajectory.

Studies of replay start with the detection of potential replay events. To do this, we need to isolate those features of the LFP that are associated with SWR events and distinguish them from those resulting from artifacts associated with chewing, grooming, et cetera.

Based on the filtering concepts above, we can implement a workflow for detecting SWR events, as follows:

%% some hippocampus data
cd('C:\data\R042-2013-08-18_recording');
cfg = []; cfg.fc = {'R042-2013-08-18-CSC03a.ncs'};
lfp = LoadCSC(cfg);
 
%% filter in SWR band
cfg = [];
cfg.f = [140 220];
cfg.display_filter = 0;
 
SWRf = FilterLFP(cfg,lfp);
 
%% obtain power and z-score it
SWRp = LFPpower([],SWRf);
SWRp_z = zscore_tsd(SWRp);
 
%% detect events
cfg = [];
cfg.method = 'raw';
cfg.threshold = 3;
cfg.dcn =  '>'; % return intervals where threshold is exceeded
cfg.merge_thr = 0.05; % merge events closer than this
cfg.minlen = 0.05; % minimum interval length
 
SWR_evt = TSDtoIV(cfg,SWRp_z);
 
%% to each event, add a field with the max z-scored power (for later selection)
cfg = [];
cfg.method = 'max'; % 'min', 'mean'
cfg.label = 'maxSWRp'; % what to call this in iv, i.e. usr.label
 
SWR_evt = AddTSDtoIV(cfg,SWR_evt,SWRp_z);
 
%% select only those events of >5 z-scored power
cfg = [];
cfg.dcn = '>';
cfg.threshold = 5;
 
SWR_evt = SelectIV(cfg,SWR_evt);
 
%% plot events highlighted on top of the full LFP
PlotTSDfromIV([],SWR_evt,lfp);
 
%% ..or the events alone (fixed 200ms window centered at event time)
close all;
 
cfg = [];
cfg.display = 'iv';
cfg.mode = 'center';
cfg.fgcol = 'k';
 
PlotTSDfromIV(cfg,SWR_evt,lfp);
%% ..hold on (highlight edges of event on top of previous plot)
cfg = [];
cfg.display = 'iv';
cfg.fgcol = 'r';
 
PlotTSDfromIV(cfg,SWR_evt,lfp);

☛ Try it, and inspect the results. What strategies can you think of to evaluate the accuracy and precision of the above detection? How might the workflow be improved?
