
Anatomy of time series data, sampling theory

Goals:

  • Understand the fundamentals of sampling theory: Nyquist frequency, aliasing
  • Learn why and how to use anti-aliasing filters
  • Reconstruct a signal from sampled data
  • Examine the file structure of Neuralynx continuously sampled data in detail

Resources:

  • (intuitive background) nice, quick intro to aliasing by Bruno Olshausen, with some connections to the human visual system
  • (more technical background, optional) read Chapter 3 of the Leis book. Skip sections 3.4.3, 3.4.4, 3.4.5, 3.4.6, 3.4.7, 3.7. Skim section 3.6.

Introductory remarks

Systems for the acquisition of neural data give us a necessarily imperfect view of the brain. Some of these limitations are obvious, such as a finite number of recording sites and contamination of the data with movement artifacts and system noise. Other limitations are perhaps more subtle, and include the pitfalls associated with limited (temporal) sampling of the underlying process and the properties of specific recording systems.

An important phrase to remember in any data analysis workflow is "garbage in, garbage out." Making sure that your input is not garbage starts with a thorough understanding of the recorded signal and how it is stored. (Of course, if you abuse it enough, even the best data can rapidly turn to garbage. But that is beyond the scope of this module, which deals with the raw data.)

Motivating example: aliasing

Before you begin, do a git pull from the course repository. Also, to reproduce the figures shown here, change the default font size (set(0,'DefaultAxesFontSize',18) – a good place to put this is in your path shortcut).

Let's start with an example that illustrates what can go wrong if you are not aware of some basic sampling theory ideas. To do so, we will first construct a 10Hz signal, sampled at 1000Hz. Recall that a sine wave of frequency f is given by $y = \sin(2 \pi f t)$:

Fs1 = 1000; % Fs is the conventional variable name for sampling freq
F1 = 10; twin = [0 1]; % use a 1-second time window (from 0 to 1s)
 
tvec1 = twin(1):1/Fs1:twin(2); % timebase for signal
signal1 = sin(2*pi*F1*tvec1);

Notice the general approach in defining time series data: we first construct a timebase (conventionally named tvec, or t) and then the signal.

☛ Plot the signal1 variable and verify that the result matches your expectation: for instance, it should have a specific number of peaks.
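If you want something concrete to compare against, a minimal plotting sketch (using the variables defined above) could look like this:

figure;
plot(tvec1,signal1);
xlabel('time (s)'); ylabel('y');
% count the peaks and compare against the signal frequency and the window length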

Let's say we are going to sample this signal at 12Hz:

Fs2 = 12;
tvec2 = twin(1):1/Fs2:twin(2);
signal2 = interp1(tvec1,signal1,tvec2,'nearest');

Note the use of the interp1() function. This is a very important and frequently used command with some interesting options we'll explore later. For now, we are telling it something like, "we have an input signal, specified by tvec1 and signal1; return the values of signal1 at the times in tvec1 closest to the times in tvec2." An intuition for this is illustrated in the figure below: basically, you want to get a value for some time (the red dot) that isn't explicitly present in your signal (the blue stems):
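The original figure is not reproduced here, but a rough sketch of the same idea can be generated as follows (the query time of 0.0052 s is just an arbitrary example value that does not fall on the original timebase):

% a few samples of the original signal (blue stems), and a query time (red dot)
% that falls in between them
t_query = 0.0052; % example time, not present in tvec1
y_query = interp1(tvec1,signal1,t_query,'nearest'); % value of the nearest sample
 
stem(tvec1(1:10),signal1(1:10));
hold on;
plot(t_query,y_query,'.r','MarkerSize',20);
xlabel('time (s)'); ylabel('y');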

Now that we know how the signal2 variable – the signal we are seeing by sampling at 12 Hz – is obtained, we can plot it:

plot(tvec1,signal1); % original 10Hz signal, sampled at 1000Hz
hold on;
plot(tvec2,signal2,'.g','MarkerSize',20); % the samples obtained at 12Hz (green dots)
plot(tvec1,-sin(2*pi*2*tvec1),'r--','LineWidth',2); % a 2Hz sinusoid passing through those same samples (red dashed)
xlabel('time (s)'); ylabel('y');

You should see:

The individual data points we obtain with our 12Hz sampling are shown in green. Notice that they describe a periodic signal (red dashed line) with a frequency nothing like the 10Hz of the original signal (blue sine wave)! We might be tempted to conclude that we are seeing a 2Hz signal, when in fact there is no such thing in the original data. This effect is known as aliasing (i.e. the existence of multiple underlying signals which could produce a given set of samples), and it can happen when certain conditions are met, which we will explore next.
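Where does the 2Hz come from? A sampled sinusoid of frequency $F_{orig}$ is indistinguishable from one at the distance between $F_{orig}$ and the nearest integer multiple of the sampling frequency; here,

$$ F_{alias} = |F_{orig} - F_{s2}| = |10 - 12| = 2 \textrm{ Hz} $$

(with a sign flip, which is why the red dashed curve in the code above is $-\sin(2 \pi \cdot 2 t)$ rather than $+\sin$).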

The Nyquist limit

The Nyquist sampling theorem states that the minimum sampling frequency required to avoid aliasing is twice the frequency of the signal, i.e.

$$ F_s \geq 2 F_{orig} $$

This Nyquist criterion becomes intuitive when you plot the signal obtained by sampling our original 10Hz signal at 20Hz, i.e. at twice its frequency:
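The code behind the original figure is not shown on this page, but a sketch along the same lines as the 12Hz example above would be the following (the variable names and the half-sample offset are choices made here, so that the samples land on the peaks and troughs rather than on the zero crossings):

Fs3 = 20; % sample at twice the signal frequency
tvec3 = twin(1)+0.025:1/Fs3:twin(2); % offset by half a sample to avoid the zero crossings
signal3 = interp1(tvec1,signal1,tvec3,'nearest');
 
plot(tvec1,signal1);
hold on;
plot(tvec3,signal3,'.g','MarkerSize',20);
xlabel('time (s)'); ylabel('y');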

Note that we obtain a signal which is periodic (repeating) at 10Hz, as we know to be correct.

An important consequence of the Nyquist criterion is that

$$ F_{orig} \leq F_s / 2 $$

In other words, the highest-frequency signal that can be recovered from sampled data is half the sampling frequency. This frequency is sometimes referred to as the Nyquist frequency or Nyquist limit (the precise terminology gets a bit confusing, but in neuroscience the two terms are used to mean the same thing). So, for instance, digital audio on CDs and in most digital file formats is typically sampled at Fs = 44100 Hz, which means that the highest frequency that can theoretically be reproduced is 22050 Hz, corresponding approximately to the upper limit of human hearing.

However, it is best not to cut things too fine: say you are interested in detecting a “gamma” frequency of 75Hz with your EEG recording system. If you acquire data at Fs = 150 Hz, you might run into problems.

☛ To see this, change Fs2 in the code above to 20 (i.e. exactly twice the signal frequency, just as 150Hz is exactly twice 75Hz), and plot the result. What do you see?

Thus, a (very safe) rule of thumb is to acquire data at a sampling frequency four times the highest signal frequency you are interested in detecting (for the 75Hz gamma example above, at 300Hz or more).

Subsampling (decimating) time series data

Reconstructing a signal from sampled data

Detailed examination of Neuralynx time series data
