Time-frequency analysis II: second-order relationships, phase-amplitude relationships

Goals:

  • 1
  • 2

Deliverables:

  • 1

Resources:

  • 1
  • 2

Introduction

In the previous module we saw that a spectrogram characterizes the frequency content of a signal over time. Spectrograms of brain signals often reveal rich patterns of interacting frequency bands. A simple example is panel C of the first figure in the previous module, which shows that in the rat ventral striatum, gamma-50 and gamma-80 power tend to be slightly anticorrelated. More generally, we might ask: what structure is there in the spectrogram? What relationships exist between different frequency bands? Can we predict power based on the history of the signal? Beyond the power (amplitude) components of the spectrogram, an additional dimension is phase, which may enter into systematic relationships with power. For instance, in the hippocampus, gamma power tends to occur at certain phases of the slower theta (~8Hz) rhythm (Colgin et al. 2009). Such “second-order” relationships (built on the “first-order” spectrogram) are the focus of this module.

Reminder: correlation

A simple yet powerful tool for characterizing dependencies between variables – such as power in one frequency band and power in another – is Pearson's product-moment correlation coefficient (or “correlation coefficient” for short), which for a finite sample can be computed as follows:

$$ r = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2}\,\sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^2}} $$

where x and y are the signals to be correlated, with sample means x̄ and ȳ respectively. In other words, r is the covariance between x and y divided by the product of their standard deviations. It is 0 for independent (uncorrelated) signals, -1 for perfectly anticorrelated signals, and 1 for perfectly correlated signals.
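To convince yourself that the formula matches MATLAB's built-in corrcoef(), here is a minimal sketch (the signals x and y below are made up for illustration):

```matlab
% verify the Pearson r formula "by hand" against corrcoef()
x = randn(1000,1);
y = 0.5*x + randn(1000,1);            % y partially depends on x
xc = x - mean(x); yc = y - mean(y);   % mean-subtracted copies
r_manual = sum(xc.*yc) ./ sqrt(sum(xc.^2) .* sum(yc.^2));
R = corrcoef(x,y);
r_builtin = R(1,2);                   % off-diagonal element is r
```

r_manual and r_builtin should agree to within floating-point precision, since the sample-size normalization terms in the covariance and standard deviations cancel.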

Note that the correlation coefficient only captures linear relationships between variables, as can be seen from these scatterplots:
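A quick way to see this limitation is to correlate a signal with a nonlinear function of itself; this parabola example is my own:

```matlab
% a perfect, but nonlinear, dependence yields r near zero
x = linspace(-1,1,1001)';
y = x.^2;               % y is completely determined by x...
R = corrcoef(x,y);
R(1,2)                  % ...yet r is ~0: the relationship is not linear
```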

The MATLAB function corrcoef() computes the sample correlation coefficient, and in addition provides the probability of obtaining the reported r value under the null hypothesis that the signals are independent (“p-value”):

x = randn(100,4);  % uncorrelated data - 4 signals of 100 points each
x(:,4) = sum(x,2);   % 4th signal is now correlated with first 3
[r,p] = corrcoef(x)  % compute sample correlation and p-values; notice signal 4 is correlated with the others
imagesc(r) % plot the correlation matrix -- note symmetry, diagonal, and r values of ~0.5 for signal 4

corrcoef() computes the correlation coefficients and p-values for every pair of signals in a matrix – in this case, for 4 signals. We can see from its output that as expected, signal 4 is correlated with every other signal. Note also that, of course, the correlation of any signal with itself (the values along the diagonal) is 1.

Armed with this tool, we can compute the correlation matrix of a ventral striatal spectrogram:

% remember to cd to data first
fname = 'R016-2012-10-03-CSC04a.Ncs';
csc = myLoadCSC(fname);
 
cscR = Restrict(csc,2700,3300); % risk session only
Fs = 2000;
 
[S,F,T,P] = spectrogram(Data(cscR),hanning(512),384,1:0.25:200,Fs); % spectrogram
 
[r,p] = corrcoef(10*log10(P')); % correlation matrix (across frequencies) of spectrogram
 
% plot
imagesc(F,F,r); 
caxis([-0.1 0.5]); axis xy; colorbar; grid on;
set(gca,'XLim',[0 150],'YLim',[0 150],'FontSize',14,'XTick',0:10:150,'YTick',0:10:150);

You should get:

As we would expect, nearby frequencies tend to be correlated. The blob around 80-100Hz indicates that these frequencies tend to co-occur. In contrast, there is a dark patch centered around (60,95) indicating that power in those frequencies is slightly anti-correlated. Note also the positive correlation around (20,60) suggesting that beta and low-gamma power co-occur. This pattern in the cross-frequency correlation matrix is characteristic of ventral striatal LFPs (van der Meer and Redish, 2009).

Autocorrelation for continuous signals

The above analysis focused on the co-occurrence of power across different frequencies; it asks, “when power at some frequency is high, what does that tell me about the power at other frequencies at that same time?”

However, often we are also interested in relationships that are not limited to co-occurrence – for instance, it is possible that power at a certain frequency predicts that some time later a power increase in some other band will occur. To characterize such relationships, which involve a time lag, we first need the concept of autocorrelation.

The correlation of a signal with itself is 1 by definition. However, what about the correlation of a signal with the same signal, one sample later? For white noise, where each sample is generated independently, this correlation should be close to zero:

wnoise = randn(1000,1);
[r,p] = corrcoef(wnoise(1:end-1),wnoise(2:end))

Note that when passing two arguments to corrcoef() the output is a 2×2 matrix, where the diagonal elements are 1 and the off-diagonal elements contain the correlation of interest.

We can generalize this example to computing correlations not just for a lag of one sample, as above, but for any lag:
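Before reaching for xcorr() below, note that you could obtain essentially the same quantity by calling corrcoef() on shifted copies of the signal, one lag at a time; a sketch, with variable names of my own choosing (xcorr's 'coeff' option normalizes by the zero-lag value, so for zero-mean signals the two approaches agree closely):

```matlab
% correlation at each lag, computed "by hand" with corrcoef()
wnoise = randn(1000,1);
maxlag = 100;
acf_manual = zeros(maxlag+1,1);
for k = 0:maxlag
    R = corrcoef(wnoise(1:end-k), wnoise(1+k:end)); % signal vs. itself, k samples later
    acf_manual(k+1) = R(1,2);                       % correlation at lag k
end
% acf_manual(1) is the zero-lag correlation, which is exactly 1
```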

[acf,lags] = xcorr(wnoise,100,'coeff');
plot(lags,acf,'LineWidth',2); grid on;
set(gca,'FontSize',18); xlabel('time lag (samples)'); ylabel('correlation ({\itr})');

You should get:

As expected, the autocorrelation of white noise is essentially zero at every lag, except lag 0, where it is 1 by definition.

What if we try a signal that is not just white noise, but has a clear periodicity?

Fs = 500; dt = 1./Fs;
tvec = 0:dt:2-dt;
 
wnoise = randn(size(tvec));
sgnl = sin(2*pi*10*tvec) + wnoise; % 10Hz sine plus white noise
 
subplot(211);
plot(tvec,sgnl); grid on;
set(gca,'FontSize',18); title('signal');
 
subplot(212);
[acf,lags] = xcorr(sgnl,100,'coeff');
lags = lags.*dt; % convert samples to time
plot(lags,acf); grid on;
set(gca,'FontSize',18); xlabel('time lag (s)'); ylabel('correlation ({\itr})');
title('autocorrelation');

Now we get:

The autocorrelation now has peaks at multiples of 0.1s (±0.1s, ±0.2s), corresponding to the period of the 10Hz component in the original signal.

☛ What would you expect the autocorrelation to look like if the above code were modified to use a rectified sine wave (abs(sin(..)))? Verify your prediction. When you do, note that the periodicity becomes difficult to see by eye in the raw signal, yet the autocorrelation pulls it out clearly.

Cross-correlation for continuous signals

analysis/course/week7.1382907414.txt.gz · Last modified: 2018/07/07 10:19 (external edit)