Time-frequency analysis II: cross-frequency coupling

Goals:

  • Use the spectrogram as a basis to reveal correlations in power across frequencies
  • Understand how to compute and interpret autocorrelation and crosscorrelation functions for continuous data
  • Apply the Hilbert transform to obtain signal phase over time for a given frequency
  • Relate phase of one frequency to power at another by using histc()
  • Quantify the strength and statistical significance of phase-amplitude coupling

Introduction

In the previous module we saw that a spectrogram can reveal the frequency content of a signal over time. Spectrograms of brain signals often reveal rich patterns of interacting frequency bands. A simple example is panel C of the first figure in the previous module, which shows that in the rat ventral striatum, gamma-50 and gamma-80 power tend to be slightly anticorrelated. More generally, we might ask: what structure is there in the spectrogram? What relationships exist between different frequency bands? Can we predict power based on the history of the signal? Beyond the power (amplitude) components of the spectrogram, an additional dimension is phase, which may enter into systematic relationships with power. For instance, in the hippocampus, gamma power tends to occur at certain phases of the slower theta (~8Hz) rhythm (Colgin et al. 2009). Such “second-order” relationships (built on the “first-order” spectrogram) are the focus of this module.

Reminder: correlation

A simple yet powerful tool for characterizing dependencies between variables – such as power in two different frequency bands – is Pearson's product-moment correlation coefficient (“correlation coefficient” for short). For a finite sample, the correlation coefficient r can be computed as follows:

$$ r = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2}\,\sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^2}} $$

where x and y are the signals to be correlated, with means $\bar{x}$ and $\bar{y}$ respectively. In other words, r is the covariance between x and y, divided by the product of their standard deviations. It is 0 for independent (uncorrelated) signals, -1 for perfectly anticorrelated signals, and 1 for perfectly correlated signals.
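As a quick sanity check, we can compute r directly from this formula and compare it to MATLAB's built-in corrcoef() (a minimal sketch using made-up data):

x = randn(100,1); y = 0.5*x + randn(100,1); % two correlated signals
r_manual = sum((x-mean(x)).*(y-mean(y))) ./ ...
   (sqrt(sum((x-mean(x)).^2)) .* sqrt(sum((y-mean(y)).^2)))
r_builtin = corrcoef(x,y); r_builtin = r_builtin(1,2) % should match r_manual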

Note that the correlation coefficient only captures linear relationships between variables, as can be seen from these scatterplots:

(figure: scatterplots with their correlation coefficients; source: Wikipedia)

The MATLAB function corrcoef() computes the sample correlation coefficient, and in addition provides the probability of obtaining the reported r value under the null hypothesis that the signals are independent (“p-value”):

x = randn(100,4);  % uncorrelated data - 4 signals of 100 points each
x(:,4) = sum(x,2);   % 4th signal is now correlated with first 3
[r,p] = corrcoef(x)  % compute sample correlation and p-values; notice signal 4 is correlated with the others
imagesc(r) % plot the correlation matrix -- note symmetry, diagonal, and r values of ~0.5 for signal 4

corrcoef() computes the correlation coefficients and p-values for every pair of signals in a matrix – in this case, for 4 signals. We can see from its output that as expected, signal 4 is correlated with every other signal. Note also that, of course, the correlation of any signal with itself (the values along the diagonal) is 1.

Armed with this tool, we can compute the correlation matrix of a ventral striatal spectrogram:

% remember to cd to data folder first
cfg = [];
cfg.fc = {'R016-2012-10-03-CSC04a.Ncs'};
csc = LoadCSC(cfg);
 
csc = restrict(csc,csc.cfg.ExpKeys.TimeOnTrack(2),csc.cfg.ExpKeys.TimeOffTrack(2)); % "risk" session only
Fs = csc.cfg.hdr{1}.SamplingFrequency;
 
[S,F,T,P] = spectrogram(csc.data,hanning(512),384,1:0.25:200,Fs); % spectrogram -- will take a while to compute!
 
[r,p] = corrcoef(10*log10(P')); % correlation matrix (across frequencies) of spectrogram
 
% plot
imagesc(F,F,r); 
caxis([-0.1 0.5]); axis xy; colorbar; grid on;
set(gca,'XLim',[0 150],'YLim',[0 150],'FontSize',14,'XTick',0:10:150,'YTick',0:10:150);

(Note, this takes about a minute to run on my laptop.)

You should get:

As we would expect, nearby frequencies tend to be correlated. The blob around 80-100Hz indicates that these frequencies tend to co-occur. In contrast, there is a dark patch centered around (60,95), indicating that power in those frequencies is slightly anti-correlated. Note also the positive correlation around (20,60), suggesting that beta and low-gamma power co-occur. This pattern in the cross-frequency correlation matrix is characteristic of ventral striatal LFPs (van der Meer and Redish, 2009).

☛ Because this is a second-order method, the appearance of this plot is sensitive to the parameters of the (first-order) spectrogram. What in this plot changes if you vary the width of the window used to compute the spectrogram?

Autocorrelation for continuous signals

The above analysis focused on the co-occurrence of power across different frequencies: it asks, “when power at some frequency is high, what does that imply about the power at other frequencies at that same time?”

However, often we are also interested in relationships that are not limited to co-occurrence – for instance, it is possible that power at a certain frequency predicts that some time later a power increase in some other band will occur. To characterize such relationships, which involve a time lag, we first need the concept of autocorrelation.

Autocorrelation of some synthetic signals

The correlation of a signal with itself is 1 by definition. However, what about the correlation of a signal with the same signal, one sample later? For white noise, where each sample is generated independently, this should be zero:

wnoise = randn(1000,1);
[r,p] = corrcoef(wnoise(1:end-1),wnoise(2:end)) % same signal, offset by one sample

Note that when passing two arguments to corrcoef(), the output is a 2×2 matrix: the diagonal elements are 1, and the (symmetric) off-diagonal elements contain the correlation of interest.
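For instance, to pull out the value of interest and its p-value:

r1 = r(1,2) % lag-1 autocorrelation estimate
p1 = p(1,2) % corresponding p-value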

We can generalize this example to computing correlations not just for a lag of one sample, as above, but for any lag. To do this, MATLAB provides the xcorr() function:

[acf,lags] = xcorr(wnoise,100,'coeff'); % compute correlation coefficients for lags up to 100
plot(lags,acf,'LineWidth',2); grid on;
set(gca,'FontSize',18); xlabel('time lag (samples)'); ylabel('correlation ({\itr})');

You should get:

As expected, the autocorrelation of white noise is essentially zero for any lag (except 0). Note the use of LaTeX markup to italicize the r in the ylabel.

What if we try a signal that is not just white noise, but has a clear periodicity?

Fs = 500; dt = 1./Fs;
tvec = 0:dt:2-dt;
 
wnoise = randn(size(tvec));
sgnl = sin(2*pi*10*tvec) + wnoise; % 10 Hz sine wave plus Gaussian white noise
 
subplot(211);
plot(tvec,sgnl); grid on;
set(gca,'FontSize',18); title('signal');
 
subplot(212);
[acf,lags] = xcorr(sgnl,100,'coeff');
lags = lags.*dt; % convert samples to time
plot(lags,acf); grid on;
set(gca,'FontSize',18); xlabel('time lag (s)'); ylabel('correlation ({\itr})');
title('autocorrelation');

Now we get

The autocorrelation (acorr) has peaks at multiples of 0.1s, corresponding to the 10Hz component in the original signal, an idea formalized in the Wiener-Khinchin theorem we encountered in Module 11. An intuitive interpretation of this plot is: given a peak in the signal at time t = 0, what on average are the expected values at other times? For this signal, we can see that if there is a peak at 0, then we expect peaks at 0.1, 0.2, etc. and troughs at 0.05, 0.15, and so on.
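To see the Wiener-Khinchin theorem in action, we can compare the FFT of the full autocorrelation against the periodogram of the signal computed directly (a sketch; note the 'biased' normalization, and the zero-padding so that both estimates live on the same frequency grid):

N = length(sgnl);
acf_full = xcorr(sgnl,'biased'); % all 2N-1 lags, normalized by N
psd_acf = abs(fft(acf_full)); % PSD estimate via the FFT of the acorr
psd_direct = abs(fft(sgnl,2*N-1)).^2 ./ N; % periodogram on the same grid
plot(psd_acf(1:N)); hold on; plot(psd_direct(1:N),'r:'); % the two should coincide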

☛ What would you expect the autocorrelation to look like if the above code were modified to use a rectified sine wave (abs(sin(..)))? Verify your prediction.

You should notice that the autocorrelation now has twice the number of peaks compared to that of the original sine wave. In addition, this periodicity is perhaps not obvious in the original signal, but it is quite clear in the acorr. Autocorrelation functions average out noise that may be obscuring an underlying periodic signal; similarly, autocorrelations of grid cell rate maps often look much more impressive than the original (noisy) rate map!

A final difference is that there is now a baseline offset in the acorr. This occurs because after rectification, the original signal no longer has mean zero, and unlike corrcoef(), xcorr() does not actually subtract the mean from the signal(s).

☛ Subtract the mean from the rectified signal and plot the acorr again. Verify that the value at lag 1 matches the output of corrcoef() for lag 1.

Application to real data

Is there any periodicity in ventral striatal gamma power? That is, given high power at t = 0, what values do we expect at other times? If gamma power varies randomly, then we expect the acorr to look like that for white noise. If, on the other hand, high gamma power at time 0 tells us something about what powers to expect at other times, we will see something different.

First, let's get a finer-resolution version of the spectrogram:

% spectrogram with better time resolution (slow!)
Fs = 500; % sampling frequency after decimating by 4 below (2kHz / 4)
csc.data = decimate(csc.data,4); % decimate() lowpass-filters before downsampling, preventing aliasing
csc.tvec = downsample(csc.tvec,4); % timestamps can simply be downsampled
[S,F,T,P] = spectrogram(csc.data,hanning(125),115,1:200,Fs);

What we need is a time series of gamma power, so that we can compute its autocorrelation. We will get this from the spectrogram, but it is worth noting that this is not the only way to obtain such a signal. Recall, for instance, that in Module 5 we used a filter to compute instantaneous power in the ripple band. An advantage of using the spectrogram is that it provides estimates of power over time across all frequencies, so it is easy to explore. However, more precise estimates can be obtained with the filtering method (as well as with better spectrograms such as the event-averaged one we computed with FieldTrip in Module 7).
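For instance, a filter-based estimate of low-gamma power could look something like the following sketch (the filter order and implementation here are illustrative, not the course's canonical settings):

Wn = [50 65] ./ (Fs/2); % low-gamma band, normalized to the Nyquist frequency
[b,a] = butter(4,Wn); % bandpass filter (order illustrative)
lg_filt = filtfilt(b,a,csc.data); % zero-phase filtering
lg_pwr = lg_filt.^2; % instantaneous power estimate (could be smoothed, or obtained from the Hilbert envelope introduced below)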

For now we make do with what we have:

F_idx = find(F > 70 & F < 100); % high-gamma power
F_idx2 = find(F > 50 & F < 65); % low-gamma power
 
pwr = mean(P(F_idx,:)); % average across frequencies
pwr2 = mean(P(F_idx2,:));
 
[ac,lags] = xcorr(pwr2-mean(pwr2),50,'coeff'); % remember to subtract the mean!
lags = lags.*mean(diff(T));
figure;
plot(lags,ac)
set(gca,'FontSize',18); xlabel('time lag (s)'); ylabel('correlation ({\itr})');
title('low-gamma autocorrelation');

You should get:

Some differences with our reference acorrs from the previous section (white noise, and the 10Hz sine wave) are apparent. First, the central peak around t = 0 now has a certain nonzero width: if we have a peak in low-gamma power at time t = 0, power tends to stay elevated for a certain amount of time. Judging from the inflection point at around 0.2s, it appears that if low gamma power is high, it takes about 0.2s to return to a kind of baseline.

☛ What do you expect would happen to the width of this central peak if you were to change the width of the time window used to compute the spectrogram, say to twice the original width?

Second, a small dip in the baseline is apparent around 0.6s. This suggests that if we have high low-gamma power at t = 0, we are less likely than average to see high low-gamma power 0.6s later. This is an example of a “second-order” effect: we build on our “first-order” analysis (the spectrogram) to reveal further structure in how gamma power is organized over time.

Cross-correlation for continuous signals

The autocorrelation computes the correlation of a signal with itself across a range of time lags. The cross-correlation does the same, but between two different signals. It asks: given a peak in signal 1 at time zero, what do we know about signal 2?

Let's apply this idea to ventral striatal gamma power by asking how high-gamma power varies given a peak in low-gamma power at t = 0:

F_idx = find(F > 70 & F < 100); % high-gamma power
F_idx2 = find(F > 50 & F < 65); % low-gamma power
 
pwr = mean(P(F_idx,:)); % average across frequencies
pwr2 = mean(P(F_idx2,:));
 
[ac,lags] = xcorr(pwr-mean(pwr),pwr2-mean(pwr2),50,'coeff'); % note, two different input signals now
lags = lags.*mean(diff(T));
figure;
plot(lags,ac)
set(gca,'FontSize',18); xlabel('time lag (s)'); ylabel('correlation ({\itr})');
title('xcorr (high-gamma relative to low-gamma at t = 0)');

You should see:

Note that the xcorr is asymmetric: given a low-gamma peak at t = 0, high-gamma power tends to be higher before, rather than after this time. In other words, if there is strong high-gamma, then low-gamma is likely to follow, but the reverse is the case to a lesser extent (if at all). You can verify this by reversing the order of the inputs into xcorr(), which will yield a mirror image flipped around time zero. This result is consistent with the pattern we saw in the spectrogram in the previous module, where high-gamma tends to be closely followed by low-gamma, but not the other way around.
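As a quick check, swapping the inputs should produce the mirror image:

[ac_rev,lags] = xcorr(pwr2-mean(pwr2),pwr-mean(pwr),50,'coeff'); % inputs swapped
lags = lags.*mean(diff(T));
plot(lags,ac_rev); % mirror image of the previous plot, flipped around t = 0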

Extracting signal phases

In the previous section, we explored the relationships between different power bands of a signal. Such relationships are a form of cross-frequency coupling: what happens in one frequency band depends on what happens in another. Such cross-frequency relationships are not limited to power, but can also extend to phase, as the following figure (from Jensen and Colgin, 2007) illustrates:

The power-to-power coupling (a) we have explored using the correlation matrix of the spectrogram. To reveal cross-frequency coupling involving phase, we need a way to extract the phase of a signal. One way to accomplish this is to use the Hilbert transform:

Fs = 500; dt = 1./Fs;
tvec = 0:dt:1-dt;
 
wnoise = randn(size(tvec));
sgnl = sin(2*pi*4*tvec) + 0.1*wnoise;
 
h = hilbert(sgnl);
phi = angle(h);

The Hilbert transform of a periodic signal returns a complex quantity that, similar to the Fourier transform, contains both amplitude and phase information. Here, we access the phase component with angle(h) (the amplitude would be obtained by abs(h)), and plot it using plotyy(), a nice function that gives two signals their own y-axis:

[ax_h,h1,h2] = plotyy(tvec,sgnl,tvec,phi);
set(ax_h(2),'YColor',[1 0 0],'YLim',[-pi pi],'YTick',-pi:pi/2:pi,'YTickLabel',{'-p','-p/2','0','p/2','p'}, ...
   'fontname','symbol','XTick',[]);
set(h2,'Color',[1 0 0]);
set(get(ax_h(2),'Ylabel'),'String','phase (radians)') 
box off;

The result:

Obviously, phase is only defined for a signal with sufficiently narrow frequency content. Thus, for real data, the application of the Hilbert transform to extract signal phase should be preceded by a bandpass filter.
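Schematically, the pipeline for real data is: bandpass filter, then Hilbert transform. Here is a minimal sketch for extracting the 4Hz phase of the synthetic signal above, using the same fdesign form we will use for real data later in this module (band edges and filter order are illustrative):

d = fdesign.bandpass('N,F3dB1,F3dB2',4,3,5,Fs); % narrow band around the 4Hz component
Hd = design(d,'butter');
sgnl_filt = filtfilt(Hd.sosMatrix,Hd.scaleValues,sgnl); % zero-phase bandpass
phi = angle(hilbert(sgnl_filt)); % phase of the band-limited signal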

A further caveat when extracting phase: if the signal becomes small relative to the noise, the Hilbert transform still returns phases, but these become unreliable without warning.

☛ Contaminate the above signal with increasing amounts of noise to verify this.

Relating signal phases to amplitude

Synthetic data

How can we detect relationships between phase and amplitude? It will be helpful to first generate a test signal that displays phase-amplitude coupling:

Fs = 500; dt = 1./Fs;
tvec = 0:dt:1-dt;
 
f1 = 8;
s1 = sin(2*pi*f1*tvec)+0.05*randn(size(tvec));
s1_phi = angle(hilbert(s1)); % phase of slow signal
dphi = exp(-abs(s1_phi)/1.5); % envelope with peaks at phase 0, based on slow phase
 
f2 = 80;
s2 = 0.3.*sin(2*pi*f2*tvec).*dphi; % fast oscillation multiplied by envelope
 
s = s1+s2;
plot(tvec,s); xlim([0 0.5]);

(Note that FieldTrip also has code for this.)

You should get:

Notice how the fast (80Hz) oscillation is strongest at the peaks of the slow (8Hz) signal. Given such a signal, we may wonder if we can extract this relationship by computing the correlation coefficient between the phase (of the slow signal) and the power (of the fast signal). However, as corrcoef(s1_phi,dphi) will demonstrate, the correlation is essentially zero. This is because the relationship between phase and power is non-linear: starting from the most negative phase, power first increases with phase until the phase is zero, and then decreases again in a symmetric fashion. A linear measure such as Pearson's correlation coefficient cannot detect this, as illustrated in the figure in the first section.
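You can check this directly:

[r,p] = corrcoef(s1_phi,dphi) % r(1,2) is ~0, despite the clear phase-power relationship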

A general-purpose workaround is to divide signal phase into bins, and compute the average power for each bin. We can then ask if power varies as a function of bin, which if true would indicate cross-frequency phase-amplitude coupling.

To do this, we can define a function averageXbyYbin() as follows:

function x_avg = averageXbyYbin(x,y,y_edges)
 
[~,idx] = histc(y,y_edges); % idx returns which bin each point in y goes into
 
x_avg = zeros(size(y_edges));
for iBin = length(y_edges):-1:1 % for each bin...
 
   if sum(idx == iBin) ~= 0 % at least one sample in this bin
      x_avg(iBin) = nanmean(x(idx == iBin)); % compute average of those x's that go in it
   end
 
end

One quirk of MATLAB's phenomenally useful histc() function is that it creates N bins out of N edges, when perhaps N-1 bins would be more intuitive. This is because the Nth output bin counts those data points that fall exactly on the Nth (last) edge.
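A quick illustration of this quirk, with made-up numbers:

edges = 0:0.25:1; % five edges define four “real” bins, plus the extra last-edge bin
n = histc([0.1 0.6 1.0],edges) % returns [1 0 1 0 1]: the value 1.0 falls exactly on the last edge

Thus, to plot our output correctly, we have to do something like: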

phi_edges = -pi:pi/8:pi;
pow_bin = averageXbyYbin(dphi,s1_phi,phi_edges);
 
pow_bin(end-1) = pow_bin(end-1)+pow_bin(end); % fold data on the last edge into the preceding bin
pow_bin = pow_bin(1:end-1); % trim the extra bin -- see trim_histc()
 
phi_centers = phi_edges(1:end-1)+pi/16; % convert edges to centers
plot(phi_centers,pow_bin);

The output shows that we have recovered the original phase-amplitude relationship. The averageXbyYbin() function can be modified to also return standard deviations, or even the full distributions, for each bin to permit statistical significance testing.

Real data

The cross-frequency power correlation matrix above suggests that high-gamma power (~70-90Hz) co-occurs with alpha power (8-12Hz). This raises the possibility that high-gamma power is modulated by the phase of the alpha rhythm. As previous work has found alpha-gamma phase-amplitude coupling (PAC) in human subjects when they received feedback on a choice between alternative gambles (Cohen et al. 2009), we will focus our analysis on the times our rat subject received reward (food pellets).

To do so, we will use FieldTrip's data format, in which it is easy to detect and manipulate trials. Make sure you do a git pull and set up your path correctly:

restoredefaultpath;
addpath('C:\Users\mvdm\Documents\GitHub\fieldtrip');
ft_defaults;
addpath(genpath('C:\Users\mvdm\Documents\GitHub\neuraldata-w16\shared'));
 
rmpath('C:\Users\mvdm\Documents\GitHub\fieldtrip\external\signal');
 
cd('C:\data\R016\R016-2012-10-03_promoted');

Next, we define our frequency bands of interest, and load some data:

twin = [-1 3]; % time window of interest, relative to reward delivery
f_lg = [50 65]; % low-gamma freq band, to get power
f_hg = [70 85]; % high-gamma freq band, to get power
f_x = [8 12]; % low-frequency band, to get phase
 
LoadExpKeys;
fc = ExpKeys.goodGamma(1);
data = ft_read_neuralynx_interp(fc);

We obtain the instantaneous power in our frequency bands of interest using the Hilbert transform:

data2 = data; % temporary data variable
 
% low-gamma
data2f = ft_filterLFP(data2,f_lg,'ford',4); % a FieldTrip wrapper for the filtfilt() function
data.trial{1}(2,:) = data2f.trial{1};
data.label{2} = 'lg_pow';
data.trial{1}(2,:) = abs(hilbert(data.trial{1}(2,:))).^2;
 
% high-gamma
data2f = ft_filterLFP(data2,f_hg,'ford',4);
data.trial{1}(3,:) = data2f.trial{1};
data.label{3} = 'hg_pow';
data.trial{1}(3,:) = abs(hilbert(data.trial{1}(3,:))).^2;

To get instantaneous phase in the alpha band, we use the second-order-section (improved-precision) form of the filter:

%% get phase of low-frequency oscillation
d = fdesign.bandpass('N,F3dB1,F3dB2',4,f_x(1),f_x(2),2e3);
Hd = design(d,'butter');
 
data2f = ft_filterLFP(data2,f_x,'B',Hd.sosMatrix,'A',Hd.scaleValues);
data.trial{1}(4,:) = data2f.trial{1};
data.label{4} = 'x_phase';
data.trial{1}(4,:) = angle(hilbert(data.trial{1}(4,:)));

Now that we have the quantities we are interested in, we can convert our data into trials:

all_cues = {'c1','c3','c5','lo','hi'}; % use all cue types (1, 3, 5 pellets; low and high risk)
 
cfg = [];
cfg.trialdef.cue = all_cues;
cfg.trialdef.block = 'both';
cfg.trialdef.eventtype = 'nosepoke';
cfg.trialdef.location = 'both';
cfg.trialfun = 'ft_trialfun_lineartracktone2';
cfg.trialdef.hdr = data.hdr;
cfg.trialdef.pre = 2; cfg.trialdef.post = 4;
 
trl = ft_trialfun_lineartracktone2(cfg); cfg.trl = trl;
this_data = ft_redefinetrial(cfg,data);

You may wonder why we did not chop up the LFP into trials first, and then compute power and phase.

☛ What are some possible pitfalls with this approach?

Now is a good time to inspect your data to see if it looks like what you expect. One way is ft_databrowser([],this_data).

Next, we'll get set up for the PAC analysis:

dpi = pi/18; % binsize for binning phase
pbin_edges = -pi:dpi:pi; pbin_centers = pbin_edges(1:end-1)+dpi/2;
maxlag = 2000; % samples for xcorr
nShuf = 100; % number of shuffles to perform
 
% restrict data
cfg = []; cfg.latency = twin;
temp = ft_selectdata(cfg,this_data);
trial_len = length(temp.trial{1}(1,:));
 
We are now ready to compute the observed PAC value:
 
% first get observed PAC value
trial_mat = cell2mat(temp.trial); % trials concatenated
 
lg_idx = strmatch('lg_pow',temp.label);
obs_lg_pwr = trial_mat(lg_idx,:); % note, ft_selectdata switches labels!!
 
hg_idx = strmatch('hg_pow',temp.label);
obs_hg_pwr = trial_mat(hg_idx,:);
 
xf_idx = strmatch('x_phase',temp.label);
obs_xf_phase = trial_mat(xf_idx,:);
 
% quantify PAC as the mean vector length (see explanation below)
obs_pac_lg = abs(mean(obs_lg_pwr.*exp(1i*obs_xf_phase))); % mean vector length
obs_pac_hg = abs(mean(obs_hg_pwr.*exp(1i*obs_xf_phase)));
 
% get observed phase-frequency relationships
obs_xf_lg = averageXbyYbin(obs_lg_pwr,obs_xf_phase,pbin_edges);
obs_xf_hg = averageXbyYbin(obs_hg_pwr,obs_xf_phase,pbin_edges);

The cryptic lines for obtaining obs_pac_lg and obs_pac_hg are shorthand for computing the mean vector length of signal power as a function of phase. Since phase is a circular variable, we can conceptualize each sample as a vector whose angle is the phase and whose length is proportional to the power at that sample. If power were the same at every phase, the mean vector length would be zero (all the vectors would cancel out). On the other hand, if the mean vector has some nonzero length, this indicates that power is not uniform across phase angles, i.e. evidence of phase-amplitude coupling.
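As a sanity check, we can apply this measure to the synthetic phase-amplitude coupled signal from the previous section (a sketch; the shuffled version previews the significance-testing procedure below):

mvl_obs = abs(mean(dphi.*exp(1i*s1_phi))) % clearly nonzero: power depends on phase
mvl_shuf = abs(mean(dphi(randperm(length(dphi))).*exp(1i*s1_phi))) % shuffled power: near zero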

☛ Plot the observed high-gamma power as a function of alpha phase (obs_xf_hg).

You should see that there appears to be a systematic change in power as a function of phase. But how do we determine whether these changes are statistically different from chance? To this end, we will compare the observed mean vector length to a distribution of mean vector lengths obtained from shuffled data, in which we systematically break the temporal relationship between phase and power by shifting one relative to the other by a random amount:

shuf_pac_lg = zeros(nShuf,1); shuf_pac_hg = zeros(nShuf,1);
shuf_xf_lg = zeros(nShuf,length(pbin_edges)-1); shuf_xf_hg = zeros(nShuf,length(pbin_edges)-1);
 
for iShuf = 1:nShuf
    shuf_data = temp;
 
    % random amount to shift data (between 1 and trial_len-1 samples)
    perm_shift1 = ceil((trial_len-1)*rand(1));
    perm_shift2 = ceil((trial_len-1)*rand(1));
 
    % for each trial, shift the data
    for iT = 1:length(shuf_data.trial)
 
        piece1 = shuf_data.trial{iT}(lg_idx,perm_shift1+1:end); piece2 = shuf_data.trial{iT}(lg_idx,1:perm_shift1);
        shuf_data.trial{iT}(lg_idx,1:trial_len-perm_shift1) = piece1;
        shuf_data.trial{iT}(lg_idx,end-perm_shift1+1:end) = piece2;
 
        piece1 = shuf_data.trial{iT}(hg_idx,perm_shift2+1:end); piece2 = shuf_data.trial{iT}(hg_idx,1:perm_shift2);
        shuf_data.trial{iT}(hg_idx,1:trial_len-perm_shift2) = piece1;
        shuf_data.trial{iT}(hg_idx,end-perm_shift2+1:end) = piece2;
 
 
    end
 
    % get PAC for this shuffle
    trial_mat = cell2mat(shuf_data.trial); % trials concatenated
 
    shuf_lg_pwr = trial_mat(lg_idx,:);
    shuf_hg_pwr = trial_mat(hg_idx,:);
 
    shuf_xf_lg(iShuf,:) = averageXbyYbin(shuf_lg_pwr,obs_xf_phase,pbin_edges);
    shuf_xf_hg(iShuf,:) = averageXbyYbin(shuf_hg_pwr,obs_xf_phase,pbin_edges);
 
    shuf_pac_lg(iShuf) = abs(mean(shuf_lg_pwr.*exp(1i*obs_xf_phase)));
    shuf_pac_hg(iShuf) = abs(mean(shuf_hg_pwr.*exp(1i*obs_xf_phase)));
 
end

Notice that for each shuffle, we obtain a PAC value, such that over 100 shuffles, we have a distribution of those values to which we can compare our observed value using a z-score (number of standard deviations our observed value is away from the mean):

pacz_lg = (obs_pac_lg-mean(shuf_pac_lg))/std(shuf_pac_lg);
pacz_hg = (obs_pac_hg-mean(shuf_pac_hg))/std(shuf_pac_hg);

A large z-score, as obtained here for high-gamma, indicates that the observed PAC is significantly greater than expected from the shuffled (null) distribution.

☛ What are two ways to obtain a p-value, i.e. probability that the observed PAC was drawn from the distribution of shuffles?

The above shuffling procedure is an example of resampling in which sets of “fake” data are obtained from particular rearrangements of actual data. A major advantage of resampling is that you can compare data sets that have identical underlying distributions of power, phase or whatever the relevant variables may be, avoiding assumptions of normality that are often required for parametric tests such as t-tests.

☛ The z-score computed above and other measures of significance do not tell us anything about effect size. How could we express PAC effect size in a way that does not depend on the overall signal power?

Challenges

★ In the above example, we had an a priori hypothesis about which frequencies would enter into a PAC relationship. However, it is also possible to estimate PAC relationships in an unbiased way across all frequencies. Implement such an analysis for the above data, and comment on the results.

★ Apply a PAC analysis to your own data. Make sure you motivate your choices of frequencies!
