~~DISCUSSION~~

===== Interactions between multiple signals: coherence and other connectivity measures =====

Goals:

  * Develop an intuition for what determines the coherence between two signals
  * Employ some different methods of estimating coherence to appreciate the tradeoffs involved
  * Understand the main limitations of the coherence measure, along with a sketch of advanced methods that attempt to address them

Resources:

  * (background reading, a brief review) [[http://www.ncbi.nlm.nih.gov/pubmed/16150631 | Fries (2005)]] "communication through coherence" paper
  * (optional, a nice example application) [[http://www.ncbi.nlm.nih.gov/pubmed/17372196 | deCoteau et al. (2007)]] hippocampus-striatum coherence changes with learning

Assignment:

  * No assignment, because this is a longer module with several small changes to try out along the way

==== Introduction ====

So far, our analysis has been limited to single local field potentials. In this module we consider basic methods for characterizing the relationship between simultaneously recorded LFPs. Two such signals could in principle be completely unrelated (independent), or display various forms of coordination, such as transient synchronization in a specific frequency band.

Characterizing and quantifying relationships between signals recorded from anatomically related areas of the brain is an important tool in systems and cognitive neuroscience. Much evidence supports the idea that the effective flow of information along a fixed anatomical projection can be dynamically regulated, for instance by emphasizing bottom-up rather than top-down inputs in a task-related manner. One possible mechanism for this routing of information is "communication through coherence" and its many variants (Fries, 2005), which propose that effective connectivity depends on the degree to which two areas exhibit coherent oscillatory activity. In this module we define LFP coherence, explore its properties, and apply it to some example data.

==== Coherence: intuition and definition ====

=== Intuition ===

Given two LFPs, one can imagine various possible relationships between them, such as those illustrated here (from [[http://www.ncbi.nlm.nih.gov/pubmed/22233726 | Siegel et al. 2012]]):

{{ :analysis:course:siegel_etal12_fig1.jpg?600 |}}

The top left panel shows a case where two signals are synchronous: their phases are essentially the same at every point in time. In the top right panel, the phases are not the same at every point, but there is a constant phase relationship (one signal is shifted relative to the other).

An intuitive definition of the //coherence// between two signals (at a given frequency) is the extent to which they display a consistent phase relationship. It is 0 if the phases are completely unrelated, and 1 if the phase relationship is identical at every time point.

☛ Are the two signals in the lower left panel coherent? Ignore the amplitude changes for now.

=== Diversion: the Wiener-Khinchin theorem ===

We would like to capture formally the coherence between two signals as illustrated above. To do so, it is useful to gain an intuition for another piece of Fourier theory, the [[http://en.wikipedia.org/wiki/Wiener%E2%80%93Khinchin_theorem | Wiener-Khinchin theorem]]. This theorem (essentially) states that the Fourier transform of a signal's autocorrelation function is the signal's power spectral density, i.e. the squared magnitude of the Fourier transform of the signal itself.
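As a quick numerical check of this relationship, the following minimal sketch (the 8Hz test signal and the variable names are our own choices, not part of the original module) compares the Fourier transform of a signal's autocorrelation with the power spectrum computed directly from the signal -- the two should trace out the same curve, with a peak at the signal frequency:

<code matlab>
% sketch: numerically compare FT of autocorrelation with the direct power spectrum
Fs = 500; dt = 1./Fs; tvec = 0:dt:2-dt;
s = sin(2*pi*8*tvec) + 0.1*randn(size(tvec)); % 8 Hz test signal (our own example)

acf = xcorr(s,s,'biased'); % full-length, biased autocorrelation estimate
P_acf = abs(fft(acf)); % Fourier transform of the autocorrelation
F_acf = (0:length(acf)-1).*Fs./length(acf);

P_direct = abs(fft(s)).^2./length(s); % power spectrum of the signal itself
F_direct = (0:length(s)-1).*Fs./length(s);

plot(F_acf,P_acf,'b',F_direct,P_direct,'r:'); xlim([0 20]);
xlabel('Frequency (Hz)'); ylabel('power');
legend({'FT of autocorrelation','direct power spectrum'});
</code>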
The same idea can be illustrated by plotting the autocorrelation function of a periodic signal:

<code matlab>
Fs = 500; dt = 1./Fs;
t = [0 2]; tvec = t(1):dt:t(2)-dt;

f1 = 8;
data1 = sin(2*pi*f1*tvec)+0.1*randn(size(tvec));

[acf,lags] = xcorr(data1,100,'coeff');
lags = lags.*(1./Fs); % convert samples to time

plot(lags,acf); grid on;
</code>

The autocorrelation has the same periodicity as the original signal (8Hz, so peaks 0.125s apart). So, its Fourier transform would result in a spectral decomposition with a strong 8Hz peak.

The key step underlying the formal definition of coherence is to take the Fourier transform, not of the //autocorrelation// function as above, but of the //cross-correlation// function between two signals. If two signals have a consistent phase relationship at a given frequency, this will show up in the cross-correlation:

<code matlab>
f2 = 8;
data2 = sin(2*pi*f2*tvec+pi/4)+0.1*randn(size(tvec)); % phase-shifted version of data1

[ccf,lags] = xcorr(data1,data2,100,'coeff'); % now a cross-correlation
lags = lags.*(1./Fs); % convert samples to time

plot(lags,ccf); grid on;
</code>

Note that the xcorr no longer has a peak at time lag zero, as is the case for an autocorrelation. Rather, the peak is offset by an amount corresponding to the phase difference between the two signals. The correlation at this offset is nearly 1, indicating a strong correlation; in other words, signal 1 is very similar to signal 2 some time later. The xcorr is periodic at 8Hz as above, so its Fourier transform would again have a strong coefficient at 8Hz, but the phase of this component would be different from that of the autocorrelation.

☛ Verify that changing the phase shift in ''data2'' indeed changes the phase of the cross-correlogram.

The Fourier spectrum of the cross-correlation function is known as the //cross-spectrum// or cross-spectral density (csd).

=== Definition and example ===

Now we are ready for the formal definition of the coherence (full name: "magnitude-squared coherence") between two signals x and y:

$$ C_{xy} = \frac{\lvert{P_{xy}}\rvert^2}{P_{xx}P_{yy}} $$

that is, the cross-spectrum $P_{xy}$ normalized by the auto-spectra of the two signals. $|x|$ denotes the modulus (magnitude) of $x$, so for now we are ignoring the angle (phase) component of the complex-valued cross-spectrum.

Let's see this definition in action:

<code matlab>
figure;

subplot(221);
plot(tvec,data1,'r',tvec,data2,'b'); legend({'signal 1','signal 2'}); title('raw signals');

[Pxx,F] = pwelch(data1,hanning(250),125,length(data1),Fs);
[Pyy,F] = pwelch(data2,hanning(250),125,length(data1),Fs);

subplot(222)
plot(F,abs(Pxx),'r',F,abs(Pyy),'b'); xlim([0 100]);
xlabel('Frequency (Hz)'); ylabel('power'); title('PSD');

[Pxy,F] = cpsd(data1,data2,hanning(250),125,length(data1),Fs);

subplot(223)
plot(F,abs(Pxy)); xlim([0 100]);
xlabel('Frequency (Hz)'); ylabel('power'); title('cross-spectrum');

[acf,lags] = xcorr(data1,data2,100,'coeff');
lags = lags.*(1./Fs); % convert samples to time

subplot(224)
plot(lags,acf); grid on;
xlabel('time lag (s)'); ylabel('correlation ({\itr})'); title('xcorr');
</code>

Note that the cross-spectrum ''Pxy'' is computed by a special function, ''cpsd()'', which takes the familiar arguments of window, overlap, nFFT, and Fs. You should get:

{{ :analysis:course:week8_fig1.png?600 |}}

Notice that the cross-spectrum has a clear peak at 8Hz, as expected.

☛ What happens if you change the amplitude of one of the input signals? Make the amplitude of ''data1'' twice as large.
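One way to try this (a sketch; the factor of 2 and the ''data1_big'' variable name are arbitrary choices of ours) is to recompute the cross-spectrum with a scaled copy of ''data1'' and overlay the two:

<code matlab>
data1_big = 2*data1; % same signal, twice the amplitude

[Pxy,F] = cpsd(data1,data2,hanning(250),125,length(data1),Fs);
[Pxy_big,F] = cpsd(data1_big,data2,hanning(250),125,length(data1),Fs);

figure;
plot(F,abs(Pxy),'b',F,abs(Pxy_big),'r'); xlim([0 100]);
xlabel('Frequency (Hz)'); ylabel('power'); title('cross-spectrum');
legend({'original','data1 scaled by 2'});
</code>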
As you can see, the cross-spectrum depends on the amplitude of the input signals. This is usually not what we want when analyzing brain signals, because the amplitude at any given time could depend on the electrical properties of our electrode and the precise recording location relative to a source of interest. Thus, coherence normalizes the cross-spectrum by the spectra of the individual signals.

☛ For the two signals of unequal amplitude, compute the coherence by normalizing the cross-spectrum (''C = (abs(Pxy).^2)./(Pxx.*Pyy);''). Verify that now there is no change in coherence when scaling ''data1'' by a factor 2 as above.

This normalization means that coherence should theoretically be independent of signal amplitude. In practice, however, we have noise to worry about: if the signal becomes small enough, the phases will be corrupted by noise.

☛ Find out how small you need to make ''data1'' relative to the noise before a drop in coherence occurs.

Instead of computing the coherence manually from the cross-spectrum and the individual spectra, we can also use the ''mscohere()'' function, which takes the same arguments. An example follows below.

==== Properties of the coherence measure ====

=== Example 1 ===

Thinking about coherence in terms of the cross-correlation between the signals is often helpful in interpreting coherence values. For instance, it can explain why two signals that both have an 8Hz component in their power spectra are not necessarily coherent:

<code matlab>
%% just verify some cases where we break the phase relationship
f = 2; % freq modulation (Hz)
f2 = 8;
m = 2; % freq modulation strength
wsz = 200; % window size

subplot(421)
s2 = data2;
plot(tvec,s2,tvec,data1); title('signal 1 - constant phase');

subplot(422)
s3 = sin(2*pi*f2*tvec + m.*sin(2*pi*f*tvec - pi/2)) + 0.1*randn(size(tvec));
plot(tvec,s3,tvec,data1); title('signal 2 - varying phase');

subplot(423)
[Ps2,F] = pwelch(s2,hanning(wsz),wsz/2,length(data2),Fs);
plot(F,abs(Ps2)); title('PSD');

subplot(424)
[Ps3,F] = pwelch(s3,hanning(wsz),wsz/2,length(data2),Fs);
plot(F,abs(Ps3)); title('PSD');

subplot(425)
[C,F] = mscohere(data1,s2,hanning(wsz),wsz/2,length(data1),Fs); % shortcut to obtain coherence
plot(F,C); title('coherence'); xlabel('Frequency (Hz)');

subplot(426)
[C,F] = mscohere(data1,s3,hanning(wsz),wsz/2,length(data1),Fs);
plot(F,C); title('coherence'); xlabel('Frequency (Hz)');

[acf,lags] = xcorr(data1,s2,100,'coeff');
lags = lags.*(1./Fs); % convert samples to time

subplot(427)
plot(lags,acf); grid on;
xlabel('time lag (s)'); ylabel('correlation ({\itr})'); title('xcorr');

[acf,lags] = xcorr(data1,s3,100,'coeff');
lags = lags.*(1./Fs); % convert samples to time

subplot(428)
plot(lags,acf); grid on;
xlabel('time lag (s)'); ylabel('correlation ({\itr})'); title('xcorr');
</code>

This should give something like:

{{ :analysis:course:week8_fig2.png?600 |}}

In the left column we have two signals with a constant phase relationship, so the cross-correlation has large values (we can predict one signal from the other with high accuracy). In the right column, we have a frequency-modulated signal, such that its phase relationship with the reference (8Hz) signal is much more variable. Accordingly, the cross-correlation values are much smaller (note the scale), and therefore the coherence at 8Hz is much lower than on the left side as well.
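To put a number on this difference, one could read out the coherence in the frequency bin closest to 8Hz for both pairs (a sketch; the variable names are our own, and it assumes the Example 1 code above has been run):

<code matlab>
[C_const,F] = mscohere(data1,s2,hanning(wsz),wsz/2,length(data1),Fs); % constant-phase pair
[C_vary,F] = mscohere(data1,s3,hanning(wsz),wsz/2,length(data1),Fs); % varying-phase pair

[~,idx] = min(abs(F-8)); % frequency bin closest to 8 Hz
fprintf('coherence at 8 Hz: constant phase %.2f, varying phase %.2f\n',C_const(idx),C_vary(idx));
</code>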
The coherence measure is subject to the same estimation tradeoffs and issues that we encountered previously for spectral estimation in general. These include sensitivity to window size and shape, and to the number of windows used for averaging. As you can see from the above plot, our coherence estimate looks quite noisy.

☛ Increase the length of the data generated for analysis to 10s instead of 2s and recompute the coherence. What do you notice?

''cpsd()'' and ''mscohere()'' use Welch's method of overlapping windows for spectral estimation. Thus, the robustness of the resulting estimate depends critically on the number of windows used. The coherence estimate should clean up nicely as you increase the data length.

=== Example 2 ===

Consider the following pair of signals:

<code matlab>
wsize = 50;

Fs = 500; dt = 1./Fs;
t = [0 2]; tvec = t(1):dt:t(2)-dt;

f1 = 40; f2 = 40;

% generate some strange sine waves
mod1 = square(2*pi*4*tvec,20); mod1(mod1 < 0) = 0;
mod2 = square(2*pi*4*tvec+pi,20); mod2(mod2 < 0) = 0;

data1 = sin(2*pi*f1*tvec); data1 = data1.*mod1 + 0.01*randn(size(tvec));
data2 = sin(2*pi*f2*tvec); data2 = data2.*mod2 + 0.01*randn(size(tvec));

subplot(221);
plot(tvec,data1,'r',tvec,data2,'b'); legend({'signal 1','signal 2'}); title('raw signals');

[P1,F] = pwelch(data1,hanning(wsize),wsize/2,length(data2),Fs);
[P2,F] = pwelch(data2,hanning(wsize),wsize/2,length(data2),Fs);

subplot(222)
plot(F,abs(P1),'r',F,abs(P2),'b'); title('PSD');

subplot(223);
[C,F] = mscohere(data1,data2,hanning(wsize),wsize/2,length(data1),Fs);
plot(F,C); title('coherence'); xlabel('Frequency (Hz)');

[ccf,lags] = xcorr(data1,data2,100,'coeff');
lags = lags.*(1./Fs); % convert samples to time

subplot(224)
plot(lags,ccf); grid on;
xlabel('time lag (s)'); ylabel('correlation ({\itr})'); title('xcorr');
</code>

Note that the two signals both have 40Hz components in the PSD, but the times at which the 40Hz oscillation is present do not actually overlap between the two signals. With a window size of 50 samples (100ms), we accordingly do not see any coherence at 40Hz. If you look at the cross-correlation, there is nothing much different from zero within that 100ms window either.

☛ What happens when you change the window size to 500ms?

This pair of signals is clearly a pathological case, but it should be clear that coherence estimates can depend dramatically on the window size used. In this case, the larger window bridges the two oscillation epochs, which do not actually overlap in time, causing "spurious" coherence values.

=== Other points about coherence ===

  * Note that coherence is a //symmetric// measure: the coherence for signals A and B is the same as that for B and A. This means that, as with a correlation coefficient, coherence has no sense of //directionality//. It cannot resolve whether A precedes or causes B, or vice versa.

==== Applications to real data ====

=== Overall comparison of vStr-vStr and vStr-HC coherence ===

Let's load three simultaneously recorded LFPs: two from the same structure (but from different electrodes, both in ventral striatum) and one from a different, but anatomically related, structure (hippocampus).
First, cd to the directory containing 'R016-2012-10-03', then run:

<code matlab>
run(FindFile('*keys.m'));
vStr1_csc = LoadCSC(ExpKeys.goodGamma{1});
vStr2_csc = LoadCSC(ExpKeys.goodGamma{2});
HC_csc = LoadCSC(ExpKeys.goodTheta{1});

% restrict to shorter segment for speed
vStr1_csc = Restrict(vStr1_csc,ExpKeys.TimeOnTrack(2),ExpKeys.TimeOffTrack(2));
vStr2_csc = Restrict(vStr2_csc,ExpKeys.TimeOnTrack(2),ExpKeys.TimeOffTrack(2));
HC_csc = Restrict(HC_csc,ExpKeys.TimeOnTrack(2),ExpKeys.TimeOffTrack(2));

vStr1_csc = Data(vStr1_csc); vStr2_csc = Data(vStr2_csc); HC_csc = Data(HC_csc);
</code>

Next we can compute the PSDs for each signal in the familiar manner, as well as the coherence between the signal pairs of interest:

<code matlab>
Fs = 2000; wsize = 2048;

% compute PSDs and coherences for each signal and each pair respectively
[P1,F] = pwelch(vStr1_csc,hanning(wsize),wsize/2,2*wsize,Fs);
[P2,F] = pwelch(vStr2_csc,hanning(wsize),wsize/2,2*wsize,Fs);
[P3,F] = pwelch(HC_csc,hanning(wsize),wsize/2,2*wsize,Fs);

[C1,F] = mscohere(vStr1_csc,vStr2_csc,hanning(wsize),wsize/2,2*wsize,Fs);
[C2,F] = mscohere(vStr1_csc,HC_csc,hanning(wsize),wsize/2,2*wsize,Fs);

% plot
subplot(121)
h(1) = plot(F,10*log10(P1),'k','LineWidth',2); hold on;
h(2) = plot(F,10*log10(P2),'g','LineWidth',2); hold on;
h(3) = plot(F,10*log10(P3),'m','LineWidth',2); hold on;
set(gca,'XLim',[0 150],'XTick',0:25:150,'FontSize',12); grid on;
legend(h,{'vStr1','vStr2','HC'},'Location','Northeast'); legend boxoff;
xlabel('Frequency (Hz)'); ylabel('Power (dB)');

subplot(122)
h(1) = plot(F,C1,'LineWidth',2); hold on;
h(2) = plot(F,C2,'r','LineWidth',2);
set(gca,'XLim',[0 150],'XTick',0:25:150,'FontSize',12); grid on;
legend(h,{'vStr1-vStr2','vStr1-HC'},'Location','Northeast'); legend boxoff;
xlabel('Frequency (Hz)'); ylabel('Coherence');
</code>

This should give:

{{ :analysis:course:week8_fig3.png?600 |}}

You can see that the PSDs show the profile characteristic of each structure: HC has a clear theta peak, which is just about visible as a slight hump in vStr, while vStr has large gamma components absent from HC. The coherence between the two vStr signals is high overall compared to that between vStr and HC, and the vStr gamma frequencies are particularly coherent within the vStr. This is what we would expect from plotting the raw signals alongside each other -- there is a clear relationship, as you can readily verify. However, it is more difficult to judge whether, say, a vStr-HC coherence value of 0.1 at 25Hz is meaningful. Such comparisons are easier to make by moving to FieldTrip.

=== Comparison of vStr-HC coherence between experimental conditions ===

Based on previous work (e.g. [[http://www.jneurosci.org/content/31/8/2843.short?rss=1 | van der Meer and Redish, 2011]]) we might ask: is the coherence between hippocampus and ventral striatum modulated by task events? Here we will determine if there is a change in coherence between //approach to the reward site// and //reward receipt//. This entails estimating the coherence spectrum for two different task epochs, both aligned to the time at which the rat nosepoked in the reward well. FieldTrip is ideal for this, especially because we will be doing the same operations on multiple LFPs.
First, let's load the data:

<code matlab>
fc = {'R016-2012-10-03-CSC04d.ncs','R016-2012-10-03-CSC03d.ncs','R016-2012-10-03-CSC02b.ncs'};
data = ft_read_neuralynx_interp(fc);
data.label = {'vStr1','vStr2','HC1'}; % reassign labels to be more informative; the original filenames can be retrieved from the header
</code>

As before, we will segment the data into trials:

<code matlab>
cfg = [];
cfg.trialfun = 'ft_trialfun_lineartracktone2';
cfg.trialdef.hdr = data.hdr;
cfg.trialdef.pre = 2.5; cfg.trialdef.post = 5;

cfg.trialdef.eventtype = 'nosepoke'; % could be 'nosepoke', 'reward', 'cue'
cfg.trialdef.location = 'both'; % could be 'left', 'right', 'both'
cfg.trialdef.block = 'both'; % could be 'value', 'risk', 'both'
cfg.trialdef.cue = {'c1','c3','c5'}; % cell array with choice of elements {'c1','c3','c5','lo','hi'} (1, 3, 5 pellets; low and high risk)

[trl, event] = ft_trialfun_lineartracktone2(cfg);
cfg.trl = trl;

data_trl = ft_redefinetrial(cfg,data);
</code>

Next, we compute the trial-averaged cross-spectrum; note the similarity to the code used for computing spectrograms in the previous module -- we have changed ''cfg.output'' from 'pow' to 'powandcsd' (csd stands for cross-spectral density):

<code matlab>
cfg = [];
cfg.output = 'powandcsd';
cfg.method = 'mtmconvol';
cfg.taper = 'hanning';
cfg.foi = 1:1:100; % frequencies to use
cfg.t_ftimwin = 20./cfg.foi; % frequency-dependent, 20 cycles per time window
cfg.keeptrials = 'yes';
cfg.channel = {'vStr1', 'vStr2', 'HC1'};
cfg.channelcmb = {'vStr2', 'HC1'; 'vStr2', 'vStr1'}; % channel pairs to compute csd for
cfg.toi = -2:0.05:0; % pre-nosepoke baseline (time 0 is time of nosepoke)

TFR_pre = ft_freqanalysis(cfg, data_trl);
</code>

Now we can compute the coherence from the cross-spectrum and the individual spectra:

<code matlab>
cfg = [];
cfg.method = 'coh'; % compute coherence; other measures of connectivity are also available
fd = ft_connectivityanalysis(cfg,TFR_pre);
</code>

And finally plot the results -- for this we are bypassing FieldTrip's built-in plotter so that we can add some custom touches more easily:

<code matlab>
figure;
cols = 'rgb';
for iCmb = 1:size(fd.labelcmb,1)
    lbl{iCmb} = cat(2,fd.labelcmb{iCmb,1},'-',fd.labelcmb{iCmb,2});

    temp = nanmean(sq(fd.cohspctrm(iCmb,:,:)),2);
    h(iCmb) = plot(fd.freq,temp,cols(iCmb)); hold on;
end
legend(h,lbl);
</code>

☛ The resulting coherence spectra are for the pre-nosepoke period (see ''cfg.toi'' in the frequency analysis step above). Also compute the coherence spectrum for the post-nosepoke period (0 to 2 seconds).

As you will see, a few differences in the HC-vStr coherence between the two windows are visible, such as the elevated 4Hz coherence during reward approach. To get a sense of where this may be coming from (artifact or biological? a single trial or reliable?) it would be helpful to plot the raw LFPs aligned to the nosepoke time; of course, more sessions would also need to be analyzed to obtain errorbars on the data.

Notice that in our trial selection, we included trials on which 1, 3, and 5 food pellets were delivered, all together (''cfg.trialdef.cue = {'c1','c3','c5'};'').

☛ Compare the coherence spectra for the 1-pellet and 5-pellet trials. What do you notice? A possible starting point is sketched below.
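One way to start this comparison (a sketch; ''cfg_c1'' and ''data_trl_c1'' are our own variable names, and the trial definition simply mirrors the one above with a restricted cue set):

<code matlab>
cfg_c1 = [];
cfg_c1.trialfun = 'ft_trialfun_lineartracktone2';
cfg_c1.trialdef.hdr = data.hdr;
cfg_c1.trialdef.pre = 2.5; cfg_c1.trialdef.post = 5;
cfg_c1.trialdef.eventtype = 'nosepoke';
cfg_c1.trialdef.location = 'both';
cfg_c1.trialdef.block = 'both';
cfg_c1.trialdef.cue = {'c1'}; % 1-pellet trials only; use {'c5'} for the 5-pellet trials

[trl,event] = ft_trialfun_lineartracktone2(cfg_c1);
cfg_c1.trl = trl;
data_trl_c1 = ft_redefinetrial(cfg_c1,data);

% then rerun the ft_freqanalysis() and ft_connectivityanalysis() steps above on data_trl_c1
</code>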
==== Time-frequency coherence analysis ====

A limitation of the coherence analyses up to this point has been that, like Welch's PSD, they are averages over time. Just like the spectrogram provided a time-frequency view of signal power, we can attempt to compute a //coherogram//: coherence as a function of time and frequency. In fact, the previous steps in FieldTrip already did this, so we can plot it (note, this is the "post-nosepoke" epoch):

<code matlab>
iC = 1; % which signal pair to plot
lbl = [fd.labelcmb{iC,:}]; % get the label of this pair

imagesc(fd.time,fd.freq,sq(fd.cohspctrm(iC,:,:))); axis xy; colorbar;
xlabel('time (s)'); ylabel('Frequency (Hz)'); title(lbl);
</code>

You should get:

{{ :analysis:course:week8_fig4.png?600 |}}

As you can see, the coherence at higher frequencies in particular looks very noisy. These "spurious" high-coherence bins commonly show up when estimating coherence.

☛ Change the time window to a fixed 1s. How does the coherogram change?

We can improve the robustness of our estimate by giving up some time resolution. Of course, averaging over more trials is another approach; other spectral estimation methods such as wavelets can also improve things (if you are interested in this, there is a nice MATLAB tutorial on wavelet coherence [[http://www.mathworks.com/help/wavelet/examples/wavelet-coherence.html | here]]).

==== Beyond coherence ====

Coherence is only one of many measures that attempt to characterize the relationship between LFPs. A glance at the [[http://fieldtrip.fcdonders.nl/reference/ft_connectivityanalysis | documentation]] for ''ft_connectivityanalysis()'' reveals a who's who of popular neuroscience tools for assessing functional connectivity. A review of these methods is beyond the scope of this module, but in general they address some of the limitations of the coherence measure. For instance:

  * Phase slope index (PSI), Granger causality, and partial directed coherence (PDC) are //directional// measures that, under certain circumstances, can capture the direction of the flow of information between two signals.
  * The weighted phase lag index (WPLI) can exclude contributions from a volume-conducted source common to both signals.
  * Pairwise phase consistency (PPC) addresses some statistical issues in how coherence estimates are affected by the amount of data.

For an illustration of how these improved methods can give a more reliable estimate of interactions than coherence, let's give pairwise phase consistency a try:

<code matlab>
cfg = [];
cfg.method = 'ppc';
fd = ft_connectivityanalysis(cfg,TFR_post);
</code>

☛ Plot the coherogram as above, changing ''cohspctrm'' to ''ppcspctrm'' in the plotting code. You should get (again for the post-nosepoke epoch):

{{ :analysis:course:week8_fig5.png?600 |}}

Most of the spurious high-frequency events have now been eliminated.
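For reference, the plotting step with the field name swapped might look as follows (a sketch; it assumes the ''ppc'' call above returned ''fd'' with a ''ppcspctrm'' field):

<code matlab>
iC = 1; % which signal pair to plot
lbl = [fd.labelcmb{iC,:}]; % get the label of this pair

figure;
imagesc(fd.time,fd.freq,sq(fd.ppcspctrm(iC,:,:))); axis xy; colorbar;
xlabel('time (s)'); ylabel('Frequency (Hz)'); title(lbl);
</code>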
