~~DISCUSSION~~

===== Pairwise co-occurrence =====

=== Setup and data loading ===

First, make sure you do a ''git pull'' as usual, and get the data. We'll be using session ''R064-2015-04-22''. You'll also need to include the ''tasks\Alyssa-Tmaze'' and ''tasks\Replay_Analysis'' folders in your path.
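Assuming the two task folders sit under your local copy of the course code repository, adding them to the MATLAB path might look like this (a sketch; the base location is an assumption, so point it at wherever your copy lives):

<code matlab>
% add the required task folders (and their subfolders) to the path;
% the relative paths below assume you are in the repository root
addpath(genpath('tasks\Alyssa-Tmaze'));
addpath(genpath('tasks\Replay_Analysis'));
</code>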
Then, we load the data:

Later in the module, the co-occurrence results for the left (''L'') and right (''R'') arms are plotted, one subplot per measure:

<code matlab>
ylabels = {{'Proportion of';'SWRs active'},{'Cell pair'; 'joint probability'},{'SWR coactivation'; 'Z-score'}};
arms = {'L','R'};
colors = flipud(linspecer(2)); % one color per arm
location = [1 2.5]; % x-axis positions of the two bars

for iP = 1:length(ylabels) % one subplot per co-occurrence measure
    h(iP) = subplot(1,3,iP);
    for iBar = 1:length(arms)
        bar(location(iBar),p_data(iP,iBar),'FaceColor',colors(iBar,:),'EdgeColor','none');
        hold on;
    end
end
</code>

This shuffling procedure is an example of **resampling**: we create a number of different data sets based on some rearrangement of the original data. In this case, the specific resampling is a shuffle or permutation, which breaks any relationship between neurons (because we shuffle each neuron independently) and therefore functions as a control for the amount of co-occurrence we expect by chance (i.e. if the neurons were independently active). In general, a major advantage of resampling methods is that they preserve aspects of the underlying distribution -- in this case, of spike counts -- and make no particular assumptions about its shape, whereas many parametric statistical tests require data to be e.g. normally distributed.
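A minimal sketch of this idea, assuming a binarized Q-matrix ''Q'' of size [nCells x nEvents] in which a 1 means the cell was active in that candidate event (the variable names here are illustrative, not the module's actual code):

<code matlab>
% shuffle control for pairwise co-occurrence (illustrative sketch)
nShuffles = 1000;
shuf_cooc = zeros(nShuffles,1);
for iShuf = 1:nShuffles
    Qshuf = Q;
    for iC = 1:size(Q,1)
        % permute each neuron's activity across events independently,
        % breaking any relationship between neurons
        Qshuf(iC,:) = Q(iC,randperm(size(Q,2)));
    end
    % co-occurrence for one example cell pair (cells 1 and 2):
    % the fraction of events in which both cells were active
    shuf_cooc(iShuf) = mean(Qshuf(1,:) & Qshuf(2,:));
end
obs_cooc = mean(Q(1,:) & Q(2,:)); % observed co-occurrence for the pair
% z-score the observed value against the shuffled (chance) distribution
z = (obs_cooc - mean(shuf_cooc)) / std(shuf_cooc);
</code>

Because the shuffle only rearranges each neuron's own spike counts, the single-cell activation probabilities are preserved exactly; only the relationship between neurons is destroyed.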
==== Challenges ====

★ Why don't we just look at single-cell activation? Under what conditions would the results from that be the same as, or different from, pairwise co-occurrence?

★ Comment on the choice of bins used in constructing the Q-matrix. Is it reasonable to assume that SWR events are always 100 ms in length? Modify the code to use the actual length of SWR events. Is the resampling statistic still doing the right thing in this case?

★ Implement co-occurrence analysis on your own data.

==== Credits ====

This module was developed by [[https://github.com/aacarey | Alyssa Carey]].