~~DISCUSSION~~
  
===== Spike train analysis II: tuning curves, encoding, decoding =====
  * Learn to estimate and plot tuning curves, raw and smoothed
  * Implement a basic Bayesian decoding algorithm
  * Compare decoded and actual position by computing the decoding error
  
Resources:
  
<code matlab>
LoadMetadata;
ENC_S = restrict(S,metadata.taskvars.trial_iv);
ENC_pos = restrict(pos,metadata.taskvars.trial_iv);

% check for empties and remove
keep = ~cellfun(@isempty,ENC_S.t);
ENC_S.t = ENC_S.t(keep);
ENC_S.label = ENC_S.label(keep);

S.t = S.t(keep);
S.label = S.label(keep);
</code>
  
We have created ''ENC_'' versions of our spike trains and position data, containing only data from when the rat was running on the track (using experimenter annotations stored in the metadata; ''trial_iv'' contains the start and end times of each trial), and removed all cells from the data set that did not have any spikes on the track.
  
☛ Plot the above scatterfield again for the restricted spike train. Verify that no spikes are occurring off the track by comparing your plot to the previous one for the full spike trains, above.
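If you are stuck, a scatterfield can be sketched as follows (a minimal sketch, not the course codebase: the use of ''interp1()'' to look up position at each spike time, and the assumption that x and y are the first and second rows of ''pos.data'', are mine):

<code matlab>
iC = 1; % which cell to plot (arbitrary choice)
spk_t = ENC_S.t{iC}; % this cell's spike times

% interpolate the position samples to get the rat's location at each spike
spk_x = interp1(pos.tvec,pos.data(1,:),spk_t,'linear');
spk_y = interp1(pos.tvec,pos.data(2,:),spk_t,'linear');

plot(pos.data(1,:),pos.data(2,:),'.','Color',[0.5 0.5 0.5],'MarkerSize',1);
hold on;
plot(spk_x,spk_y,'.r'); % spike locations on top of the position trace
</code>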
y_edges = SET_ymin:SET_yBinSz:SET_ymax;
  
occ_hist = histcn(pos_mat,y_edges,x_edges); % 2-D version of histc()
  
no_occ_idx = find(occ_hist == 0); % NaN out bins never visited
occ_hist(no_occ_idx) = NaN;
  
occ_hist = occ_hist .* (1/30); % convert samples to seconds using video frame rate (30 Hz)
  
subplot(221);
{{ :analysis:course-w16:raw_tc.png?nolink&900 |}}
  
Note that from the occupancy map, you can see the rat spent relatively more time at the base of the stem compared to other segments of the track. However, the rough binning is not very satisfying. Let's see if we can do better with some smoothing:
  
<code matlab>
  
occ_hist(no_occ_idx) = NaN;
occ_hist = occ_hist .* (1/30);
  
subplot(221);
%
spk_hist = histcn(spk_mat,y_edges,x_edges);
spk_hist = conv2(spk_hist,kernel,'same'); % 2-D convolution
spk_hist(no_occ_idx) = NaN;
  
In general, from the [[http://en.wikipedia.org/wiki/Poisson_distribution | definition of the Poisson distribution]], it follows that
  
\[P(n_i|\mathbf{x}) = \frac{(\tau f_i(\mathbf{x}))^{n_i}}{n_i!} e^{-\tau f_i (\mathbf{x})}\]
  
$f_i(\mathbf{x})$ is the average firing rate of neuron $i$ at position $\mathbf{x}$ (i.e. the tuning curve for position), $n_i$ is the number of spikes emitted by neuron $i$ in the current time window, and $\tau$ is the size of the time window used. Thus, $\tau f_i(\mathbf{x})$ is the mean number of spikes we expect from neuron $i$ in a window of size $\tau$; the Poisson distribution describes how likely it is that we observe the actual number of spikes $n_i$ given this expectation.
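As a quick sanity check, this probability is easy to compute directly (a minimal sketch; the tuning-curve value and window size below are made-up numbers, not from the data set):

<code matlab>
tau = 0.25; % time window in s (made-up value)
f = 10;     % tuning curve value f_i(x) in Hz (made-up value)
n = 0:5;    % candidate observed spike counts

lambda = tau*f; % expected number of spikes in the window
P = lambda.^n ./ factorial(n) .* exp(-lambda); % Poisson probability of each count

% equivalently, with the Statistics Toolbox: P = poisspdf(n,lambda);
</code>

Note that the probabilities peak near $n = \tau f_i(\mathbf{x})$, the expected count.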
  
\[P(\mathbf{n}|\mathbf{x}) = \prod_{i = 1}^{N} \frac{(\tau f_i(\mathbf{x}))^{n_i}}{n_i!} e^{-\tau f_i (\mathbf{x})}\]
  
An analogy here is simply to ask: if the probability of a coin coming up heads is $0.5$, what is the probability of two coins, flipped simultaneously, coming up heads? If the coins are independent, then this is simply $0.5*0.5$.
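Evaluating this product for every candidate position gives the likelihood of the observed spike counts at each location (a minimal sketch; the tuning curve matrix ''tc'' and spike count vector ''n'' here are made-up stand-ins for the variables constructed later in this module):

<code matlab>
nCells = 3; nBins = 4;
tc = rand(nCells,nBins)*20; % made-up tuning curves, f_i(x) in Hz
n = [2 0 1]';               % made-up spike counts in one time window
tau = 0.25;                 % window size (s)

L = ones(1,nBins); % likelihood P(n|x), one value per position bin
for iC = 1:nCells
    lambda = tau*tc(iC,:); % expected count for this cell at each position
    L = L .* (lambda.^n(iC) ./ factorial(n(iC)) .* exp(-lambda));
end
</code>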
% get trial id for each sample
trial_id = zeros(size(Q_tvec_centers));
trial_idx = nearest_idx3(metadata.taskvars.trial_iv.tstart,Q_tvec_centers); % NOTE: on non-Windows, use nearest_idx.m
trial_id(trial_idx) = 1;
trial_id = cumsum(trial_id);
  
{{ :analysis:course-w16:dec_err.png?nolink&600 |}}

(Note: your plot might look a little different.)
  
Thus, on average our estimate is 2.14 pixels away from the true position. Earlier laps seem to have more outlier bins where our estimate is bad (large distance), but there is no obvious trend across laps.
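The average error above comes from comparing decoded and true positions bin by bin (a minimal sketch; ''dec_x''/''dec_y'' and ''true_x''/''true_y'' are hypothetical names for the decoded maximum a posteriori position and the rat's binned true position at each time step):

<code matlab>
% hypothetical decoded and true (x,y) positions per time bin
dec_x = [10 12 11]; dec_y = [5 5 6];
true_x = [11 12 13]; true_y = [5 6 6];

% Euclidean distance between decoded and true position, then average;
% nanmean() skips bins where no decoding estimate was available
dec_err = sqrt((dec_x-true_x).^2 + (dec_y-true_y).^2);
mean_err = nanmean(dec_err);
</code>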
analysis/course-w16/week10.1455483036.txt.gz · Last modified: 2018/07/07 10:19 (external edit)