
~~DISCUSSION~~

Module 4

Goals:

  • Construct some movement kernels for estimation of spatial priors
  • Incorporate a dynamic spatial prior in your decoder

Introduction

In the previous module, we applied the decoder to each time bin independently, using a flat spatial prior. In effect, this assumes that the place representation can move around arbitrarily from one time step to the next. Clearly, however, a rat cannot move around arbitrarily but instead moves subject to smoothness and continuity constraints! We can use this domain knowledge to improve the performance of our decoder.

Our approach will be similar to Kalman filtering, in that we construct a model of the rat's movement. We can then use this model to generate a prediction of the rat's position, $P(\hat{\mathbf{x}}_t \mid \hat{\mathbf{x}}_{t-1})$, which can function as a prior in our Bayesian decoder. The hat indicates that these are all estimates, since the decoder does not have access to the rat's true position $\mathbf{x}$.
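In other words, if $P(\hat{\mathbf{x}}_{t-1})$ is the decoder's estimate at the previous time step, a one-step prediction can be obtained by summing over all the positions the rat could have come from (this is the standard prediction step of a Bayesian filter, written out to make the sum explicit):

$$P(\hat{\mathbf{x}}_t) = \sum_{\hat{\mathbf{x}}_{t-1}} P(\hat{\mathbf{x}}_t \mid \hat{\mathbf{x}}_{t-1}) \, P(\hat{\mathbf{x}}_{t-1})$$

For a movement model that depends only on displacement (not on absolute position), this sum is simply a convolution of the previous estimate with a movement kernel, which is what we will exploit below.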

To do this, we first need to estimate $P(\hat{\mathbf{x}}_t \mid \hat{\mathbf{x}}_{t-1})$ from the data.

Estimating movement kernels

First, start with a clean slate:

%% load the data
clear all; pack
cd('C:\Users\mvdm\Dropbox\teaching\CoSMo2014\R042-2013-08-18'); % isidro
 
load(FindFile('*vt.mat')); % from position_sandbox.m
load(FindFile('*times.mat'));
load(FindFile('*CoordL.mat')); 
 
cfg = [];
cfg.load_questionable_cells = 1;
S = LoadSpikes(cfg);
 
cfg = [];
cfg.fc = {'R042-2013-08-18-CSC03a.ncs'};
csc = LoadCSC(cfg);

Next, we resample the position data at our desired bin size, matching the time step we will use for decoding (50 ms in this case):

%% resample position data at desired bin size
binsize = 0.05;
 
pos_ups = tsd;
pos_ups.tvec = pos.tvec(1):binsize:pos.tvec(end);
pos_ups.data(1,:) = interp1(pos.tvec,pos.data(1,:),pos_ups.tvec,'linear');
pos_ups.data(2,:) = interp1(pos.tvec,pos.data(2,:),pos_ups.tvec,'linear');

As before, some housekeeping to take care of:

%% restrict to trials
ENC_S = restrict(S,run_start,run_end);
ENC_pos = restrict(pos,run_start,run_end);
ENC_pos_ups = restrict(pos_ups,run_start,run_end);
 
% check for empties and remove
keep = ~cellfun(@isempty,ENC_S.t);
ENC_S.t = ENC_S.t(keep);
ENC_S.label = ENC_S.label(keep);
 
S.t = S.t(keep);
S.label = S.label(keep);

Now, we can obtain the distribution of position changes between adjacent time bins:

clear diff_mat;
diff_mat(:,2) = diff(ENC_pos_ups.data(1,:));
diff_mat(:,1) = diff(ENC_pos_ups.data(2,:));
 
dx = 0.5; xl = 10.25;
x_edges = -xl:dx:xl; x_centers = x_edges(1:end-1)+dx./2;
y_edges = -xl:dx:xl; y_centers = y_edges(1:end-1)+dx./2;
 
[diff_hist,~,~,pos_idx] = histcn(diff_mat,y_edges,x_edges);
diff_hist = diff_hist./sum(diff_hist(:)); % normalize to 1 total area
 
subplot(221);
imagesc(y_centers,x_centers,log(diff_hist)); shading flat; colorbar
hold on;
plot([x_edges(1) x_edges(end)],[0 0],'k--');
plot([0 0],[y_edges(1) y_edges(end)],'k--');

This shows an overall tendency for the rat to travel to the right and, to a lesser extent, up and down, which makes sense given the shape of the maze.

Let's construct the same kernel, but now at the spatial resolution of our tuning curves:

dx = 10;
xl = 105;
x_edges = -xl:dx:xl; x_centers = x_edges(1:end-1)+dx./2;
y_edges = -xl:dx:xl; y_centers = y_edges(1:end-1)+dx./2;
 
[diff_hist,~,~,pos_idx] = histcn(diff_mat,y_edges,x_edges);
 
diff_hist = diff_hist + eps;
diff_hist = diff_hist./sum(diff_hist(:)); % normalize to 1 total area

The + eps term above ensures that no bin in the kernel has exactly zero probability. Without it, the decoder can get stuck later: a position assigned zero prior probability can never regain probability mass, no matter what the spikes say.
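To see why, consider a toy example (made-up numbers): because the Bayesian update multiplies prior and likelihood, a bin that starts at exactly zero stays at zero.

% toy example (made-up numbers): a zero-probability bin can never recover
prior = [0 0.5 0.5];            % first bin has exactly zero prior probability
likelihood = [0.9 0.05 0.05];   % spikes strongly favor the first bin
posterior = prior.*likelihood;
posterior = posterior./sum(posterior) % first bin stays at 0 -- the decoder is "stuck"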

The above kernel corresponds to our model of $P(\hat{\mathbf{x}}_t \mid \hat{\mathbf{x}}_{t-1})$. By recursively applying this kernel we can obtain $P(\hat{\mathbf{x}}_t \mid \hat{\mathbf{x}}_{t-n})$:

%% plot a few more at different speeds
x1 = diff_hist;
subplot(221);
imagesc(y_centers,x_centers,log(x1)); shading flat; colorbar
hold on;
plot([x_edges(1) x_edges(end)],[0 0],'k--');
plot([0 0],[y_edges(1) y_edges(end)],'k--');
title('1x');
 
x7 = diff_hist;
for iI = 2:7
    x7 = conv2(x7,diff_hist,'same');
end
subplot(222);
imagesc(y_centers,x_centers,log(x7)); shading flat; colorbar
hold on;
plot([x_edges(1) x_edges(end)],[0 0],'k--');
plot([0 0],[y_edges(1) y_edges(end)],'k--');
title('7x');
 
x15 = diff_hist;
for iI = 2:15
    x15 = conv2(x15,diff_hist,'same');
end
subplot(223);
imagesc(y_centers,x_centers,log(x15)); shading flat; colorbar
hold on;
plot([x_edges(1) x_edges(end)],[0 0],'k--');
plot([0 0],[y_edges(1) y_edges(end)],'k--');
title('15x');
 
x99 = diff_hist;
for iI = 2:99
    x99 = conv2(x99,diff_hist,'same');
end
subplot(224);
imagesc(y_centers,x_centers,log(x99)); shading flat; colorbar
hold on;
plot([x_edges(1) x_edges(end)],[0 0],'k--');
plot([0 0],[y_edges(1) y_edges(end)],'k--');
title('99x');

This gives a 2-by-2 figure showing the 1x, 7x, 15x, and 99x kernels.

These kernels are the expected displacement distributions for different movement speeds: 1x is the single-step distribution actually observed in our data set, while 7x, 15x, and 99x are the distributions expected if the rat covered 7, 15, or 99 such steps within a single time bin. We are now ready to implement an improved version of our decoding algorithm that can use these kernels.
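As a preview, here is a minimal sketch of how such a kernel could enter the decoding loop from the previous module. Note that nTimeBins and the likelihood array L (holding, for each time bin, the spike likelihood over position bins computed from the tuning curves) are placeholders, not variables defined above:

%% sketch only: recursive decoding with a movement prior (placeholder variables)
p = ones(size(diff_hist)); p = p./sum(p(:)); % start from a flat prior
for iT = 1:nTimeBins
    prior = conv2(p,diff_hist,'same'); % predict: spread previous estimate by the movement kernel
    prior = prior./sum(prior(:));
    p = prior.*L(:,:,iT);              % update with this time bin's spike likelihood
    p = p./sum(p(:));
end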
