
Module 5

Goals:

  • Move to a 1-dimensional representation of the maze by linearizing the position data
  • Use the movement kernels from Module 4 as a generative model
  • Characterize internally generated sequences with generative models

Introduction

The previous module used the movement model $P(x_t|x_{t-1})$ as a tool to improve decoding accuracy. This module takes a different approach, using movement kernels to characterize internal dynamics of the place representation.
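
Recall the form of that one-step decoder: the posterior at each time bin is proportional to the spiking likelihood multiplied by a movement prior, obtained by convolving the previous posterior with the movement kernel, i.e. $P(x_t|\textrm{spikes}_t) \propto P(\textrm{spikes}_t|x_t) \sum_{x_{t-1}} P(x_t|x_{t-1})\,P(x_{t-1}|\textrm{spikes}_{t-1})$. In this module we reuse the same ingredients, but ask which movement kernel best predicts how the decoded representation evolves from one time bin to the next.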

To simplify things, instead of working in two dimensions (x,y), we will first linearize our position data onto a single dimension (z): linear position along the left laps of the maze.

Linearizing position data

First, load the data as before:

%% load the data
clear all; pack
cd('C:\Users\mvdm\Dropbox\teaching\CoSMo2014\R042-2013-08-18'); % isidro
%cd('D:\My_Documents\Dropbox\teaching\CoSMo2014\R042-2013-08-18'); % equinox
%cd('D:\My_Documents\My Dropbox\teaching\CoSMo2014\R042-2013-08-18'); % athena
 
load(FindFile('*vt.mat')); % from position_sandbox.m
load(FindFile('*times.mat'));
load(FindFile('*CoordL.mat')); 
 
cfg = [];
cfg.load_questionable_cells = 1;
S = LoadSpikes(cfg);
 
cfg = [];
cfg.fc = {'R042-2013-08-18-CSC03a.ncs'};
csc = LoadCSC(cfg);

We will go to an even smaller time bin size (20ms):

%% resample position data at desired bin size
binsize = 0.02;
 
pos_ups = tsd;
pos_ups.tvec = pos.tvec(1):binsize:pos.tvec(end);
pos_ups.data(1,:) = interp1(pos.tvec,pos.data(1,:),pos_ups.tvec,'linear');
pos_ups.data(2,:) = interp1(pos.tvec,pos.data(2,:),pos_ups.tvec,'linear');
pos_ups.label = {'x','y'};

Now, we linearize the position data:

%% linearize posdata
%CoordL = MakeCoord(getd(pos,'x'),getd(pos,'y'));
%CoordL = MakeCoord(getd(pos,'x'),getd(pos,'y'),'YDir','reverse','rot',270); % for new version of MakeCoord
 
% should resample Coord to some more reasonable values (say 1cm bins)
nBins = 180; % track is 180cm, aim for 1cm bins
CoordLrs(1,:) = interp1(1:size(CoordL,2),CoordL(1,:),linspace(1,size(CoordL,2),nBins),'linear');
CoordLrs(2,:) = interp1(1:size(CoordL,2),CoordL(2,:),linspace(1,size(CoordL,2),nBins),'linear');
 
cfg = [];
cfg.Coord = CoordLrs;
posL = LinearizePos(cfg,pos);
posL_ups = LinearizePos(cfg,pos_ups);

The way this works is that the user draws a linear path through the maze with MakeCoord() (commented out above, because a previously drawn Coord was already loaded from the *CoordL.mat file), and LinearizePos() then maps each (x,y) position sample onto that path.
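
To make the mapping concrete, here is a minimal sketch of the underlying idea (not the actual LinearizePos() implementation): each position sample is assigned the index of the nearest point on the resampled Coord path, and that index (in ~1cm bins) is the linearized coordinate z.

% minimal sketch of the linearization idea; use LinearizePos() for real analyses
xy = pos.data(1:2,:)'; % nSamples x 2 matrix of (x,y) position samples
nSamples = size(xy,1);
z_sketch = nan(nSamples,1);
for iS = 1:nSamples
    % Euclidean distance from this sample to every point on the resampled path
    d = sqrt(sum((CoordLrs' - repmat(xy(iS,:),size(CoordLrs,2),1)).^2,2));
    [~,z_sketch(iS)] = min(d); % index of nearest Coord point = linearized position bin
end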

Next, we want to select only those trials where the rat went left:

%% detect trial types
cfg = [];
cfg.eventList = {'TTL Input on AcqSystem1_0 board 0 port 1 value (0x0020).',...
    'TTL Input on AcqSystem1_0 board 0 port 1 value (0x0080).',...
    'TTL Input on AcqSystem1_0 board 0 port 1 value (0x0040).',...
    'TTL Output on AcqSystem1_0 board 0 port 0 value (0x0004).',...
    'TTL Output on AcqSystem1_0 board 0 port 0 value (0x0040).'};
cfg.eventLabel = {'center_pb','left_pb','right_pb','food_dispensed','water_dispensed'};
evt = LoadEvents(cfg);
 
 
%%
for iRun = 1:length(run_start)
    temp = getd(evt,'water_dispensed');
    if ~isempty(temp(temp > run_start(iRun) & temp < run_end(iRun)))
        evt.trial_ID(iRun) = 1; % water/right
    else
        evt.trial_ID(iRun) = 2; % food/left
    end
end
evt.run_start = run_start;
evt.run_end = run_end;
evt.ped_start = ped_start*10^-6; % original data file did not convert to sec (s) so fix here
evt.ped_end = ped_end*10^-6;

The above code loads some relevant events, such as when food and water were dispensed. We use these to select our trials of interest:

left_trials = find(evt.trial_ID == 1);
evtL = evt;
evtL.trial_ID = evtL.trial_ID(left_trials);
evtL.run_start = evtL.run_start(left_trials);
evtL.run_end = evtL.run_end(left_trials);
 
S_orig = S;
S = restrict(S,evtL.run_start,evtL.run_end);
posL = restrict(posL,evtL.run_start,evtL.run_end);
posL_ups = restrict(posL_ups,evtL.run_start,evtL.run_end);
 
DEC_S = restrict(S_orig,evt.ped_start,evt.ped_end); % use pedestal intervals for decoding

We will use the left trials on the maze to fit our encoding model (tuning curves) and movement kernels. Then, we will use the data when the rat is on the pedestal (in between trials) to examine the properties of its internally generated place representation.

Some housekeeping:

%% check for empties, remove; should be a function
keep = ~cellfun(@isempty,S.t);
S.t = S.t(keep);
S.label = S.label(keep);
 
S_orig.t = S_orig.t(keep);
S_orig.label = S_orig.label(keep);
 
DEC_S.t = DEC_S.t(keep);
DEC_S.label = DEC_S.label(keep);

We can now check that our position linearization and trial selection worked by plotting the linearized position over time, as shown below. You should see 8 trials in which the rat runs from a low position value (~0-40, the beginning of the track) to the end of the left arm (position 180). These linearized coordinates now correspond approximately to centimeters (cm); not exactly, because of distortion in the camera's field of view.
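
%% sanity check: linearized position on left trials
figure;
plot(posL.tvec,getd(posL,'z'),'.k');
xlabel('time (s)'); ylabel('linearized position z (cm)'); % axis labels added for clarity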

Analogous to what we did in the previous module, we want to create some movement kernels. This is now a bit simpler in 1-D:

%% get transition model
clear diff_mat;
diff_mat = diff(posL_ups.data(1,:));
 
dx = 1; xl = 20.5;
x_edges = -xl:dx:xl; x_centers = x_edges(1:end-1)+dx./2;
 
[diff_hist,pos_idx] = histc(diff_mat,x_edges);
diff_hist = diff_hist(1:end-1); % ignore last edge bin
 
diff_hist = diff_hist./sum(diff_hist(:)); % normalize to 1 total area
 
subplot(221);
plot(x_centers,diff_hist);
hold on;
plot([0 0],ylim,'k--');
 
% make symmetric version
diff_histS = circshift(diff_hist,[0 -1]);
half_len = floor(length(x_centers)./2);
diff_histS(half_len:-1:1) = diff_histS(half_len+2:end);
diff_histS = diff_histS./sum(diff_histS(:));
 
subplot(222);
plot(x_centers,diff_histS);
hold on;
plot([0 0],ylim,'k--');
 
% reverse version
diff_histR = diff_hist(end:-1:1);
subplot(223);
plot(x_centers,diff_histR);
hold on;
plot([0 0],ylim,'k--');

As before, we can recursively apply these kernels to obtain different movement speeds:

%% set up a selection of kernels for decoding
x1_fwd = diff_hist;
DEC_kernel_labels{1} = 'x1_fwd';
DEC_all_kernels(1,:) = x1_fwd;
 
x1_rev = diff_histR;
DEC_kernel_labels{2} = 'x1_rev';
DEC_all_kernels(2,:) = x1_rev;
 
x7_fwd = diff_hist;
for iI = 2:7
    x7_fwd = conv2(x7_fwd,diff_hist,'same');
end
DEC_kernel_labels{3} = 'x7_fwd';
DEC_all_kernels(3,:) = x7_fwd;
 
x7_rev = diff_histR;
for iI = 2:7
    x7_rev = conv2(x7_rev,diff_histR,'same');
end
DEC_kernel_labels{4} = 'x7_rev';
DEC_all_kernels(4,:) = x7_rev;
 
x15_fwd = diff_hist;
for iI = 2:15
    x15_fwd = conv2(x15_fwd,diff_hist,'same');
end
DEC_kernel_labels{5} = 'x15_fwd';
DEC_all_kernels(5,:) = x15_fwd;
 
x15_rev = diff_histR;
for iI = 2:15
    x15_rev = conv2(x15_rev,diff_histR,'same');
end
DEC_kernel_labels{6} = 'x15_rev';
DEC_all_kernels(6,:) = x15_rev;
 
x40_fwd = diff_hist;
for iI = 2:40
    x40_fwd = conv2(x40_fwd,diff_hist,'same');
end
DEC_kernel_labels{7} = 'x40_fwd';
DEC_all_kernels(7,:) = x40_fwd;
 
x40_rev = diff_histR;
for iI = 2:40
    x40_rev = conv2(x40_rev,diff_histR,'same');
end
DEC_kernel_labels{8} = 'x40_rev';
DEC_all_kernels(8,:) = x40_rev;
 
DEC_kernel_labels{9} = 'uniform';
DEC_all_kernels(9,:) = ones(size(DEC_all_kernels(1,:)))./length(DEC_all_kernels(1,:));
DEC_kernel_centers = x_centers;
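
As an aside, the same set of kernels can be built more compactly with a loop; this equivalent sketch produces the same DEC_all_kernels and DEC_kernel_labels as the explicit code above:

% compact, equivalent construction of the kernel set above
n_steps = [1 7 15 40]; % number of self-convolutions for each speed
base_kernels = {diff_hist,diff_histR}; dir_labels = {'fwd','rev'};
DEC_all_kernels = []; DEC_kernel_labels = {};
for iN = 1:length(n_steps)
    for iD = 1:2
        k = base_kernels{iD};
        for iI = 2:n_steps(iN)
            k = conv2(k,base_kernels{iD},'same'); % apply the one-step kernel repeatedly
        end
        DEC_kernel_labels{end+1} = sprintf('x%d_%s',n_steps(iN),dir_labels{iD});
        DEC_all_kernels(end+1,:) = k;
    end
end
DEC_kernel_labels{end+1} = 'uniform';
DEC_all_kernels(end+1,:) = ones(1,length(x_centers))./length(x_centers);
DEC_kernel_centers = x_centers;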

To plot these, we can do this:

%% plot them
set(0,'defaulttextinterpreter','none')
for iK = 1:length(DEC_kernel_labels)
 
   current_kernel = DEC_all_kernels(iK,:); % direct indexing, no eval() needed
 
   subplot(3,3,iK);
   plot(DEC_kernel_centers,current_kernel);
   hold on;
   plot([0 0],ylim,'k--');
   title(DEC_kernel_labels{iK});
 
end

This gives:

You can see that faster speeds correspond to probability mass further removed from the zero starting position.
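
This is expected: convolving the one-step kernel with itself $n$ times gives the distribution of a sum of $n$ independent one-bin steps, so (ignoring truncation at the edges of the 'same' convolution) the mean displacement grows as $n\mu_1$ and its standard deviation as $\sqrt{n}\sigma_1$, where $\mu_1$ and $\sigma_1$ are the mean and SD of the one-step kernel.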

Next, let's obtain some tuning curves, first for our example neuron:

%% sample tuning curve for decoding
kernel = gausskernel(11,2); % 11 cm wide, 2cm SD
 
SET_xmin = 0.5; SET_xmax = 180.5;
SET_xBinSz = 1;
 
x_edges = SET_xmin:SET_xBinSz:SET_xmax;
SET_nxBins = length(x_edges)-1;
 
clear pos_mat;
pos_mat = getd(posL,'z');
 
[occ_hist,pos_idx] = histc(pos_mat,x_edges); % note, pos_idx will be used later!
occ_hist = occ_hist(1:end-1); % ignore last edge bin
no_occ_idx = find(occ_hist == 0);
 
occ_hist = conv(occ_hist,kernel,'same');
occ_hist(no_occ_idx) = NaN;
 
occ_hist = occ_hist .* (1/30); % convert to seconds using video frame rate
 
subplot(221);
plot(occ_hist); 
title('occupancy');
 
%
iC = 7;
spk_z = interp1(posL.tvec,getd(posL,'z'),S.t{iC},'linear');
 
clear spk_mat;
spk_mat = spk_z;
spk_hist = histc(spk_mat,x_edges);
spk_hist = spk_hist(1:end-1); % ignore last edge bin
spk_hist = conv(spk_hist,kernel,'same');
 
spk_hist(no_occ_idx) = NaN;
 
subplot(222)
plot(spk_hist);
title('spikes');
 
%
tc = spk_hist./occ_hist';
 
subplot(223)
plot(tc);
title('rate map');

You should get:

As expected from the 2-D tuning curve we obtained earlier for this neuron, we see a nicely defined place field.

We now do this for all neurons:

%% do it for all spikes
clear tc all_tc
nCells = length(S.t);
for iC = 1:nCells
    spk_z = interp1(posL.tvec,getd(posL,'z'),S.t{iC},'linear');
 
    spk_mat = spk_z;
    spk_hist = histc(spk_mat,x_edges,1);
    spk_hist = spk_hist(1:end-1);
    spk_hist = conv(spk_hist,kernel,'same');
 
    spk_hist(no_occ_idx) = NaN;
 
    tc = spk_hist./occ_hist';
 
    all_tc(iC,:) = tc;
 
end
tc = all_tc'; % for compatibility with 2d version

In 1D, it is easier to visualize all the tuning curves at once:

imagesc(tc'); ylabel('neuron #'); xlabel('z position (cm)');

It will be helpful if we have a way of detecting which neurons have clear place fields, and assigning a preferred location to them:

%% select only certain place cells
[idx,peak_idx,peak_loc] = DetectPlaceCells1D([],tc);
hold on;
plot(peak_idx,idx,'ok');

You should see:

These are the tuning curves of 90 neurons that had spikes on the left trials on the maze, with peak locations of cells deemed “place cells” indicated by the black circles.

Since we now have the peak firing locations, we can use these to order a rasterplot:

S.t = S.t(idx);
S.label = S.label(idx);
figure;
PlotSpikeRaster([],S);
xlim([5484 5492]);

This gives:

As the animal runs you can see each place cell firing (approximately) in turn.

Plotting the spikes for the data we are interested in decoding (on the pedestal):

DEC_S.t = DEC_S.t(idx);
DEC_S.label = DEC_S.label(idx);
figure;
PlotSpikeRaster([],DEC_S);
xlim([5042 5043]);

Here the animal is not on the track, yet a similar sequence of place cells is activated. Note however the different timescale! We wish to characterize these “replay” events (or more generally, “internally generated sequences”) in terms of their content (what segment of the track is represented) and their dynamics (direction, speed).

Decoding internally generated sequences

As before, we construct our Q-matrix of spike counts:

%% make Q-mat
clear Q;
binsize = 0.02;
 
% assemble tvecs
Q_tvec_edges = evt.ped_start(1):binsize:evt.ped_end(end);
Q_tvec_centers = Q_tvec_edges(1:end-1)+binsize/2;
 
for iC = length(DEC_S.t):-1:1
 
    spk_t = DEC_S.t{iC};
    Q(iC,:) = histc(spk_t,Q_tvec_edges);
    Q(iC,end-1) = Q(iC,end-1)+Q(iC,end); % histc's last bin counts values equal to the last edge; fold into final bin
 
end
Q = Q(:,1:end-1);
 
imagesc(Q_tvec_centers,1:nCells,Q);
set(gca,'FontSize',16); xlabel('time(s)'); ylabel('cell #');
 
%% only include data we care about (times on the pedestal, off track)
Q_tsd = tsd(Q_tvec_centers,Q);
Q_tsd = restrict(Q_tsd,evt.ped_start,evt.ped_end);
 
Q_tvec_centers = Q_tsd.tvec;
Q = Q_tsd.data;
nActiveNeurons = sum(Q > 0);

Now, we can decode:

p = nan(length(Q_tvec_centers),nBins);
 
DEC_kernel_pst = nan(length(DEC_kernel_labels),length(Q_tvec_centers));
 
occUniform = ones(size(tc(:,1)));
occUniform = occUniform./length(occUniform);
 
nActiveNeurons = sum(Q > 0);
 
for iT = 1:size(p,1)
 
    L = ones(1,nBins); L = L./length(L); % start with uniform likelihood
 
    if nActiveNeurons(iT) ~= 0
        % obtain likelihood p(x|s_t)
        for iB = 1:nBins
            tempProd = nansum(log(tc(iB,:)'.^Q(:,iT)));
            tempSum = exp(-binsize*nansum(tc(iB,:),2));
            L(iB) = exp(tempProd)*tempSum*occUniform(iB);
        end
 
    end
 
    L = L./nansum(L);
    p(iT,:) = L;
 
    if iT == 1
 
        % first sample or new trial, no movement prior
        continue;
 
    else
 
        % obtain movement prior p(x|x_{t-1}) using various kernels
        prev_p = p(iT-1,:); % previous decoded posterior (could optimize by carrying over to next iteration)
 
        for iK = 1:length(DEC_kernel_labels)
 
            % convolve previous posterior with this kernel to obtain its prediction;
            % each kernel starts from a fresh copy of prev_p, so kernels are
            % evaluated independently rather than compounding across loop iterations
            current_kernel = DEC_all_kernels(iK,:);
            pr = conv2(prev_p,current_kernel,'same');
            pr = pr./nansum(pr); % normalize to 1
 
            % compare prediction to the current decoded posterior
            DEC_kernel_pst(iK,iT) = nansum(p(iT,:).*pr);
 
        end
 
    end
 
end

Note that this is essentially the same one-step decoding algorithm we used previously, with an important addition: at each time step, we now loop over our different movement kernels and track how well each kernel's prediction matches the current decoded posterior. This is how we evaluate which kernel best describes the dynamics of the evolving internal sequence.
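
In equation form, the quantity stored in DEC_kernel_pst for kernel $k$ at time bin $t$ is $\sum_x P(x|\mathbf{s}_t)\,[K_k * P(\cdot|\mathbf{s}_{t-1})](x)$: the overlap between the current decoded posterior and the prediction obtained by convolving the previous posterior with movement kernel $K_k$. Here $P(x|\mathbf{s}_t)$ is the normalized Poisson likelihood $\prod_i \lambda_i(x)^{s_{i,t}} e^{-\Delta t\,\lambda_i(x)}$ (times a uniform spatial prior), with $\lambda_i(x)$ the tuning curve of cell $i$ (tc), $s_{i,t}$ its spike count in Q, and $\Delta t$ the bin size.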

Because nine different kernels are evaluated at every time bin, this will take a while to run (approximately four minutes on my laptop).

Let's plot how well the different kernels matched the data:

figure; set(0,'defaulttextinterpreter','none'); % prevent underscores in labels from being rendered as subscripts
toPlot = [1 2 5 6 7 8 9];
plot(Q_tvec_centers,DEC_kernel_pst(toPlot,:));
legend(DEC_kernel_labels(toPlot));
 
xlim([5042.3 5043.3]);

This gives:

Note that these times match those of the replay event shown above. The results are noisy, but the 15- and 40-step forward kernels (x15_fwd, x40_fwd) seem to outperform the others over the duration of the event, matching the impression from the rasterplot.
