==== Introduction: Principles of careful data analysis ====

This module sketches some of the overall principles that inform careful data analysis.

If this is your first contact with analysis of neural data and/or with MATLAB, some of the more technical points below may not resonate with you. This is OK -- for now, focus on the ideas as you read through. Later modules will refer back to these principles, so as you build up your experience you can revisit this page.

Conversely, if you are an experienced analyst, what follows will likely be familiar. In either case, please feel free to contribute your thoughts, questions, and suggestions, either by using the Discussion panel below or by editing the wiki directly!

==== Principles ====

=== 1. Garbage in, garbage out ===
  
Analysis will be meaningless if performed on bad data. Even if you start out with good data, there are many analysis steps that have the power to corrupt it.
  
//An important corollary of this principle is that you need to determine at every step whether you are dealing with garbage or not//. Two habits that help with this are visualization (explored in [[analysis:nsb2014:week5|Module 5]]) and unit testing (put simply, the practice of testing specific pieces of functionality or "units"; employed throughout the modules).
  
To see why this principle is critical, consider a complex multistep experimental procedure such as surgically implanting a recording probe into the brain. In this setting, the surgeon //always// verifies the success of the previous step before proceeding. One would never attempt to insert a probe without making sure the dura is removed first, or apply dental cement to the skull without first making sure it is dry. Apply the same mindset to analysis and confirm the success of every step before proceeding!
  
=== 2. Plan ahead (from raw data to result) ===
  
Before the start of data collection, you should identify the steps in your data processing "pipeline" -- that is, the flow from raw data to the figures in the resulting publication. Doing this can often highlight key dependencies and potentially important controls that help you collect the data such that you can actually test what you set out to do.
  
This sort of planning is especially important when performing experiments with long timelines that are not easily changed, such as when chronically implanting animals for in vivo recording, where it may take up to two months to collect data from a single animal. For smaller projects, or those with faster iteration times (e.g. a new slice every day), you can be more flexible.
  
There are two steps to this planning process:

**First**, think in terms of data, and transformations on those data, to create a schematic that illustrates your analysis workflow at a conceptual level.

For instance, to determine whether the number of sharp wave-ripple complexes (SWRs; these are candidate "replay" events in the hippocampus) that occur depends on an experimental manipulation, a possible analysis workflow might be represented as follows (generated with the [[https://www.dokuwiki.org/plugin:graphviz|DokuWiki plugin]] for [[http://www.graphviz.org/|GraphViz]]):
  
<graphviz>
</graphviz>
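As a hedged sketch of what such a schematic can look like: only ''LoadCSC()'', ''FilterLFP()'', and the data type names come from this module's text; the detection and counting node names below are made up for illustration, and the layout is illustrative only.

<graphviz>
digraph swr_workflow {
  // sketch of the workflow described in the text; names beyond
  // LoadCSC()/FilterLFP() and the data types are hypothetical
  rankdir=LR;
  node [shape=box];
  "raw LFP" -> "LoadCSC()" -> "[TSD]" -> "FilterLFP()" -> "[TSD] filtered";
  "[TSD] filtered" -> "DetectSWR()" -> "[IV] SWR events" -> "count per trial" -> "statistical test";
}
</graphviz>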
  
The above workflow shows how raw local field potential (LFP) data is first loaded (by the ''LoadCSC()'' function) and then filtered (''FilterLFP()''). Note that at this stage, you can simply make up function names, as long as they are descriptive (see Principle 3, below). Next, SWR events are detected from the filtered LFP, and the number for each trial is counted before applying a statistical test.
  
The square brackets such as %%[TSD]%% refer to standardized data types, introduced in [[analysis:nsb2014:week2|Module 2]]. Briefly, a TSD object describes one or more time-varying signals (such as LFP or videotracker data), an IV object describes interval data (such as SWR events, which have a start and end time as well as some properties such as their power), and a TS object describes timestamps (such as spike times). By standardizing the form in which these data types are handled, we can more easily implement unit tests and write clean, modular code.
  
**Second**: based on a data analysis workflow such as the above, write out some example pseudocode that would implement the analysis in MATLAB. For the workflow above, this might look something like:
  
<code matlab>
</code>
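To make this concrete, here is one hedged possibility for such pseudocode. Only ''LoadCSC()'' and ''FilterLFP()'' are named in the text above; the detection and counting functions, the filename, and all parameter values are made up for illustration.

<code matlab>
% illustrative pseudocode only -- function names other than LoadCSC() and
% FilterLFP(), the filename, and all parameter values are hypothetical
cfg = []; cfg.fc = {'my_session_CSC4.ncs'};   % which file to load
csc = LoadCSC(cfg);                   % raw LFP --> [TSD]

cfg = []; cfg.f = [140 220];          % example ripple-band passband (Hz)
cscF = FilterLFP(cfg, csc);           % filtered LFP --> [TSD]

cfg = [];
swr = DetectSWR(cfg, cscF);           % hypothetical detector --> [IV]

nSWR = CountByTrial(swr, trial_iv);   % hypothetical: SWR count per trial

p = ranksum(nSWR(manipulated), nSWR(control));  % compare conditions
</code>

Note how each step takes a ''cfg'' struct plus the output of the previous step, so the pseudocode mirrors the schematic node for node.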
  
Note that each analysis step is implemented by a function, with a ''cfg'' struct to specify some parameters of the transformation (e.g. the frequency band to filter). The overall workflow is accomplished by calling the appropriate functions on evolving data types. Perhaps some of the functions you need already exist, or you may need to write some of them. Either way, making the analysis steps explicit in this way provides a good starting point for writing well-organized code.
 + 
=== 3. Use good programming practice ===

There are [[http://stackoverflow.com/questions/550861/improving-code-readability|many]] resources and opinions on what constitutes good programming practice. A few of the most important are:

  * //Don't repeat yourself//. Implementing each piece of functionality only once means your code will be easier to troubleshoot, re-use, and extend -- as well as easier to read.
  * //Unit testing//. Provide test scenarios for key pieces of code where you know what the expected outcome is. For data analysis, this commonly involves generating artificial data such as white noise or Poisson spike trains with a certain average firing rate. These tests will be extremely helpful in interpreting your data later, and for checking that changes you make to the code have not broken its functionality.
  * //Readability//. Generally, whatever analysis you are doing, you will probably have to do it again. Maybe on the same data after you make a change to the code, maybe after you collect more data. Maybe tomorrow, maybe next year. It is tempting to assume you will remember what you did and why, but this will not always be the case! Plus, even if //you// do, it's likely someone else (such as your adviser, or a collaborator) will have to run and understand your code. Whether or not they can will reflect on you.
  * //Consistency//. Use consistent naming schemes for different kinds of variables and functions, and always place constants and parameters at the start of each file.
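To illustrate the unit testing idea, here is a hedged sketch in MATLAB: generate a Poisson spike train with a known mean rate, then check that a simple rate estimate recovers it. The specific numbers and tolerance are made up for illustration.

<code matlab>
% sketch of a unit test on artificial data (numbers are illustrative only):
% a Poisson spike train has exponentially distributed inter-spike intervals
rate = 20;  T = 1000;                       % 20 Hz mean rate, 1000 s of data
isi = -log(rand(ceil(2*rate*T),1))/rate;    % exponential ISIs, mean 1/rate
spk = cumsum(isi);                          % spike times
spk = spk(spk < T);                         % keep spikes within [0, T]

rate_est = length(spk)/T;                   % simple firing rate estimate
assert(abs(rate_est - rate)/rate < 0.05, 'rate estimate off by more than 5%');
</code>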
=== 4. Write to share ===

A desirable endpoint of successful analysis is that you can share the code and the raw data with anyone, and they will be able to generate all the figures and results in the paper.
  
This means, among other things, that:
  
  * Don't hard-code the locations of any files. Follow the database format and file naming conventions, so that it is sufficient to specify the root folder where the data are located.
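As a minimal sketch of this idea (the folder layout, session ID, and file extension below are hypothetical; ''fullfile()'' and ''dir()'' are standard MATLAB):

<code matlab>
% illustrative only: the data root is the single machine-specific setting,
% and all other paths are derived from it (folder layout is hypothetical)
data_root = 'C:\data';                    % the only line to change per machine
session = 'R042-2014-06-26';              % hypothetical session ID
fd = fullfile(data_root, session);        % session folder, built not hard-coded
csc_files = dir(fullfile(fd, '*.ncs'));   % find LFP files by naming convention
</code>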
  
=== 5. Use protection ===
  
Disk, computer, and connection failures happen, usually when you are least prepared. Take steps to ensure that you don't lose more than a couple of hours of work, and that you NEVER lose data!
  
analysis/nsb2014/week0.1403819912.txt.gz · Last modified: 2018/07/07 10:19 (external edit)