ORACLE Special Edition: A recent preprint compares 3 models of LIP activity

May 24, 2018

In this paper, Zhao and Kording aim to determine which model best describes the responses of LIP neurons during a perceptual decision-making task.

Approach:

The authors fit 3 models to LIP data from the Roitman dataset.

[Figure: schematic firing-rate traces for the three models; the constant model is shown in the top panel]

Model 1: the “constant” (or “baseline”) model, in which the data are modeled by a firing rate that is constant within a trial but fluctuates from trial to trial (top panel). A clarification: although this model is sometimes referred to in the paper as a “baseline” model, this doesn’t refer to activity during the time before the trial starts. Instead, it refers to the constant term in the GLM; there is a single constant for the whole trial.

Model 2: the “stepping” model, in which activity jumps from a low to a high state at a time that varies from trial to trial.

Model 3: the “ramping” model, in which firing rate rises linearly with time. The slope can vary from trial to trial, but all ramps start at zero. A few details: (1) This model doesn’t actually reflect a true candidate hypothesis about LIP activity. The actual hypothesis is that the activity reflects a random walk (a diffusion model); only the average over many instances approximates a ramp. However, without knowing the actual incoming evidence on each trial (not available for this dataset), modeling the random walk isn’t possible. (2) Although the paper depicts the model schematically the way I have drawn it here (see Fig 2B), the authors actually implemented it as an exponential ramp (I am not sure why they chose this parameterization).
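For concreteness, the three rate functions can be sketched as follows. This is a minimal illustration of the models as described above, not the authors’ implementation (in particular, their ramp was exponential, and all parameter values here are invented):

```python
import numpy as np

def constant_rate(t, r0):
    """Model 1: a single firing rate for the whole trial (r0 varies trial to trial)."""
    return np.full_like(t, r0)

def stepping_rate(t, r_low, r_high, t_step):
    """Model 2: activity jumps from a low to a high state at t_step (trial-varying)."""
    return np.where(t < t_step, r_low, r_high)

def ramping_rate(t, slope):
    """Model 3: a linear, time-dependent rise starting at zero; slope varies trial to trial."""
    return slope * t

# Illustrative traces over a 1-second trial (parameter values are made up)
t = np.linspace(0.0, 1.0, 100)
traces = {
    "constant": constant_rate(t, 20.0),
    "stepping": stepping_rate(t, 10.0, 40.0, 0.4),
    "ramping":  ramping_rate(t, 30.0),
}
```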

Main take-home, as Zhao and Kording see it: the “constant” model fit the data best. If true, this would suggest an entirely different view of LIP activity: not only would it mean that neither steps nor random walks underlie the neural activity, but also that there is no time-varying change in firing rate in LIP neurons during decision-making.

Skeptics’ Corner:

  1. We had some concerns about the method for model assessment. These are nicely summarized in another blog about the paper.
  2. The selected model will not account for the trial-averaged data (nor the VarCE, a measure of spike-count variability [1]). To be concrete, the trial-averaged firing rate (and VarCE) progressively increases during decision-formation (a ramp) but the model is a flat line.  The authors acknowledge this, but fail to explain why. In our view, the fact that the constant model was the best fit for single trial activity, but fails to explain the average, exposes a critically important truth about neural activity: modulation of neural activity due to cognitive processes (like decision-making) is strongly affected by fluctuations that arise from other neural processes, which we might refer to as “Internal Backdrop“. These other processes likely include a number of things: for instance the animal’s overall state of arousal will vary quite a bit trial-to-trial, leading the baseline firing rate to fluctuate up and down. Accounting for this baseline activity is challenging, but possible [2], and allows an investigator to separate components of the neural signal due to the internal backdrop of brain activity from decision-related activity.
  3. An easy way around this problem is to include a fourth model, perhaps termed “ramp with trial-to-trial variation” [2]. I know it isn’t a great name; I am not famous for picking catchy names (remember the VarCE?). This model would allow for the reality that decision-related signals ride on top of trial-to-trial variability in baseline firing rate, and it should outperform models that account only for ramps or only for constant offsets. The authors did mention such a model in passing, but stated that it did worse than the model with only baseline fluctuations, which at first seems odd (how could adding a parameter worsen the fit?). My hypothesis is that the failure of the model with more parameters stems from overfitting. Here’s why: they fit individual time bins (again, see this blog) and then cross-validate on left-out time bins. However, estimating a ramp in a small time bin is very challenging. The ramps are meant to unfold slowly over the whole trial (and on single trials the activity isn’t a ramp at all). Plus, the point-process variance will prevent one from getting a meaningful estimate of a ramp at all. So, much of the time, one would estimate the wrong ramp, leading to bad predictions on the left-out time bins and hence the poor model fit.
  4. The Roitman dataset (available here) was appropriate in some ways, most notably that it was the dataset analyzed in a related paper examining this issue [3]. However, it was ill-suited to some of the analyses in the paper because trial counts are low for many neurons (large trial counts are critical because there are 10 conditions: 2 motion directions and 5 coherence levels). Low counts are already a challenge when estimating the mean firing rate, and they become an enormous problem when estimating firing-rate variance, as was done to compute the Fano factor in Figure 1. The paper reports FFs as large as 8, which, while not impossible, likely result from an uncertain estimate of spike-count variance.
  5. Finally, the Roitman dataset includes only the average stimulus coherence for each trial; the motion energy on individual trials is missing. This precludes actually modeling the random walk (Model 4) that is the true alternative hypothesis to the baseline and stepping models. A better test of this hypothesis could be made using stimuli for which stimulus strength is explicit, such as in this paper or this paper.
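Point 2 above is easy to demonstrate with a toy simulation (all parameter values invented): if each trial’s rate is a trial-varying baseline plus a shared ramp, the best-fitting constant model is flat within every trial, yet the trial-averaged firing rate still ramps:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_bins, dt = 200, 50, 0.02   # 1-second trials in 20 ms bins

t = np.arange(n_bins) * dt
baseline = rng.normal(20.0, 5.0, size=n_trials)   # "internal backdrop": trial-varying offset (sp/s)
slope = 15.0                                      # shared decision-related ramp (sp/s per s)
rate = baseline[:, None] + slope * t[None, :]     # trials x bins, in spikes/s
counts = rng.poisson(np.clip(rate, 0.0, None) * dt)

# The best-fitting "constant" model per trial is just that trial's mean rate,
# so it is flat within each trial by construction:
constant_fit = counts.mean(axis=1, keepdims=True) / dt

# Yet the trial-averaged firing rate (the PSTH) still rises over the trial:
psth = counts.mean(axis=0) / dt
print(psth[:5].mean(), psth[-5:].mean())  # early vs. late average rate
```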
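On point 4, a quick sanity-check simulation (again, invented numbers, not the Roitman data) shows how noisy a Fano factor estimate is with only ~10 trials, even when the true FF is exactly 1 (Poisson spiking):

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 10        # a low trial count, as for many neurons in the dataset
mean_count = 5.0     # e.g. ~50 sp/s in a 100 ms window (illustrative)
n_repeats = 1000     # many simulated "neurons" with identical statistics

# Ground truth is Poisson, so the true Fano factor is exactly 1.
counts = rng.poisson(mean_count, size=(n_repeats, n_trials))
ff_est = counts.var(axis=1, ddof=1) / counts.mean(axis=1)

print(ff_est.min(), ff_est.max())  # spread of the FF estimate across repeats
```

Even with the correct generative model, 10-trial estimates scatter widely around 1, so a few extreme values are expected by chance alone.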

Outlook

The authors emphasize the importance of simple models and highlight the importance of trial-by-trial variability when explaining data variance. We agree, and we believe that explanations of neural dynamics during decision-making must take the trial-by-trial internal backdrop into account. We hope that the authors will therefore consider models that combine trial-to-trial baseline variability with stimulus-evoked dynamics. A fruitful avenue for model comparison would also be to go beyond the Roitman dataset; while Roitman and Shadlen recorded what was at the time a very large number of neurons, other, larger datasets are now available that may make it easier to arbitrate between models.

References

1. Churchland, A. K. et al. Variance as a signature of neural computations during decision making. Neuron 69, 818-831, doi:10.1016/j.neuron.2010.12.037 (2011).

2. Musall, S., Kaufman, M. T., Gluf, S. & Churchland, A. K. Movement-related activity dominates cortex during sensory-guided decision making. bioRxiv (2018).

3. Latimer, K. W., Yates, J. L., Meister, M. L., Huk, A. C. & Pillow, J. W. Single-trial spike trains in parietal cortex reveal discrete steps during decision-making. Science 349, 184-187, doi:10.1126/science.aaa4056 (2015).

