Park et al.’s multi-parameter statistical model reveals a simple model that can decode decisions from neural activity
September 24, 2014
My lab met this week to discuss a new paper by Park, Meister, Huk and Pillow, recently published in Nature Neuroscience. They leveraged neural data generated via a tried-and-true approach: measuring the responses of neurons in the parietal cortex during a random dot motion decision task. What’s new here is their analysis. Unlike previous work, which has focused on normative (what the brain SHOULD do) or mechanistic models, these folks took a statistical approach. They said, look, we just want to describe the responses of each neuron, taking into account the inputs on that particular trial. And they wanted to do this on a trial-by-trial basis, no small feat, since single-trial spike trains are highly variable.
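If you want a feel for what fitting this kind of model involves, here is a minimal sketch in Python. To be clear, this is not the authors' model (theirs has many more covariates and far more parameters); the input signal, filter shape, and learning rate below are all made up for illustration. It just shows the general recipe: build a design matrix of time-lagged task inputs, assume spike counts are Poisson with a log-linear rate, and fit the filter weights by climbing the likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)

n_bins, n_lags = 2000, 10
motion = rng.standard_normal(n_bins)  # stand-in for a time-varying task input

# Design matrix of time-lagged copies of the input, so the fitted
# weights form a temporal filter over the last n_lags bins.
X = np.column_stack([np.roll(motion, lag) for lag in range(n_lags)])
X[:n_lags] = 0.0  # discard wrap-around rows from np.roll

true_w = 0.5 * np.exp(-np.arange(n_lags) / 3.0)  # made-up ground-truth filter
rate = np.exp(X @ true_w - 1.0)                  # log-linear Poisson rate per bin
y = rng.poisson(rate)                            # simulated single-trial spike counts

# Fit by gradient ascent on the Poisson log-likelihood (it's concave,
# so a small fixed step size is enough for a demo).
w = np.zeros(n_lags)
for _ in range(2000):
    grad = X.T @ (y - np.exp(X @ w - 1.0)) / n_bins
    w += 0.1 * grad

print(np.round(w, 2))  # should land close to true_w
```

Running this recovers a filter close to true_w from a single simulated "trial" of spike counts; the paper's full model does the analogous thing with many input signals per neuron at once.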
They did this with success: as you can see in the example (right), the model firing rate (yellow) approximates the true single-trial response (black). But capturing the detailed, time-varying responses of many trials required a model with a lot of parameters. Like, a whole lot. This seemed discouraging at first, but then again, the goal of these models was not to inform us about the nature of neural mechanisms, but instead to figure out which of many incoming signals modulate each neuron, and how much they do so at different moments in time. Once they fit the model, they used it to decode the data, and then looked at the time course of this decode for a whole bunch of neurons. What they realized at this point is cool: the very complex model could be distilled to something much simpler (left) that could still predict trial-to-trial choices. By integrating the firing rates of a pool of neurons using leaky integrators with two time constants (way simple!), they could predict choice almost as accurately as the full-blown model. The net effect is that the analysis ended up telling us something interesting about how parietal cortex neurons mix multiple inputs, and also how they might be decoded simply.
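To make that simplified decoder concrete, here is a hedged sketch, again not the paper's exact implementation: the pooling rule, the two time constants, and the readout weights are all placeholder values. It just shows the structure: pool the population rate, run it through a fast and a slow leaky integrator, and take the choice from a weighted sum at the end of the trial.

```python
import numpy as np

def leaky_integrate(r, tau, dt=0.01):
    """Run dx/dt = (-x + r(t)) / tau with a forward Euler step."""
    x = np.zeros_like(r)
    for t in range(1, len(r)):
        x[t] = x[t - 1] + dt * (-x[t - 1] + r[t]) / tau
    return x

def decode_choice(rates, tau_fast=0.05, tau_slow=1.0, w_fast=1.0, w_slow=1.0):
    """rates: (n_neurons, n_bins) single-trial firing rates.
    Time constants and weights are illustrative placeholders."""
    pooled = rates.mean(axis=0)               # pool across the population
    fast = leaky_integrate(pooled, tau_fast)  # tracks recent activity
    slow = leaky_integrate(pooled, tau_slow)  # accumulates over the trial
    evidence = w_fast * fast[-1] + w_slow * slow[-1]
    return 1 if evidence > 0 else -1          # predicted choice

# Toy example: 20 neurons, a 1 s trial in 10 ms bins, with a small
# positive drift standing in for choice-related activity.
rng = np.random.default_rng(1)
rates = 0.2 + rng.standard_normal((20, 100))
print(decode_choice(rates))  # -> 1
```

The appeal of this kind of readout is exactly the point of the paper's final analysis: it trades the full model's huge parameter count for a handful of interpretable quantities, while giving up very little choice-prediction accuracy.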