James Roach and Simon Musall, two postdocs in my lab, took the lead on this write-up.

This new article by Adam Calhoun, Jonathan Pillow & Mala Murthy leverages detailed behavioral data during fruit fly courtship to demonstrate that sensorimotor transformation is highly state dependent. The authors combine a hidden Markov model (HMM) with generalized linear models (GLMs) to automatically identify different internal states from behavioral data. Each state has its own rules (transfer functions) that govern how sensory inputs drive different types of fly song. Lastly, they use optogenetic stimulation to test whether song-promoting neurons might instead be causing specific internal state transitions.

Big Question: How can we identify different internal states and understand how they shape the transformation from sensory input to motor output? This is a question that goes far beyond flies and has broad relevance.

Approach: The authors analyze 2765 minutes of fly movies featuring 276 fly couples. Using automated methods, they identify a set of behavioral features, termed ‘feedback cues’, that the male flies could get from themselves and a female partner. To identify internal states, they developed a GLM-HMM that predicts different mating songs based on feedback cues and song history. The model predicted held-out data well, far better than a traditional GLM approach. In contrast to a traditional HMM, the GLM-HMM also uses feedback cues to determine when to switch between internal states, and it was much better at predicting when flies transition between different song types.
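To make the model structure concrete, here is a minimal, hand-rolled sketch of a GLM-HMM likelihood computation (our own illustrative parameterization on random data, not the authors' code or fitted weights): each state has its own emission GLM mapping feedback cues to song-type probabilities, and the cues also modulate the state transitions.

```python
import numpy as np

rng = np.random.default_rng(0)

K, D, C = 3, 5, 4  # internal states, feedback-cue features, song types (incl. no song)

# Hypothetical parameters (the paper fits these with EM; these are random stand-ins).
W_emit = rng.normal(size=(K, C, D))          # state k maps cues x -> song logits
W_trans = rng.normal(size=(K, K, D)) * 0.1   # cues also modulate state switching

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def forward_loglik(X, y):
    """Log-likelihood of a song sequence y given cues X, via the forward algorithm."""
    log_alpha = np.full(K, -np.log(K))       # uniform initial state distribution
    ll = 0.0
    for t in range(len(y)):
        emit = softmax(W_emit @ X[t])        # (K, C): p(song | state, cues)
        log_alpha = log_alpha + np.log(emit[:, y[t]])
        norm = np.logaddexp.reduce(log_alpha)
        ll += norm                           # accumulate log p(y_t | y_1..t-1)
        alpha = np.exp(log_alpha - norm)     # posterior over states
        trans = softmax(W_trans @ X[t])      # (K, K): cue-dependent transitions
        log_alpha = np.log(alpha @ trans + 1e-300)
    return ll

X = rng.normal(size=(50, D))                 # 50 time bins of feedback cues
y = rng.integers(0, C, size=50)              # observed song-type sequence
ll = forward_loglik(X, y)                    # finite log-likelihood
```

A plain HMM is the special case where `W_emit` and `W_trans` ignore the cues; the GLM-HMM's gain comes from letting cues shape both emissions and transitions.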

The authors then dug deep to better understand what defines the states and how they differ in terms of the relationship between feedback cues and song production. They find that states aren’t simply defined by the current song or the incoming feedback cues. Instead, what defines a state is the exact relationship between the feedback cues and song production: each state could produce diverse song outputs, but the ways that feedback cues predicted which song was produced were largely different across states.

Finally, they optogenetically manipulated three cell types in the brain and observed that stimulation of one type, pIP10, drove the animal into a “close” state. There is a subtlety here: the animal didn’t just sing more; instead, switching internal states made some song types more likely in response to particular feedback cues, ruling out a much simpler model (a summary is below).

[Figure: summary schematic]

Take homes:

  1. Sensorimotor transformation is highly state dependent, and different feedback cues can lead to state changes.
  2. One can identify states from behavioral data alone in an unsupervised way. These differ from experimenter-imposed states like hunger and satiety because the animal engages in them voluntarily, and it switches states on a fairly rapid timescale (e.g., seconds).
  3. States can have highly overlapping behavioral repertoires, both in terms of the sensory cues that are present and the song outputs that are observed.
  4. Behavioral states are not fixed, as we often assume, but vary over time. What is really novel here is the use of an HMM to identify latent states, as opposed to experimenter-defined ones like satiety and hunger. Assuming an animal is in a fixed state throughout an experiment can lead us astray; we can miss important information about how animals interact with their environments.

Skeptics’ corner:

We were surprised that the addition of so many new parameters doesn’t improve performance of the GLM-HMM relative to the HMM more. A closer comparison between the HMM and GLM-HMM (e.g., in Fig. 3a) would have helped us understand how the addition of state-dependent emission GLMs improves sequence prediction compared to a fixed-emission HMM. Also, autocorrelations seem to be a strong factor in the success of the HMM and mixed models. It would be interesting to see how the standard GLM would perform with an autoregressive term added to it.

Activation of pIP10 promotes a ‘close’ state transition, and yet the animal does LESS of the sine song. This is intriguing because the sine song is the most probable output in the close state, so this divergence seems counter-intuitive (Fig. 2). In a way, this is exciting! It reiterates that the fly is in a state NOT because of what it is doing now, but because of how the feedback cues shape the behavior. But we still found the magnitude of that difference confusing. In a related point, how do males behave beyond song production in each state? Does pIP10 stimulation lead to the male moving as if it is in the “close” state even if it is far away from the female?

Manipulating neural activity to induce state transitions will likely be a widely used and informative probe into animal brain states. Interestingly, this will lead to brain states that are inappropriate for a given context. We think of this as being a bit like a multisensory “conflict condition”: the brain is telling the animal it is in one state, but everything around the animal (e.g., its distance from the female) might be more consistent with a different state. How should we be thinking about the fact that the optogenetics push the animal into a conflict condition? Is this an off-manifold perturbation? 

Outlook:

The term ‘feedback cues’ combines self-initiated components, like male velocity, with externally imposed components, like female velocity. It would be interesting to separate those out further to better understand how these different components influence state transitions and song production. Functionally grouping ‘feedback cues’ might also provide additional insight into which features they influence the most.

More emphasis on the state-transition GLMs would be very interesting, to better understand how transitions are guided by sensory feedback cues. The kernels shown in Fig. S4 indicate distinct patterns of high ethological significance. Highlighting these more would further demonstrate the usefulness of the GLM-HMM approach in general.

We wished there were a low-dimensional summary that allowed us to more easily visualize the collection of behaviors in each state. This perhaps underscores a general problem in the field: when you probe behavior with unsupervised learning tools, you end up with results that are deeply informative and very powerful, but hard to summarize. We struggled with this as well when connecting video to neural activity using unsupervised methods. I’m hoping folks will have emerging ideas about how to do this.

The writing of this post was led by Chaoqun Yin, a graduate student in the Churchland lab. This paper is actually not on bioRxiv (note to authors: please put preprints there).

Today’s paper is Cortical Areas Interact through a Communication Subspace, by João D. Semedo, Amin Zandvakili, Christian K. Machens, Byron M. Yu, and Adam Kohn. In this paper, the authors argue that different subspaces are used for V1 intra-areal communication vs. V1-V2 communication. This mechanism may help route selective activity to different areas and reduce unwanted co-fluctuations.

Approach:

Data Collection: The neuronal data Semedo et al. used were recorded from three anesthetized macaque monkeys. They measured neuronal activity as spike counts in 100 ms bins during the presentation of drifting sinusoidal gratings. All analyzed neurons had overlapping receptive fields and were located in the output layers of V1 and the middle layers of V2 (the primary downstream target of V1).

Neuron Grouping: To distinguish the V1 intra-areal interaction from the V1-V2 interaction, the authors divided V1 neurons into source and target populations, matching the target V1 population to the neuron count and firing-rate distribution of the V2 population.

Subspace Analysis: To test whether the activities of target V1 and V2 depend only on a subspace (“predictive dimensions”) of source V1 population activity, Semedo et al. used reduced-rank regression (RRR), which constrains the regression mapping to a low-dimensional space, on the source V1 population. After identifying the predictive dimensions, a natural next question is this: do the target V1 predictive dimensions align with the V2 predictive dimensions? That is, do the V1-V1 and V1-V2 interactions share the same subspace? To address this, Semedo et al. removed neuronal activity along the target V1 or V2 predictive dimensions and tested how the predictive performance changed across areas.
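Reduced-rank regression has a simple closed form, which makes the idea easy to sketch (our own toy implementation on simulated data, not the authors' pipeline): fit the full-rank least-squares map, then project its predictions onto their top singular dimensions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, q, rank = 500, 30, 20, 3   # trials, source V1 neurons, target neurons, rank

# Simulated source activity driving the target population through a low-rank map.
X = rng.normal(size=(n, p))
B_true = rng.normal(size=(p, rank)) @ rng.normal(size=(rank, q))
Y = X @ B_true + 0.5 * rng.normal(size=(n, q))

def reduced_rank_regression(X, Y, rank):
    """RRR: constrain the regression map to `rank` predictive dimensions."""
    B_ols = np.linalg.lstsq(X, Y, rcond=None)[0]           # full-rank solution
    _, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
    V = Vt[:rank].T                                        # top output dimensions
    return B_ols @ V @ V.T                                 # rank-constrained map

B_rrr = reduced_rank_regression(X, Y, rank)
# Predictions Y_hat = X @ B_rrr now live in a `rank`-dimensional subspace.
```

The subspace-removal test in the paper then amounts to projecting source activity off these predictive dimensions and asking how much cross-area prediction survives.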

Main take-home:

[Figure: communication subspace schematic]

Surprisingly, V2 activity is only related to a small subset of population activity patterns in V1 (the “source” populations). Further, these patterns are distinct from the most dominant V1 dimensions.

The predictive dimensions of the V1-V1 and V1-V2 interactions are largely non-overlapping. This implies that the V1-V1 and V1-V2 interactions may leverage different subspaces (see the figure at right). Why might such a configuration occur? The authors propose that it would allow V1 to route selective activity to different downstream areas and reduce unwanted co-fluctuations in downstream areas (related to an idea in Kaufman et al., 2014).

Skeptics Corner: A small one: in the paper, Fig. 2B shows that the source V1 population activity can predict V2 activity about as well as it can predict target V1 activity. This was puzzling: we would have thought it easier to predict neural activity in the same area, because neurons in the same area may be more interconnected than neurons in different areas. We aren’t quite sure about the anatomy here. Perhaps the V1-V1 and V1-V2 connections are similar in number and strength when all these neurons share overlapping receptive fields?

Outlook:

This paper raises an intriguing mechanism for inter-areal interaction: one brain area can project selective information to specific downstream areas through different communication subspaces. Semedo et al. tested this idea on a dataset recorded in anesthetized monkeys. We wonder if this mechanism can be found in awake, and even free-moving animals. Moreover, the authors mainly used a passive grating watching task. But if this is a general mechanism for inter-areal interaction, it would be interesting to look for similar phenomena in more complicated visual tasks and especially multisensory tasks.

Finally, the authors plotted the neuronal activity in a neural space where each axis represents the activity of one neuron. In this case, the weights of all neurons can be represented as a regression dimension across the neural space. If we can keep recording the same neuron group for a long period, we would get the long-term changes of these weights. Then maybe we can use the weights as axes to get a weight space, which shows the change of each neuron’s contribution to the population activity.

References:

  1. Semedo, J. D., Zandvakili, A., Machens, C. K., Yu, B. M., & Kohn, A. (2019). Cortical areas interact through a communication subspace. Neuron, 102(1), 249-259.
  2. Kaufman, M. T., Churchland, M. M., Ryu, S. I., & Shenoy, K. V. (2014). Cortical activity in the null space: permitting preparation without movement. Nature Neuroscience, 17(3), 440-448.

 

We’ve all had the experience of botching an easy decision. Laboratory subjects, both human and animal, also sometimes make the wrong choice when categorizing stimuli that should be really easy to judge. We recently wrote a paper about this which is on biorxiv. We argued that these lapses are not goof-ups but instead reflect the need for subjects to explore an environment to better understand its rules and rewards. We also made a cake about this finding, which was delicious.

We were happy to hear that Jonathan Pillow‘s lab picked our paper to discuss in their lab meeting. Pillow’s team has, like us, been enthusiastic about new ways to characterize lapses, and in fact has a rather interesting (and complementary) account which you can read if interested. We really enjoyed reading this thoughtful blog by Zoe Ashwood about their lab meeting discussion.

They raised a few concerns which we address below:

Concern #1: The first concern had to do with the probability of attending (p_{attend}), the parameter that determined the overall rate of lapses in the traditional inattention model. We would have liked to see further justification for keeping p_{attend} the same across the matched and neutral experiments and we question if this is a fair test for the inattention model of lapses. Previous work such as Körding et al. (2007) makes us question whether the animal uses different strategies to solve the task for the matched and neutral experiments. In particular, in the matched experiment, the animal may infer that the auditory and visual stimuli are causally related; whereas in the neutral experiment, the animal may detect the two stimuli as being unrelated. If this is true, then it seems strange to assume that p_{attend} and p_{bias} for the inattention model should be the same for the matched and neutral experiments.
Our response: In the inattention model, p_{attend} represents the probability of not missing the stimulus; hence it should be influenced by (a) the animal’s attentional state before experiencing the stimulus and (b) the bottom-up salience that allows the stimulus to “pop” into the animal’s attention. Since the matched and neutral stimuli are interleaved, and both consist of equally salient multisensory events, we reasoned that p_{attend} should be the same on these trials. Also note that even on matched trials, the auditory and visual events are not presented synchronously (the two event streams are independently generated). Surprisingly, this does little to deter a causal inference: animals integrate nonetheless (see Raposo, 2012). So from the point of view of the animal, a trial isn’t obviously a neutral trial right from the outset. In keeping with that, we found that animals were influenced by stimuli over the entire course of the trials, even for neutral trials (see psychophysical kernels below).
[Figure: psychophysical kernels for matched and neutral trials]

But we agree about the different strategies: *after* the animal attends to the stimulus and estimates the event rates, it could potentially use this information to infer that a trial is neutral and discard the irrelevant visual information (a “causal inference” strategy akin to Körding et al.) rather than integrating it (a “forced fusion” strategy). However, this retrospective discarding differs from inattention because it requires knowledge of the rates, and it doesn’t produce lapses; instead it affects \sigma. Causal inference predicts comparable neutral and auditory sigmas, while forced fusion predicts neutral values of \sigma that are higher than auditory ones, due to inappropriately integrated noise. Indeed, we see comparable neutral and auditory values of \sigma (and values of \beta too), suggesting causal inference.
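For concreteness, the inattention model under discussion can be sketched as a psychometric function (our notation; p_attend and p_bias as defined above): with probability p_attend the subject judges the stimulus, otherwise it guesses with bias p_bias, which pins the asymptotes away from 0 and 1.

```python
import numpy as np
from math import erf

phi = np.vectorize(lambda z: 0.5 * (1 + erf(z / 2**0.5)))  # standard normal CDF

def inattention_psychometric(x, mu, sigma, p_attend, p_bias):
    """P(choose right) under the inattention model: attend and judge the
    stimulus with probability p_attend, otherwise guess with bias p_bias."""
    return p_attend * phi((x - mu) / sigma) + (1 - p_attend) * p_bias

# Lapses are pinned: the lower asymptote is (1 - p_attend) * p_bias and the
# upper asymptote is p_attend + (1 - p_attend) * p_bias.
x = np.linspace(-3, 3, 9)
p = inattention_psychometric(x, mu=0.0, sigma=1.0, p_attend=0.9, p_bias=0.5)
```

Because p_attend multiplies the whole stimulus-driven term, holding it fixed across matched and neutral trials forces the two curves to share their overall lapse rate, which is exactly the assumption under debate.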

The second concern had to do with how asymmetric lapses could be accounted for in our new exploration model:
Concern #2: When there are equal rewards for left and right stimuli, is there only a single free parameter determining the lapse rates in the exploration model (namely \beta)? If so, how do the authors allow for asymmetric left and right lapse rates for the exploration model curves of Figure 3e? (That is, the upper and lower asymptotes look different for both the matched and neutral curves despite equal left and right reward, yet the exploration model seems able to handle this. How does the model do this?)

Our response: In the exploration model, in addition to \beta, the lapse rates on either side are determined by the *subjective* values of the left and right actions (rL & rR), which must be learned from experience and hence can differ even when the true rewards are equal, permitting asymmetric lapse rates. When one of the rewards is manipulated, we only allow the corresponding subjective value to change. Since there is an arbitrary scale factor on rR & rL and we only ever manipulate one of the rewards, we can set the un-manipulated reward (say, rL) to unity and fit two parameters to capture lapses: \beta and rR in units of rL.
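A minimal sketch of how a softmax over subjective action values yields asymmetric lapses (our illustrative parameterization, not the paper's exact fitting code): the perceptual posterior scales the subjective values, and the two asymptotic lapse rates come out as 1/(1 + exp(\beta·rR)) and 1/(1 + exp(\beta·rL)).

```python
import numpy as np
from math import erf

phi = np.vectorize(lambda z: 0.5 * (1 + erf(z / 2**0.5)))  # standard normal CDF

def exploration_psychometric(x, mu, sigma, beta, rL=1.0, rR=1.0):
    """P(choose right) when choices are a softmax over expected subjective rewards."""
    p_right = phi((x - mu) / sigma)   # posterior that 'right' is the rewarded side
    qR = p_right * rR                 # expected subjective value of choosing right
    qL = (1 - p_right) * rL           # expected subjective value of choosing left
    return 1.0 / (1.0 + np.exp(-beta * (qR - qL)))

# With rR != rL the two asymptotic lapse rates differ, even though beta is a
# single shared parameter: no dedicated lapse parameters are needed.
p = exploration_psychometric(np.array([-10.0, 10.0]), 0.0, 1.0, beta=5.0, rL=1.0, rR=0.5)
upper_lapse = 1 - p[1]   # = 1/(1 + exp(beta * rR))
lower_lapse = p[0]       # = 1/(1 + exp(beta * rL))
```

Manipulating one reward then only moves the corresponding subjective value, shifting a single asymptote, which is the signature behavior described above.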

The final concern had to do with \beta, the parameter that determined the overall rate of lapses in the exploration model:
Concern #3: How could the uncertainty parameter \beta be calculated by the rat? Can the empirically determined values of \beta be predicted from, for example, the number of times the animal has seen the stimulus in the past? And what were some typical values of \beta when the exploration model was fit to data? How exploratory were the rats in the different experimental paradigms considered in this paper?

Our response: From the rat’s perspective, \beta can arise naturally as a consequence of Thompson sampling from action-value beliefs (Supplementary Fig. 2; also see Gershman, 2018), yielding a \beta inversely proportional to the root sum of squared variances of the action-value beliefs. This should also naturally depend on the history of feedback: if the animal receives unambiguous feedback (as on sure-bet trials), then these beliefs should be well separated, yielding a higher \beta. Supplementary Fig. 2 simulates this for three levels of sensory noise, for a particular sequence of stimuli and a Thompson sampling policy.

[Figure: Supplementary Fig. 2 simulation]
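A quick Monte-Carlo sketch (ours, with made-up belief parameters) of the Thompson-sampling intuition: choice stochasticity is set by the spread of the value beliefs, so narrow beliefs behave like a high effective \beta and wide beliefs like a low one.

```python
import numpy as np

rng = np.random.default_rng(2)

def thompson_choice_prob(mR, mL, sR, sL, n=100_000):
    """Monte-Carlo P(choose right) under Thompson sampling: draw one sample
    from each Gaussian value belief and pick the larger."""
    qR = rng.normal(mR, sR, n)
    qL = rng.normal(mL, sL, n)
    return (qR > qL).mean()

# Narrow beliefs (unambiguous feedback history) -> near-deterministic choices;
# wide beliefs -> exploratory choices. The effective inverse temperature
# scales as 1 / sqrt(sL**2 + sR**2).
p_narrow = thompson_choice_prob(1.0, 0.8, 0.05, 0.05)  # ~1.0: exploit
p_wide = thompson_choice_prob(1.0, 0.8, 0.5, 0.5)      # ~0.61: explore
```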

On unisensory trials the \beta values were ~5, meaning that the average uncertainty (S.D.) for unit expected rewards was ~0.2; this was reduced to ~0.14 on multisensory trials. The near-perfect performance on sure-bet trials suggests negligible uncertainty/exploration on those trials. These values remained unchanged for reward/neural manipulations, which only affected rR/rL depending on the side.

 

ORACLE: June 7, 2019


Today’s paper is, “Simultaneous mesoscopic and two-photon imaging of neuronal activity in cortical circuits”, by Barson D, Hamodi AS, Shen X, Lur G, Constable RT, Cardin JA, Crair MC & Higley MJ. We read this in our lab meeting, and James Roach took the lead on writing it up.

This article brings together two powerful experimental approaches for calcium imaging of cortical activity: 1) using viral injections into the transverse sinus to achieve high GCaMP expression throughout the cortex and thalamus, and 2) using a right-angle prism and two orthogonal imaging paths to simultaneously capture mesoscale activity from the dorsal cortex and 2-photon single-cell activity.

Big Question: How diverse is the cortex-wide functional connectivity of neurons within a local region of the cortex and do these patterns depend on cell type?

Approach: The authors performed mesoscale calcium imaging paired with 2-photon imaging from the primary somatosensory cortex (S1) in awake, behaving mice. Leveraging the novel technical approaches that they introduce here (and in [1]), the authors quantify the relationship between the activity of individual cells and populations across the cortex. Using a method similar to spike-triggered averaging (the cell-centered network; CCN), they show that the activity-defined correlation patterns of S1 pyramidal neurons are highly diverse (right*). Interestingly, neurons with similar cortex-wide functional connectivity are not necessarily spatially organized in S1.
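The cell-centered-network idea can be sketched as correlating a single cell's 2-photon trace with every pixel of the mesoscale movie (a simplified stand-in for the authors' event-triggered analysis, run here on simulated data):

```python
import numpy as np

rng = np.random.default_rng(3)
T, H, W = 2000, 16, 16                   # time bins, mesoscale image height/width

frames = rng.normal(size=(T, H, W))      # simulated mesoscale dF/F movie
cell = rng.normal(size=T)                # simulated 2-photon trace of one cell

def cell_centered_network(frames, cell):
    """Correlate one cell's activity with every mesoscale pixel, analogous to
    a spike-triggered average of the cortex-wide movie."""
    c = (cell - cell.mean()) / cell.std()
    F = (frames - frames.mean(axis=0)) / frames.std(axis=0)
    return (F * c[:, None, None]).mean(axis=0)   # (H, W) correlation map

ccn = cell_centered_network(frames, cell)        # one map per recorded cell
```

Comparing such maps across simultaneously recorded neighbors is what reveals the diversity (or lack of spatial organization) described above; here, with independent noise, the map is flat up to sampling error.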

To examine whether linkages to cortical networks differ based on cell type and behavioral role, animals expressing td-Tomato in VIP neurons were injected pan-neuronally with GCaMP. VIP+ neurons were far more homogeneous in their responsiveness to whisking and running behaviors than presumptive pyramidal cells. For both neuron types, membership in cortical networks was predicted by whether the cell was correlated with whisking or running behaviors. These combined effects lead to VIP+ neurons being far less diverse than non-VIP+ neurons in functional cortical connectivity.

Main take-home: Within a small patch of cortex, the diversity of functional relationships neurons form across the cortex is surprisingly high, and the local spatial arrangement of neurons does not follow a “corticotopic map”: two neighboring cells can differ greatly in functional connectivity. The behavioral tuning of a neuron determines, or is the result of, its membership within a functional network. VIP+ interneurons have a reduced diversity of functional connectivity, but this may be a result of VIP+ neurons being more similarly tuned to running or whisking.

Methodologically, this paper makes a significant advance in multiscale recording of neural activity. Pairing 2-photon and mesoscale calcium imaging provides a meaningful advantage in that neuronal cell types can be genetically targeted, cells can be recorded stably for multiple days, and meso- and microscale activity can be recorded from the same brain region. We were quite impressed by how good the signals from both modalities were (although we would love to see more data comparing the two, especially pixel-by-pixel correlations). Plus, viral injections into the transverse sinus provide high expression levels without the drawbacks of other possible sites.

Skeptics Corner: First, we were a bit concerned that the functional connectivity analysis loses a lot of detail after significance thresholding of the cell-centered network data. Preserving some of the complexity in the raw CCN values might support a better alignment to AIBS-defined brain regions. For example, cell 2 in figure 3d (below*) shows peaks in CCN in sensory and motor areas separated by regions with lower correlations, but thresholding for significance leads to these areas being treated equally.

[Figure 3d, cropped*]

Second, many of the results depend on the functional parcellation of the cortex based upon mesoscale data, and we’d love to know a lot more about the outcome of alternative parcellation strategies. Sixteen parcels per hemisphere was chosen to match the number of regions in the Allen CCF, but how does this parameter affect the analysis? An alternative method, Louvain parcellation (Vanni et al., 2017), does not require the user to specify the number of clusters in advance, so we were curious what that would look like. Also, presenting the functional parcels color-coded for modality implies a bilateral symmetry that is not reliably supported by the borders of the regions (i.e., the hemispheres in figure 3e would look quite different without the color coding). Quantifying the bilateral symmetry of the parcel boundaries would be useful both for imposing a cost on deviations from symmetry and for identifying states or conditions that lead to lateralized cortical activity.

Third, when comparing VIP neurons to putative pyramidal neurons, the classification of GCaMP+/TdT+ as VIP and GCaMP+/TdT- as pyramidal overlooks the possibility that non-VIP interneurons could be expressing GCaMP. Might this contribute to the observed increase in functional diversity reported for pyramidal neurons?

Finally, reporting the cortex-wide functional relationships of the neurons that are not modulated by running or whisking would be an interesting result to add. Are these cells a diverse subset of the S1 population or are they a single functional block?

Outlook: This paper builds on results that highlight the functional diversity of cortical neurons within local circuits. The indication that cell-to-cortex functional relationships can be modulated by behavior highlights that shaping how individual neurons interact with brain-wide networks is a central feature of brain states. Multiscale neurophysiology will be a crucial tool in establishing these relationships. An intriguing morsel of information in the article is this: while single-cell spiking was uncorrelated with mesoscale activity at a given location, summed spiking and the neuropil signal were. This will be important when interpreting a brain region’s mesoscale signal as representing the inputs to, outputs from, or a combination of the two. Mesoscale recordings with soma-targeted calcium reporters (once available) will be useful in disentangling the components of the signal.

*Note we present figure panels from the manuscript with slight modification (cropping) in accordance with the CC BY-NC-ND 4.0 license.

[1] Hamodi A, Martinez Sabino A, Fitzgerald ND, Crair MC (2019) Transverse sinus injections: A novel method for whole-brain vector-driven gene delivery. BioRxiv.

[2] Vanni, M. P., Chan, A. W., Balbi, M., Silasi, G., and Murphy, T. H. (2017). Mesoscale Mapping of Mouse Cortex Reveals Frequency-Dependent Cycling between Distinct Macroscale Functional Modules. J. Neurosci., 37, 7513-7533.

In this paper, Zhao and Kording aim to offer insight into the best model to describe the responses of LIP neurons during a perceptual decision-making task.

Approach:

The authors fit three models to LIP data from the Roitman dataset.

[Figure: schematic of the three models]

Model 1: “constant” or “baseline” model, in which the authors attempted to model the data by a constant firing rate that fluctuates from trial to trial (top panel). A clarification: although this model is sometimes referred to in the paper as a “baseline” model, this doesn’t refer to activity during the time before the trial starts. Instead, it refers to the constant term added to the GLM; there is a single constant for the whole trial.

Model 2: “Stepping” model, in which the data jump from a low to a high state at a time that varies trial to trial.

Model 3: “Ramp” model, in which a linear, time-dependent rise is used. Its slope can vary trial to trial, but all ramps start at zero. A few details: (1) This model doesn’t actually reflect a true candidate hypothesis about LIP activity: the actual hypothesis is that the activity reflects a random walk (diffusion model), whose average over many instances will approximate a ramp. However, without knowing the actual incoming evidence on each trial (not known for this dataset), modeling the random walk isn’t possible. (2) Although the paper schematically depicts the model the way I have drawn it here (see Fig 2B), the authors actually implemented it as an exponential ramp (I am not sure why they chose this parameterization).
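The three models can be sketched as single-trial spike-count generators (our illustrative parameters, not the paper's fits); this also makes plain why trial-averaged data cannot distinguish stepping from ramping:

```python
import numpy as np

rng = np.random.default_rng(4)
T, dt = 100, 0.01      # a 1 s trial in 10 ms bins

def constant_trial(base=20.0, jitter=5.0):
    """Model 1: a single rate per trial, fluctuating across trials."""
    rate = max(0.0, rng.normal(base, jitter))
    return rng.poisson(rate * dt, T)

def stepping_trial(low=10.0, high=40.0):
    """Model 2: the rate jumps from low to high at a random time."""
    step = rng.integers(0, T)
    rate = np.where(np.arange(T) < step, low, high)
    return rng.poisson(rate * dt)

def ramping_trial(slope_mean=30.0, slope_sd=10.0):
    """Model 3: the rate ramps linearly from zero; the slope varies by trial."""
    slope = rng.normal(slope_mean, slope_sd)
    rate = np.clip(slope * np.arange(T) * dt, 0, None)
    return rng.poisson(rate * dt)

# Averaged over trials, stepping and ramping both produce a smooth ramp;
# only single-trial structure distinguishes them.
mean_ramp = np.mean([ramping_trial() for _ in range(500)], axis=0)
```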

Main take-home, as Zhao and Kording see it: The “constant” model fit the data best. If true, this would suggest an entirely different view of LIP activity. Specifically, not only would it suggest that neither steps nor random walks underlie the neural activity, but also that there is no time-varying change in firing rate in LIP neurons during decision-making. 

Skeptics Corner:

  1. We had some concerns about the method for model assessment. These are nicely summarized in another blog about the paper.
  2. The selected model will not account for the trial-averaged data (nor the VarCE, a measure of spike-count variability [1]). To be concrete, the trial-averaged firing rate (and VarCE) progressively increases during decision formation (a ramp), but the model is a flat line. The authors acknowledge this, but fail to explain why. In our view, the fact that the constant model was the best fit for single-trial activity, yet fails to explain the average, exposes a critically important truth about neural activity: modulation of neural activity due to cognitive processes (like decision-making) is strongly affected by fluctuations that arise from other neural processes, which we might refer to as “Internal Backdrop”. These other processes likely include a number of things: for instance, the animal’s overall state of arousal will vary quite a bit from trial to trial, leading the baseline firing rate to fluctuate up and down. Accounting for this baseline activity is challenging, but possible [2], and allows an investigator to separate components of the neural signal due to the internal backdrop of brain activity from decision-related activity.
  3. An easy way around this problem is to include a fourth model, perhaps termed “Ramp with trial-to-trial variation” [2]. I know it isn’t a great name. I am not famous for picking catchy names (remember the VarCE?). This would allow for the reality that decision-related signals ride on top of trial-to-trial variability in baseline firing rate, and it should outperform models that only account for ramps or constant changes. The authors actually did mention such a model in passing, but stated that it did worse than the model with only baseline fluctuations, which at first seems odd (how could adding a parameter worsen the fit?). My hypothesis is that the failure of the model with more parameters stems from overfitting. Here’s why: they are fitting individual time bins (again, see this blog) and then cross-validating on left-out time bins. However, estimating a ramp in a small time bin is very challenging. The ramps are meant to unfold slowly over the whole trial (and they aren’t ramps on single trials at all). Plus, the point-process variance will prevent one from getting a meaningful estimate of a ramp at all. So, much of the time, one would estimate the wrong ramp, leading to bad predictions on the left-out time bins and hence the poor model fit.
  4. The Roitman dataset (available here) was appropriate in some ways, most notably that it was the dataset analyzed in a related paper examining this issue [3]. However, this dataset was ill-suited to some of the analyses in the paper because there are low trial counts for many neurons (large trial counts are critical because there are 10 conditions: 2 motion directions and 5 coherence levels). This is already a challenge when estimating firing rate mean, and will be an enormous problem when estimating firing rate variance, as was done to compute the Fano Factor in Figure 1. The paper reports FFs as large as 8, which, while not impossible, likely result from an uncertain estimate of spike count variance.
  5. Finally, the Roitman dataset only includes information about average stimulus coherence for each trial; the motion energy on individual trials is missing. This prevents the possibility of actually modeling the random walk (Model 4) that is the true alternative hypothesis to the baseline and stepping models. A better test of this hypothesis could be made using stimuli for which stimulus strength is explicit such as in this paper or this paper.
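The Fano-factor concern above (point 4) is easy to demonstrate with a quick simulation (a sketch; the trial counts are illustrative): for Poisson counts the true FF is 1, but with few trials the estimate is wildly variable.

```python
import numpy as np

rng = np.random.default_rng(5)

def fano_estimate(rate, n_trials):
    """Sample Fano factor (variance/mean of spike counts) from n_trials trials."""
    counts = rng.poisson(rate, n_trials)
    return counts.var(ddof=1) / counts.mean()

# The true FF is 1 for Poisson counts, but low trial counts make the
# estimate very noisy, so occasional large values are expected by chance.
few  = [fano_estimate(10.0, 8)   for _ in range(2000)]
many = [fano_estimate(10.0, 200) for _ in range(2000)]
# np.std(few) is several times np.std(many)
```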

Outlook

The authors emphasize the importance of simple models and highlight the importance of trial-by-trial variability when explaining data variance. We agree with this notion and believe that explanations of neural dynamics during decision-making must take the trial-by-trial internal backdrop into account. We hope that the authors will therefore consider models that combine trial-to-trial baseline variability with stimulus-evoked dynamics. A fruitful avenue for model comparison would also be to use more than only the Roitman dataset; while the authors recorded what was at the time a very large number of neurons, there are other, larger datasets available that may make it easier to arbitrate between models.  

References

1. Churchland, A. K. et al. Variance as a signature of neural computations during decision making. Neuron 69, 818-831, doi:10.1016/j.neuron.2010.12.037 (2011).

2. Musall, S., Kaufman, M. T., Gluf, S. & Churchland, A. K. Movement-related activity dominates cortex during sensory-guided decision making. bioRxiv (2018).

3. Latimer, K. W., Yates, J. L., Meister, M. L., Huk, A. C. & Pillow, J. W. Single-trial spike trains in parietal cortex reveal discrete steps during decision-making. Science 349, 184-187, doi:10.1126/science.aaa4056 (2015).

Today’s commentary brought to you from (in alphabetical order): Lital Chartarifsky, Anne Churchland, Ashley Juavinett, Farzaneh Najafi, Anne Urai, & Sashank Pisupati. Feel free to comment, correct, express skepticism, etc. You can do so here, on Biorxiv or on Twitter (@anne_churchland). Let’s get a conversation going!

Paper #1: Psychophysical reverse correlation reflects both sensory and decision-making processes (Okazawa, She, Purcell & Kiani)

Big question: Psychophysical kernels are a powerful method to derive the spatiotemporal filter that transforms sensory information into a decision. However, can psychophysical kernels be interpreted as reflecting such sensory weighting profiles when measured in realistic decision-making scenarios?

Summary: First, tasks with a fixed stimulus duration cannot correctly retrieve sensory filtering timecourses, since the temporal weighting function may just as well reflect the process of bound-crossing during evidence accumulation (and the experimenter doesn’t have access to the time of the decision). Second, variable non-decision time (even in a reaction time task) results in decaying kernels. The authors then demonstrate different ways to draw informative conclusions from psychophysical kernels. First, they compare kernels in a motion direction discrimination RT task to explicit predictions derived from the DDM, and show that kernel shape can be predicted by a DDM with stationary sensory weights. They investigate a range of models that all have stationary sensory weights, and show (Figure 7) that these can generate a diversity of kernel dynamics.
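
The bound-crossing intuition is easy to check in simulation. Below is a minimal sketch (our illustration, not the authors’ code, with arbitrary parameter values) of a drift-diffusion model whose sensory weights are perfectly stationary; the recovered psychophysical kernel nonetheless decays over time, because evidence arriving after the bound is hit can no longer influence the choice.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_steps, bound = 20000, 100, 6.0

# zero-signal momentary evidence, weighted equally ("stationary") at every step
ev = rng.normal(0.0, 1.0, size=(n_trials, n_steps))
dv = np.cumsum(ev, axis=1)

# choice = sign of the decision variable at first bound crossing
# (or at stimulus end, if the bound is never reached)
crossed = np.abs(dv) >= bound
first = np.where(crossed.any(1), crossed.argmax(1), n_steps - 1)
choice = np.sign(dv[np.arange(n_trials), first])
choice[choice == 0] = 1

# psychophysical kernel: choice-conditioned mean of the raw evidence
kernel = ev[choice == 1].mean(0) - ev[choice == -1].mean(0)
# kernel decays toward zero late in the trial, despite flat sensory weights
```

The decay here is purely a consequence of absorbing bounds, which is exactly why a decaying kernel by itself cannot be read as evidence for non-stationary sensory weighting.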

Take home: Be very careful when interpreting psychophysical kernels as reflecting purely sensory weights!

Paper #2: Cortical neural activity predicts sensory acuity under optogenetic manipulation. John J. Briguglio, Mark Aizenberg, Vijay Balasubramanian, Maria N. Geffen (Note that it’s now in J. Neurosci.)

Big question: Why does stimulation (optical, chemical, electrical) cause idiosyncratic changes in behavior, sometimes in opposing directions?

Take home: Behavioral variability occurs because the changes to neurons are also variable. In this paper, the authors showed that changes in psychophysical threshold following A1 optogenetic stimulation were variable, and that this variability could be understood if one took into account the change in the neurometric threshold at the site of stimulation. To estimate neurometric threshold, the authors recorded neural activity at the same sites where they stimulated and measured neurometric threshold using Fisher information. To estimate behavioral threshold, the authors used a pre-pulse inhibition task in which an auditory tone, if the animal heard it, could ward off a startle reflex in response to a loud white noise burst.
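
As a rough illustration of the neurometric logic (a sketch under our own simplifying assumptions, not the authors’ analysis), Fisher information for a population of independent Poisson neurons with Gaussian tuning curves can be computed directly from the tuning curves, and a discrimination threshold then scales as 1/√FI. All tuning parameters below are hypothetical.

```python
import numpy as np

def fisher_info(s, centers, width=0.5, peak=20.0):
    """Fisher information at stimulus s for independent Poisson neurons
    with Gaussian tuning curves (hypothetical parameters)."""
    f = peak * np.exp(-0.5 * ((s - centers) / width) ** 2) + 1e-9
    df = f * (centers - s) / width ** 2      # analytic tuning-curve derivative
    return np.sum(df ** 2 / f)               # Poisson FI: sum f'^2 / f

centers = np.linspace(-2, 2, 40)              # preferred stimuli
I = fisher_info(0.0, centers)
threshold = 1.0 / np.sqrt(I)                  # discrimination threshold (a.u.)
```

Under this logic, a manipulation that raises Fisher information at the stimulated site should lower the neurometric (and behavioral) threshold, and vice versa, which is the relationship the authors test.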

Skeptics’ corner: Changes in psychometric functions were related to changes in neurometric functions (cool!), but I was left wondering why the changes in neurometric functions were idiosyncratic. The direction of the change in neurometric threshold was idiosyncratic across sites, and even across stimulation methods. In other words, the same kind of stimulation (e.g., ChR2 stimulation in pyramidal neurons) sometimes made the neurometric threshold go up, and other times made it go down.

Paper #3: Functional selectivity and specific connectivity of inhibitory neurons in primary visual cortex (Petr Znamenskiy, Mean-Hwan Kim, Dylan R. Muir, Maria Florencia Iacaruso, Sonja B. Hofer & Thomas D. Mrsic-Flogel)

Question: Do inhibitory neurons connect broadly to all their nearby excitatory neurons? Or is there specific connectivity in the connection of inhibitory to excitatory neurons?

Take home message: Inhibitory neurons connect more strongly to nearby excitatory neurons with similar responses to visual stimulation, suggesting that connections between inhibitory and excitatory neurons are organized under a similar rule to excitatory-excitatory connections. In more detail, although inhibitory neurons are less tuned to visual stimuli than excitatory neurons, their response selectivity is not merely a reflection of their surrounding neurons: inhibitory neurons’ selectivity outperforms that of their surrounding neurons. This is due to their selective connectivity to excitatory neurons with similar tuning properties.

Skeptics’ corner: 1) If there is selective connectivity between excitatory and inhibitory neurons, why are inhibitory neurons still less tuned? 2) Do the same conclusions apply to other subtypes of inhibitory neurons?

 

Paper #4: Stable representation of sounds in the posterior striatum during flexible auditory decisions (Guo, Walker, Ponvert, Penix, Jaramillo)

Big question: What is the role of posterior striatum during auditory-driven decisions in mice?

Take home message: Posterior striatum (also known as “auditory striatum”) plays a causal role in an auditory discrimination task. Bilateral muscimol inactivation of this area impaired performance, while unilateral optogenetic activation during sound presentation biased the animals’ choices contralaterally. The authors also showed that the activity of neurons in posterior striatum reliably encoded stimulus features, but was minimally influenced by the animals’ choices, suggesting that neurons in the posterior striatum provide sensory information downstream, while providing little information about behavioral choice.

Skeptics’ corner: The result showing impaired performance after bilateral muscimol inactivations was averaged across 4 sessions; however, the authors note that on individual sessions the mouse was idiosyncratically biased to either the left or the right side. This side bias was probably caused by unbalanced muscimol injection. This is something that we should be mindful of when interpreting performance after bilateral manipulations.

Paper #5: Limitations of proposed signatures of Bayesian confidence
(William T. Adler, Wei Ji Ma)

Big question: A Bayesian model of confidence was previously proposed by Hangya et al., in which confidence reflects the subject’s estimate of the posterior probability of the chosen option. Do the proposed signatures of Bayesian confidence generalize?

Take home: Proposed signatures of Bayesian confidence (i.e., divergence of mean confidence as a function of stimulus magnitude on correct and error trials; mean confidence of 0.75 on uninformative trials) are not necessary if the category-conditioned stimulus distributions overlap, especially in certain noise regimes, and yet other signatures can be reproduced by non-Bayesian models. Hence, favor model comparison over signatures!
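
The 0.75 signature itself follows from a simple idealization: if, on uninformative trials, the observer’s posterior for one category is uniformly distributed on [0, 1] and confidence is the posterior of the chosen option, mean confidence is exactly 0.75. A quick Monte Carlo sketch (our illustration of that idealized case, not the paper’s code):

```python
import numpy as np

rng = np.random.default_rng(1)

# Idealized uninformative trials: posterior probability of category A
# assumed uniform on [0, 1]; confidence = posterior of the chosen option.
p = rng.uniform(0.0, 1.0, size=100_000)
confidence = np.maximum(p, 1.0 - p)
mean_conf = confidence.mean()   # converges to 0.75
```

Adler & Ma’s point is that this clean result leans on the idealization: with overlapping category distributions the posterior on uninformative trials need not be uniform, and the 0.75 prediction can fail.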

Skeptics’ corner: The authors mention that an alternate model of confidence, the distance of an observation from the category boundary, can account for some of the signatures. However the question remains whether the Bayesian model makes unique predictions that distinguish it from alternatives, for instance predicted effects of changing subjects’ priors. Enumerating such unique predictions would help in directly testing the model experimentally, and ease the burden on model comparison.


Today’s commentary brought to you from (in alphabetical order): George Bekheet, Lital Chartarifsky, Anne Churchland, Ashley Juavinett, Simon Musall, Farzaneh Najafi & Sashank Pisupati. Feel free to comment, correct, express skepticism, etc. You can do so here, on bioRxiv or on Twitter (@anne_churchland). Let’s get a conversation going!

Paper #1: Discrete attractor dynamics underlying selective persistent activity in frontal cortex (Inagaki, Fontolan, Romani & Svoboda)

Big question: What is the deal with the persistent activity in mouse area ALM that precedes licking movements?

Take home: Using intra- and extracellular recordings, combined with optogenetics and network modeling (nice!), the authors conclude that attractor dynamics, and not integration, define neural activity in area ALM.
But, hmmmm: Persistent activity in advance of movements is widely observed in many critters, but its function is pretty mysterious. Other kinds of persistent activity, like memory or evidence accumulation, have a clear cognitive function, but the role of motor preparatory activity is not obvious. Why does ALM need to respond so far in advance of a movement anyway?

Paper #2: Optogenetically induced low-frequency correlations impair perception (Nandy, Nassi & Reynolds)

Big question: While multiple groups have shown that attention reduces the correlation between neurons with receptive fields at the attended location, it has been difficult to show causation. Here, the authors recreate low-frequency correlations in visual cortex and ask: can one causally affect the ability of an animal to pay attention?

Take home: Using the depolarizing opsin C1V1 delivered with a lentivirus, in combination with an artificial dura, the researchers created a preparation in which they could optically excite pyramidal cells at specific locations in V4. During an orientation-change detection task, they used this system to induce low-frequency (4-5 Hz) as well as high-frequency (20 Hz) oscillations within the receptive field of the attended region. They demonstrate that low-frequency stimulation within the attended field impairs the animal’s ability to do the task, whereas low-frequency stimulation in an unattended field does not. The finding was also frequency specific: high-frequency stimulation does not impair performance.

But, hmmmm: These findings provide nice closure to previous skepticism that changes in correlation structure could simply be an off-target effect rather than actually causal for attention. Still, the field seemed pretty convinced that low-frequency correlations were somehow involved in attention, so this result probably will not shock many researchers. In addition, the behavioral effects are not well characterized in this paper. The two sample sessions in Figure 2, Supplement 2 show very different effects on the psychometric curve: one is shifted, whereas the other has a different slope, suggesting different underlying impairments. We’d love to see the researchers more closely quantify the impairments in each animal.

Paper #3: Exclusive functional subnetworks of intracortical projection neurons in primary visual cortex (Kim, Znamenskiy, Iacaruso & Mrsic-Flogel)

Big question: How do long-range projection targets constrain local connectivity of cortical neurons?

Take home: Distinct populations in V1 project to higher visual areas AL and PM. These populations avoid making connections with each other, which is unexpected given their signal correlations (response similarity). Therefore, projection target acts independently of response similarity to constrain local cortical connectivity. The absence of recurrent connections between AL- and PM-projecting populations potentially allows for their independent modulation by top-down signals.

But, hmmmm: Should we worry that retrograde labeling may have failed to label all projection neurons? Also, is identifying double-labeled neurons an error-prone task?

Paper #4: Accurate Prediction of Alzheimer’s Disease Using Multi-Modal MRI and High-Throughput Brain Phenotyping (Wang, Xu, Lee, Yaakov, Kim, Yoo, Kim & Cha)

Big Question: Does multi-modal MRI data in combination with high-throughput brain phenotyping provide any utility in predicting Alzheimer’s disease?

Take home: The authors produced a machine-learning model that discerns (with 97% accuracy) between an AD brain and one from a patient with subjective memory complaints.
 
But, hmmm…: Seeing as this was done with retrospective data, why not compare AD patients versus patients with no cognitive impairment or memory complaints? Also, we would have loved it if the authors had included more information on the machine-learning analytics they used.

Paper #5: Causal contribution and dynamical encoding in the striatum during evidence accumulation (Yartsev, Hanks, Yoon & Brody)

Big question: Which regions of the brain are causally involved in evidence accumulation during decision making?

Take home: Anterior dorsal striatum satisfies three major criteria for involvement, as revealed by a detailed behavioral model: it is necessary (pharmacological inactivation makes accumulation noisy), it represents graded evidence on single trials (electrophysiology), and it contributes only during accumulation (temporally specific optogenetic inactivation). It is hence the first known causal node in evidence accumulation.

But, hmmm…: Teasing apart the relative contributions of striatum and its upstream inputs to accumulation will require further study, as will distinguishing its contribution relative to prefrontal cortex (FOF) to subsequent aspects of the decision such as leak & lapses.

Paper #6: Confidence modulates exploration and exploitation in value-based learning (Boldt, Blundell & De Martino)

Big question: What is the link between humans’ confidence in their decisions and their uncertainty in the value of different choices? How do these quantities influence their decisions?

Take home: Belief confidence (i.e., certainty in value estimates) drives decision confidence (i.e., confidence that choices made were correct) in a two-armed bandit, and individuals with better estimates of the former also had better estimates of the latter. Moreover, the belief confidence in the higher-value option modulated the exploration-exploitation tradeoff, with participants exploring more often when they were less confident.

But, hmmm…: Relating these results to the two known forms of uncertainty-driven exploration (one that depends on the difference in uncertainties of the two options, the uncertainty bonus, and the other that depends on their sum, Thompson sampling) will require further investigation into the interaction between the belief confidences of the two options.

Paper #7: Aberrant Cortical Activity In Multiple GCaMP6-Expressing Transgenic Mouse Lines (Steinmetz, Buetfering, Lecoq, Lee, Peters, Jacobs, Coen, Ollerenshaw, Valley, de Vries, Garrett, Zhuang, Groblewski, Manavi, Miles, White, Lee, Griffin, Larkin, Roll, Cross, Nguyen, Larsen, Pendergraft, Daigle, Tasic, Thompson, Waters, Olsen, Margolis, Zeng, Hausser, Carandini & Harris)

Big question: Transgenic animals are becoming the standard for measuring neural activity, but potential side effects of genetic manipulations may be overlooked. Here the authors show that several GCaMP lines exhibit abnormal epileptiform activity that is not observed in wild-type mice. They also provide guidance on how to avoid this issue when using affected lines.

Take-home: Epileptiform events are short, high-amplitude bursts of activity that span large parts of cortex and occur at a rate of ~0.1-0.5 Hz. This activity is clearly distinct from other neural activity (measured with ephys, 2-photon or widefield imaging), but most animals don’t show any clear behavioral impairments. The origin of the epileptiform activity is unclear, but suppressing expression of GCaMP in the first 7 weeks seems to resolve the issue, even for transgenic lines that are otherwise most affected.
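
As a toy illustration of how such events could be pulled out of an imaging trace (entirely hypothetical parameters and synthetic data, not the authors’ pipeline), a simple threshold-crossing detector recovers the event rate:

```python
import numpy as np

rng = np.random.default_rng(3)
fs, dur_s = 100, 600                         # hypothetical 100 Hz trace, 10 min
trace = rng.normal(0.0, 1.0, fs * dur_s)     # baseline noise
onset_times = rng.choice(fs * dur_s - 50, size=120, replace=False)
for t in onset_times:                        # inject ~0.2 Hz high-amplitude bursts
    trace[t:t + 20] += 10.0

z = (trace - trace.mean()) / trace.std()     # z-score the trace
above = z > 3.0
onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1   # upward crossings
rate_hz = len(onsets) / dur_s                # detected events per second
```

The recovered rate lands in the ~0.1-0.5 Hz range reported in the paper, simply because that is how often bursts were injected; the point is only that a crude amplitude threshold suffices when events are this large relative to baseline.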

But, hmmmm: Presumably, there is a variety of causes for abnormal neural activity in transgenic animals. It’s not clear whether suppressing GCaMP expression will prevent these issues in future lines, and there might also be other problems, like indicator over-expression, that will cause headaches in the future. There should be more studies that describe and address potential issues.

Paper #8: Stable representation of sounds in the posterior striatum during flexible auditory decisions (Guo, Walker, Ponvert, Penix & Jaramillo)

 Big question: What is the role of posterior striatum neurons during auditory-driven decisions in mice?

Take home message: Here, the authors show that transient pharmacological inactivation of posterior striatum (also known as “auditory striatum”) impaired performance in an auditory discrimination task, while optogenetic activation during sound presentation biased the animals’ choices. Moreover, the activity of these neurons reliably encoded stimulus features, but was only minimally influenced by the animals’ choices, suggesting that neurons in the posterior striatum provide sensory information downstream, while providing little information about behavioral choice.

But, hmmm: The activation and inactivation experiments were performed on different neuronal populations (direct-pathway medium spiny neurons vs. all posterior striatal neurons, respectively), as well as unilaterally vs. bilaterally (activation vs. inactivation, respectively). It would be interesting to know how the different populations support the behavior, ideally with matched methodology. Moreover, the pharmacological inactivation had pretty strong motor effects, and it is important to make sure that the behavioral effects were not caused by motor deficits.

This summer, a number of new papers have come out with data that bear on the role of posterior parietal cortex (PPC) in perceptual decisions. First, a paper by Katz and colleagues shook things up with new data demonstrating that pharmacological inactivation of primate PPC has little effect on perceptual decisions. These results have been talked about in the community for a while; I will hold off saying too much about them, since I wrote a piece about this paper that will come out in a few weeks (stay tuned: I’ll post a link then). But the short story is that this paper argued that despite strong modulation during perceptual decisions, primate PPC is not a member of the causal circuit for visual motion decisions.

Two papers about rodent PPC paint a different picture. We shouldn’t be too surprised about this, since although rodent and primate PPC share the same name, they have a number of anatomical and functional differences that mean it isn’t right to think of one as the homologue of the other (in fact, could we just stop using the word “homologue” altogether?).

The first paper, by Michael Goard and colleagues, measured and manipulated mouse PPC neurons during a visual detection task: the mice were shown a horizontal or vertical grating, then waited through a 3-9 second delay, then reported whether a vertical grating was present by licking a spout. The authors disrupted PPC activity optogenetically. They found that performance declined considerably when the activation took place during the time the grating was visible to the mice. Interestingly, although PPC neurons were highly active during other parts of the trial (the delay and movement periods, for instance), disruption during those times had little effect on performance. This argues that the activity during those periods may reflect signals that are computed elsewhere and fed back to PPC.

 
The second paper is from my lab. Like the Goard paper, we found that performance declined when we stimulated while animals were viewing the visual stimuli they had to judge. We mainly focused on this period, so can’t compare the results with disruption at other times. We did, however, compare disruption on visual vs. auditory trials in the same animal and the same session, and we found that effects were mostly restricted to visual decisions. This fits with the data from the paper above, and also with deficits on a visual memory task reported in mice by Chris Harvey and David Tank.

Beyond the science of our paper, it was also a landmark moment in my lab because it is our first paper on the preprint server, bioRxiv! The bioRxiv was started at Cold Spring Harbor Laboratory. It provides a way for scientists to make their work freely available to the world as they journey through the sometimes long process of academic publishing. I like the idea of making the work available fast, and the fact that it is freely accessible to everyone is important too. I’m excited to be part of this new effort and… I admit my enthusiasm prompted me to modify the rainbow unicorn of asapbio just a bit…

 

This is a guest blog written by Ashley Kyalwazi, a participant in the Undergraduate Research Program at Cold Spring Harbor Laboratory. I am the director of the program, as well as the PI on our NSF-funded grant (along with my bioinformatics colleague Mike Schatz) to train undergraduates in bioinformatics and computational neuroscience. These fields share many mathematical ideas, such as a need for dimensionality reduction and machine learning tools, but our program is highly unusual in bringing them together. Ms. Kyalwazi, one of our students funded by the program, will be a junior this upcoming year at the University of Notre Dame. Her story is below:

As I embarked on my ten-week long summer research immersion experience here at Cold Spring Harbor Laboratory, I was excited. To me this was an opportunity to listen to a vast range of new ideas in major scientific disciplines, to analyze them, and then to begin forming my own. I would always carry a notepad and a pen with me around campus; I never knew what I would learn on any given day, from any given individual. All I knew, for sure, was that I would grow as a scientist and as a future physician.

Working in the systems neuroscience lab of Dr. Stephen Shea has been an incredible experience. I have had the opportunity to learn a vast new array of techniques from my mentors in the Shea lab, and to apply them as I worked on an independent project that uses a mouse model in order to gain a deeper understanding of the inhibitory network of parvalbumin neurons in the auditory cortex, and how this network regulates neural activity before, during, and after what I have come to refer to as “the maternal experience.”

The maternal experience describes any aspect of the mother-pup interaction that contributes to the context of the overall birthing process. This could range from mothers giving birth to the act of retrieving distressed pups that find themselves isolated from the nest. The former is an action that is characteristic of a single mother and her pups; however, the latter is one that can be translated and studied with a model incorporating virgin female mice (‘surrogates’). In my project, I was interested in understanding how maternal experience alters the neural circuitry of the surrogate’s brain, primarily focusing on a network of neurons in the cortex defined by the marker parvalbumin (PV+ cells). This network has been found to play a key role in regulating plasticity in the auditory cortex of female mice following interaction with newborn pups (Shea et al 2016). So again I ask: what is the nature of nurture?

Last week, during journal club, my lab read a paper by Lior Cohen, Gideon Rothschild, and Adi Mizrahi titled “Multisensory Integration of Natural Odors and Sounds in the Auditory Cortex.” This paper found that neurons in A1 of mothers and virgin female mice integrate pup odors and sounds, suggesting an experience-dependent model for cortical plasticity.

One of the findings that intrigued me the most as I was reading this paper, was the observation that washing the pups hindered a lactating mother’s ability to retrieve them, after they became isolated from the nest. While the authors’ emphasis on this observation was the fact that pup odor is a commanding feature of pup-retrieval behavior, I was interested in this for a slightly different reason.

To know that an act as simple as washing the pups could change a behavior as innate as a mother retrieving her own pups led me to wonder: is there an essential biological component of motherhood that is necessary for a pup’s development and overall survival, or should we, as scientists, begin to home in on the commonalities that make up the maternal experience and also enable virgin females to successfully retrieve isolated pups?

 

My experiments this summer utilized a combination of stereotaxic surgery (injections and craniotomies), fluorescence imaging, computer programming and image analysis in order to observe the sound-evoked spatiotemporal activity patterns in the A1 PV+ network of naïve female mice.

Blue LED light was directed through cranial windows over the left auditory cortex, and recordings of eight pup calls were played at regular intervals in each of the 20 trials. Plotting the average intensities for the activated region of interest across the 20 trials yielded a visual representation of the GCaMP6m activation in the parvalbumin neuronal population (Table 1).
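
In code, the trial-averaging step described above might look something like this (a synthetic sketch with made-up numbers, not the actual analysis):

```python
import numpy as np

# hypothetical stack: 20 trials x 100 frames of mean ROI fluorescence
rng = np.random.default_rng(2)
roi_trace = rng.normal(1.0, 0.05, size=(20, 100))
roi_trace[:, 40:60] += 0.3                   # simulated call-evoked response

baseline = roi_trace[:, :40].mean(axis=1, keepdims=True)
dff = (roi_trace - baseline) / baseline      # ΔF/F per trial
avg_response = dff.mean(axis=0)              # trial-averaged time course
```

Averaging across trials suppresses the trial-to-trial noise while leaving the stimulus-locked response, which is why the averaged trace gives a cleaner picture of the call-evoked activation.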

 

Table 1

This widespread cortical GCaMP6m activation throughout the A1 PV+ neuronal network suggests that, when exposed to pup calls, female mice do not differentiate among varying call frequencies. Perhaps this could be due to the dependency of all pups on receiving this nurturing behavior, and the need for mothers and surrogates alike to provide it.

This suggested binary distinction in female mice (recognizing ‘call vs. no call’ but not distinguishing ‘call frequency A vs. call frequency B’) is one that will be further investigated in the Shea lab, as we look to home in on the neural circuits that regulate long-term, experience-dependent plasticity in the auditory cortex.

I would like to thank my research advisor for the summer, Dr. Stephen Shea, and the directors of the Undergraduate Research Program, Dr. Anne Churchland and Kim Creteur, for providing me with this opportunity. I also thank my parents, Michael and Winnie Kyalwazi. Against all odds you continue to work hard and sacrifice so I have opportunities as life-changing as coming to CSHL to study neuroscience… a dream come true. Your love and support have been and will never cease to be the wind beneath my wings. It has definitely been a memorable summer for me here at Cold Spring Harbor Laboratory and I look forward to the future.

 

I attended the Neurofutures 2016 conference at the Allen Institute for Brain Science in Seattle last week. The conference focused on new technologies in the field and how they will drive new discoveries. I gave the opening plenary talk at the conference, a public lecture which you can see here. Following my lecture, I was part of a panel consisting of olfaction hero Linda Buck, blood flow guru (and recent marmoset pioneer) Alfonso Silva and eCog sage Jeff Ojemann. It was exciting to hear their take on the most exciting technologies in neuroscience. Some of the exciting new developments highlighted by the panel included optogenetics, powerful transgenic animals (mice, marmosets and beyond) and high-throughput sequencing, just to name a few.


I share my colleagues’ enthusiasm for those techniques, but also held fast that these techniques must be accompanied by advances in theory to support our ability to understand the incoming data. Theoretical neuroscience has historically played a fundamental role in the field as a whole, and its importance going forward cannot be overstated (I have argued for this before).
