I attended the Neurofutures 2016 conference at the Allen Institute for Brain Science in Seattle last week. The conference focussed on new technologies in the field and how they will drive new discoveries. I gave the opening plenary talk, a public lecture which you can see here. Following my lecture, I was part of a panel consisting of olfaction hero Linda Buck, blood flow guru (and recent marmoset pioneer) Alfonso Silva and ECoG sage Jeff Ojemann. It was exciting to hear their take on the most promising of these technologies. Developments highlighted by the panel included optogenetics, powerful transgenic animals (mice, marmosets and beyond) and high-throughput sequencing, to name just a few.
I share my colleagues’ enthusiasm for these techniques, but I also held fast that they must be accompanied by advances in theory to support our ability to understand the incoming data. Theoretical neuroscience has historically played a fundamental role in the field as a whole, and its importance going forward cannot be overstated (I have argued for this before).
A recent paper in Neuron from Kanaka Rajan, Chris Harvey and David Tank sets out to demonstrate how relatively unstructured networks can give rise to highly structured outputs that persist on the slow timescales relevant to behaviors like decision-making and working memory. Such unstructured networks seem at first like exactly the wrong thing to support stimulus-driven persistent activity. Indeed, classic work in the prefrontal cortex revealed individual neurons that respond persistently during delays, presumably supporting the animal’s ability to hold information in mind over that delay. In mouse posterior parietal cortex, however, it’s a different story. In a previously published memory-guided decision task, many individual neurons respond only very transiently, for much less time than the animal holds those memories in mind. Both that paper and the current one argue that many such neurons could fire in sequence, supporting slow-timescale memory-guided decisions even in the absence of single neurons with persistent activity.
The big steps forward in the current paper are:
- The authors demonstrated that a randomly connected network could give rise to this activity. This was an advance for a number of reasons, including the development of a new modeling framework called PINning. This method builds on a now classic technique, FORCE learning, which generates coherent activity patterns from chaotic networks. PINning is different because only a small percentage (~12%) of synaptic weights are allowed to change. The ability of the network to capture the complex firing rates of 437 neurons when only a few synaptic weights were allowed to change is a big deal.
- The paper pointed out features of the data that are incompatible with traditional models of persistent activity, such as bump attractors. This is evidence against an appealing idea (one that may still hold in other systems) in which a hill of activity moves around the network, driving a persistent response.
- Finally, the authors found that the network’s success relied not only on the strongly choice-selective neurons you might expect, but also on neurons that weren’t selective for the animal’s choice at all. In fact, they observed that these seemingly unimportant neurons might play a critical “conveyor belt” role that was essential in supporting more difficult decisions, especially those among many alternatives. The previous paper (and indeed many other studies) mainly excluded these neurons from analysis; an understandable choice at the time, but one that now warrants reconsideration.
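For readers curious about the mechanics, here is a toy sketch of the PINning idea, not the authors’ implementation: a chaotic random rate network in which only a small subset of synaptic columns (~13% of weights here, loosely echoing the ~12% in the paper) is plastic, trained with the recursive-least-squares update at the heart of FORCE so that unit rates match a target sequence of transient bumps. The network size, target shapes and plastic fraction are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 60, 8              # network units; plastic presynaptic units (~13% of columns)
T, dt, tau, g = 300, 1.0, 10.0, 1.5

J = g * rng.standard_normal((N, N)) / np.sqrt(N)  # random recurrent weights
plastic = rng.choice(N, size=K, replace=False)    # only these columns are trained
x0 = 0.5 * rng.standard_normal(N)                 # fixed initial condition

# Target: each unit fires one transient bump at a different time -- a sequence,
# loosely analogous to the transient responses recorded in parietal cortex.
t_grid = np.arange(T)
centers = np.linspace(20, T - 20, N)
targets = np.exp(-0.5 * ((t_grid[None, :] - centers[:, None]) / 15.0) ** 2)

def run(J, train=False, P=None):
    """Simulate one trial; if train, apply RLS updates to the plastic columns."""
    x = x0.copy()
    mse = 0.0
    for t in range(T):
        r = np.tanh(x)
        err = r - targets[:, t]
        mse += np.mean(err ** 2)
        if train:
            rp = r[plastic]
            Pr = P @ rp
            k = Pr / (1.0 + rp @ Pr)           # RLS gain
            P -= np.outer(k, Pr)               # update inverse-correlation matrix
            J[:, plastic] -= np.outer(err, k)  # nudge only the plastic synapses
        x += dt / tau * (-x + J @ r)           # rate dynamics
    return mse / T

P = np.eye(K)
before = run(J.copy())        # untrained chaotic network
for _ in range(30):
    run(J, train=True, P=P)   # training epochs
after = run(J)                # frozen, trained network
print(f"mean-squared error before: {before:.3f}, after: {after:.3f}")
```

Even in this cartoon, pinning down a small fraction of weights is enough to pull the chaotic activity toward the target sequence, which is the counterintuitive point of the paper.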
There is still a challenge ahead for putative mechanisms that support slow timescale behaviors like working memory and decision-making. At the moment, there are few causal manipulations that can disrupt proposed mechanisms and demonstrate an effect on behavior. In the framework here, it would be compelling to demonstrate that changing the order of the sequence changed the behavior (admittedly no small feat!). More traditional mechanisms aren’t off the hook either: demonstrating that persistent activity at the single neuron level supports working memory likewise would be aided by precise disruption experiments. Indeed, single-neuron persistence could be epiphenomenal; the persistent working memories could be supported by some other aspect of the network. Many such manipulation experiments will be feasible in the near future.
Until then, I am excited to see a new mechanism to support slow-timescale behavior. It is counterintuitive that such complex activity can be captured by a randomly connected network, especially one in which so few synapses are allowed to change.
February 11, 2016
Discoveries made by plant geneticists in the 1940s are changing our understanding of the brain. Specifically, Barbara McClintock’s (left) discovery of transposons, for which she won the Nobel Prize, has turned out to be important not only for understanding gene function in plants, but in brains as well. Transposons, described by the New Yorker as “wandering snippets of DNA that hide in genomes, copying and pasting themselves at random” account for ~40% of our genome. They are likely to play a key role in normal brain function, and also might be involved in neurodegenerative diseases including ALS.
The importance of transposons for all biology inspired current CSHL graduate students and motivated them to create a lecture series named after Barbara McClintock. The first one was today, and in recognition of the role of transposons in the brain, they invited a neuroscientist, Ann Graybiel (right) from MIT, to be the first recipient. Ann’s work on the striatum has been critical for the field’s growing understanding of how incoming inputs can lead to actions, especially ones that are reinforced and become habitual. Her emerging work is especially exciting as her lab is leveraging modern techniques to specifically measure and manipulate classes of cells within the striatum to understand their role in different behaviors and decisions.
To commemorate the creation of this new lecture series and its first recipient, neuroscientists from around New York gathered to honor Ann and attend her talk. Researchers focussing on decision-making, attention, vision and auditory processing came together and some lively discussions ensued! It was a lot of fun to show the setups in my lab to this crew, which included Jackie Gottlieb, Yael Niv, Heather Read and Ariana Maffei, and we realized many links between our collective research programs that I hope will lead to new collaborations down the line.
I am happy to announce another post by a guest blogger. This time, it’s Sashank Pisupati, a new graduate student in my lab.
Last week, our lab read a paper by Ramon Reig & Gilad Silberberg titled “Multisensory Integration in the Mouse Striatum”. While studies of multisensory integration have focussed largely on cortical structures and the superior colliculus, this study adds to a growing body of evidence that the striatum may play a key role in this process. Striatal medium-spiny neurons (MSNs) are known to receive convergent projections from multiple sensory cortices, but relatively few studies have reported multisensory responses in these cells.
Here, the authors set out to test whether individual MSNs integrated visual (LED flashes) and tactile (whisker stimulation with air puffs) stimuli in anesthetized mice. In order to observe such synaptic integration, they performed whole-cell patch clamp recordings from striatal neurons. They targeted regions of striatum receiving projections from primary visual (V1) and somatosensory (S1) cortex, as identified by anterograde tracing using BDA.
They found sub-threshold responses to whisker stimulation (purple trace) in all the neurons they recorded from, which were modulated by stimulation strength. More interestingly, in the dorsomedial striatum a subset of these neurons were also responsive to visual stimuli (green trace), with slightly longer peak response latencies. They then presented visual and tactile stimuli together at various relative delays, and observed multisensory responses in these cells (orange & black traces) that were sublinearly additive, i.e., less than the linear summation (grey traces) of the visual and tactile responses. Moreover, the peak multisensory response was maximal when the onsets/peaks of the unisensory responses were aligned, suggesting that the neurons summated congruent synaptic inputs.
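Both features, sublinear additivity and a peak response when the unisensory inputs align, fall out naturally if two synaptic inputs are combined through a saturating nonlinearity. The sketch below is only an illustrative caricature (alpha-function inputs and a tanh saturation are my assumptions, not the authors’ model):

```python
import numpy as np

t = np.arange(0.0, 200.0, 1.0)  # time in ms

def alpha(t, onset, tau=20.0, amp=0.8):
    """Alpha-function synaptic input beginning at `onset` (illustrative shape)."""
    s = np.clip(t - onset, 0.0, None)
    return amp * (s / tau) * np.exp(1.0 - s / tau)

def response(delay):
    """Response to a tactile input at t=50 ms plus a visual input lagging by
    `delay`, combined through a saturating (tanh) nonlinearity."""
    return np.tanh(alpha(t, 50.0) + alpha(t, 50.0 + delay))

aligned = response(0.0).max()       # unisensory peaks coincide
offset = response(40.0).max()       # visual input lags by 40 ms
linear_sum = 2.0 * np.tanh(alpha(t, 50.0)).max()  # sum of the two unisensory responses

print(aligned, offset, linear_sum)
```

In this toy, the combined peak is largest at zero delay yet always smaller than the linear sum of the two unisensory responses, mirroring the sublinear summation reported in the paper.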
These findings of multisensory cells in the mouse striatum corroborate similar reports from extracellular recordings in the striatum of rats and cats, and complement them by offering a valuable glimpse of sub-threshold activity. The sub-linear additivity described here contrasts with the super-linear additivity of firing rate responses often emphasized in studies of the superior colliculus.
One of the questions that remained at the end of our discussion was how this result fits into models of multisensory integration such as divisive normalization, or Bayes-optimal cue combination. While classical approaches have emphasized the degree of additivity of the unisensory responses, these models make strong predictions about how the weights assigned to each unisensory response in the summation change in accordance with the reliability of that sensory modality. For example, we expect the contribution of a visual flash to the summation to decrease for weaker, less reliable flashes.
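The Bayes-optimal prediction can be written down in a few lines: under independent Gaussian noise, each cue is weighted in inverse proportion to its variance, so an unreliable flash contributes less to the combined estimate. This is the standard cue-combination formula rather than anything specific to this paper; the numbers are arbitrary examples.

```python
def combine(mu_v, var_v, mu_t, var_t):
    """Bayes-optimal combination of two cues corrupted by independent Gaussian noise."""
    w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_t)  # reliability-based visual weight
    mu = w_v * mu_v + (1.0 - w_v) * mu_t               # weighted combined estimate
    var = 1.0 / (1.0 / var_v + 1.0 / var_t)            # combined estimate is more reliable
    return mu, var, w_v

# A bright, reliable flash dominates the combined estimate...
mu, var, w_v = combine(mu_v=1.0, var_v=0.5, mu_t=0.0, var_t=2.0)
# ...while a dim, unreliable flash is largely discounted.
mu2, var2, w_v2 = combine(mu_v=1.0, var_v=4.0, mu_t=0.0, var_t=2.0)
print(w_v, w_v2)
```

Varying stimulus strength, as proposed below, would effectively sweep these variances and test whether the synaptic weights shift accordingly.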
One could test this prediction in the authors’ current setup by simply varying the stimulus strength for each modality during multisensory presentation. Combined with the power of the patch clamp approach, this could yield further insight into the sub-threshold computations being performed by these neurons, and we hope to see more such work in the future!
November 9, 2015
Last week our lab read a recent Neuron paper out of the Brody lab, by Kopec, Erlich, Brunton, Deisseroth & Brody, titled “Cortical and subcortical contributions to short-term memory for orienting movements.” This paper continues with that lab’s recent strategy of using optogenetics to briefly inactivate brain areas during decision making.
The experiments were straightforward. They trained rats to judge whether a click train was faster or slower than 50 Hz, then used optogenetics (eNpHR3.0) to inactivate either the Frontal Orienting Fields (FOF) or superior colliculus (SC) on one side of the brain at different points in the trial. This allowed Kopec et al. to see when these areas contributed to making the decision. The key experimental finding was that the rats’ decisions were most biased when either FOF or SC was silenced during the stimulus, a little less biased when silenced early in the subsequent delay, and less biased still when silenced late in the delay. Decisions were essentially unaffected when silencing was performed during the response period.
This finding is initially surprising, because tuning in the FOF increases over the course of the trial (as known from previous studies). They argue, however, that this seeming mismatch makes sense in the context of an attractor dynamics model (below). Since the evidence from the stimulus is not fluctuating in this task, the animal should be able to make its decision quickly. The increasing tuning might be due to attractor dynamics that amplify the tuning with time, while perturbations should mostly impact decisions before the neural activity has had time to settle in an attractor. Additional comparisons, including inactivating both areas together and comparing hard vs. easy trials, were quantitatively consistent with their simple attractor model.
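The intuition that perturbations matter most before activity settles can be seen in even the simplest attractor model. This one-dimensional caricature is my illustration, not the paper’s fitted model: a decision variable x flows toward one of two stable states at ±1, and an identical brief “silencing” kick flips the decision when delivered early but is absorbed once the state has settled.

```python
def simulate(perturb_step, kick=-0.3, x0=0.1, dt=0.01, steps=1000):
    """Euler-integrate dx/dt = x - x**3 (stable attractors at +/-1, unstable
    point at 0) and apply one instantaneous kick at `perturb_step`."""
    x = x0  # small positive nudge: the 'evidence' favors the +1 choice
    for i in range(steps):
        if i == perturb_step:
            x += kick  # brief inactivation modeled as an instantaneous kick
        x += dt * (x - x ** 3)
    return x

early = simulate(perturb_step=20)   # kick before settling: decision flips sign
late = simulate(perturb_step=800)   # kick after settling: decision recovers
print(early, late)
```

Running this, the early kick sends x to the −1 attractor while the late kick leaves it near +1, matching the paper’s finding that silencing during the stimulus biased choices far more than silencing late in the delay.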
This study forms an interesting contrast with their paper from earlier this year, Hanks et al. 2015 in Nature. There, they took a similar approach but with a temporal integration task. In that task, the FOF was only critical at the end of the stimulus. This again makes sense; you don’t want attractor dynamics if you need to integrate instead.
The question on many of our minds was: do these areas “really” exhibit attractor dynamics? On further reflection, though, this is a bit like asking whether the planets follow Newton’s laws. What I mean by that is: neurons, like orbiting planets, aren’t solving equations. Dynamical models, like Newton’s equations, are a mathematical description of how the system behaves over time. But if a model is an easy way to think about a system, and makes intuitive, useful predictions that hold up experimentally, then the model does useful work.
Many questions remain unanswered, of course. In terms of separation of function, are FOF and SC really doing the exact same thing? Are there other tasks where they would function very differently? Regarding dynamics, how does the system learn to produce these attractor dynamics? Since the FOF can apparently be trained to produce different dynamics in animals trained on different tasks, can it support either computation in an animal trained on both tasks? If so, how would it switch its dynamics? We’ll look forward to the next installment.
Simons Foundation sponsors meeting on how incoming sensory signals interact with ongoing internal dynamics
September 30, 2015
I recently attended a meeting as part of the Simons Collaboration on the Global Brain. A postdoc in my lab, Matt Kaufman, has an award from this group and so attended as well. The goal of the collaboration is to understand the internal neural signals that interact with sensory inputs and motor outputs to shape behavior.
It was a fantastic meeting. Blaise Aguera y Arcas (Google) talked about machine intelligence and how it has advanced dramatically in recent years, easily accomplishing tasks that seemed impossible half a decade ago. Andrew Leifer (Princeton) talked about a new microscopy system for large-scale imaging in C. elegans. Marlene Cohen described her surprising observation that the increased firing rates seen during attention are accompanied by decreased correlations among neurons.
A common theme among all the presentations was the idea that understanding these internal states requires considering the activity of large neural populations. A number of analyses were put forth to achieve that. The ones that were most interesting to me are designed to compare neural population activity during different kinds of behavioral states. We began to do this in our 2014 paper (see figure 7), but have really only begun to scratch the surface. The talks and conversations at the meeting expanded our thinking about new analyses we can use to get at this question. For instance, as an animal goes from making a decision to committing to an action, does the population activity simply re-scale, or does it occupy a fundamentally new space?
New approach in my lab aims to understand how external and internal signals shape population activity
September 2, 2015
Our decisions are influenced in part by incoming sensory information, and in part by the current cognitive state of the brain. For instance, a rustle in the bushes can make you run away quickly if you are walking in the dark and worrying about bears, but it may have little effect on your behavior if you are deep in thought about something else (your upcoming vacation, for instance). This led us to wonder: how do incoming sensory signals and ongoing cognitive signals interact to guide behavior?
A postdoc in my lab, Farzaneh Najafi, is working to understand this, supported in part by the Simons Collaboration on the Global Brain. We were fortunate to have a collaborator, John Cunningham (Columbia University) visit us today, along with a graduate student in his lab, Gamal Elsayed. Their focus is on understanding neural activity at the population level, and in particular understanding how such populations evolve over time. We hope that their approach can offer insight into our question by helping us evaluate how population dynamics differ depending on the internal cognitive state of the animal. Farzaneh, John and Gamal are pictured below. They are gathered at the 2-photon rig in our lab and are viewing neural activity of labelled inhibitory neurons.
July 1, 2015
My lab members and I did a “Literature Blitz” today: each person in the group gave a short presentation, including only a single figure, on a recent finding in the area of decision-making and sensory guided action. Short presentations like this don’t allow for the in-depth discussions we have when we read a single paper, but they give us a snapshot of a whole field that we can absorb in just a few hours. This inevitably broadens all of our perspectives.
1. From Kawai et al in Neuron: Motor cortex isn’t needed to support complex, sequenced movements that harvest rewards. However, motor cortex is required to learn these movements. The movies associated with this paper are incredible; definitely check them out.
2. From R. Kiani et al in Neuron: Natural groupings of neurons, based on time varying response similarities, can define spatially segregated subnetworks. Surprisingly, these subnetworks have correlated noise, especially during quiet wakefulness when no stimuli are present. This approach suggests we might want to consider a new way to define cortical regions and subregions, especially in areas like the frontal lobe which has historically been difficult to parcellate.
3. From Strandburg-Peshkin et al in Science: This group fitted wild Kenyan baboons with GPS collars and worked out which factors determine their collective movements. The first part we might have guessed: the number of animals, and their commitment to a particular direction of movement, play a large role in determining whether the other baboons will join the move. One aspect was surprising: although baboons have a strong social hierarchy, it doesn’t play much of a role in determining where animals go next. In other words, just because the king-of-the-pack goes north, it doesn’t mean the other baboons follow suit.
4. From Juavinett & Callaway in Current Biology: Here, the authors used intrinsic signal mapping to pinpoint multiple visual areas and then measured how they differed in their ability to represent complex motion. Specifically, they tracked whether individual neurons were sensitive to the pattern motion defined by a plaid created by two overlapping gratings. Similar to classic observations in monkey, there was a transition from primary visual cortex, which mainly reflected the component motion, to secondary areas (especially RL) that were more likely to respond to the pattern motion.
5. From Chen et al in Nature Neuroscience: This paper showed that during learning of a lever-press task, the spines of pyramidal neurons in primary motor cortex change dramatically. Further, they determined that this change was largely mediated by a specific class of interneurons, SOM+ cells, which preferentially target the apical dendrites of pyramidal neurons.
6. From Rohe & Noppeney in PLOS Biology: These authors used fMRI to evaluate how causal inference is performed in humans who must judge whether auditory and visual information bears on the same source. Their main observation is that this occurs hierarchically: in early sensory areas (A1 & V1), activity reflects the assumption that there are two sources of information, whereas in the anterior intraparietal sulcus, activity reflects the assumption that the two signals are from a common source and should be integrated.
7. From Murayama et al in Neuron: Projections from secondary motor cortex feed back to secondary somatosensory cortex to help shape information about texture in mice. This suggests that feedback projections play a key role in shaping sensory experience.
8. From Cooke et al in Nature Neuroscience: This paper coined a new term, “vidget”, which refers to a visually induced fidget. Apparently head-fixed mice are especially prone to these when they experience a novel visual stimulus, even a grating in an orientation they haven’t seen in a while. Using NMDA blockers and PV-ChR2 mice, the authors argue that memory for visual images, as evidenced by vidgets, requires area V1.
9. From M. Siegel et al in Science: Functions, such as knowledge of task context and visual responses, are shared, not compartmentalized, across cortical regions. Here, the authors recorded neurons in 6 cortical areas on a complex decision task and evaluated how representations changed from sensory to parietal to frontal regions. I liked the approach and hope the dataset will be further analyzed. By experimenting with different ways to combine neurons, the authors might learn more about the kinds of computations feasible in each area.
July 1, 2015
The McKnight Foundation has been a big supporter of neuroscience in recent years and holds an annual meeting for recipients of their awards. These include recipients of a Memory and Cognitive Disorders Award, a Technology award and a Scholars Award, which funds early stage investigators like me. This year at the meeting, my third, I was accompanied by a postdoc from my lab, Matt Kaufman, who received a special travel award. The travel awards for postdocs are new this year and are in honor of Dr. Allison J. Doupe, who was on the Board of Directors for the Scholar awards for a number of years. Allison passed away this past year and it was an honor for Matt to attend in her memory.
There were many interesting talks at the meeting. Two scientific highlights for us were:
1. Hearing about recent work from Ben Barres’s lab. He warns that A1 (bad) astrocytes proliferate in aging brains & may play a role in Alzheimer’s disease.
2. Some more comforting news about aging from Elizabeth Kensinger’s lab. She reported on the preserved ability of older adults to remember affective details of memories. In fact, older subjects sometimes outperformed younger subjects on this particular kind of memory.
Finally, a high point for us was presenting a poster (below) with new 2-photon imaging from the lab. This technique is new for us and will allow us to measure the responses of many neurons at the same time.
I am at Janelia Research Campus this week, along with Lital Chartarifsky, a graduate student in my lab. The meeting organizers brought together researchers with highly diverse approaches to the problem of multisensory integration, from invertebrates to rodents to primates. One feature of integration that appears to be common across these species is the ability to use the reliability of incoming inputs to guide the integration, that is, to down-weight noisy signals and up-weight reliable ones. This appears to be widespread, although whether common neural mechanisms support this ability in diverse species is unclear.
An interesting talk on Day 1 came from Vivek Jayaraman’s lab. Vivek described responses in a part of the fly’s brain called the ellipsoid body (shown in the figure). His group measured neural responses in the ellipsoid body as the fly experienced a virtual reality environment in which its movements drove changes in a visual arena that surrounded it. The arena contained a visual bar, and the bar’s position turns out to be key in driving responses in the ellipsoid body. In fact, by decoding the ellipsoid body neural activity, the researchers were able to estimate the fly’s orientation in the visual scene with remarkable precision. Surprisingly, the decode remained accurate for a while even when visual inputs to the fly’s brain were blocked. This last observation points to the ellipsoid body as maintaining an abstract representation of the fly’s orientation, one that is derived from visual input and integrated with self-motion. This work was published just before the meeting in Nature.