I am at Janelia Research Campus this week, along with Lital Chartarifsky, a graduate student in my lab. The meeting organizers brought together researchers with highly diverse approaches to the problem of multisensory integration, from invertebrates to rodents to primates. One feature of integration that appears to be common across these species is the ability to use the reliability of incoming inputs to guide integration: that is, to down-weight noisy signals and up-weight reliable ones. This ability appears to be widespread, although whether common neural mechanisms support it in diverse species is unclear.
An interesting talk on Day 1 came from Vivek Jayaraman’s lab. Vivek described responses in a part of the fly’s brain called the ellipsoid body (shown in the figure). His group measured neural responses in the ellipsoid body as the fly experienced a virtual reality environment in which its movements drove changes in a visual arena that surrounded it. The arena contained a visual bar, and the bar’s position turned out to be key in driving responses in the ellipsoid body. In fact, by decoding ellipsoid body neural activity, the researchers were able to estimate the fly’s orientation in the visual scene with remarkable precision. Surprisingly, the decode remained accurate for a while even when visual inputs to the fly’s brain were blocked. This last observation points to the ellipsoid body as maintaining an abstract representation of visual space, one that is derived from visual input and integrated with self-motion. This work was published just before the meeting in Nature.
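The talk didn’t hinge on any one decoding method, but a simple way to read out orientation from a ring of neurons like this is a population-vector decoder. Here is a minimal sketch; the neuron count, tuning width, and bump shape are my own assumptions for illustration, not values from the study:

```python
import numpy as np

def decode_heading(activity, preferred_dirs):
    """Population-vector estimate of heading.

    activity: firing rates of N ellipsoid-body-like neurons
    preferred_dirs: each neuron's preferred heading (radians)
    Returns the activity-weighted circular mean of preferred headings.
    """
    x = np.sum(activity * np.cos(preferred_dirs))
    y = np.sum(activity * np.sin(preferred_dirs))
    return np.arctan2(y, x)  # decoded heading in (-pi, pi]

# Toy example: a von Mises-shaped bump of activity centered at 90 degrees
prefs = np.linspace(-np.pi, np.pi, 16, endpoint=False)
true_heading = np.pi / 2
rates = np.exp(np.cos(prefs - true_heading) / 0.5)
estimate = decode_heading(rates, prefs)  # ~pi/2, i.e. 90 degrees
```

In the real experiment the decoded estimate tracks a bump of activity as it rotates around the ellipsoid body, persisting even in darkness; the sketch only shows the static readout step.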
In our lab meeting this week, we read a paper by Nuo Li and Karel Svoboda. These guys have been probing the function of a secondary motor area, ALM, for a while now, and have previously implicated it in sensory-guided movements, especially those following a delay. Here, they delved deeper, asking how neurons within ALM drive movements. The work started with a puzzling observation: when you record neurons within ALM, they are a mixed bag in terms of what makes them respond: some respond in advance of movements to the contralateral side and others in advance of movements to the ipsilateral side. Here’s the weird part: when you disrupt this area, contralateral movements are particularly affected. This new result tackles that disconnect.
What these guys found is that the mixed bag of neurons apparent during recording actually comprises two populations, each with its own response properties. Some neurons (the green ones in the figure) project to brainstem nuclei, while others (the purple ones) project to ALM on the other side. Importantly, the former group is distinct: those neurons tend to respond preferentially in advance of contralateral movements, and when you stimulate them specifically, a contralateral bias is observed.
This is a big deal for a few reasons. First, the experiments leveraged really cool tools that made it possible to selectively activate a projection-defined population of neurons. Second, it suggests that the heterogeneity of responses that electrophysiologists observe might be partly explained by the projection target of each recorded neuron. A population might only appear to be heterogeneous because traditional electrophysiology experiments don’t tell the experimenter anything about where neurons are projecting. However… I don’t know whether this will always be the case. When I was a graduate student with Steve Lisberger, I used antidromic stimulation, a classic technique also used in the current paper, to identify extrastriate cortex neurons projecting to the frontal cortex. In my (admittedly small) population, the response properties of projection neurons didn’t differ in any obvious way from the general population. So it may be that projection target can predict some properties of neurons in some areas, but that even a group of neurons with a shared target can nonetheless be very heterogeneous.
A final thought: while discussing this paper in lab meeting, it was fun telling the students and postdocs in my lab about my own experiences identifying projection neurons and using the collision test to demonstrate the direction of the connectivity. It turns out that, back in the day, I actually made a movie of this! The lab claims it was clarifying, so I include it here for educational and amusement purposes.
Causal experiments are appealing. For example, perturbing neural activity in a particular brain area and seeing a change in behavior seems like good evidence that the brain area in question supports the behavior. Experiments that instead just correlate neural activity with behavior are criticized because the relationship between the two could be simply coincidental. But is this really fair? Cosyne workshop organizer Arash Afraz, a postdoc in Jim DiCarlo’s lab at MIT, brought together six of us to argue it out. The lineup included me, Karel Svoboda, Rick Born, Chris Fetsch, Mehrdad Jazayeri and Daniel Yamins.
A number of interesting points were brought up. For instance, one problem with perturbation experiments that is not at first obvious is that they drive neural activity in such an unusual way that they actually expand the space of possible hypotheses rather than restrict it. On the other hand, the fact that neural activity during perturbations is unusual might be a strength: pushing the system into states it doesn’t normally occupy might offer key insights into what the area does.
In the end, we agreed that in some circumstances, assuming correlation implies causation is truly the optimal strategy. Well… okay, only one circumstance, but it’s an important one: the correlation between national rates of chocolate consumption and Nobel Prize frequency (right). There are a lot of alternative explanations for this relationship, but to be safe, we all decided to eat more chocolate anyway (hence the Lindt bars in the photo above).
Report from cosyne workshops: How the brain gets unconfused by the sensory consequences of movements
March 11, 2015
Brains live inside bodies that move around, and this simple fact means a lot of extra work for neural circuits. Imagine you are following a flying bird with your eyes. If your brain needs to know how fast the bird is moving in space, it needs to account for the fact that the image of the bird on your retina is altered by the movements of your eye. A Cosyne Workshop talk by Larry Abbott provides some new insights into the neural circuits that make this possible. He did this work with a number of collaborators, including Anne Kennedy (right) and Nate Sawtell, both known among the Cosyne crowd for their innovative approaches to complex problems.
The model organism the group used to tackle this problem was the electric fish, which detects its dinner (fish and bugs) by sensing the electrical signals they create in the water. The challenge for the fish is that it sends out electrical pulses to accomplish this, altering the electrical signals that its sensory organs experience, just like in the case of vision and the bird above.
The key advance Larry talked about (part of which is published here) is a large-scale realistic model that included 20,000 neurons. These model neurons were constructed based on the response properties of a smaller number of real neurons observed in a laboratory setting. The model solved the problem by constructing a negative image of the outgoing signal and then subtracting it from the detected inputs. A key test for the model is what happens when the animal sends a descending command to emit the pulse but no pulse actually occurs (as if the fish were paralyzed, for instance). In that case, the signal that needs to be subtracted out changes. Once conditions are brought back to normal, a signature of this adjusted signal is evident. This was true in both the model and the fish.
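As a toy illustration of the negative-image idea (nothing like the published 20,000-neuron model), here is a sketch in which a set of command-triggered temporal basis functions learns, via a simple delta rule, to cancel a self-generated signal. The basis shapes, signal, and learning rate are all invented for illustration:

```python
import numpy as np

T = 50          # time bins following each motor command
n_basis = 25    # granule-like inputs carrying the corollary discharge
lr = 0.02       # learning rate

# Each input is a Gaussian bump of activity at a fixed delay after the command
centers = np.linspace(0, T, n_basis)
t = np.arange(T)
basis = np.exp(-0.5 * ((t[:, None] - centers[None, :]) / 3.0) ** 2)

# Self-generated sensory signal that should be cancelled (arbitrary shape)
self_signal = np.sin(t / 5.0) * np.exp(-t / 20.0)

w = np.zeros(n_basis)
for _ in range(1000):
    prediction = basis @ w                # the learned "negative image"
    residual = self_signal - prediction   # what the output cell still sees
    w += lr * basis.T @ residual          # adjust weights to shrink the residual

# After learning, the self-generated signal is largely cancelled.
residual = self_signal - basis @ w
# If the command is issued but no pulse occurs (the "paralyzed" fish),
# the negative image is unmasked: the output is minus the prediction.
unmasked = -(basis @ w)
```

The paralysis test then falls out naturally: with the pulse omitted, the output cell shows roughly the mirror image of the learned signal, which is the kind of signature seen in both the model and the fish.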
An exciting future direction, to my mind, will be to see the degree to which the circuits that accomplish this behavior are similar in humans. Because subtracting out the sensory consequences of movements is fundamental for all organisms, similar strategies could be evident even in very different species.
January 28, 2015
I started this year off with some travel, giving seminars in 3 places. I started off visiting Duke University, where I was hosted by D-CIDES, an interdisciplinary group of decision-making researchers that includes neuroscientists, economists, sociologists, and people from the business school. I met many new people, including Rachel Kranton, well-known for her work on identity, who described her recent research looking at gender effects in decision-making. We found common ground discussing analysis methods for big datasets, a theme that reappeared during a conversation with Kevin LaBar, whose recent paper used a familiar technique, multivariate classifiers, to predict emotional state from multiple kinds of physiological measurement.
At both Brandeis and Pittsburgh, I was the student-invited speaker which is, of course, a great honor! At Brandeis, I spoke with Steve Van Hooser about his recent paper in Neuron (along with Ken Miller and others) about divisive normalization. I also caught up with former Computational Vision TA Marjena Popovic, who promises to tell me soon how responses of neural populations change following experience with particular stimuli.
My trip to Pittsburgh/Carnegie Mellon capped the trifecta of talks. I have to say that Pittsburgh is a really great city: I had a delicious dinner with Bita Moghaddam’s lab where we discussed, among other things, scientific blogging. I found out from Bita that this conversation inspired their lab to start blogging, too. And in fact, their first blog post highlights my visit. I also enjoyed hearing the latest from Byron Yu’s lab, especially the work of his student Ben Cowley, who had extensive knowledge of our newly developed analysis techniques, and even described them as “intuitive”!!
The travel was great, but I am happy to be back home at Cold Spring Harbor, where science is progressing despite a lot of cold weather and a blizzard!
Sensory signals enter the brain at a rapid rate, and they differ greatly in their relevance for the organism at a given moment: the voice of another speaker might contain far more signal than background noise from a barking dog, a television or a ringing phone. A large body of work has tried to understand how the brain filters incoming signals according to their relevance. A new paper in Nature Neuroscience provides some new insights about the role of the thalamic reticular nucleus (TRN).
The TRN has been thought of as a “gatekeeper” of sensory information to the thalamus because it receives inputs from both cortex and thalamus, but only projects to thalamus. Interestingly, TRN neurons express high levels of ErbB4, a receptor tyrosine kinase implicated in mental disorders like schizophrenia. Understanding the role of ErbB4 neurons has become possible recently because, fortuitously, ErbB4-expressing TRN neurons are mostly SOM+ neurons, a well-studied class of inhibitory neurons which can be specifically controlled using cre driver lines.
Sandra Ahrens, Bo Li and colleagues did just that: they bred mice deficient in ErbB4 in SOM+ neurons and tested their behavior. They found that the deficient mice had altered performance on behavioral tasks requiring them to filter out unnecessary information. On one task, animals had to ignore an auditory distractor and attend to a visual cue (Figure, right). The ErbB4-deficient animals were unable to suppress the incoming auditory signal, in keeping with the idea that the TRN plays a role in filtering out irrelevant information. But here’s the really interesting part: on another task, ErbB4 knockout animals actually did better! On this task, animals had to ignore auditory distractors and listen for an auditory target (a warble; Figure, left). The ErbB4-deficient animals were better at ignoring the irrelevant distractors, and could identify the target sound better than their spared littermates.
The two tasks differ in a number of ways, so understanding why performance improved on one and worsened on the other is not straightforward. For instance, the impaired task was what the authors describe as “incongruent”: the auditory signal the mice were told to ignore had previously instructed them to do something else. This was unlike the other task, in which the irrelevant information was more like background noise.
But even if task differences make it reasonable that the behaviors might differ, we are still left wondering what happened to help the animals get better on the auditory-only task. The improvement in deficient animals suggests that in intact animals, an active process limits their ability to suppress distractors. A competing explanation is that in intact animals, maladaptive behavioral strategies limit performance, like paying attention to reward history. Reward history has no relevance for the current trial, but typically has an effect on decisions anyway. In the paper here, though, it is not clear why a maladaptive strategy would affect one behavior more than the other.
In any case, the differing effects on the two tasks are a mystery. The ability to target specific populations of neurons might bring about more such instances in which performance improves. Finding out the reason might require taking into account a number of factors which, together, shape the animal’s behavior.
December 22, 2014
The above photo shows lab members at our holiday skate. It was great to get together and celebrate a year of discovery. Highlights include:
January: Matt Kaufman generating the first images of mouse cortex with our new 2-photon imaging setup
February: Attending, and presenting at, Cosyne
May: Being part of the Symposium on Quantitative Biology at Cold Spring Harbor
July: Hosting undergraduates John Cannon and Nikaela Bryan as summer students in the lab
August: The addition of a new postdoc, Farzaneh Najafi, a new graduate student, Lital Chartarifsky, and two new technicians, Hien Nguyen and Angela Licata
September: Using intrinsic imaging to see multiple visual areas in the mouse brain
November: Traveling to the Society for Neuroscience annual meeting (and Kachi presenting a poster there)
November: Publication of a paper in Nature Neuroscience with co-authors David Raposo, Matt Kaufman and myself. We report insights from a multisensory decision task.
November 10, 2014
A new paper is out from my lab today in Nature Neuroscience. In this paper, we set out to understand how a single part of the brain can support many behaviors. The posterior parietal cortex, for instance, has been implicated in decision-making, value judgments, attention and action selection. We wondered how that could be: are there categories of neurons specialized for each behavior? Or, alternatively, does a single population of neurons multitask to support lots of behaviors? We found the latter to be true. We recorded neurons in rat posterior parietal cortex while rats were making decisions about lights and sounds. We found that neurons could be strongly modulated by the animal’s choice, the modality of the stimulus, or, very often, both of those things. This multitasking did not pose a problem for decoding: a linear combination of responses could easily estimate both choice and modality.
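To make that last decoding point concrete, here is a toy sketch: even when every neuron carries a random mixture of choice and modality signals, a linear readout can recover each variable separately. The tuning model, noise level, and trial counts are invented for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_trials = 60, 400

# Random trial labels: choice (+/-1) and stimulus modality (+/-1)
choice = rng.choice([-1, 1], n_trials)
modality = rng.choice([-1, 1], n_trials)

# "Multitasking" neurons: each one mixes both variables, plus noise
# (a hypothetical toy tuning model)
w_choice = rng.normal(size=n_neurons)
w_modality = rng.normal(size=n_neurons)
rates = (np.outer(w_choice, choice)
         + np.outer(w_modality, modality)
         + 0.5 * rng.normal(size=(n_neurons, n_trials)))

# A linear readout (here, least squares) recovers each variable
X = rates.T  # trials x neurons

def decode_accuracy(labels):
    w, *_ = np.linalg.lstsq(X, labels, rcond=None)
    return np.mean(np.sign(X @ w) == labels)

choice_acc = decode_accuracy(choice)      # near 1.0
modality_acc = decode_accuracy(modality)  # near 1.0
```

The point of the sketch is that mixed selectivity at the single-neuron level is no obstacle to a downstream linear readout; each variable lives along its own direction in population activity space.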
We hope that our observations will change the way people think about how neurons support diverse behaviors, because they challenge the prevailing view that neurons are specialized. Horace Barlow, the grandfather of computational neuroscience, argued that neurons in the frog’s retina were specialized for detecting particular kinds of motion. This is likely true in early visual areas, but in higher cortical areas, things are very different. Our observations about multitasking neurons point to a way of encoding information that, we argue, confers flexibility in how the neurons are used, and could allow their responses to be flexibly combined to support many behaviors. The picture below shows me with co-first authors David Raposo and Matt Kaufman.
H. Read identifies auditory areas by intrinsic imaging & measures responses electrophysiologically to define function
October 31, 2014
Heather Read, from the University of Connecticut, visited my lab this week. She shared her expertise on intrinsic imaging with my lab members and me, and had extensive conversations with Kachi Odeomene and Lital Chartarifsky (shown at left). Heather also gave a seminar describing her recent work measuring neural responses in 3 auditory areas. Heather and her colleagues are interested in how these areas work together to provide the information needed to make sense of auditory inputs. The first thing they do is use intrinsic imaging to map out the three auditory areas (see image below). To do this, they take advantage of the fact that each area has a unique “map” of tonotopic space. But why would the brain need so many maps of the same space? One possibility is that each is specialized for a particular “shape” of sound, that is, the time and frequency modulation pattern that makes a sound unique. Heather reports that three areas, primary auditory cortex, the ventral auditory field and the suprarhinal auditory field, differ in the degree to which they are specialized for representing fast-modulating vs. slow-modulating sounds. This has many parallels to the visual system, where visual areas differ tremendously in the degree to which they reflect fast versus slow modulating inputs.
New insights from Nicole Rust on mixed up perirhinal cortex neurons at the Optical Society vision meeting
October 12, 2014
This weekend I attended the vision meeting of the Optical Society at the University of Pennsylvania in Philadelphia. I was invited to participate in a debate about mice as models for visual function that included, among others, Tony Movshon, a vocal skeptic of mouse models. My role in the debate was to highlight features of rodent behavior that make them well-suited to provide insights about computations that may be conserved across many species. In my lab, we think a lot (A LOT) about behavior, and about how to design paradigms that give us the best shot at uncovering computations that are shared by mice and humans.
In addition to debating the merits of different models, I enjoyed some great talks including one by Nicole Rust, whom you can see here with colleagues discussing her data post-talk. She and lab members have been measuring signals in two cortical structures: inferotemporal cortex (IT), long studied as an object recognition area, and perirhinal cortex (PRH), an association area that gets inputs from IT. PRH is shown below in an image I made from the Allen Brain Connectivity Atlas (it is in a different species, but likely there are some parallels). Each dot on the right image shows an area that projects to PRH, highlighting the area as a good candidate for transforming complex visual signals into a judgment about what to do. Nicole has argued previously that a key difference between IT and PRH is that a linear combination of IT neuron responses cannot predict whether a given stimulus matches a searched-for target, whereas PRH responses can.
Her latest work is informative about what kinds of operations are performed on IT signals so that they arrive in PRH in a more manageable form. The answer is surprisingly simple. She argues that a feedforward architecture that includes IT neurons with variable response latencies is key, and also that the IT neurons have response preferences that are not simply rescaled with time. This accounts for the observed dynamics in PRH pretty well.