Causal experiments are appealing. For example, perturbing neural activity in a particular brain area and seeing a change in behavior seems like good evidence that the brain area in question supports the behavior. Experiments that instead just correlate neural activity with behavior are criticized because the relationship between the two could be simply coincidental. But is this really fair? Cosyne workshop organizer Arash Afraz, a postdoc in Jim DiCarlo’s lab at MIT, brought together six of us to argue it out. The lineup included me, Karel Svoboda, Rick Born, Chris Fetsch, Mehrdad Jazayeri and Daniel Yamins.

A number of interesting points were brought up. For instance, one problem with perturbation experiments that is not at first obvious is that they drive neural activity in such an unusual way that they actually expand the space of possible hypotheses rather than restrict it. On the other hand, the fact that neural activity during perturbations is unusual might be a strength: pushing the system into states it doesn’t normally occupy might offer key insights into what the area does.

In the end, we agreed that in some circumstances, assuming correlation implies causation is truly the optimal strategy. Well… okay, only one circumstance, but it’s an important one: the correlation between national rates of chocolate consumption and Nobel Prize frequency (right). There are a lot of alternative explanations for this relationship, but to be safe, we all decided to eat more chocolate anyway (hence the Lindt bars in the photo above).

 

Brains live inside bodies that move around, and this simple fact means a lot of extra work for neural circuits. Imagine you are following a flying bird with your eyes. If your brain needs to know how fast the bird is moving in space, it needs to account for the fact that the image of the bird on your retina is altered by the movements of your eye. A Cosyne workshop talk by Larry Abbott provided some new insights into the neural circuits that make this possible. He did this work with a number of collaborators, including Anne Kennedy (right) and Nate Sawtell, both known among the Cosyne crowd for their innovative approaches to complex problems.

The model organism the group used to tackle this problem was the electric fish, which detects its dinner (fish and bugs) by sensing the electrical signals they create in the water. The challenge for the fish is that it sends out electrical pulses to accomplish this, altering the signals that its own sensory organs experience, just as eye movements alter the image of the bird in the example above.

The key advance Larry talked about (part of which is published here) is a large-scale realistic model that includes 20,000 neurons, constructed based on the response properties of a smaller number of real neurons recorded in the laboratory. The model solves the problem by constructing a negative image of the outgoing pulse and subtracting it from the detected inputs. A key test for the model is what happens when the animal sends the descending command to emit a pulse but no pulse is actually emitted (as if the fish were paralyzed, for instance). Under these conditions, the signal that needs to be subtracted out changes. Once conditions are brought back to normal, a signature of this adjusted signal is evident. This was true in both the model and the fish.
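
The core cancellation computation is simple enough to sketch. Here is a toy Python version of the idea (my own simplification, not the published 20,000-neuron model): a bank of "granule cell" basis functions carrying the corollary discharge drives a learned negative image, and an anti-Hebbian rule adjusts the weights until the predictable, self-generated signal is canceled. All names and parameter values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_basis, n_t = 20, 100

# Corollary-discharge basis: each "granule cell" is active in its own time
# window following the motor command that emits the electric pulse.
t = np.arange(n_t)
centers = np.linspace(0, n_t, n_basis, endpoint=False)
basis = np.exp(-0.5 * ((t[None, :] - centers[:, None]) / 5.0) ** 2)

self_generated = np.sin(t / 10.0)   # predictable sensory consequence of the pulse
w = np.zeros(n_basis)               # weights that build the negative image
lr = 0.5

for trial in range(200):
    external = 0.3 * rng.normal(size=n_t)   # prey signals plus noise
    sensed = self_generated + external      # what the electrosensory organs report
    output = sensed + basis.T @ w           # cancellation stage adds the negative image
    # Anti-Hebbian update: weights move opposite to the correlation between
    # granule-cell activity and the residual output.
    w -= lr * (basis @ output) / n_t

residual = self_generated + basis.T @ w
print("self-generated power before learning:", round(float(np.mean(self_generated ** 2)), 3))
print("self-generated power after learning: ", round(float(np.mean(residual ** 2)), 3))
```

Because the weights can only cancel what is predictable from the motor command, external signals survive the subtraction, which is exactly the point of the circuit.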

An exciting future direction, to my mind, will be to see the degree to which the circuits that accomplish this computation are similar in humans. Because subtracting out the sensory consequences of movement is fundamental for all organisms, similar strategies could be evident even in very different species.

 

 

 

I started this year off with some travel, giving seminars in three places. First I visited Duke University, where I was hosted by D-CIDES, an interdisciplinary group of decision-making researchers that includes neuroscientists, economists, sociologists and people from the business school. I met many new people, including Rachel Kranton, well-known for her work on identity, who described her recent research on gender effects in decision-making. We found common ground discussing analysis methods for big datasets, a theme that reappeared during a conversation with Kevin LaBar, whose recent paper used a familiar technique, multivariate classifiers, to predict emotional state from multiple kinds of physiological measurement.

At both Brandeis and Pittsburgh, I was the student-invited speaker, which is, of course, a great honor! At Brandeis, I spoke with Steve Van Hooser about his recent paper in Neuron (along with Ken Miller and others) on divisive normalization. I also caught up with former Computational Vision TA Marjena Popovic, who promises to tell me soon how responses of neural populations change following experience with particular stimuli.

My trip to Pittsburgh/Carnegie Mellon capped the trifecta of talks. I have to say that Pittsburgh is a really great city: I had a delicious dinner with Bita Moghaddam’s lab where we discussed, among other things, scientific blogging. I found out from Bita that this conversation inspired their lab to start blogging, too; in fact, their first blog post highlights my visit. I also enjoyed hearing the latest from Byron Yu’s lab, especially the work of his student Ben Cowley, who had extensive knowledge of our newly developed analysis techniques, and even described them as “intuitive”!!


The travel was great, but I am happy to be back home at Cold Spring Harbor, where science is progressing despite a lot of cold weather and a blizzard!

Sensory signals enter the brain at a rapid rate, and they differ greatly in their relevance for the organism at a given moment: the voice of another speaker might carry far more signal than background noise from a barking dog, a television or a ringing phone. A large body of work has tried to understand how the brain filters inputs according to their relevance. A new paper in Nature Neuroscience provides some new insights about the role of the thalamic reticular nucleus (TRN).

The TRN has been thought of as a “gatekeeper” of sensory information to the thalamus because it receives inputs from both cortex and thalamus, but projects only to thalamus. Interestingly, TRN neurons express high levels of ErbB4, a receptor tyrosine kinase implicated in mental disorders like schizophrenia. Understanding the role of ErbB4 neurons has become possible recently because, fortuitously, ErbB4-expressing TRN neurons are mostly SOM+ neurons, a well-studied class of inhibitory neurons that can be specifically controlled using Cre driver lines.

Sandra Ahrens, Bo Li and colleagues did just that: they bred mice lacking ErbB4 in SOM+ neurons and tested their behavior. They found that the deficient mice performed differently on behavioral tasks requiring them to filter out unnecessary information. On one task, animals had to ignore an auditory distractor and attend to a visual cue (Figure, right). The ErbB4-deficient animals were unable to suppress the incoming auditory signal, in keeping with the idea that the TRN plays a role in filtering out irrelevant information. But here’s the really interesting part: on another task, the ErbB4 knockout animals actually did better! On this task, animals had to ignore auditory distractors and listen for an auditory target (a warble; Figure, left). The ErbB4-deficient animals were better at ignoring the irrelevant distractors, and could identify the target sound better than their spared littermates.

The two tasks differ in a number of ways, so understanding why performance improved on one and worsened on the other is not straightforward. For instance, the impaired task was what the authors describe as “incongruent”: the auditory signal the mice had to ignore had previously instructed them to do something else. This was unlike the other task, in which the irrelevant information was more like background noise.

But even if task differences make it reasonable that the behaviors might differ, we are still left wondering what happened to help the animals get better on the auditory-only distractor task. The improvement in deficient animals suggests that in intact animals, an active process limits the ability to suppress distractors. A competing explanation is that in intact animals, maladaptive behavioral strategies limit performance, such as paying attention to reward history. Reward history has no relevance for the current trial, but typically has an effect on decisions anyway. Here, though, it is not clear why a maladaptive strategy would affect one behavior more than the other.

In any case, the differing effects on the two tasks are a mystery. The ability to target specific populations of neurons might bring about more such instances in which performance improves. Finding out the reason might require taking into account a number of factors which, together, shape the animal’s behavior.

 

 


[Photo: lab members at our holiday skate]

The above photo shows lab members at our holiday skate. It was great to get together and celebrate a year of discovery. Highlights include:

January: Matt Kaufman generating the first images of mouse cortex with our new 2-photon imaging setup

February: Attending, and presenting at, Cosyne

May: Being part of the Symposium on Quantitative Biology at Cold Spring Harbor

July: Hosting undergraduates John Cannon and Nikaela Bryan as summer students in the lab

August: The addition of a new postdoc, Farzaneh Najafi, a new graduate student, Lital Chartarifsky, and two new technicians, Hien Nguyen and Angela Licata

 

September: Using intrinsic imaging to see multiple visual areas in the mouse brain

October: Seeing David Raposo and Kachi Odeomene lead the adoption of a new technology for behavioral data collection (alongside Josh Sanders)

November: Traveling to the Society for Neuroscience annual meeting (and Kachi presenting a poster there)

November: Publication of a paper in Nature Neuroscience with co-authors David Raposo, Matt Kaufman and myself. We report insights from a multisensory decision task.

A new paper is out from my lab today in Nature Neuroscience. In this paper, we set out to understand how a single part of the brain can support many behaviors. The posterior parietal cortex, for instance, has been implicated in decision-making, value judgments, attention and action selection. We wondered how that could be: are there categories of neurons specialized for each behavior? Or, alternatively, does a single population of neurons multitask to support many behaviors? We found the latter to be true. We recorded neurons in rat posterior parietal cortex while rats were making decisions about lights and sounds. Neurons could be strongly modulated by the animal’s choice, by the modality of the stimulus, or, very often, by both. This multitasking did not pose a problem for decoding: a linear combination of responses could estimate both choice and modality well.
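
To illustrate the decoding claim (a toy illustration, not the analysis from the paper), the Python sketch below simulates a population in which every neuron carries a random mixture of choice and modality signals, then shows that a simple linear readout recovers each variable. The simulation parameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_trials = 200, 1000

# Task variables, coded +/-1: choice (left/right), modality (visual/auditory).
choice = rng.choice([-1.0, 1.0], size=n_trials)
modality = rng.choice([-1.0, 1.0], size=n_trials)

# Mixed selectivity: every neuron gets random weights on BOTH variables,
# rather than belonging to a dedicated "choice" or "modality" category.
w_choice = rng.normal(size=n_neurons)
w_modality = rng.normal(size=n_neurons)
rates = (np.outer(choice, w_choice)
         + np.outer(modality, w_modality)
         + rng.normal(scale=2.0, size=(n_trials, n_neurons)))

def linear_decode_accuracy(X, y):
    """Fit a linear readout by least squares and report sign accuracy."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.mean(np.sign(X @ w) == y)

print(f"choice decoding:   {linear_decode_accuracy(rates, choice):.2f}")
print(f"modality decoding: {linear_decode_accuracy(rates, modality):.2f}")
```

Because the two variables are mixed linearly at the single-neuron level, a downstream area can read out either one with its own set of weights; nothing about the mixing forces a tradeoff.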

We hope that our observations will change the way people think about how neurons support diverse behaviors, because they challenge the prevailing view that neurons are specialized. Horace Barlow (the grandfather of computational neuroscience) argued that neurons in the frog’s retina were specialized for detecting particular kinds of motion. This is likely true in early visual areas, but in higher cortical areas, things are very different. Our observations about multitasking neurons point to a new way of encoding information that, we argue, confers flexibility in how the neurons are used, and could allow their responses to be combined to support many behaviors. The picture below shows me with co-first authors David Raposo and Matt Kaufman.

Heather Read, from the University of Connecticut, visited my lab this week. She shared her expertise on intrinsic imaging with my lab members and me, and had extensive conversations with Kachi Odeomene and Lital Chartarifsky (shown at left). Heather also gave a seminar describing her recent work measuring neural responses in three auditory areas. Heather and her colleagues are interested in how these areas work together to provide the information needed to make sense of auditory inputs. The first step is to use intrinsic imaging to map out the three auditory areas (see image below). To do this, they take advantage of the fact that each area has a unique “map” of tonotopic space. But why would the brain need so many maps of the same space? One possibility is that each is specialized for a particular “shape” of sound, that is, the pattern of time and frequency modulation that makes a sound unique. Heather reports that three areas, primary auditory cortex, the ventral auditory field and the suprarhinal auditory field, differ in the degree to which they are specialized for representing fast-modulating versus slow-modulating sounds. This has many parallels to the visual system, where areas differ tremendously in the degree to which they reflect fast versus slow modulating inputs.
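
The mapping step is easy to caricature in code. The sketch below (a toy example with fabricated data, not Heather's actual pipeline) assigns each pixel of an intrinsic-imaging stack its "best frequency", the tone that evokes the strongest response, which is the basic move for revealing a tonotopic gradient.

```python
import numpy as np

rng = np.random.default_rng(1)
freqs_khz = np.array([2, 4, 8, 16, 32])   # tone frequencies presented
h, w = 64, 64                             # image size in pixels

# Fake data: responses[f, y, x] = mean intrinsic signal change for tone f.
# We build in a left-to-right tonotopic gradient plus noise.
pref = np.linspace(0, len(freqs_khz) - 1, w)   # preferred-frequency index per column
responses = np.stack([
    np.exp(-0.5 * ((pref[None, :] - i) / 0.8) ** 2)
    + 0.1 * rng.normal(size=(h, w))
    for i in range(len(freqs_khz))
])

# Best-frequency map: for each pixel, the tone giving the largest response.
best_freq_map = freqs_khz[np.argmax(responses, axis=0)]
print(best_freq_map[0, ::8])   # low frequencies on the left, high on the right
```

In real data, each cortical field produces its own gradient in such a map, which is what lets the fields be delineated.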

This weekend I attended the vision meeting of the Optical Society at the University of Pennsylvania in Philadelphia. I was invited to participate in a debate about mice as models for visual function that included, among others, Tony Movshon, a vocal skeptic of mouse models. My role in the debate was to highlight features of rodent behavior that make them well-suited to provide insights about computations that may be conserved across many species. In my lab, we think a lot (A LOT) about behavior, and about how to design paradigms that give us the best shot at uncovering computations shared by mice and humans.

In addition to debating the merits of different models, I enjoyed some great talks, including one by Nicole Rust, whom you can see here with colleagues discussing her data post-talk. She and her lab members have been measuring signals in two cortical structures: inferotemporal cortex (IT), long studied as an object recognition area, and perirhinal cortex (PRH), an association area that gets inputs from IT. PRH is shown below in an image I made from the Allen Brain Connectivity Atlas (it is from a different species, but likely there are some parallels). Each dot on the right image shows an area that projects to PRH, highlighting the area as a good candidate for transforming complex visual signals into a judgment about what to do. Nicole has argued previously that a key difference between IT and PRH is that a linear combination of IT neuron responses cannot predict whether a given stimulus matches a searched-for target, whereas a linear combination of PRH responses can.

[Image: inputs to PRH, mapped with the Allen Brain Connectivity Atlas]

Her latest work is informative about what kind of operations are performed on IT signals so that they arrive in PRH in a more manageable form. The answer is surprisingly simple. She argues that a feedforward architecture including IT neurons with variable response latencies is key, along with IT response preferences that are not simply rescaled with time. This accounts for the observed dynamics in PRH pretty well.
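
Here is a minimal sketch of that flavor of model (my own toy reconstruction under stated assumptions, not the published implementation): PRH units are fixed weighted sums of IT units, but because each IT unit's response arrives with its own latency, the PRH population response evolves over time even though the wiring is purely feedforward.

```python
import numpy as np

rng = np.random.default_rng(2)
n_it, n_prh, n_t = 50, 10, 100   # IT units, PRH units, time bins

# Each IT unit has its own response latency and its own stimulus preference.
latencies = rng.integers(5, 40, size=n_it)
amplitudes = rng.normal(size=n_it)

# IT responses: a simple step response beginning at each unit's latency.
it = np.zeros((n_it, n_t))
for i, (lat, amp) in enumerate(zip(latencies, amplitudes)):
    it[i, lat:] = amp

# Feedforward projection: PRH units are fixed weighted sums of IT units.
w = rng.normal(size=(n_prh, n_it)) / np.sqrt(n_it)
prh = w @ it

# Even with static weights, staggered latencies give PRH dynamics.
print(np.round(prh[0, ::20], 2))
```

The design point is that no recurrence or feedback is needed: temporal structure in the inputs alone is enough to produce evolving downstream responses.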

I spoke this week at Oxford University’s Cortex Club, a student- and postdoc-led organization that brings in speakers from around the world to address a very lively and engaged audience. As a former visiting student at Oxford, it was a special pleasure to return, and the beautiful architecture is as inspiring as it ever was. The discussion during and after the talk was great, and I was pleased to get some critical feedback on our ideas from Andrew Parker, who has done influential work on motion-direction decisions. One question he and his lab members raised was how to think about sensory versus motor-driven activity in parietal cortex; this is a key question.

Following the talk, the students and postdocs escorted me to a local pub where we had pints and discussed both my talk and the field in general. One topic of interest was how the legal system is evolving in response to neuroscience data. With structural and functional MRI becoming widely available, unusual brain architecture and dynamics are sometimes argued to underlie criminal behavior.

Finally, the students requested that I sign a guest book they have been keeping to track all the speakers at the club. It was fun to look back at the messages from colleagues who were previous speakers. There were some entertaining and inspiring messages, a few clearly fueled by the lighthearted feeling that evolves after a few pints in this cozy pub. I wrote them a limerick about machine learning, which I won’t repeat here, but if you encounter the club at any point, maybe they will share it with you!

My lab met this week to discuss a new paper by Park, Meister, Huk and Pillow, recently published in Nature Neuroscience. They leveraged neural data generated via a tried-and-true approach: measuring the responses of neurons in the parietal cortex during a random-dot motion decision task. What’s new here is their analysis. Unlike previous work, which has focused on normative models (what the brain SHOULD do) or mechanistic models, these folks took a statistical approach. They said, look, we just want to describe the responses of each neuron, taking into account the inputs on that particular trial. And they wanted to do this on a trial-by-trial basis, no small feat since single-trial spike trains are highly variable.

They did this with success: as you can see in the example (right), the model firing rate (yellow) approximates the true single-trial response (black). But capturing the detailed, time-varying responses over many trials required a model with a lot of parameters. Like, a whole lot. This seemed at first discouraging, but then again, the goal of these models was not to inform us about the nature of neural mechanisms, but instead to figure out which of many incoming signals modulate each neuron, and how much they do so at different moments in time. Once they fit the model, they used it to decode the data, and then looked at the time course of this decode for a whole bunch of neurons. What they realized at this point is cool: the very complex model could be distilled to something much simpler (left) that could still predict the trial-to-trial choices. By integrating the firing rates of a pool of neurons using leaky integrators with two time constants (way simple!), they could predict choice almost as accurately as the full-blown model. The net effect is that the analysis ended up telling us something interesting about how parietal cortex neurons mix multiple inputs, and also how they might be decoded easily.
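
To make the two-time-constant idea concrete, here is a hedged Python sketch (my reconstruction of the flavor of the result, with fabricated data and invented parameter values, not the paper's fitted model): pooled firing rates pass through a fast and a slow leaky integrator, and a linear readout of the two integrator outputs at the end of the trial predicts choice.

```python
import numpy as np

rng = np.random.default_rng(4)
n_trials, n_t, dt = 500, 100, 0.01
tau_fast, tau_slow = 0.05, 0.5          # two time constants, in seconds

# Fake pooled population rate: a choice-dependent ramp plus noise.
choice = rng.choice([-1.0, 1.0], size=n_trials)
ramp = np.linspace(0, 1, n_t)
rates = choice[:, None] * ramp[None, :] + rng.normal(scale=1.0, size=(n_trials, n_t))

def leaky_integrate(x, tau):
    """Discrete-time leaky integrator: dv/dt = (-v + x) / tau."""
    v = np.zeros(x.shape[0])
    for k in range(x.shape[1]):
        v += dt * (-v + x[:, k]) / tau
    return v

fast = leaky_integrate(rates, tau_fast)
slow = leaky_integrate(rates, tau_slow)

# Linear readout of the two integrator outputs at the end of the trial.
X = np.column_stack([fast, slow])
w, *_ = np.linalg.lstsq(X, choice, rcond=None)
accuracy = np.mean(np.sign(X @ w) == choice)
print(f"choice prediction accuracy: {accuracy:.2f}")
```

The appeal of this kind of reduction is that a downstream circuit could implement it with nothing more exotic than two leak rates and a weighted sum.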
