January 28, 2015
I started this year off with some travel, giving seminars in 3 places. First, I visited Duke University, where I was hosted by D-CIDES, an interdisciplinary group of decision-making researchers that includes neuroscientists, economists, sociologists, and people from the business school. I met many new people, including Rachel Kranton, well known for her work on identity, who described her recent research on gender effects in decision-making. We found common ground discussing analysis methods for big datasets, a theme that reappeared during a conversation with Kevin LaBar, whose recent paper used a familiar technique, multivariate classifiers, to predict emotional state from multiple kinds of physiological measurements.
At both Brandeis and Pittsburgh, I was the student-invited speaker, which is, of course, a great honor! At Brandeis, I spoke with Steve Van Hooser about his recent paper in Neuron (with Ken Miller and others) about divisive normalization. I also caught up with former Computational Vision TA Marjena Popovic, who promises to tell me soon how responses of neural populations change following experience with particular stimuli.
My trip to Pittsburgh/Carnegie Mellon capped the trifecta of talks. I have to say that Pittsburgh is a really great city: I had a delicious dinner with Bita Moghaddam’s lab where we discussed, among other things, scientific blogging. I found out from Bita that this conversation inspired their lab to start blogging, too. And in fact, their first blog post highlights my visit. I also enjoyed hearing the latest from Byron Yu’s lab, especially the work of his student Ben Cowley, who had extensive knowledge of our newly developed analysis techniques, and even described them as “intuitive”!!
The travel was great, but I am happy to be back home at Cold Spring Harbor, where science is progressing despite a lot of cold weather and a blizzard!
Sensory signals enter the brain at a rapid rate, and they differ greatly in their relevance for the organism at a given moment: the voice of another speaker might contain far more signal than background noise from a barking dog, a television or a ringing phone. A large body of work has tried to understand how the brain achieves this. A new paper in Nature Neuroscience provides some new insights about the role of the thalamic reticular nucleus (TRN).
The TRN has been thought of as a “gatekeeper” of sensory information to the thalamus because it receives inputs from both cortex and thalamus, but only projects to thalamus. Interestingly, TRN neurons express high levels of ErbB4, a receptor tyrosine kinase implicated in mental disorders like schizophrenia. Understanding the role of ErbB4 neurons has become possible recently because, fortuitously, ErbB4-expressing TRN neurons are mostly SOM+ neurons, a well-studied class of inhibitory neurons which can be specifically controlled using cre driver lines.
Sandra Ahrens, Bo Li and colleagues did just that: they bred mice deficient in ErbB4-SOM neurons and tested their behavior. They found that the deficient mice had altered performance on behavioral tasks requiring them to filter out unnecessary information. On one task, animals had to ignore an auditory distractor and attend to a visual cue (Figure, right). The ErbB4-deficient animals were unable to suppress the incoming auditory signal, in keeping with the idea that the TRN plays a role in filtering out irrelevant information. But here’s the really interesting part: on another task, ErbB4 knockout animals actually did better! On this task, animals had to ignore auditory distractors and listen for an auditory target (a warble; Figure, left). The ErbB4-deficient animals were better at ignoring the irrelevant distractors, and could identify the target sound better than their spared littermates.
The two tasks differ in a number of ways, so understanding why one got better and the other got worse is not straightforward. For instance, the impaired task was what the authors describe as “incongruent”: the auditory signal the mice were told to ignore had previously instructed them to do something else. This was unlike the other task in which the irrelevant information was more like background noise.
But even if task differences make it reasonable that the behaviors might differ, we are still left wondering what happened to help the animals get better on the auditory-only distractor task. The improvement in deficient animals suggests that in intact animals, an active process limits their ability to suppress distractors. A competing explanation is that in intact animals, maladaptive behavioral strategies limit performance, like paying attention to reward history. Reward history has no relevance for the current trial, but typically affects decisions anyway. In this paper, though, it is not clear why a maladaptive strategy would affect one behavior more than the other.
In any case, the differing effects on the two tasks are a mystery. The ability to target specific populations of neurons might bring about more such instances in which performance improves. Finding out the reason might require taking into account a number of factors which, together, shape the animal’s behavior.
December 22, 2014
The above photo shows lab members at our holiday skate. It was great to get together and celebrate a year of discovery. Highlights include:
January: Matt Kaufman generating the first images of mouse cortex with our new 2-photon imaging setup
February: Attending, and presenting at, Cosyne
May: Being part of the Symposium on Quantitative Biology at Cold Spring Harbor
July: Hosting undergraduates John Cannon and Nikaela Bryan as summer students in the lab
August: The addition of a new postdoc, Farzaneh Najafi, a new graduate student, Lital Chartarifsky, and two new technicians, Hien Nguyen and Angela Licata
September: Using intrinsic imaging to see multiple visual areas in the mouse brain
November: Traveling to the Society for Neuroscience annual meeting (and Kachi presenting a poster there)
November: Publication of a paper in Nature Neuroscience with co-authors David Raposo, Matt Kaufman and myself. We report insights from a multisensory decision task.
November 10, 2014
A new paper is out from my lab today in Nature Neuroscience. In this paper, we set out to understand how a single part of the brain can be used to support many behaviors. The posterior parietal cortex, for instance, has been implicated in decision-making, value judgments, attention and action selection. We wondered how that could be: are there categories of neurons, each specialized for one behavior? Or, alternatively, does a single population of neurons multitask to support lots of behaviors? We found the latter possibility to be true. We recorded neurons in rat posterior parietal cortex while rats were making decisions about lights and sounds. We found that neurons could be strongly modulated by the animal’s choice, by the modality of the stimulus, or, very often, by both. This multitasking did not pose a problem for decoding: a linear combination of responses could estimate both choice and modality well.
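To get an intuition for why multitasking neurons pose no problem for a linear readout, here is a toy sketch. This is not our actual data or analysis: the population, the mixing weights, and the difference-of-class-means readout are all made-up illustrations of the general idea that choice and modality can each be decoded linearly from a population in which single neurons carry both signals.

```python
import random

random.seed(0)
N_NEURONS, N_TRIALS = 20, 400

# Hypothetical mixed-selectivity population: each neuron's rate depends on
# BOTH the choice (+1/-1) and the stimulus modality (+1/-1), plus noise.
choice_w = [random.gauss(0, 1) for _ in range(N_NEURONS)]
modality_w = [random.gauss(0, 1) for _ in range(N_NEURONS)]

def run_trial():
    choice, modality = random.choice([-1, 1]), random.choice([-1, 1])
    rates = [choice_w[i] * choice + modality_w[i] * modality + random.gauss(0, 1)
             for i in range(N_NEURONS)]
    return rates, {"choice": choice, "modality": modality}

train = [run_trial() for _ in range(N_TRIALS)]
test = [run_trial() for _ in range(N_TRIALS)]

def fit_linear_readout(trials, var):
    # A very simple linear decoder: the difference of the class-mean responses.
    pos = [r for r, lab in trials if lab[var] == 1]
    neg = [r for r, lab in trials if lab[var] == -1]
    return [sum(p[i] for p in pos) / len(pos) - sum(n[i] for n in neg) / len(neg)
            for i in range(N_NEURONS)]

def accuracy(w, trials, var):
    hits = sum((1 if sum(wi * ri for wi, ri in zip(w, r)) >= 0 else -1) == lab[var]
               for r, lab in trials)
    return hits / len(trials)

for var in ("choice", "modality"):
    w = fit_linear_readout(train, var)
    print(var, "decode accuracy:", accuracy(w, test, var))
```

Even though every simulated neuron mixes both variables, each readout recovers its own variable with high accuracy, because the two signals occupy different directions in population space.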
We hope that our observations will change the way people think about how neurons support diverse behaviors because they challenge the prevailing view that neurons are specialized. Horace Barlow, the grandfather of computational neuroscience, argued that neurons in the frog’s retina were specialized for detecting particular kinds of motion. This is likely true in early visual areas, but in higher cortical areas, things are very different. Our observations about multitasking neurons point to a new way of encoding information that, we argue, confers flexibility in how the neurons are used, and could allow their responses to be flexibly combined to support many behaviors. The picture below shows me with co-first authors David Raposo and Matt Kaufman.
H. Read identifies auditory areas by intrinsic imaging & measures responses electrophysiologically to define function
October 31, 2014
Heather Read, from the University of Connecticut, visited my lab this week. She shared her expertise on intrinsic imaging with my lab members and me, and had extensive conversations with Kachi Odeomene and Lital Chartarifsky (shown at left). Heather also gave a seminar describing her recent work measuring neural responses in 3 auditory areas. Heather and her colleagues are interested in how these areas work together to provide the information needed to make sense of auditory inputs. The first thing they do is use intrinsic imaging to map out the three auditory areas (see image below). To do this, they take advantage of the fact that each area has a unique “map” of tonotopic space. But why would the brain need so many maps of the same space? One possibility is that each is specialized for a particular “shape” of sound, that is, the pattern of time and frequency modulation that makes a sound unique. Heather reports that three areas, primary auditory cortex, the ventral auditory field and the suprarhinal auditory field, differ in the degree to which they are specialized for representing fast-modulating vs. slow-modulating sounds. This has many parallels to the visual system, where visual areas differ tremendously in the degree to which they reflect fast versus slow modulating inputs.
New insights from Nicole Rust on mixed up perirhinal cortex neurons at the Optical Society vision meeting
October 12, 2014
This weekend I attended the vision meeting of the Optical Society at the University of Pennsylvania in Philadelphia. I was invited to participate in a debate about mice as models for visual function that included, among others, Tony Movshon, a vocal skeptic of mouse models. My role in the debate was to highlight features of rodent behavior that make them well-suited to providing insights about computations that may be conserved across many species. In my lab, we think a lot (A LOT) about behavior, and about how to design paradigms that give us the best shot at uncovering computations that are shared by mice and humans.
In addition to debating the merits of different models, I enjoyed some great talks, including one by Nicole Rust, whom you can see here with colleagues discussing her data post-talk. She and lab members have been measuring signals in two cortical structures: inferotemporal cortex (IT), long studied as an object recognition area, and perirhinal cortex (PRH), an association area that gets inputs from IT. PRH is shown below in an image I made from the Allen Brain Connectivity Atlas (it is in a different species, but likely there are some parallels). Each dot on the right image shows an area that projects to PRH, highlighting the area as a good candidate for transforming complex visual signals into a judgment about what to do. Nicole has argued previously that a key difference between IT and PRH is that a linear combination of IT neuron responses cannot predict whether a given stimulus matches a searched-for target, whereas PRH neurons can predict this.
Her latest work is informative about what kind of operations are performed on IT neurons so that the signals arrive in PRH in a more manageable form. The answer is surprisingly simple. She argues that a feedforward architecture that includes IT neurons with variable response latencies is key, and also that the IT neurons have response preferences that are not simply rescaled with time. This accounts for the observed dynamics in PRH pretty well.
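As a cartoon of the latency idea (purely illustrative numbers, not Nicole's actual model or data): if a downstream "PRH" unit simply pools "IT" inputs that share a response shape but turn on at staggered latencies, the pooled response ramps gradually, a dynamic that no single input shows on its own.

```python
# Toy feedforward sketch. All latencies and the step-response shape are
# hypothetical, chosen only to illustrate the effect of staggered timing.
def it_response(latency, n_steps=60):
    # A simple input that switches on at the neuron's own latency.
    return [0.0 if t < latency else 1.0 for t in range(n_steps)]

latencies = [5, 10, 15, 20, 25]          # one latency per model IT neuron
it_pop = [it_response(lat) for lat in latencies]

# "PRH" response: an equal-weight feedforward sum of the IT inputs.
# Because the inputs arrive staggered, the sum climbs in steps rather than
# jumping, so the downstream dynamics differ from every individual input.
prh = [sum(neuron[t] for neuron in it_pop) for t in range(60)]
```

In this sketch `prh` rises from 0 to 5 in stages as each input comes online, a crude stand-in for the richer PRH dynamics the feedforward model accounts for.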
Cortex Club, organized by Oxford students and postdocs, invites speakers to present emerging work and engage in lively debate
October 3, 2014
I spoke this week at Oxford University’s Cortex Club, a student/postdoc-led organization that brings in speakers from around the world to speak to a very lively and engaged audience. As a former visiting student at Oxford, it was a special pleasure to come here, and the beautiful architecture is as inspiring as it ever was. The discussion during and after the talk was great, and I was pleased to get some critical feedback on our ideas from Andrew Parker, who has done influential work on motion direction decisions. One question he and his lab members raised was how to think about sensory- vs. motor-driven activity in parietal cortex; this is a key question.
Following the talk, the students and postdocs escorted me to a local pub where we had pints and discussed both my talk and the field in general. One topic of interest was how the legal system is evolving in response to neuroscience data. With structural and functional MRI becoming widely available, unusual brain architecture and dynamics are sometimes argued to underlie criminal behavior.
Finally, the students requested that I sign a guest book that they have been keeping to track all the speakers at the club. It was fun to look back at the messages from my colleagues who have been previous speakers. There were some entertaining and inspiring messages, a few clearly fueled by the lighthearted feeling that evolves after a few pints in this cozy pub. I wrote them a limerick about machine learning, which I won’t repeat here, but if you encounter the club at any point, maybe they will share it with you!
Park et al’s multi-parameter statistical model reveals which simple model can decode decisions from neural activity
September 24, 2014
My lab met this week to discuss a new paper by Park, Meister, Huk and Pillow, recently published in Nature Neuroscience. They leveraged neural data generated via a tried-and-true approach: measuring the responses of neurons in the parietal cortex during a random-dot motion decision task. What’s new here is their analysis. Unlike previous work, which has focused on normative (what the brain SHOULD do) or mechanistic models, these folks took a statistical approach. They said, look, we just want to describe the responses of each neuron, taking into account the inputs on that particular trial. And they wanted to do this on a trial-by-trial basis, no small feat since single-trial spike trains are highly variable.
They did this with success: as you can see in the example (right), the model firing rate (yellow) approximates the true single-trial response (black). But capturing the detailed, time-varying responses of many trials required a model with a lot of parameters. Like, a whole lot. This seemed at first discouraging, but then again, the goal of these models was not to inform us about the nature of neural mechanisms, but instead to figure out which of many incoming signals modulate each neuron, and how much they do so at different moments in time. Once they fit the model, they used it to decode the data, and then looked at the time course of this decode for a whole bunch of neurons. What they realized at this point is cool: the very complex model could be distilled to something much simpler (left) that could still predict the trial-to-trial choices. By integrating the firing rates of a pool of neurons using leaky integrators with two time constants (way simple!), they could predict choice almost as accurately as the full-blown model. The net effect is that the analysis ended up telling us something interesting about how parietal cortex neurons mix multiple inputs, and also how they might be decoded easily.
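To see just how simple "pool, leaky-integrate with two time constants, read out the sign" is, here is a toy sketch. Everything in it is a made-up illustration, not the paper's fitted model: the time constants, rates, and ramping population are invented, and real parietal data are far messier.

```python
import random

random.seed(1)

DT = 0.01            # time step (s); illustrative
TAUS = (0.05, 1.0)   # two leaky-integrator time constants; illustrative values

def leaky_integrate(signal, tau):
    """Euler-stepped leaky integration: y' = (-y + s(t)) / tau."""
    y, out = 0.0, []
    for s in signal:
        y += DT * (-s * 0 - y + s) / tau if False else DT * (-y + s) / tau
        out.append(y)
    return out

# Toy population: neurons whose rates ramp up or down with the hidden
# choice (+1/-1), plus noise; half prefer each choice.
def simulate_trial(choice, n_neurons=10, n_steps=100):
    pop = []
    for i in range(n_neurons):
        pref = 1 if i % 2 == 0 else -1
        rates = [10 + pref * choice * 0.3 * t + random.gauss(0, 2)
                 for t in range(n_steps)]
        pop.append(rates)
    return pop

def decode(pop):
    # Pool the population (weight each neuron by its preference), run the
    # pooled signal through both leaky integrators, and read out the sign
    # of their summed end values.
    n_steps = len(pop[0])
    pooled = [sum((1 if i % 2 == 0 else -1) * rates[t]
                  for i, rates in enumerate(pop))
              for t in range(n_steps)]
    evidence = sum(leaky_integrate(pooled, tau)[-1] for tau in TAUS)
    return 1 if evidence >= 0 else -1
```

Despite having essentially no free parameters beyond the two time constants, this kind of readout recovers the simulated choice reliably, which is the spirit of the distilled model in the paper.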
Workshop on the dynamic brain highlights tools from the Allen Institute that make it easy to visualize neural connectivity
September 4, 2014
I have just returned from a week-long Workshop on the Dynamic Brain, led by Adrienne Fairhall (University of Washington) and Christof Koch (Allen Institute for Brain Science). The course brought in students from around the country including graduate programs at the University of Washington, UCSD and the University of Michigan. Lectures brought students up to speed on emerging approaches for understanding brain function, especially those using tools developed by the Allen Institute. For my part, I gave 2 lectures and then took advantage of the opportunity to learn about the ins and outs of the tools from some of the local experts, most notably Lydia Ng who is my new hero.
I already had some experience with the Allen Brain Connectivity Atlas, but its new developments really blew me away. The atlas is based on systematic injections of AAV across cortical and subcortical structures. One feature I particularly liked is the ability to visualize injections not just based on the injection location, but based on a target location for which the user wants to know all the inputs (this is called a “spatial search”). I did such a search for the posterior parietal region. Because this approach allowed me to see all the areas in which injections led to parietal label, it is kind of equivalent to a retrograde injection into the posterior parietal cortex. As the image below shows, there is clear label in visual areas (the blue dots at the posterior region of the gray brain, one of which is shown in more detail at the right). Injections into secondary motor and orbital areas (the green dots at the anterior region) likewise labeled projections to the posterior parietal area. Being able to easily visualize many injections from many different vantage points gives a much clearer picture of the overall connectivity, and the tools are really fun to play around with.
August 10, 2014
Leila Elabbady (Wellesley College) worked in Josh Dubnau’s lab, where they are using fruit flies to understand neurodegeneration. An emerging hypothesis is that increased transposon activity may play a role in neurodegeneration. Transposons are repetitive strands of DNA that can copy themselves and insert themselves elsewhere in the genome. They were first discovered in corn, here at Cold Spring Harbor, by Barbara McClintock, who later won the Nobel Prize for her work. Transposons can be dangerous: they can alter the transcription of other, potentially important genes. Leila’s work this summer focused on TDP-43, a DNA/RNA-binding protein that might keep transposon activity in check. She tested whether manipulating the fly homolog of TDP-43 affected transposon activity in flies. A key part of this approach, and also what makes it challenging, is that Leila monitored transposon activity at multiple stages in the flies’ lives. Identifying how TDP-43 affects this progression will be key for testing the hypothesis about its role in neurodegeneration.
Nikaela Bryan (University of Maryland, Baltimore County) also worked in my lab. She was likewise interested in the timing of decision formation in the cortex, but wanted to get at the issue by manipulating inhibitory interneurons. These neurons are plentiful in the cortex, and the subtype she was interested in, parvalbumin-positive (PV) interneurons, strongly innervates excitatory pyramidal cells. Upregulating PV neurons can therefore shut down the ability of the cortex to communicate information to downstream areas, making them a powerful tool. Nikaela also thought deeply about training procedures and how to tweak them to get the best performance possible.