
The above photo shows lab members at our holiday skate. It was great to get together and celebrate a year of discovery. Highlights include:

January: Matt Kaufman generating the first images of mouse cortex with our new 2-photon imaging setup

February: Attending, and presenting at, Cosyne

May: Being part of the Symposium on Quantitative Biology at Cold Spring Harbor

July: Hosting undergraduates John Cannon and Nikaela Bryan as summer students in the lab

August: The addition of a new postdoc, Farzaneh Najafi, a new graduate student, Lital Chartarifsky, and two new technicians, Hien Nguyen and Angela Licata

 

September: Using intrinsic imaging to see multiple visual areas in the mouse brain

October: Seeing David Raposo and Kachi Odeomene lead the adoption of a new technology for behavioral data acquisition (alongside Josh Sanders)

November: Traveling to the Society for Neuroscience annual meeting (and Kachi presenting a poster there)

November: Publication of a paper in Nature Neuroscience with my co-authors David Raposo and Matt Kaufman. We report insights from a multisensory decision task.

A new paper is out from my lab today in Nature Neuroscience. In this paper, we set out to understand how a single part of the brain can support many behaviors. The posterior parietal cortex, for instance, has been implicated in decision-making, value judgments, attention and action selection. We wondered how that could be: are there categories of neurons, each specialized for a particular behavior? Or, alternatively, does a single population of neurons multitask to support many behaviors? We found the latter to be true. We recorded neurons in rat posterior parietal cortex while rats were making decisions about lights and sounds. We found that neurons could be strongly modulated by the animal’s choice, the modality of the stimulus, or, very often, both. This multitasking did not pose a problem for decoding: a linear combination of responses could estimate both choice and modality well.
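The decoding claim is easy to illustrate in simulation. Below is a minimal sketch (not our actual analysis code): each simulated neuron mixes choice and modality signals with random weights, i.e. it "multitasks", and yet a simple least-squares linear readout recovers both variables. All numbers here are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_trials = 50, 2000

# Task variables on each trial, coded as -1 / +1.
choice = rng.integers(0, 2, n_trials) * 2 - 1
modality = rng.integers(0, 2, n_trials) * 2 - 1

# Each neuron carries a random mixture of the choice and modality
# signals ("multitasking"), rather than being dedicated to one variable.
w_choice = rng.normal(size=n_neurons)
w_modality = rng.normal(size=n_neurons)
rates = (np.outer(choice, w_choice)
         + np.outer(modality, w_modality)
         + rng.normal(scale=2.0, size=(n_trials, n_neurons)))

# Linear readout: least-squares weights for each variable, fit on a
# training set and evaluated on held-out trials.
train, test = slice(0, 1500), slice(1500, None)
beta_c, *_ = np.linalg.lstsq(rates[train], choice[train], rcond=None)
beta_m, *_ = np.linalg.lstsq(rates[train], modality[train], rcond=None)

acc_c = np.mean(np.sign(rates[test] @ beta_c) == choice[test])
acc_m = np.mean(np.sign(rates[test] @ beta_m) == modality[test])
print(f"choice decode accuracy:   {acc_c:.2f}")
print(f"modality decode accuracy: {acc_m:.2f}")
```

The point of the toy example is that mixing at the single-neuron level does not prevent clean separation at the population level: two different linear combinations of the same responses read out the two variables.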

We hope that our observations will change the way people think about how neurons support diverse behaviors, because they challenge the prevailing view that neurons are specialized. Horace Barlow, a grandfather of computational neuroscience, argued that neurons in the frog’s retina were specialized for detecting particular kinds of motion. This is likely true in early visual areas, but in higher cortical areas, things are very different. Our observations about multitasking neurons point to a new way of encoding information that, we argue, confers flexibility in how the neurons are used, and could allow their responses to be flexibly combined to support many behaviors. The picture below shows me with co-first authors David Raposo and Matt Kaufman.

Heather Read, from the University of Connecticut, visited my lab this week. She shared her expertise on intrinsic imaging with my lab members and me, and had extensive conversations with Kachi Odeomene and Lital Chartarifsky (shown at left). Heather also gave a seminar describing her recent work measuring neural responses in three auditory areas. Heather and her colleagues are interested in how these areas work together to provide the information about auditory inputs needed to make sense of them. The first thing they do is use intrinsic imaging to map out the three auditory areas (see image below). To do this, they take advantage of the fact that each area has a unique “map” of tonotopic space. But why would the brain need so many maps of the same space? One possibility is that each is specialized for a particular “shape” of sound: that is, the time and frequency modulation pattern that makes a sound unique. Heather reports that three areas (primary auditory cortex, the ventral auditory field and the suprarhinal auditory field) differ in the degree to which they are specialized for representing fast-modulating vs. slow-modulating sounds. This has many parallels to the visual system, where visual areas differ tremendously in the degree to which they reflect fast versus slow modulating inputs.

This weekend I attended the vision meeting of the Optical Society at the University of Pennsylvania in Philadelphia. I was invited to participate in a debate about mice as models for visual function that included, among others, Tony Movshon, a vocal skeptic of mouse models. My role in the debate was to highlight features of rodent behavior that make them well-suited to provide insights about computations that may be conserved across many species. In my lab, we think a lot (A LOT) about behavior, and about how to design paradigms that give us the best shot at uncovering computations that are shared by mice and humans.

In addition to debating the merits of different models, I enjoyed some great talks, including one by Nicole Rust, whom you can see here with colleagues discussing her data post-talk. She and her lab members have been measuring signals in two cortical structures: inferotemporal cortex (IT), long studied as an object recognition area, and perirhinal cortex (PRH), an association area that gets inputs from IT. PRH is shown below in an image I made from the Allen Brain Connectivity Atlas (it is in a different species, but likely there are some parallels). Each dot on the right image shows an area that projects to PRH, highlighting the area as a good candidate for transforming complex visual signals into a judgment about what to do. Nicole has argued previously that a key difference between IT and PRH is that a linear combination of IT neuron responses cannot predict whether a given stimulus matches a searched-for target, whereas PRH responses can.


Her latest work is informative about what kinds of operations are performed on IT signals so that they arrive in PRH in a more manageable form. The answer is surprisingly simple. She argues that a feedforward architecture that includes IT neurons with variable response latencies is key, and also that the IT neurons have response preferences that are not simply rescaled with time. This accounts for the observed dynamics in PRH pretty well.

I spoke this week at Oxford University’s Cortex Club, a student/postdoc-led organization that brings in speakers from around the world to address a very lively and engaged audience. As a former visiting student at Oxford, it was a special pleasure to come here, and the beautiful architecture is as inspiring as it ever was. The discussion during and after the talk was great, and I was pleased to get some critical feedback on our ideas from Andrew Parker, who has done influential work on motion direction decisions. One question he and his lab members raised was how to think about sensory- vs. motor-driven activity in parietal cortex; this is a key question.

Following the talk, the students and postdocs escorted me to a local pub where we had pints and discussed both my talk and the field in general. One topic of interest was how the legal system is evolving in response to neuroscience data. With structural and functional MRI becoming widely available, unusual brain architecture and dynamics are sometimes argued to underlie criminal behavior.

Finally, the students requested that I sign a guest book that they have been keeping to track all the speakers at the club. It was fun to look back at the messages from my colleagues who have been previous speakers. There were some entertaining and inspiring messages, a few clearly fueled by the lighthearted feeling that evolves after a few pints in this cozy pub. I wrote them a limerick about machine learning, which I won’t repeat here, but if you encounter the club at any point, maybe they will share it with you!

My lab met this week to discuss a new paper by Park, Meister, Huk and Pillow, recently published in Nature Neuroscience. They leveraged neural data generated via a tried-and-true approach: measuring the responses of neurons in the parietal cortex during a random dot motion decision task. What’s new here is their analysis. Unlike previous work, which has focused on normative (what the brain SHOULD do) or mechanistic models, these folks took a statistical approach. They said, look, we just want to describe the responses of each neuron, taking into account the inputs on that particular trial. And they wanted to do this on a trial-by-trial basis, no small feat since single-trial spike trains are highly variable.

They did this with success: as you can see in the example (right), the model firing rate (yellow) approximates the true single-trial response (black). But capturing the detailed, time-varying responses of many trials required a model with a lot of parameters. Like, a whole lot. This seemed at first discouraging, but then again, the goal of these models was not to inform us about the nature of neural mechanisms, but instead to figure out which of many incoming signals modulate each neuron, and how much they do so at different moments in time. Once they fit the model, they used it to decode the data, and then looked at the time course of this decode for a whole bunch of neurons. What they realized at this point is cool: the very complex model could be distilled to something much simpler (left) that could still predict the trial-to-trial choices. By integrating the firing rates of a pool of neurons using leaky integrators with two time constants (way simple!), they could predict choice almost as accurately as the full-blown model. The net effect is that the analysis ended up telling us something interesting about how parietal cortex neurons mix multiple inputs, and also how they might be decoded easily.
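To make the distilled readout concrete, here is a toy sketch of the two-time-constant idea. This is not the authors’ fitted model: the pooled firing rate, the time constants and the readout weights below are all invented for illustration. But it shows the flavor of the result: two leaky integrators applied to a noisy, choice-dependent rate suffice to predict choice on single trials.

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 0.01
t = np.arange(0, 1, dt)  # one second of "trial" time

def leaky_integrate(rate, tau):
    """Euler integration of dx/dt = (-x + rate) / tau, starting from 0."""
    x = np.zeros_like(rate)
    for i in range(1, rate.size):
        x[i] = x[i - 1] + dt * (-x[i - 1] + rate[i - 1]) / tau
    return x

# Simulated pooled, baseline-subtracted firing rate: a choice-dependent
# ramp plus noise (a stand-in for the recorded population responses).
n_trials, correct = 200, 0
for _ in range(n_trials):
    c = rng.choice([-1.0, 1.0])  # the animal's choice on this trial
    rate = 5.0 * c * t + rng.normal(scale=3.0, size=t.size)
    fast = leaky_integrate(rate, tau=0.05)  # tracks the current rate
    slow = leaky_integrate(rate, tau=0.5)   # accumulates evidence
    # Hypothetical readout: a weighted sum of the two integrators'
    # endpoints, thresholded at zero.
    decision = np.sign(0.3 * fast[-1] + slow[-1])
    correct += decision == c

accuracy = correct / n_trials
print(f"predicted-choice accuracy: {accuracy:.2f}")
```

The slow integrator does most of the accumulating here; the fast one mainly tracks the instantaneous rate, which is roughly the division of labor the two time constants afford.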

I have just returned from a week-long Workshop on the Dynamic Brain, led by Adrienne Fairhall (University of Washington) and Christof Koch (Allen Institute for Brain Science). The course brought in students from graduate programs around the country, including the University of Washington, UCSD and the University of Michigan. Lectures brought students up to speed on emerging approaches for understanding brain function, especially those using tools developed by the Allen Institute. For my part, I gave two lectures and then took advantage of the opportunity to learn the ins and outs of the tools from some of the local experts, most notably Lydia Ng, who is my new hero.

I already had some experience with the Allen Brain Connectivity Atlas, but its new developments really blew me away. The atlas is based on systematic injections of AAV across cortical and subcortical structures. One feature I particularly liked is the ability to visualize injections not just based on the injection location, but based on a target location for which the user wants to know all the inputs (this is called a “spatial search”). I did such a search for the posterior parietal region. Because this approach allowed me to see all the areas in which injections led to parietal label, it is roughly equivalent to seeing a retrograde injection from the posterior parietal cortex. As the image below shows, there is clear label in visual areas (the blue dots at the posterior region of the gray brain, one of which is shown in more detail at the right). Secondary motor and orbital areas (the green dots at the anterior region) likewise innervate the posterior parietal area. Being able to easily visualize many injections from many different vantage points gives a much clearer picture of the overall connectivity, and the tools are really fun to play around with.


Leila Elabbady (Wellesley College) worked in Josh Dubnau’s lab, where they are using fruit flies to understand neurodegeneration. An emerging hypothesis is that increased transposon activity may play a role in neurodegeneration. Transposons are repetitive strands of DNA that can copy themselves and insert themselves elsewhere in the genome. They were first discovered in corn, here at Cold Spring Harbor, by Barbara McClintock, who later won the Nobel Prize for her work. Transposons can be dangerous: they can alter the transcription of other, potentially important genes. Leila’s work this summer focused on TDP-43, a DNA/RNA binding protein that might keep transposon activity in check. She tested whether manipulating the fly homolog of TDP-43 affected transposon activity in flies. A key part of this approach, and also what makes it challenging, is that Leila monitored transposon activity at multiple stages in the flies’ lives. Identifying how TDP-43 affects this progression will be key for testing the hypothesis about its role in neurodegeneration.

Nikaela Bryan (University of Maryland, Baltimore County) also worked in my lab. She was likewise interested in the timing of decision formation in the cortex, but wanted to get at the issue by manipulating inhibitory interneurons. These neurons are plentiful in the cortex, and the sub-type she was interested in, parvalbumin-positive (PV) interneurons, strongly innervates excitatory pyramidal cells. Upregulating the PV neurons can therefore shut down the ability of the cortex to communicate information to downstream areas, a powerful tool. Nikaela also thought deeply about training procedures and how to tweak them to get the best performance possible.

Today marked the last meeting of our summer-long “Gilbert Club”, a group that gathered weekly to discuss the Gilbert Strang lectures available on MIT OpenCourseWare. Most folks who have taken linear algebra know and love the Strang textbook; his lectures will also not disappoint. We gathered after watching each lecture in the hopes of using it as a jumping-off point for developing new techniques to interpret neural data from both electrophysiological recordings and imaging. The challenge is that temporal precision and noise can differ greatly across these two methods for measuring neural activity. Techniques discussed included regression, dimensionality reduction and image processing. The club was led by Matt Kaufman, a postdoc in my lab who has been a major player in bringing new analysis techniques to neural data. Attendees included students and postdocs from my lab as well as the Albeanu, Koulakov and Zador labs.
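For a flavor of the kind of technique we discussed, here is a minimal dimensionality-reduction sketch built on the SVD, a centerpiece of the Strang lectures. The data are synthetic and every parameter is an illustrative choice: simulated population activity is driven by a few shared latent signals, and the leading principal components recover that low-dimensional structure.

```python
import numpy as np

rng = np.random.default_rng(2)
n_neurons, n_timepoints = 40, 500

# Simulated population activity: 3 shared latent signals, mixed into
# each neuron with random weights, plus neuron-specific noise.
latents = rng.normal(size=(n_timepoints, 3))
loading = rng.normal(size=(3, n_neurons))
data = latents @ loading + 0.5 * rng.normal(size=(n_timepoints, n_neurons))

# PCA via the SVD of the mean-centered data matrix: the principal
# components are the right singular vectors, and the squared singular
# values give the variance along each component.
centered = data - data.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
var_explained = s**2 / np.sum(s**2)
print("variance explained by first 3 PCs:",
      round(float(var_explained[:3].sum()), 3))
```

Because only three latent signals drive all 40 simulated neurons, the first three components capture nearly all of the variance, which is exactly the kind of structure one hopes to find in real population recordings.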

One clear outcome of the club is that scientists here working on different problems now have a common language for discussing the data. My hope is that this could constitute a first step in generating not just a common language, but a common data format and a common set of analysis tools as well. The opportunity to share data and analyses easily would make each of our individual efforts go much further, and could help to unify broad approaches here at CSHL.

Being able to navigate in the world requires a stable representation of space. A key part of the neural substrate supporting this ability is the entorhinal cortex, where individual cells’ responses constitute a grid that tiles the space being explored.

The existence of such cells has been known for a while, and certainly they seem reasonable for the task at hand, but it has been an ongoing challenge to understand what kind of neural machinery would give rise to them. Ila Fiete, from UT Austin, has been tackling this problem from a theoretical point of view. I heard her give a talk at a recent meeting organized by the McKnight Foundation, which funds systems neuroscientists working at the molecular, cellular, systems and theoretical level.

Ila’s idea is that grid cells reflect a stable 2-dimensional manifold driven by continuous attractors (see left panel of the figure below). The gist is that short-range excitatory and long-range inhibitory connections give rise to stable “bumps” of activity. This kind of mechanism has been put forth previously in the visual and oculomotor systems. Here, Ila proposes that the same continuous attractors might be used by entorhinal cortex to drive the individual nodes of activity of the grid cells. This model predicts some specific features of the resulting population: for example, even though the absolute phase of individual neurons might change a bit over time, the relative phase of the neurons to each other should be fixed. This prediction is borne out by real measurements of grid cells: their phase can change over time, but their relative phase is extremely stable. A second prediction is that the tuning curves of all the neurons will be stereotyped, a prediction that is again borne out by the data.

A continuous attractor, some grid cells, and some measurements about them


This work presents a challenge to alternative explanations for grid cells, such as that they are driven by oscillations in the cortex. To my mind, a key next step will be to manipulate the circuit, perhaps by suppressing the activity of the interneurons that play a key role in the model, and examining the effects on the phase of grid cells.
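For intuition about the bump-forming mechanism, here is a minimal 1-D ring sketch of the continuous-attractor idea: short-range excitation plus longer-range inhibition destabilizes the uniform state and produces stable, evenly spaced bumps of activity. This is a cartoon, not Ila’s actual model (which is 2-D and driven by velocity inputs), and every parameter below is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 128  # neurons on a ring (1-D for simplicity; the grid model is 2-D)

# "Mexican hat" connectivity: short-range excitation, longer-range
# inhibition, computed from distances on the ring.
idx = np.arange(n)
d = np.abs(idx[:, None] - idx[None, :])
d = np.minimum(d, n - d)  # wrap-around distance
W = 1.5 * np.exp(-d**2 / (2 * 4**2)) - 0.5 * np.exp(-d**2 / (2 * 12**2))

# Rate dynamics with a saturating rectification, starting from random
# activity: r <- r + dt * (-r + clip(W r + drive)).
r = rng.random(n)
for _ in range(500):
    r = r + 0.1 * (-r + np.clip(W @ r + 0.2, 0.0, 1.0))

# The network settles into stable, evenly spaced bumps: some neurons
# near saturation, the rest silenced by the surround inhibition.
print("peak rate:", round(float(r.max()), 2),
      " spatial std:", round(float(r.std()), 2))
```

In the full 2-D version, moving the bumps around with velocity input is what gives each neuron the periodic, grid-like firing pattern as the animal runs, and the fixed bump spacing is why relative phases between neurons stay locked even when absolute phase drifts.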
