I blogged repeatedly during the recent Society for Neuroscience Meeting about posters and presentations from other labs. This was great fun as there was a lot of terrific science presented. However, this post will take a different angle: I’ll highlight what my lab presented at the meeting.
David Raposo, John Sheppard and Matt Kaufman:
The three posters collectively made the point that our use of multisensory stimuli exposed an unexpected computational strategy for neurons in the posterior parietal cortex. Despite making this point together, the three posters were stationed in separate sessions! Undaunted by this problem, the guys manufactured “data baseball cards” (see right) that briefly outlined each poster. Each presenter could hand out the other presenters’ baseball cards as needed; for example, if a poster attendee asked about an issue that was covered in a different session. Although we designed the cards to ease the burden of connected posters in different sessions, they became a huge hit! The guys’ collections were depleted almost immediately- if you want one, maybe they will turn up for auction on eBay?
Onyekachi Odoemene: Kachi’s poster described some work that is at an early stage but is very exciting. He has been working on developing decision-making behavior in mice. His poster described early efforts to determine which structures are required for these decisions. Keep an eye out for Kachi next year: we joke in the lab that whenever we think of an innovative idea, it turns out Kachi has already thought of it, built the apparatus to test it, and has the data in a PowerPoint presentation.
Amanda Brown: Amanda presented work alongside Ingmar Kanitschneider, a postdoc in Alex Pouget’s lab with whom we have an ongoing collaboration. Their poster described human behavioral data from a new version of our multisensory decision task. In this version, the stimulus is configured so that subjects must make a multisensory estimate of the number of events (as opposed to the rate of those events, which is what animals do in our usual task). Their poster was very busy, so they got to spread the word about their new view of probabilistic number representation.
We finished the meeting off with an entertaining lab dinner at a local restaurant. We were joined by some outside collaborators, and some internal collaborators as well, including Ashlan Reid.
All in all the meeting was a big success. Lab members got the word out about a bunch of new observations we have made, and returned to Cold Spring Harbor overflowing with ideas for new experiments and analyses. These new directions will keep us busy- stay tuned for more updates in the coming months.
November 12, 2013
Neurons across the cortex differ considerably in the degree to which they exhibit persistent activity. Neurons in frontal areas might fire persistently for seconds even in the absence of a sensory stimulus, while neurons in early visual cortex (V1) are more tightly linked to incoming sensory input. Does this tight linking arise because V1 circuits simply lack the features that allow persistent activity, or might the tight linking arise as the result of an active process?
An intriguing poster from Kim Reinhold in Massimo Scanziani’s lab suggests the latter. She has been running experiments to determine the timescale over which cortical activity changes when she removes thalamic inputs (via an optogenetic strategy). She finds that the time constant for the decay of activity is super-fast: about 9 ms. Given that 9 ms is around the membrane time constant of a cell, it would seem at first that the membrane properties of individual cells define the time constant of persistent activity for the whole area. But the plot thickens: when Kim silenced cortical inhibition, the time constant got considerably longer. This observation suggests that inhibitory neurons actively quench cortical responses, thereby preventing persistent activity. Why might this be? Kim reasons that fast-acting inhibition would ensure that the visual cortex is always at the ready for new incoming stimuli. This suggests a tradeoff between the ability to maintain a persistent response and the ability to respond with high temporal resolution.
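For readers who like to see the numbers, here is a minimal sketch of how a decay time constant like Kim’s ~9 ms figure could be estimated. Everything here (the values, variable names, and the log-linear fit) is my own toy construction, not her actual analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
tau_true = 9.0                      # ms; the value reported in the poster
t = np.arange(0.0, 40.0, 1.0)      # time after input removal (ms)
# Simulated population activity: exponential decay plus measurement noise.
activity = np.exp(-t / tau_true) + rng.normal(0, 0.002, t.size)

# Fit log(activity) = -t / tau by linear regression on the high-signal
# portion of the decay, then invert the slope to recover tau.
mask = activity > 0.05
slope, _ = np.polyfit(t[mask], np.log(activity[mask]), 1)
tau_hat = -1.0 / slope              # should land near tau_true
```

With a longer time constant (as Kim saw after silencing inhibition), the same fit would recover a correspondingly shallower slope.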
In a special session about the BRAIN initiative, a panel of experts reported on the current plans for implementing the BRAIN initiative. The initiative was announced this spring and was described by President Obama as the next great American project. A working group, which includes Bill Newsome, Cori Bargmann, Terry Sejnowski and others, has been hard at work laying out a plan for how to implement the initiative. They came up with a list of recommendations that includes goals such as linking neural activity to behavior and integrating theory, modeling and computation. The group also emphasized that there must be a means to disseminate the technologies that are developed as part of the initiative. This is key: through dissemination of technology, the effects of the initiative can reach far beyond the funded labs and impact a larger community of scientists. Geoff Ling feels particularly passionate about these tools. He argues that as neuroscientists, we have no shortage of compelling hypotheses, but that “we are stymied by the tools we have available to test our hypotheses.” I partially agree: good tools are necessary, but we need to think deeply about what the fundamental hypotheses are and how to test them.
November 11, 2013
Manipulating neural activity and measuring the effect on behavior is a key tool for understanding the function of a structure in the brain. Sometimes, experimenters will manipulate activity in two areas to gain insight into the flow of information through the brain. In a poster this year at SFN, Nuo Li, a postdoc in Karel Svoboda’s lab, took things a big step further by systematically suppressing the activity of 55 locations, each 1 mm wide, in the cortex of individual mice. He achieved this by using a pair of mirror galvos to move a stimulating beam to different places in the brain. In a given session, he could inactivate the parietal cortex on some trials, the somatosensory cortex on other trials and the anterior lateral motor cortex (ALM) on others still. This approach has a few advantages: first, by surveying the cortex broadly, he leaves open the possibility of identifying relevant areas that weren’t even on his radar. Second, the approach avoids a common pitfall of traditional stimulation experiments: in those experiments, the animal can notice a change in its performance and adapt its strategy. A signature of this nonstationary strategy is usually apparent in control trials. Here, the stimulation causes different effects depending on WHERE it’s targeted and WHEN in the trial it takes place, making it a challenge for the animal to respond with an altered strategy.
The group found that stimulating the ALM and the barrel cortex had the biggest effects on behavior. Barrel cortex inactivation mattered most when it was applied early in the trial, and ALM inactivation mattered most when it was applied late in the trial, consistent with the idea that information flows from the sensory area to the motor area over the course of decision formation. An interesting next step would be to uncover what aspect of the animal’s performance was disrupted by the inactivation. For example, interpreting the results would be aided by knowing whether the reduced performance on their task was driven by a change in sensitivity or a change in bias.
November 10, 2013
In humans, massive changes take place in the brain as infants learn language. A poster today from Wellesley College, presented by three undergraduate students from Sharon Gobes’s lab, aims to gain insight into this process by studying the brains of juvenile birds around the time they learn their father’s song. In birds, song is known to preferentially activate neurons on the left side of the brain; a leftward lateralization is likewise seen in humans. The group wondered whether this is an active process: would song naturally activate the left hemisphere, even in birds who weren’t exposed to song during development?
To test this, they reared a cohort of birds without exposure to a tutor’s song and measured neural activity in the caudomedial nidopallium, a structure involved in song. In this special cohort, birdsong didn’t have its characteristic effect on the left hemisphere. Instead, both hemispheres responded. Interestingly, a control sound with the same frequency profile activated the birds’ brain in a normal fashion, suggesting that they could process ordinary sounds typically. These results suggest that changes in the brain during vocal learning are driven by exposure to the right stimulus- in this case, the song of a tutor. The take home message? Appropriate developmental environments are necessary, even for innate behaviors.
November 9, 2013
I attended two interesting posters at tonight’s SFN Diversity Poster Session. Temidayo Orederu, a Hunter College student working in Liz Phelps’s lab at NYU, explored the effect of stress on learning. Temidayo is particularly interested in “reversal learning”: the ability to unlearn an association which once was positive but now is aversive. She brought a large cohort of human subjects into the lab and examined how a stressful situation affected their reversal learning. She found that the stressed-out subjects were able to learn that a once-positive stimulus was now negative, but NOT that a once-negative stimulus was now positive. This dissociation was surprising and suggests that the two aspects of reversal learning might be mediated by separate circuits, one of which is susceptible to stress. The lesson? Stay calm if you want to learn new contingencies about the world.
In another poster, Nancy Padilla explored the neural mechanisms underlying anxiety. She works in Josh Gordon’s lab at Columbia University. Nancy expressed Archaerhodopsin in the axon terminals of ventral hippocampal neurons that project to the medial prefrontal cortex. She then examined the behavior of mice in an elevated plus maze with and without optical stimulation, which causes the Arch to suppress activity in the terminals. She saw a clear behavioral effect: animals spent much more time in the open arms of the maze during stimulation, suggesting that silencing the hippocampal inputs reduced anxiety, encouraging the animals to explore. Seeing a clear behavioral effect from this manipulation is exciting and suggests that the hippocampal inputs play an important role in anxiety.
By S. TANABE, A. ZANDVAKILI, A. KOHN; Neurosci., Albert Einstein Col. of Med., Bronx, NY
This group tackled a long-standing question in the field of decision-making: how do you tell the degree to which a single neuron “weighs in” on a behavioral choice? The question is important, but hard to get at with traditional single-cell recording methods, especially for a fine discrimination task like the one the authors used here.
To get around this problem, this group recorded from 15-30 neurons in V4 while well-trained subjects made decisions about the orientation of a stimulus. As predicted, they didn’t find a strong relationship between the firing rates of single neurons and the subjects’ choices, but things changed dramatically when they looked at the population level and used a linear classifier. What might account for this? The authors argue that in high dimensional state spaces, the decision axis and variability axis are aligned. This means that even if a given neuron is a key player in a decision, the relationship between its firing and the animal’s choice might be weak.
At the end of the talk, the authors suggested a “trick” for evaluating the degree to which decision and variability axes are aligned: they shuffled the trials and found that sometimes the classifier did better! The effect of trial shuffling on the classifier, they argue, offers insight into the weighting profile of the neurons. I haven’t heard of anyone taking this approach before- it will be interesting to test on other datasets, especially those on coarse discrimination tasks where the prediction differs.
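To make the shuffle trick concrete, here is a toy simulation; the numbers and the nearest-centroid readout are my own assumptions, not the authors’ methods. When trial-to-trial variability is correlated along the decision axis, destroying those correlations by shuffling trials within each choice lets a linear classifier do better:

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_neurons, n_train = 1000, 20, 500
signal = 0.3 * np.ones(n_neurons)            # weak per-neuron choice signal
y = rng.integers(0, 2, n_trials)             # choice on each trial
shared = rng.normal(0, 1.0, (n_trials, 1))   # correlated noise along the signal axis
X = np.outer(y, signal) + shared + rng.normal(0, 0.5, (n_trials, n_neurons))

def readout_accuracy(X, y):
    # Nearest-centroid linear classifier: train on the first half, test on the rest.
    mu1 = X[:n_train][y[:n_train] == 1].mean(0)
    mu0 = X[:n_train][y[:n_train] == 0].mean(0)
    w, m = mu1 - mu0, 0.5 * (mu1 + mu0)
    pred = (X[n_train:] - m) @ w > 0
    return (pred == (y[n_train:] == 1)).mean()

def shuffle_within_class(X, y):
    # Permute each neuron's trials independently within each choice,
    # preserving tuning but destroying noise correlations.
    Xs = X.copy()
    for c in (0, 1):
        idx = np.where(y == c)[0]
        for j in range(X.shape[1]):
            Xs[idx, j] = X[rng.permutation(idx), j]
    return Xs

acc_raw = readout_accuracy(X, y)
acc_shuf = readout_accuracy(shuffle_within_class(X, y), y)
# acc_shuf beats acc_raw here: the signature of aligned decision and
# variability axes that the authors describe.
```

In this toy setup the shuffle helps precisely because the correlated noise lies along the decision axis; with noise orthogonal to the decision axis, shuffling would not produce the same improvement.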
October 25, 2013
I am pleased to announce that my blog was selected as one of the official blogs for the Society for Neuroscience Meeting this year. I will focus on two themes: “Sensory systems and behavior” and “Cognition and Behavior”. You can also follow me on Twitter at @anne_churchland.
This is exciting news as there will be a wealth of fantastic science to talk about at the meeting. But in the meantime, I’ve been arguing with students at Penn about the importance of an orthogonal representation of task parameters in the parietal cortex. They are a tough crowd and had many great questions after my talk today. In this photo, we are discussing the science at the edge of a pretty pond behind the biology building. Marino Pagan from Nicole Rust’s lab had some interesting ideas about new classification analyses we should try inspired by analyses they undertook in their recent paper.
October 18, 2013
Okay, so suppose you’ve just measured responses in hundreds of neurons, over time, during a complex behavioral task. Now what?? My lab members and I attended a conference at Columbia this week focused on this issue. The conference, organized by Mark Churchland, Larry Abbott, John Cunningham and Liam Paninski and sponsored by Sandy Grossman, addressed a timely topic: advances in recording and imaging technology have made large neural datasets the norm, and understanding how to analyze such datasets is nontrivial.
The talks included one from our lab, in which I described our recent ideas about the posterior parietal cortex and its response during a high dimensional decision task. Our work dovetailed with several other talks at the meeting: for example, Chris Machens spoke about his demixed principal components analysis, an analysis we have been using on our data. Chris, along with his student Wieland Brendel, developed this analysis to ask whether parameters that are mixed at the level of single neurons might be orthogonal at the level of the population. Observing an orthogonal representation in the population is important because it suggests that task parameters are represented in a way that could be trivially decoded by a downstream area.
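Here is a toy illustration of the demixing idea; this is my own sketch of the concept, not the dPCA algorithm itself, and every number is assumed. Each simulated neuron mixes stimulus and choice signals, yet simple marginal averages recover population axes that separate the two variables:

```python
import numpy as np

rng = np.random.default_rng(2)
n_neurons, n_trials = 50, 400
a = rng.normal(0, 1, n_neurons)      # stimulus coding direction
b = rng.normal(0, 1, n_neurons)      # choice coding direction
s = rng.integers(0, 2, n_trials)     # stimulus on each trial
d = rng.integers(0, 2, n_trials)     # choice on each trial
# Every neuron carries a mixture of both variables plus noise.
R = np.outer(s, a) + np.outer(d, b) + rng.normal(0, 0.5, (n_trials, n_neurons))

# Marginalize over one variable to estimate the population axis for the other.
a_hat = R[s == 1].mean(0) - R[s == 0].mean(0)
b_hat = R[d == 1].mean(0) - R[d == 0].mean(0)

# Projection onto the stimulus axis separates stimuli, not choices:
proj_s = R @ a_hat
sep_stim = abs(proj_s[s == 1].mean() - proj_s[s == 0].mean())
sep_choice = abs(proj_s[d == 1].mean() - proj_s[d == 0].mean())
```

Even though no single neuron cleanly encodes either variable, the population-level axes are close to orthogonal, which is the sense in which a downstream area could read out each parameter separately.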
In another talk, Jonathan Pillow described recent work from his lab on Bayesian nonparametric models for spike patterns in large datasets. The basic idea in “Bayesian nonparametrics” is to define models whose complexity grows gracefully with the amount of data available. Jonathan described an approach for modeling binary spike patterns using a Dirichlet process, which marries the parsimony of a simple parametric model (e.g., each neuron fires independently with probability “p”) with the flexibility of a “histogram” model that can describe arbitrarily complex distributions over binary spike patterns. These models, which Jonathan’s group calls “universal binary models”, strike a happy medium between overly complex models and those that are so simple they fail to capture key features of spike data.
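To see the two extremes that such a model interpolates between, here is a toy comparison on made-up binary spike patterns (my own construction, not Jonathan’s model): an independent-Bernoulli fit with one parameter per neuron, versus a “histogram” that assigns a probability to every observed pattern:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(4)
n_neurons, n_samples = 5, 2000
# Correlated binary spike patterns: a shared "up" state raises all firing rates.
common = rng.random(n_samples) < 0.3
patterns = rng.random((n_samples, n_neurons)) < np.where(common[:, None], 0.7, 0.1)

# Independent-Bernoulli model: one firing probability per neuron.
p = patterns.mean(0)
ll_indep = (patterns * np.log(p) + (~patterns) * np.log(1 - p)).sum()

# "Histogram" model: one probability per observed pattern (up to 2^n parameters).
counts = Counter(map(tuple, patterns.tolist()))
ll_hist = sum(c * np.log(c / n_samples) for c in counts.values())
# ll_hist >= ll_indep: the flexible model captures the correlations the
# independent model misses, at the cost of many more parameters.
```

The Dirichlet-process approach sits between these extremes, letting the data determine how much pattern structure beyond independence is warranted.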
October 7, 2013
Last week I attended the Champalimaud Neuroscience Symposium in Lisbon, Portugal. I heard many fantastic talks, including those from Susana Lima, Dora Angelaki, Michale Fee and Matteo Carandini. I also got updates from many investigators with labs at the Champalimaud, a number of whom are thinking deeply about body movements: how to track them and what they tell us about underlying neural processes. I spoke with Megan Carey and some of her team who are investigating how sensory inputs are processed differently in the brains of moving versus stationary animals.
I also spoke in detail with Joe Paton, whose lab has been tackling questions about how future decisions affect current movements. This approach builds on an existing body of work suggesting that a signature of developing decisions is sometimes evident in premotor areas, and even in the movements themselves. The animals in Joe’s lab are freely moving, so getting a handle on their complex full-body movements is a challenge. The standard approach is to track one or two parameters that might turn out to be important- head angle, for instance. Thiago Gouvêa and Asma Motiwala, graduate students in Joe’s lab, came up with a fundamentally different approach: rather than trying to guess what the right body parameter might be, they image the whole animal and then reduce the dimensionality of the large collection of images that results (see below, and also this video).
This analysis will tell them which dimensions are the right ones. Because the approach doesn’t commit the investigator to a particular movement parameter, it allows for the fact that animals might differ from each other, or that a single animal might change over time: for instance, showing anticipatory movements early in training that are suppressed once the animal is an expert at the task. This is a new project, but it has the potential to lead to a novel method for evaluating movements during decisions. Down the line, the movements could be related to neural activity in different parts of the brain and could help us interpret how those neurons contribute to a developing choice.
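For intuition about what that dimensionality reduction might look like, here is a minimal sketch on made-up “frames”; the sizes, the two-dimensional latent trajectory, and the SVD-based PCA are all my assumptions, not Thiago and Asma’s actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(3)
n_frames, n_pixels = 300, 1000
# Toy stand-in for body-tracking video: each frame is a point on a
# low-dimensional trajectory, embedded in pixel space plus noise.
t = np.linspace(0, 2 * np.pi, n_frames)
latent = np.column_stack([np.sin(t), np.cos(t)])       # 2 true dimensions
mixing = rng.normal(0, 1, (2, n_pixels))               # latent -> pixels
frames = latent @ mixing + rng.normal(0, 0.1, (n_frames, n_pixels))

# PCA via SVD on the mean-centered frames.
Xc = frames - frames.mean(0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
var_explained = S**2 / (S**2).sum()
# The top two components capture nearly all the variance, exposing the
# underlying movement dimensions without choosing them in advance.
```

The appeal of this style of analysis is exactly what the paragraph above describes: the data, not the experimenter, nominate which movement dimensions matter.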