Today’s commentary brought to you from (in alphabetical order): George Bekheet, Lital Chartarifsky, Anne Churchland, Ashley Juavinett, Simon Musall, Farzaneh Najafi & Sashank Pisupati. Feel free to comment, correct, express skepticism, etc. You can do so here, on Biorxiv or on Twitter (@anne_churchland). Let’s get a conversation going!

Paper #1: Discrete attractor dynamics underlying selective persistent activity in frontal cortex (Inagaki, Fontolan, Romani & Svoboda)

Big question: What is the deal with the persistent activity in mouse area ALM that precedes licking movements?

Take home: Using intra- and extracellular recordings, combined with optogenetics and network modeling (nice!), the authors conclude that attractor dynamics, and not integration, define neural activity in area ALM.
But, hmmmm: Persistent activity in advance of movements is widely observed in many critters, but its function is pretty mysterious. Other kinds of persistent activity, like memory or evidence accumulation, have a clear cognitive function, but the role of motor preparatory activity is not obvious. Why does ALM need to respond so far in advance of a movement anyway?

Paper #2: Optogenetically induced low-frequency correlations impair perception (Nandy, Nassi & Reynolds)

Big question: While multiple groups have shown that attention reduces the correlation between neurons within the receptive field of the attended location, it has been difficult to show causation. Here, they recreate low-frequency correlations in visual cortex and ask: can one causally affect the ability of an animal to pay attention?

Take home: Using the depolarizing opsin C1V1, delivered by lentivirus, in combination with an artificial dura, the researchers created a preparation in which they could optically excite pyramidal cells at specific locations in V4. During an orientation-change detection task, they used this system to induce low-frequency (4-5 Hz) as well as high-frequency (20 Hz) oscillations within the receptive field of the attended region. They demonstrate that low-frequency stimulation within the attended field impairs the animal's ability to do the task, whereas low-frequency stimulation in an unattended field does not. The finding was also frequency specific: high-frequency stimulation does not impair performance.

But, hmmmm: These findings provide nice closure to previous skepticism that changes in correlation structure could simply be an off-target effect rather than actually causal for attention. Still, the field seemed pretty convinced that low-frequency correlations were somehow involved in attention, so this result probably will not shock many researchers. In addition, the behavioral effects are not well characterized in this paper. The two sample sessions in Figure 2, Supplement 2 show very different effects on the psychometric curve: one is shifted, whereas the other has a different slope, suggesting different underlying impairments. We'd love to see the researchers more closely quantify the impairments in each animal.

Paper #3: Exclusive functional subnetworks of intracortical projection neurons in primary visual cortex (Kim, Znamenskiy, Iacaruso & Mrsic-Flogel)

Big question: How do long-range projection targets constrain local connectivity of cortical neurons?

Take home: Distinct populations of neurons in V1 project to the higher visual areas AL and PM. These populations avoid making connections with each other, which is unexpected given their signal correlations (response similarity). Projection target therefore acts independently of response similarity to constrain local cortical connectivity. The absence of recurrent connections between AL-projecting and PM-projecting neurons potentially allows for their independent modulation by top-down signals.

But, hmmmm: Should we worry that retrograde labeling may have failed to label all projection neurons? Also, is identifying double-labeled neurons an error-prone task?

Paper #4 Accurate Prediction of Alzheimer’s Disease Using Multi-Modal MRI and High-Throughput Brain Phenotyping (Wang, Xu, Lee, Yaakov, Kim, Yoo, Kim & Cha)

Big Question: Does multi-modal MRI, in combination with high-throughput brain phenotyping, provide any utility in predicting Alzheimer's disease?
Take home: The authors produced a machine-learning model that can discriminate (with 97% accuracy) between an AD brain and one from a patient with subjective memory complaints.
But, hmmm…: Seeing as this is done with retrospective data, why not compare AD patients with patients who have no cognitive impairment or memory complaints? Also, we would have loved it if the authors included more information on the machine-learning analytics they used.

Paper #5: Causal contribution and dynamical encoding in the striatum during evidence accumulation (Yartsev, Hanks, Yoon & Brody)

Big question: Which regions of the brain are causally involved in evidence accumulation during decision making?

Take home: The anterior dorsal striatum satisfies three major criteria for involvement, as revealed by a detailed behavioral model: it is necessary (pharmacological inactivation makes accumulation noisy), it represents graded evidence on single trials (electrophysiology), and it contributes only during accumulation (temporally specific optogenetic inactivation). It is hence the first known causal node in evidence accumulation.
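To unpack what "makes accumulation noisy" could look like behaviorally, here is a toy accumulator sketch of our own (not the detailed Brunton-style behavioral model the authors fit): raising the per-step accumulation noise flattens the psychometric curve. All numbers are made up for illustration.

```python
# Toy evidence accumulator: NOT the behavioral model fit in the paper, just an
# illustration of how extra accumulation noise (the deficit attributed to
# striatal inactivation) flattens the psychometric curve.
import numpy as np

rng = np.random.default_rng(0)

def psychometric(noise_sd, evidence_levels, n_trials=2000, n_steps=40):
    """Fraction of rightward choices as a function of signed evidence strength."""
    p_right = []
    for ev in evidence_levels:
        # each time step adds a small evidence increment plus Gaussian noise
        increments = ev / n_steps + noise_sd * rng.standard_normal((n_trials, n_steps))
        decision_variable = increments.sum(axis=1)
        p_right.append(np.mean(decision_variable > 0))
    return np.array(p_right)

evidence = np.linspace(-1, 1, 9)
control = psychometric(noise_sd=0.05, evidence_levels=evidence)
inactivated = psychometric(noise_sd=0.25, evidence_levels=evidence)  # noisier accumulation
print(np.round(control, 2))       # steep psychometric curve
print(np.round(inactivated, 2))   # shallower: choices less driven by evidence
```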

But, hmmm…: Teasing apart the relative contributions of the striatum and its upstream inputs to accumulation will require further study, as will distinguishing its contribution from that of prefrontal cortex (FOF) to subsequent aspects of the decision, such as leak and lapses.

Paper #6: Confidence modulates exploration and exploitation in value-based learning (Boldt, Blundell & De Martino)

Big question: What is the link between humans’ confidence in their decisions and their uncertainty in the value of different choices? How do these quantities influence their decisions?

Take home: Belief confidence (i.e., certainty in value estimates) drives decision confidence (i.e., confidence that the choices made were correct) in a two-armed bandit, and individuals with better estimates of the former also had better estimates of the latter. Moreover, the belief confidence in the higher-value option modulated the exploration-exploitation tradeoff, with participants exploring more often when they were less confident.

But, hmmm…: Relating these results to the two known forms of uncertainty-driven exploration – one that depends on the difference in uncertainties of the two options (uncertainty bonus) and the other that depends on their sum (Thompson sampling) – will require further investigation into how the belief confidences of the two options interact.
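For readers who want that distinction spelled out, here is a minimal two-armed Gaussian-bandit sketch of the two policies. The posterior means, uncertainties and bonus weight are made-up values for illustration, not quantities estimated in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Posterior beliefs about the value of each arm (means and standard deviations);
# these are made-up numbers, with arm 0 much more uncertain than arm 1.
mu = np.array([0.4, 0.5])
sigma = np.array([0.30, 0.05])

# Uncertainty bonus (UCB-style): deterministic, and for two arms it is driven by
# the differences in means and in uncertainties.
bonus_weight = 1.0  # arbitrary illustrative value
ucb_choice = np.argmax(mu + bonus_weight * sigma)

# Thompson sampling: explore by sampling a value from each posterior and picking
# the max; the chance of picking the lower-mean arm grows with the combined
# (summed) uncertainty of the two arms.
samples = rng.normal(mu, sigma, size=(10_000, 2))
p_choose_uncertain_arm = np.mean(np.argmax(samples, axis=1) == 0)

print("UCB-style choice:", ucb_choice)  # picks the uncertain arm here
print("P(choose uncertain arm), Thompson:", round(p_choose_uncertain_arm, 3))
```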

Paper #7: Aberrant Cortical Activity In Multiple GCaMP6-Expressing Transgenic Mouse Lines (Steinmetz, Buetfering, Lecoq, Lee, Peters, Jacobs, Coen, Ollerenshaw, Valley, de Vries, Garrett, Zhuang, Groblewski, Manavi, Miles, White, Lee, Griffin, Larkin, Roll, Cross, Nguyen, Larsen, Pendergraft, Daigle, Tasic, Thompson, Waters, Olsen, Margolis, Zeng, Hausser, Carandini & Harris)

Big question: Transgenic animals are becoming the standard for measuring neural activity, but potential side-effects of the genetic manipulations may be overlooked. Here the authors show that several GCaMP lines exhibit abnormal epileptiform activity that is not observed in wild-type mice. They also provide guidance on how to avoid this issue when using affected lines.

Take-home: Epileptiform events are short, high-amplitude bursts of activity that span large parts of cortex and occur at a rate of ~0.1-0.5 Hz. These events are clearly distinct from other neural activity (measured with ephys, 2-photon or widefield imaging), but most animals don't show any clear behavioral impairments. The origin of the epileptiform activity is unclear, but suppressing GCaMP expression during the first 7 weeks seems to resolve the issue, even for the transgenic lines that are otherwise most affected.

But, hmmmm: Presumably, there is a variety of causes for abnormal neural activity in transgenic animals. It's not clear whether suppressing GCaMP expression will prevent these issues in future lines, and there might also be other problems, like indicator over-expression, that will cause headaches down the road. More studies that describe and address these potential issues would be welcome.

Paper #8: Stable representation of sounds in the posterior striatum during flexible auditory decisions (Guo, Walker, Ponvert, Penix & Jaramillo)

Big question: What is the role of posterior striatum neurons during auditory-driven decisions in mice?

Take home message: Here, the authors show that transient pharmacological inactivation of the posterior striatum (also known as the "auditory striatum") impaired performance in an auditory discrimination task, while optogenetic activation during sound presentation biased the animals' choices. Moreover, the activity of these neurons reliably encoded stimulus features but was only minimally influenced by the animals' choices, suggesting that posterior striatal neurons relay sensory information downstream while carrying little information about behavioral choice.

But, hmmm: The activation and inactivation experiments were performed on different neuronal populations (direct-pathway medium spiny neurons vs. all posterior striatal neurons, respectively), as well as unilaterally vs. bilaterally (activation vs. inactivation, respectively). It would be interesting to know how the different populations support the behavior, ideally with matched methodology. Moreover, the pharmacological inactivation had pretty strong motor effects, and it is important to make sure that the behavioral effects were not caused by motor deficits.

This summer, a number of new papers have come out with data that bear on the role of posterior parietal cortex (PPC) in perceptual decisions. First, a paper by Katz and colleagues shook things up with new data demonstrating that pharmacological inactivation of primate PPC has little effect on perceptual decisions. These results have been talked about in the community for a while; I will hold off saying too much about them, since I wrote a piece about this paper that will come out in a few weeks (stay tuned, I'll post a link then). But the short story is that this paper argued that, despite strong modulation during perceptual decisions, primate PPC is not a member of the causal circuit for visual motion decisions.

Two papers about rodent PPC paint a different picture. We shouldn't be too surprised about this, since although rodent and primate PPC share the same name, they have a number of anatomical and functional differences that mean it isn't right to think of one as the homologue of the other (in fact, could we just stop using the word "homologue" altogether?).

The first paper, by Michael Goard and colleagues, measured and manipulated mouse PPC neurons during a visual detection task: the mice were shown a horizontal or vertical grating, waited through a 3-9 second delay, and then reported whether a vertical grating had been present by licking a spout. The authors disrupted PPC activity optogenetically. They found that performance declined considerably when the disruption took place while the grating was visible to the mice. Interestingly, although PPC neurons were highly active during other parts of the trial (the delay and movement periods, for instance), disruption during those times had little effect on performance. This argues that the activity during those periods may reflect signals that are computed elsewhere and fed back to PPC.

The second paper is from my lab. Like the Goard paper, we found that performance declined when we stimulated while animals were facing visual stimuli that they had to judge. We mainly focussed on this period, so can't compare the results with disruption at other times. We did, however, compare disruption on visual vs. auditory trials in the same animal and the same session, and we found that effects were mostly restricted to visual decisions. This fits with the data from the paper above, and also with deficits on a visual memory task reported in mice by Chris Harvey and David Tank.

Beyond the science of our paper, it was also a landmark moment in my lab because it is our first paper on the preprint server, bioRxiv! bioRxiv was started at Cold Spring Harbor Laboratory. It provides a way for scientists to make their work freely available to the world as they journey through the sometimes long process of academic publishing. I like the idea of making the work available fast, and the fact that it is freely accessible to everyone is important too. I'm excited to be part of this new effort and… I admit my enthusiasm prompted me to modify the rainbow unicorn of ASAPbio just a bit…


This is a guest blog written by Ashley Kyalwazi, a participant in the Undergraduate Research Program at Cold Spring Harbor Laboratory. I am the director of the program, as well as the PI on our NSF-funded grant (along with my bioinformatics colleague Mike Schatz) to train undergraduates in Bioinformatics and Computational Neuroscience. These fields share many mathematical ideas, such as a need for dimensionality reduction and machine learning tools, but our program is highly unusual in bringing them together. Ms. Kyalwazi, one of our students funded by the program, will be a junior this upcoming year at the University of Notre Dame. Her story is below:

As I embarked on my ten-week long summer research immersion experience here at Cold Spring Harbor Laboratory, I was excited. To me this was an opportunity to listen to a vast range of new ideas in major scientific disciplines, to analyze them, and then to begin forming my own. I would always carry a notepad and a pen with me around campus; I never knew what I would learn on any given day, from any given individual. All I knew, for sure, was that I would grow as a scientist and as a future physician.

Working in the systems neuroscience lab of Dr. Stephen Shea has been an incredible experience. I have had the opportunity to learn a vast new array of techniques from my mentors in the Shea lab, and to apply them as I worked on an independent project that uses a mouse model in order to gain a deeper understanding of the inhibitory network of parvalbumin neurons in the auditory cortex, and how this network regulates neural activity before, during, and after what I have come to refer to as “the maternal experience.”

The maternal experience describes any aspect of the mother-pup interaction that contributes to the context of the overall birthing process. This could range from mothers giving birth to the act of retrieving distressed pups that find themselves isolated from the nest. The former is an action that is characteristic of a single mother and her pups; however, the latter is one that can be translated and studied with a model incorporating virgin female mice ('surrogates'). In my project, I was interested in understanding how maternal experience alters the neural circuitry of the surrogate's brain, primarily focusing on a network of neurons in the cortex defined by the marker parvalbumin (PV+ cells). This network has been found to play a key role in regulating plasticity in the auditory cortex of female mice following interaction with newborn pups (Shea et al., 2016). So again I ask: what is the nature of nurture?

Last week, during journal club, my lab read a paper by Lior Cohen, Gideon Rothschild, and Adi Mizrahi titled "Multisensory Integration of Natural Odors and Sounds in the Auditory Cortex." This paper found that neurons in A1 of mothers, and of virgin female mice, integrate pup odors and sounds, suggesting an experience-dependent model for cortical plasticity.

One of the findings that intrigued me the most as I was reading this paper was the observation that washing the pups hindered a lactating mother's ability to retrieve them after they became isolated from the nest. While the authors' emphasis on this observation was that pup odor is a commanding feature of pup-retrieval behavior, I was interested in it for a slightly different reason.

To know that an act as simple as washing the pups could change a behavior as innate as a mother retrieving her own pups led me to wonder: is there an essential biological component of motherhood that is necessary for a pup's development and overall survival, or should we, as scientists, begin to home in on the commonalities that make up the maternal experience and also enable virgin females to successfully retrieve isolated pups?


My experiments this summer utilized a combination of stereotaxic surgery (injections and craniotomies), fluorescence imaging, computer programming and image analysis in order to observe the sound-evoked spatiotemporal activity patterns in the A1 PV+ network of naïve female mice.

Blue LED light was directed through cranial windows over the left auditory cortex, and recordings of eight pup calls were played at regular intervals on each of the 20 trials. Plotting the average intensities for the activated region of interest across the 20 trials yielded a visual representation of the GCaMP6m activation in the parvalbumin neuronal population (Table 1).
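For concreteness, the trial-averaging step described above might look something like the sketch below; the array names, frame counts and baseline window are hypothetical, not taken from the actual analysis.

```python
import numpy as np

# Hypothetical stand-in data: one ROI fluorescence trace per trial
# (20 trials x 200 frames); in the real analysis these would come from
# the imaging movie of the left auditory cortex.
rng = np.random.default_rng(2)
n_trials, n_frames, frame_rate = 20, 200, 20.0
roi_traces = 1000.0 + 50.0 * rng.standard_normal((n_trials, n_frames))

# Convert each trial to dF/F using a pre-stimulus baseline window,
# then average across the 20 trials to get the mean evoked response.
baseline = roi_traces[:, :20].mean(axis=1, keepdims=True)
dff = (roi_traces - baseline) / baseline
mean_response = dff.mean(axis=0)
sem_response = dff.std(axis=0) / np.sqrt(n_trials)
time = np.arange(n_frames) / frame_rate   # seconds, for plotting
print(mean_response.shape, time.shape)
```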


Table 1

This widespread cortical GCaMP6m activation throughout the A1 PV+ neuronal network suggests that, when exposed to pup calls, female mice do not tend to differentiate among varying call frequencies. Perhaps this reflects the dependence of all pups on receiving this nurturing behavior, and the need for mothers and surrogates alike to provide it.

This suggested binary distinction in female mice (recognizing 'call vs. no call,' but not distinguishing between 'call frequency A vs. call frequency B') is one that will be further investigated in the Shea lab, as we look to home in on the neural circuits that regulate long-term, experience-dependent plasticity in the auditory cortex.

I would like to thank my research advisor for the summer, Dr. Stephen Shea, and the directors of the Undergraduate Research Program, Dr. Anne Churchland and Kim Creteur, for providing me with this opportunity. I also thank my parents, Michael and Winnie Kyalwazi. Against all odds you continue to work hard and sacrifice so I have opportunities as life-changing as coming to CSHL to study neuroscience… a dream come true. Your love and support have been and will never cease to be the wind beneath my wings. It has definitely been a memorable summer for me here at Cold Spring Harbor Laboratory and I look forward to the future.


I attended the Neurofutures 2016 conference at the Allen Institute for Brain Science in Seattle last week. The conference focussed on new technologies in the field and how they will drive new discoveries. I gave the opening plenary talk at the conference, a public lecture which you can see here. Following my lecture, I was part of a panel consisting of olfaction hero Linda Buck, blood flow guru (and recent marmoset pioneer) Alfonso Silva and ECoG sage Jeff Ojemann. It was exciting to hear their take on the most exciting technologies in neuroscience. Some of the new developments highlighted by the panel included optogenetics, powerful transgenic animals (mice, marmosets and beyond) and high-throughput sequencing, just to name a few.


I share my colleagues' enthusiasm for those techniques, but I also held fast that they must be accompanied by advances in theory to support our ability to understand the incoming data. Theoretical neuroscience has historically played a fundamental role in the field as a whole, and its importance going forward cannot be overstated (I have argued for this before).

A recent paper in Neuron from Kanaka Rajan, Chris Harvey and David Tank sets out to demonstrate how relatively unstructured networks can give rise to highly structured outputs that persist on slow timescales relevant to behaviors like decision-making and working memory. Such unstructured networks seem at first like exactly the wrong thing to support stimulus-driven persistent activity. Indeed, classic work in the prefrontal cortex revealed individual neurons that respond persistently during delays, presumably supporting the ability of the animal to hold information in mind over that delay. In mouse posterior parietal cortex, however, it's a different story. In a previously published memory-guided decision task, many individual neurons responded only very transiently, for much less time than the animal held those memories in mind. Both that paper and the current one argue that many such neurons could fire in sequence, supporting slow-timescale memory-guided decisions even in the absence of single neurons with persistent activity.
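A toy simulation of the sequence idea (ours, not the paper's actual model) makes the point: even if every unit fires only transiently, a population of units that tile the delay can carry the choice signal the whole way through.

```python
import numpy as np

rng = np.random.default_rng(3)
n_units, n_steps = 50, 500      # 50 units tiling a delay of 500 time bins

# Each unit is active only in a brief window; the windows tile the delay.
t = np.arange(n_steps)
centers = np.linspace(0, n_steps, n_units)
tuning = np.exp(-0.5 * ((t[None, :] - centers[:, None]) / 15.0) ** 2)  # units x time

def population_activity(choice):
    """Transient, sequential firing whose amplitude is mildly choice-dependent."""
    gain = 1.2 if choice == "right" else 0.8
    return gain * tuning + 0.02 * rng.standard_normal(tuning.shape)

# A fixed readout (here, just the summed population activity) reports the choice
# at every moment of the delay, even though no single unit fires persistently.
readout_left = population_activity("left").sum(axis=0)
readout_right = population_activity("right").sum(axis=0)
print("Fraction of delay bins where the choices are separable:",
      np.mean(readout_right > readout_left))
```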

The big steps forward in the current paper are:

  1. The authors demonstrated that a randomly connected network could give rise to this activity. This was an advance for a number of reasons, including the development of a new modeling framework called PINning. This method builds on a now-classic technique, FORCE learning, which generates coherent activity patterns from chaotic networks. PINning is different because only a small percentage (~12%) of synaptic weights are allowed to change (see the sketch after this list). The ability of the network to capture the complex firing rates of 437 neurons when only a few synaptic weights were allowed to change is a big deal.


    Network that learns by PINning; red lines are the only synapses that are allowed to change during learning to match the data.

  2. The paper pointed out features of the data that are incompatible with a traditional model for persistent activity, like bump attractors. This is evidence against an appealing idea (that may be present in other systems) in which a hill of activity moves around the network, driving a persistent response.
  3. Finally, the authors found that the network’s success relied not only on the strongly choice-selective neurons you might expect, but also on neurons that weren’t selective for the animal’s choice at all. In fact, they observed that these seemingly unimportant neurons might play a critical “conveyer belt” role that was essential in supporting more difficult decisions, especially those among many alternatives. The previous paper (and indeed many other studies) mainly excluded these neurons from analysis; an understandable choice at the time, but one that now warrants reconsideration.
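To make the PINning idea concrete, here is the heavily simplified sketch promised in point 1: a random rate network in which only a random ~12% of synapses are plastic. We train it with a plain delta rule as a stand-in for the recursive least-squares update the authors actually use, and with synthetic target rates standing in for the 437 recorded neurons.

```python
import numpy as np

rng = np.random.default_rng(4)
N, T, dt, tau = 200, 300, 1.0, 10.0

# Random recurrent weights in the chaotic regime; only a sparse ~12% subset is plastic.
J = 1.5 * rng.standard_normal((N, N)) / np.sqrt(N)
plastic_mask = rng.random((N, N)) < 0.12

# Synthetic targets: a slow sequence of Gaussian bumps, one per neuron
# (a stand-in for the recorded firing-rate sequences).
t = np.arange(T)
centers = np.linspace(0, T, N)
targets = np.exp(-0.5 * ((t[None, :] - centers[:, None]) / 20.0) ** 2)

lr = 0.01
for epoch in range(20):
    x = np.zeros(N)                        # neuronal "currents"
    total_error = 0.0
    for step in range(T):
        r = np.tanh(x)                     # firing rates
        x = x + dt / tau * (-x + J @ r)    # leaky rate dynamics
        err = np.tanh(x) - targets[:, step]
        total_error += np.mean(err ** 2)
        # Delta-rule update applied ONLY to the plastic 12% of synapses
        # (a toy stand-in for the recursive least-squares update used in PINning).
        dJ = -lr * np.outer(err, r)
        J[plastic_mask] += dJ[plastic_mask]
    if epoch % 5 == 0:
        print(f"epoch {epoch}: mean squared error {total_error / T:.4f}")
```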

There is still a challenge ahead for putative mechanisms that support slow-timescale behaviors like working memory and decision-making. At the moment, there are few causal manipulations that can disrupt proposed mechanisms and demonstrate an effect on behavior. In the framework here, it would be compelling to demonstrate that changing the order of the sequence changed the behavior (admittedly no small feat!). More traditional mechanisms aren't off the hook either: demonstrating that persistent activity at the single-neuron level supports working memory would likewise be aided by precise disruption experiments. Indeed, the single-neuron persistence could be epiphenomenal; the working memories could be supported by some other aspect of the network. Many such manipulation experiments will be feasible in the near future.

Until then, I am excited to see a new mechanism to support slow-timescale behavior. It is counterintuitive that such network complexity can be captured by a randomly connected network, especially one in which such a small number of synapses are allowed to change.




Discoveries made by plant geneticists in the 1940s are changing our understanding of the brain. Specifically, Barbara McClintock's (left) discovery of transposons, for which she won the Nobel Prize, has turned out to be important not only for understanding gene function in plants, but in brains as well. Transposons, described by the New Yorker as "wandering snippets of DNA that hide in genomes, copying and pasting themselves at random," account for ~40% of our genome. They are likely to play a key role in normal brain function, and also might be involved in neurodegenerative diseases including ALS.

The importance of transposons for all biology inspired current CSHL graduate students and motivated them to create a lecture series named after Barbara McClintock. The first one was today, and in recognition of the role of transposons in the brain, they invited a neuroscientist, Ann Graybiel (right) from MIT, to be the first recipient. Ann's work on the striatum has been critical for the field's growing understanding of how incoming inputs can lead to actions, especially ones that are reinforced and become habitual. Her emerging work is especially exciting as her lab is leveraging modern techniques to specifically measure and manipulate classes of cells within the striatum to understand their role in different behaviors and decisions.

To commemorate the creation of this new lecture series and its first recipient, neuroscientists from around New York gathered to honor Ann and attend her talk. Researchers focussing on decision-making, attention, vision and auditory processing came together and some lively discussions ensued! It was a lot of fun to show the setups in my lab to this crew, which included Jackie Gottlieb, Yael Niv, Heather Read and Ariana Maffei, and we realized many links between our collective research programs that I hope will lead to new collaborations down the line.



I am happy to announce another post by a guest blogger. This time, it's Sashank Pisupati, a new graduate student in my lab.

Last week, our lab read a paper by Ramon Reig & Gilad Silberberg titled “Multisensory Integration in the Mouse Striatum”. While studies of multisensory integration have focussed largely on cortical structures and the superior colliculus, this study adds to a growing body of evidence that the striatum may play a key role in this process. Striatal medium-spiny neurons (MSNs) are known to receive convergent projections from multiple sensory cortices, but relatively few studies have reported multisensory responses in these cells.

Here, the authors set out to test whether individual MSNs integrated visual (LED flashes) and tactile (whisker stimulation with air puffs) stimuli in anesthetized mice. In order to observe such synaptic integration, they performed whole-cell patch clamp recordings from striatal neurons. They targeted regions of striatum receiving projections from primary visual (V1) and somatosensory (S1) cortex, as identified by anterograde tracing using BDA.

They found sub-threshold responses to whisker stimulation (purple trace) in all the neurons they recorded from, which were modulated by stimulation strength. More interestingly, in the dorsomedial striatum a subset of these neurons was also responsive to visual stimuli (green trace), with slightly longer peak response latencies. They then presented visual and tactile stimuli together at various relative delays and observed multisensory responses in these cells (orange & black traces) that were sublinearly additive, i.e. less than the linear summation (grey traces) of the visual and tactile responses. Moreover, the peak multisensory response was maximal when the onsets/peaks of the unisensory responses were aligned, suggesting that the neurons summated congruent synaptic inputs.

These findings of multisensory cells in the mouse striatum corroborate similar reports from extracellular recordings in the striatum of rats and cats, and complement them by offering a valuable glimpse of sub-threshold activity. The sub-linear additivity described here contrasts with the super-linear additivity of firing rate responses often emphasized in studies of the superior colliculus.

One of the questions that remained at the end of our discussion was how this result fits into models of multisensory integration such as divisive normalization or Bayes-optimal cue combination. While classical approaches have emphasized the degree of additivity of the unisensory responses, these models make strong predictions about how the weights assigned to each unisensory response in the summation change with the reliability of that sensory modality. For example, we expect the contribution of a visual flash to the summation to decrease for weaker, less reliable flashes.
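For reference, the reliability-weighting logic these models share can be written in a few lines; the numbers below are made up purely for illustration.

```python
import numpy as np

def combine(mu_visual, sigma_visual, mu_tactile, sigma_tactile):
    """Bayes-optimal combination of two independent Gaussian cues: each cue is
    weighted by its reliability (inverse variance)."""
    reliability_v = 1.0 / sigma_visual ** 2
    reliability_t = 1.0 / sigma_tactile ** 2
    w_v = reliability_v / (reliability_v + reliability_t)
    w_t = 1.0 - w_v
    mu_combined = w_v * mu_visual + w_t * mu_tactile
    sigma_combined = np.sqrt(1.0 / (reliability_v + reliability_t))
    return w_v, w_t, mu_combined, sigma_combined

# A strong, reliable flash gets most of the weight...
print(combine(mu_visual=1.0, sigma_visual=0.5, mu_tactile=0.0, sigma_tactile=2.0))
# ...whereas a weak, unreliable flash contributes much less to the combined estimate.
print(combine(mu_visual=1.0, sigma_visual=3.0, mu_tactile=0.0, sigma_tactile=2.0))
```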

One could test this prediction in the authors' current setup by simply varying the stimulus strength for each modality during multisensory presentation. Combined with the power of the patch-clamp approach, this could yield further insight into the sub-threshold computations being performed by these neurons, and we hope to see more such work in the future!

This post is written by guest blogger Matt Kaufman, a postdoc in my lab (left).

Last week our lab read a recent Neuron paper out of the Brody lab, by Kopec, Erlich, Brunton, Deisseroth & Brody, titled “Cortical and subcortical contributions to short-term memory for orienting movements.” This paper continues with that lab’s recent strategy of using optogenetics to briefly inactivate brain areas during decision making.

The experiments were straightforward. They trained rats to judge whether a click train was faster or slower than 50 Hz, then used optogenetics (eNpHR3.0) to inactivate either the Frontal Orienting Fields (FOF) or superior colliculus (SC) on one side of the brain at different points in the trial. This allowed Kopec et al. to see when these areas contributed to making the decision. The key experimental finding was that the rats’ decisions were most biased when either FOF or SC was silenced during the stimulus, a little less biased when silenced early in the subsequent delay, and less biased still when silenced late in the delay. Decisions were essentially unaffected when silencing was performed during the response period.

This finding is initially surprising, because tuning in the FOF increases over the course of the trial (as known from previous studies). They argue, however, that this seeming mismatch makes sense in the context of an attractor dynamics model (below). Since the evidence from the stimulus is not fluctuating in this task, the animal should be able to make its decision quickly. The increasing tuning might be due to attractor dynamics that amplify the tuning with time, while perturbations should mostly impact decisions before the neural activity has had time to settle in an attractor. Additional comparisons, including inactivating both areas together and comparing hard vs. easy trials, quantitatively fit their simple attractor model.
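As a cartoon of that intuition (our toy, not the model fit in the paper): in a one-dimensional bistable system, the same brief kick reliably flips the final choice when delivered before the state has settled into a well, and rarely flips it afterwards.

```python
import numpy as np

rng = np.random.default_rng(5)

def run_trial(perturb_step=None, n_steps=400, dt=0.05,
              drive=0.15, kick=-1.0, noise=0.3):
    """1-D bistable dynamics dx/dt = x - x**3 + drive, with an optional brief
    'silencing' kick toward the other well at perturb_step."""
    x = 0.0
    for step in range(n_steps):
        dx = x - x ** 3 + drive + noise * rng.standard_normal()
        if step == perturb_step:
            dx += kick / dt        # brief, strong push toward the opposite well
        x += dt * dx
    return x > 0                   # final "choice"

def change_in_choice_rate(perturb_step, n_trials=200):
    baseline = np.mean([run_trial(None) for _ in range(n_trials)])
    perturbed = np.mean([run_trial(perturb_step) for _ in range(n_trials)])
    return baseline - perturbed

for label, step in [("during the stimulus", 20), ("late in the delay", 320)]:
    print(label, "-> choice rate changes by", round(change_in_choice_rate(step), 2))
```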


This study forms an interesting contrast with their paper from earlier this year, Hanks et al. 2015 in Nature. There, they took a similar approach but with a temporal integration task. In that task, the FOF was only critical at the end of the stimulus. This again makes sense; you don't want attractor dynamics if you need to integrate instead.

The question on many of our minds was: do these areas “really” exhibit attractor dynamics? On further reflection, though, this is a bit like asking whether the planets follow Newton’s laws. What I mean by that is: neurons, like orbiting planets, aren’t solving equations. Dynamical models, like Newton’s equations, are a mathematical description of how the system behaves over time. But if a model is an easy way to think about a system, and makes intuitive, useful predictions that hold up experimentally, then the model does useful work.

Many questions remain unanswered, of course. In terms of separation of function, are FOF and SC really doing the exact same thing? Are there other tasks where they would function very differently? Regarding dynamics, how does the system learn to produce these attractor dynamics? Since the FOF can apparently be trained to produce different dynamics in animals trained on different tasks, can it support either computation in an animal trained on both tasks? If so, how would it switch its dynamics? We’ll look forward to the next installment.

I recently attended a meeting as part of the Simons Collaboration on the Global Brain. A postdoc in my lab, Matt Kaufman, has an award from this group and so attended as well. The goal of the collaboration is to understand the internal neural signals that interact with sensory inputs and motor outputs to shape behavior.

It was a fantastic meeting. Blaise Aguera y Arcas (Google) talked about machine intelligence and how it has advanced dramatically in recent years, easily accomplishing tasks that seemed impossible half a decade ago. Andrew Leifer (Princeton) talked about a new microscopy system for large-scale imaging in C. elegans. Marlene Cohen described a surprising observation she made that the increased firing rates seen during attention are accompanied by decreased correlations among neurons.
A common theme among all the presentations was the idea that understanding these internal states requires considering the activity of large neural populations. A number of analyses were put forth to achieve that. The ones that were most interesting to me are designed to compare neural population activity during different kinds of behavioral states. We began to do this in our 2014 paper (see figure 7), but have really only begun to scratch the surface. The talks and conversations at the meeting expanded our thinking about new analyses we can use to get at this question. For instance, as an animal goes from making a decision to committing to action, does the population activity simply re-scale, or does it occupy a fundamentally new space?


Our decisions are influenced in part by incoming sensory information, and in part by the current cognitive state of the brain. For instance, a rustle in the bushes can make you run away quickly if you are walking in the dark and worrying about bears, but it has little effect on your behavior if you are deep in thought about something else (your upcoming vacation, for instance). This led us to wonder: how do incoming sensory signals and ongoing cognitive signals interact to guide behavior?

A postdoc in my lab, Farzaneh Najafi, is working to understand this, supported in part by the Simons Collaboration on the Global Brain. We were fortunate to have a collaborator, John Cunningham (Columbia University) visit us today, along with a graduate student in his lab, Gamal Elsayed. Their focus is on understanding neural activity at the population level, and in particular understanding how such populations evolve over time. We hope that their approach can offer insight into our question by helping us evaluate how population dynamics differ depending on the internal cognitive state of the animal. Farzaneh, John and Gamal are pictured below. They are gathered at the 2-photon rig in our lab and are viewing neural activity of labelled inhibitory neurons.

