Computing with neural synchrony (new paper)

It seems that I haven't written a blog entry for three years now. Oops! I figured I could use this blog to announce new papers. Since I have not written here for a while, this will be a long post!

Here is my most recently published paper:

Brette R (2012). Computing with neural synchrony. PLoS Comp Biol. 8(6): e1002561. doi:10.1371/journal.pcbi.1002561.

It is a theoretical paper (no experimental data). I try to address two questions: 1) does synchrony matter? and 2) what is synchrony good for, from a functional point of view?


Does synchrony matter?

In fact, we addressed the first question in a paper published last year, combining theory and slice experiments:

Rossant C, Leijon S, Magnusson AK, Brette R (2011). Sensitivity of noisy neurons to coincident inputs. J Neurosci 31(47):17193-17206.

In the PLoS paper above, I complement this analysis with another point of view, so I will try to summarize the results of both papers here. Synchrony (that is, correlations at a short timescale, comparable to the integration time constant of neurons) has been observed in a number of experimental studies. But some authors would argue that it is a meaningless correlate of neural network dynamics. Our approach is very straightforward: what is, quantitatively, the impact of coincident input spikes on the output of a postsynaptic neuron? Is it a mild modulation, or is it very large? There are different ways to approach the issue, but essentially the answer is: extremely large.

One approach is based on the analysis of the impact of correlations on the membrane potential distribution. Neurons generally live (in vivo) in a fluctuation-driven regime: they fire irregularly, in response to fluctuations of their total synaptic input above the spike threshold. In this regime, the output rate depends not only on the mean input, but also on its variance. The mean does not depend on input correlations. But the variance does. When the inputs are independent, the variance scales as N, the number of synapses. When they are not, the variance is the sum of covariances between all pairs of inputs, and there are about N² of them. This means that even if pairwise correlations are of order 1/N, they still have a major impact on the neuron's output. Therefore, for correlations not to matter would require specific cellular or network mechanisms that cancel these correlations at all times.
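
To make the scaling explicit, here is the back-of-the-envelope version of the argument (a sketch; σ² denotes the variance of a single input and c the pairwise correlation coefficient):

$$
\mathrm{Var}\left(\sum_{i=1}^{N} X_i\right) = \sum_{i=1}^{N}\mathrm{Var}(X_i) + \sum_{i\neq j}\mathrm{Cov}(X_i,X_j) = N\sigma^2 + N(N-1)\,c\,\sigma^2
$$

If c is of order 1/N, the correlation term is of order Nσ², that is, of the same order as the independent term: the vanishingly small pairwise correlations collectively contribute as much variance as the inputs themselves.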

The second approach shows that even if these correlations are somehow cancelled, the neuron remains highly sensitive to coincident inputs. Consider background synaptic activity, resulting in a membrane potential distribution peaking below threshold. If this is the sum of N independent inputs (or inputs with cancelled correlations), the standard deviation is of order √N. Now add p coincident input spikes. You can easily show that these input spikes cause a number of extra output spikes that increases supralinearly with p. In the J Neurosci paper, we calculate it, and the formula agrees very well with slice experiments (in a cortical neuron). The calculations show that a very small number of input spikes is enough to produce an output. This is related to the "coincidence advantage" discussed by Moshe Abeles in the 1980s, and to a number of experimental papers by Usrey, Alonso and Reid on the LGN->V1 projection. We also show that tiny modifications in spike timing completely change the output rate, even though pairwise correlations remain very close to zero.
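
To get a feel for this effect, here is a minimal simulation sketch (not the model or parameters of the paper; all values are illustrative): a leaky integrate-and-fire neuron in a fluctuation-driven regime receives p extra coincident spikes, and the mean number of extra output spikes grows supralinearly with p.

```python
# Sketch: extra output spikes caused by p coincident input spikes,
# on top of fluctuating background activity (Ornstein-Uhlenbeck noise).
# All parameter values are illustrative, not those of the paper.
import numpy as np

rng = np.random.default_rng(0)

dt = 0.1e-3                              # time step (s)
n = int(1.0 / dt)                        # 1 s trials
tau = 10e-3                              # membrane time constant (s)
v_rest, v_thresh = -70e-3, -55e-3        # rest and threshold (V)
mu, sigma = -60e-3, 4e-3                 # background mean and std (V)
w = 0.5e-3                               # size of one PSP (V)

def mean_spike_count(p, n_trials=500):
    """Mean output spike count when p coincident spikes arrive at t = 0.5 s."""
    t_coinc = n // 2
    v = np.full(n_trials, v_rest)
    count = 0
    for i in range(n):
        # Ornstein-Uhlenbeck approximation of background synaptic input
        v += dt / tau * (mu - v) + sigma * np.sqrt(2 * dt / tau) * rng.standard_normal(n_trials)
        if i == t_coinc:
            v += p * w                   # p coincident input spikes
        fired = v > v_thresh
        count += fired.sum()
        v[fired] = v_rest                # reset after a spike
    return count / n_trials

baseline = mean_spike_count(0)
for p in (5, 10, 15, 20):
    print(p, "coincident spikes ->", mean_spike_count(p) - baseline, "extra output spikes")
```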

In the PLoS paper I take another point of view, based on signal detection theory. Consider the problem of detecting coincident inputs in background noise, from observing the membrane potential alone. Again, because the signal scales linearly with the number p of coincident spikes while the noise (independent inputs) scales with the square root of the total number N of inputs, even a very small number of coincident input spikes is highly detectable.
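
In terms of a discriminability index (again a sketch; w stands for the size of a unitary PSP), the deflection produced by the coincident spikes is pw while the background fluctuations have standard deviation proportional to w√N, so

$$
d' \sim \frac{p\,w}{w\sqrt{N}} = \frac{p}{\sqrt{N}}
$$

For example, with N = 10,000 synapses, on the order of p = 100 coincident spikes (1% of the inputs) already gives d' ≈ 1.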

Coincidence detection is also enhanced by the fact that the spike threshold adapts to the membrane potential. In this paper, we explain what this means for synaptic integration and coincidence detection:

Platkiewicz J and Brette R (2011). Impact of Fast Sodium Channel Inactivation on Spike Threshold Dynamics and Synaptic Integration. PLoS Comp Biol 7(5): e1001129. doi:10.1371/journal.pcbi.1001129.
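
The intuition can be conveyed with a generic adaptive-threshold model (a minimal sketch of my own, not the sodium-inactivation model of the paper; all values are made up): when the threshold tracks the membrane potential with a short delay, a slow depolarization drags the threshold up with it and never crosses it, while the same depolarization delivered at once outruns the threshold and fires the neuron.

```python
# Sketch: an adaptive threshold that relaxes toward a function of the
# membrane potential. The same 20 mV depolarization triggers a spike when
# delivered fast, but not when delivered slowly. Illustrative values only.
import numpy as np

dt = 0.1e-3
tau_th = 5e-3                            # threshold time constant (s)
v_rest, theta0, alpha = -70e-3, -55e-3, 0.5

def crosses(depol):
    """depol[i]: imposed depolarization above rest at step i (voltage-clamp
    style, for simplicity). Returns True if v crosses the adaptive threshold."""
    theta = theta0
    for d in depol:
        v = v_rest + d
        # threshold relaxes toward theta0 + alpha * (v - v_rest)
        theta += dt / tau_th * (theta0 + alpha * (v - v_rest) - theta)
        if v > theta:
            return True
    return False

n = 5000                                  # 0.5 s
slow = 20e-3 * np.arange(n) / n           # ramp to +20 mV over 0.5 s
fast = np.zeros(n); fast[n // 2] = 20e-3  # same +20 mV, arriving at once
print("slow depolarization spikes:", crosses(slow))   # False: threshold follows
print("fast depolarization spikes:", crosses(fast))   # True: threshold lags
```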

Finally, in the synchrony paper, I also show that synchronously activated synapses tend to be potentiated by STDP, so synchrony has a huge effect both on input-output properties and on learning. The finding that correlated inputs are selected by STDP is not new; I just put it in a functional context.
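
This selection effect is easy to reproduce with textbook ingredients (a sketch using standard additive STDP on an integrate-and-fire neuron, not the paper's exact model; parameters are illustrative): synapses that fire synchronously reliably precede the postsynaptic spikes they cause, so they end up potentiated, while independent synapses drift down.

```python
# Sketch: additive STDP with 50 synchronized + 50 independent Poisson inputs.
# Synchronous synapses end up stronger. Illustrative parameters only.
import numpy as np

rng = np.random.default_rng(1)
dt = 1e-3
n_steps = int(200.0 / dt)               # 200 s of simulation
N, rate = 100, 10.0                     # 100 inputs at 10 Hz each
tau_m = 20e-3
v, v_th = 0.0, 1.0                      # dimensionless membrane potential
tau_pre, tau_post = 20e-3, 20e-3        # STDP trace time constants
a_plus, a_minus = 0.01, 0.012           # potentiation / depression amounts
w = np.full(N, 0.05); w_max = 0.1       # synaptic weights
x_pre, x_post = np.zeros(N), 0.0        # plasticity traces

for _ in range(n_steps):
    spikes = np.empty(N, dtype=bool)
    spikes[:50] = rng.random() < rate * dt        # one shared spike train
    spikes[50:] = rng.random(50) < rate * dt      # independent spike trains

    v += dt / tau_m * (0.0 - v) + w @ spikes      # leaky integration
    x_pre = x_pre * (1 - dt / tau_pre) + spikes
    x_post *= 1 - dt / tau_post

    w[spikes] -= a_minus * x_post                 # pre after post: depression
    if v > v_th:                                  # postsynaptic spike
        v = 0.0
        x_post += 1.0
        w += a_plus * x_pre                       # pre before post: potentiation
    np.clip(w, 0.0, w_max, out=w)

print("mean weight, synchronous inputs:", w[:50].mean())
print("mean weight, independent inputs:", w[50:].mean())
```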


What is synchrony good for?

But the more important point of the PLoS paper is about the possible function of synchrony. To be useful, synchrony has to be stimulus-dependent. So I define the "synchrony receptive field" of a given pair or group of neurons as the set of stimuli that produce synchronous responses in these neurons. I give a toy example with duration-selective neurons, but the more interesting part is the application to sensory modalities.

The idea is the following. One of the main functions of a sensory system is to extract invariant structure from sensory signals. "Structure" simply means something in the sensory signals that distinguishes them from random inputs. For example, sounds produced by an actual source in the environment produce two binaural signals with very precise relationships between them, which depend on the source location. The task of the sensory system is to identify these relationships. "Invariant" refers to the idea that, in a fixed environment, relationships that remain fixed when something else changes (for example the observer's position) tell you something about the environment. In binaural hearing, the source of change is simply the source signal itself, and what persists is the relationship between the two binaural signals, which only depends on the location of the sound source.

This idea has been articulated in particular by James Gibson, a visual psychologist who developed an "ecological theory" of perception. From a computational point of view, an interesting aspect of this theory is that the problem of invariance is solved from the start: instead of trying to design operations that are invariant to a number of transformations, you trigger these transformations and observe the relationships that persist (e.g. for vision, by moving and looking at the changes in the visual input). Note that this is very different from the standard point of view in most neural network theory, in which the main paradigm is pattern recognition, inspired largely by David Marr's work in vision. Here we do not compare the sensory inputs with previously stored patterns; we look for structure (relationships) in these sensory signals.

What is the relationship with synchrony? The idea is very simple. If there is some structure, that is, some regularity in the sensory signals, then neurons with different receptive fields will encode these signals into spike trains in which the sensory structure appears as synchrony patterns. In the paper, I show that these synchrony patterns are intrinsically invariant to a number of transformations, simply because the structure itself is invariant. I then develop mostly a (simplified) example of odor recognition: I show how a simple network based on these ideas can perform concentration-invariant odor recognition, and how the network can learn to recognize an odor and generalize across a large range of concentrations.
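
Here is a toy version of the invariance argument (my own sketch, deliberately simplistic, not the encoding model of the paper). Suppose receptor neuron i spikes whenever its input a_i·c·s(t) crosses its threshold θ_i from below, where s(t) is the odor signal, a_i the receptor sensitivity and c the concentration. Two neurons with the same ratio θ_i/a_i then cross at exactly the same times, at any concentration, so their synchrony is concentration-invariant; a pair with different ratios is not:

```python
# Toy sketch: concentration-invariant synchrony from threshold encoding.
import numpy as np

rng = np.random.default_rng(2)

# Smooth fluctuating "odor signal" s(t), normalized to [0, 1]
t = np.linspace(0.0, 1.0, 10000)
s = np.convolve(rng.standard_normal(t.size), np.ones(500) / 500, mode="same")
s = (s - s.min()) / (s.max() - s.min())

def spike_times(a, theta, c):
    """Times where a * c * s(t) crosses theta from below."""
    x = a * c * s
    return t[1:][(x[1:] >= theta) & (x[:-1] < theta)]

for c in (0.5, 1.0, 2.0):                         # three concentrations
    t1 = spike_times(a=1.0, theta=0.3, c=c)
    t2 = spike_times(a=2.0, theta=0.6, c=c)       # theta/a = 0.3, same as neuron 1
    t3 = spike_times(a=1.0, theta=0.45, c=c)      # theta/a = 0.45, different
    print(f"c = {c}: 1 and 2 synchronous: {np.array_equal(t1, t2)}, "
          f"1 and 3 synchronous: {np.array_equal(t1, t3)}")
```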

Two years ago, we applied this principle to binaural hearing:

Goodman DF and R Brette (2010). Spike-timing-based computation in sound localization. PLoS Comp Biol 6(11): e1000993. doi:10.1371/journal.pcbi.1000993.

This was really a proof-of-principle study, in which we showed how these ideas can be used to identify the location of a sound source from realistic acoustic signals, coming from a virtual auditory environment (with real diffraction properties and natural sounds). We have since refined the model and are currently trying to test it with in vivo recordings.
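
The flavor of the computation, in a very stripped-down sketch (mine, not the published model, which uses measured acoustic filters and spiking coincidence detectors): the two ears receive delayed copies of the same unknown signal, and the location is read out as the internal delay at which the two channels are most synchronous.

```python
# Stripped-down sketch: estimating an interaural time difference (ITD)
# as the internal delay that maximizes synchrony between the two ears.
import numpy as np

rng = np.random.default_rng(3)
fs = 44100                                # sample rate (Hz)
source = rng.standard_normal(fs)          # 1 s source signal (here: noise)

true_itd = 17                             # ITD in samples (~0.39 ms)
left = source[true_itd:]                  # the left ear leads...
right = source[:-true_itd]                # ...and the right ear lags

def synchrony(d):
    """Normalized correlation between left[i] and right[i + d]."""
    if d >= 0:
        a, b = left[:len(left) - d], right[d:]
    else:
        a, b = left[-d:], right[:len(right) + d]
    return np.dot(a, b) / np.sqrt(np.dot(a, a) * np.dot(b, b))

delays = np.arange(-40, 41)               # bank of internal delays
best = delays[np.argmax([synchrony(d) for d in delays])]
print("true ITD:", true_itd, "estimated ITD:", best)
```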

At the end of the synchrony paper, I also mention an application to vision: the synchrony receptive field of two LGN neurons is an edge with a particular orientation. I was very excited to read this week that a group has actually found exactly that, using recordings of multiple LGN neurons responding to gratings:

Stanley et al. (2012). Visual Orientation and Directional Selectivity through Thalamic Synchrony. J Neurosci 32(26):9073-9088.

And they show that this could provide the contrast-invariance property that is seen in V1 neurons. In fact, during my PhD I worked on exactly this problem, using simple models, and I predicted these findings; unfortunately, this is only written up in my thesis, which is in French.


Code

I have to clean up my code before it is usable, but I will soon upload it to ModelDB and the Brian examples!