What is the difference between neural correlation and neural synchrony? Since I am interested in the role of synchrony in neural computation, I am often asked this question. I will try to give a few answers here.

A simple answer is: it’s a question of timescale. That is, synchrony is correlation at a fine timescale, or more precisely, at a timescale shorter than the integration time constant of the neuron. In this sense, the term synchrony implicitly acknowledges that there is an observer of these correlations. This usage is consistent with the fact that neurons are very sensitive to the relative timing of their inputs within their integration time constant (see our recent paper in J Neurosci on the subject).
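To illustrate this sensitivity, here is a toy leaky integrate-and-fire simulation (my own sketch, not the model from the paper; the time constant, synaptic weight, and threshold are arbitrary illustrative values): the same four input spikes trigger an output spike when they arrive within a millisecond of each other, but not when they are spread over tens of milliseconds.

```python
import numpy as np

def lif_response(input_times, tau=0.01, w=0.3, threshold=1.0, dt=1e-4, t_max=0.1):
    """Simulate a leaky integrate-and-fire membrane receiving instantaneous
    synaptic kicks of size w at the given input times (in seconds).
    Returns True if the membrane potential reaches threshold.
    All parameters are illustrative assumptions, not fitted values."""
    v = 0.0
    n_steps = int(t_max / dt)
    spike_steps = set(np.round(np.array(input_times) / dt).astype(int))
    for i in range(n_steps):
        v *= np.exp(-dt / tau)   # leak with membrane time constant tau
        if i in spike_steps:
            v += w               # instantaneous synaptic kick
        if v >= threshold:
            return True
    return False

# Four inputs within 1 ms (well under tau = 10 ms): the kicks sum and cross threshold.
print(lif_response([0.010, 0.0103, 0.0106, 0.0109]))  # True
# The same four inputs spread over 45 ms: the leak dominates, no output spike.
print(lif_response([0.010, 0.025, 0.040, 0.055]))     # False
```

The point of the toy model is only that the integration time constant sets the timescale below which relative input timing matters.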

However, although I have been satisfied with this simple answer in the past, I now feel that it misses the point. I think the difference rather has to do with the distinction between the two main theories of neural computation: rate-based vs. spike-based theories. The term “correlation” is often used in the context of rate-based theories, whereas the term “synchrony” is generally used in the context of spike-based theories (as in my recent paper on computing with neural synchrony). The difference is substantial, and it does not really have to do with the timescale. A correlation is an average, just as a firing rate is an average. Therefore, by using the term correlation, one implicitly assumes that the quantities of interest are averages. In this view, correlations are generally seen as modulating the input-output properties of neurons in a rate-based framework, rather than being the substance of computation. But when using the term synchrony, one does not necessarily refer to an average, but simply to the fact that two spikes occur at similar times. For example, in my recent paper on computing with neural synchrony, I view coincidence detection as the detection of a rare event, that is, a synchrony event that is unlikely to occur by chance. If one takes this view further, then meaningful synchrony is in fact transient, and therefore the concept cannot be well captured by an average, i.e., by correlation.

The distinction might not be entirely obvious, so I will give a simple example here. Consider two independent Poisson inputs, A and B, each with rate F. Given one spike from neuron A, the probability that neuron B spikes within time T after it is 1 − exp(−FT) (the integral of the exponential inter-spike interval distribution), which for small T is essentially proportional to FT; counted over time, coincidences therefore occur at a rate proportional to F squared times T. If T is very small and the two inputs are independent, this event will almost never happen. So if it does happen, even just once, it is unexpected and therefore meaningful, since it means that the assumption of independence was probably wrong. In a way, a coincidence detector can be seen as a statistical test: it tests the coincidence of input spikes against the null hypothesis that the inputs are independent. A single synchrony event can make this test fail, and so the concept cannot be fully captured by correlation, which is an average.
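This calculation is easy to check numerically. The sketch below (my own illustration; the rate, window, and duration are arbitrary choices) generates two independent Poisson trains and counts coincidences, comparing against the small-T prediction of F·T per spike of A:

```python
import numpy as np

rng = np.random.default_rng(0)

def count_coincidences(a, b, window):
    """Count spikes of train a followed by at least one spike of b within `window`."""
    return sum(np.any((b >= t) & (b < t + window)) for t in a)

F = 10.0        # firing rate (Hz); illustrative value
T = 1e-3        # coincidence window (s)
duration = 100.0

# Independent Poisson trains: inter-spike intervals are exponential with mean 1/F.
a = np.cumsum(rng.exponential(1 / F, size=int(2 * F * duration)))
b = np.cumsum(rng.exponential(1 / F, size=int(2 * F * duration)))
a, b = a[a < duration], b[b < duration]

n = count_coincidences(a, b, T)
# Per spike of A, P(B spike within T) = 1 - exp(-F*T) ≈ F*T, so the expected
# number of coincidences is ≈ len(a) * F * T, i.e. a rate of ≈ F**2 * T per second.
print(n, len(a) * F * T)
```

With these values the expected count is around ten coincidences over the whole 100-second run; shrink T further and a single observed coincidence becomes strong evidence against independence.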

To summarize, synchrony is not about determinism vs. stochasticity, nor about correlation on a very fine timescale, nor about very strong correlation. It is about relative timing in individual spiking events, and about how likely such an event is to occur by chance under an independence hypothesis.

Hi Romain,

I fully agree: correlation is a statistical/probabilistic quantity, therefore it makes no sense for unique events, unlike synchrony.

So I think it makes sense, for example, to say that STDP favors correlated input, because learning takes a long time (many events), so statistics apply.

But a neuron is a synchrony detector, not a correlation detector. It may detect a synchronous volley of spikes that happens only once in an arbitrarily long time, which corresponds to an arbitrarily low correlation.

On the same grounds, for most tasks we can rule out that correlations are a part of the neural code, because they cannot be estimated on small time windows.
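A quick numerical illustration of this point (my own toy construction: two trains sharing a common Poisson source so that their count correlation is c = 0.2; rates and bin size are arbitrary): the correlation is recovered accurately from a long recording, but estimates from 100 ms windows scatter wildly.

```python
import numpy as np

rng = np.random.default_rng(1)

def corr_estimate(duration, rate=100.0, bin_size=5e-3, c=0.2):
    """Estimate the spike-count correlation of two trains that share a
    fraction c of a common Poisson source, from binned counts over
    `duration` seconds. All parameters are illustrative assumptions."""
    n_bins = int(duration / bin_size)
    shared = rng.poisson(c * rate * bin_size, n_bins)
    x = shared + rng.poisson((1 - c) * rate * bin_size, n_bins)
    y = shared + rng.poisson((1 - c) * rate * bin_size, n_bins)
    return np.corrcoef(x, y)[0, 1]

# True count correlation is c = 0.2. A 100-second recording pins it down;
# 100 ms windows (20 bins) give estimates that are essentially noise.
long_est = corr_estimate(100.0)
short_ests = [corr_estimate(0.1) for _ in range(20)]
print(long_est, np.std(short_ests))
```

The spread of the short-window estimates is comparable to the quantity being measured, which is the sense in which correlations are unavailable to a decoder working on a single short window.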

Synchrony can definitely be a part of the neural code.

BTW, the same applies to trial-to-trial variability: it cannot be a part of the neural code, since usually we have only one trial to decide.

Best,

Tim