Rate vs. timing (V) Fast rate-based coding

Misconception #4: “A stochastic spike-based theory is nothing more than a rate-based theory, only at a finer timescale”.

It is sometimes claimed or implied that there is no conceptual difference between the two kinds of theories, the only difference being the timescale of the description (short timescale for spike-based theories, long timescale for rate-based theories). This is a more subtle misconception, which stems from a confusion between coding and computation. If one only considers the response of a neuron to a stimulus and how much information there is in that response about the stimulus, then yes, this statement makes sense.

But rate-based and spike-based theories are not simply theories of coding; they are also theories of computation, that is, of how the responses of neurons depend on the responses of other neurons. The key assumption of rate-based theories is that this transformation can be meaningfully reduced to a transformation between analog variables r(t), the underlying time-varying rates of the neurons. These are hidden variables, since only the spike trains are observable. The state of the network is then entirely defined by the set of time-varying rates. There are therefore two underlying assumptions: 1) that a neuron's output spike train can be derived from its rate r(t) alone, and 2) that a sufficiently accurate approximation of the presynaptic rates can be recovered from the presynaptic spike trains, so that the output rate can be calculated.
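As a minimal sketch of what this reduction looks like in practice, here is the standard form of a rate model, in which the network state is just the vector of rates and the computation is a mapping between these analog variables (the network size, weights and gain function below are purely illustrative, not taken from this post):

```python
import numpy as np

# Minimal sketch of a rate-based network: the state is the vector of rates
# r(t), and the computation is a mapping between these analog variables.
rng = np.random.default_rng(0)

N = 100                                      # number of neurons (illustrative)
tau = 0.02                                   # time constant of the rate dynamics (s)
dt = 0.001                                   # integration step (s)
W = rng.normal(0, 1 / np.sqrt(N), (N, N))    # recurrent weights (illustrative)
I_ext = rng.uniform(0, 1, N)                 # constant external drive

def f(x):
    """Static gain function mapping total input to an output rate (Hz)."""
    return 50 * np.tanh(np.maximum(x, 0))

r = np.zeros(N)                              # the rates are the only state variables

for _ in range(1000):
    # tau * dr/dt = -r + f(W r + I_ext): rates depend only on other rates
    r += dt / tau * (-r + f(W @ r + I_ext))
```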

Since spike trains are considered stochastic with (expected) instantaneous rate r(t), assumption #1 means that spike trains are stochastic point processes defined from, and consistent with, the time-varying rate r(t); they could be Poisson processes, but not necessarily. The key point is that the spiking process is based only on the quantity r(t). This means in particular that the source of noise is independent between neurons.
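Concretely, assumption #1 could be sketched as follows: each neuron draws its spikes from its own rate r(t) with a private source of randomness (an inhomogeneous Poisson process is just the simplest instance; the rate profile and discretization below are mine):

```python
import numpy as np

# Sketch of assumption #1: each spike train is generated from the rate r(t)
# alone, with a noise source private to each neuron (inhomogeneous Poisson
# spiking approximated here by independent Bernoulli draws in small time bins).
rng = np.random.default_rng(1)

dt = 0.001                                  # time bin (s)
t = np.arange(0, 1, dt)                     # 1 s of activity
r = 20 + 15 * np.sin(2 * np.pi * 3 * t)     # shared time-varying rate r(t), in Hz
N = 50                                      # neurons driven by the same r(t)

# Spike probability in each bin is r(t)*dt; the draws are independent across
# neurons, so spike trains are conditionally independent given the rate.
spikes = rng.random((N, len(t))) < r * dt   # boolean array: neurons x time bins
```

Any correlation between the resulting spike trains can then only come from correlations between the rates, not from the spike generation itself.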

The second assumption means that the operation performed on input spike trains is essentially independent of the specific realizations of the random processes. There are two possible cases. One alternative is that the law of large numbers applies, so that integrating the inputs produces a deterministic value that depends only on the presynaptic rates. But then the source of noise, which produces stochastic spike trains from a deterministic quantity, must be entirely intrinsic to the neuron. Given what we know from experiments in vitro (Mainen and Sejnowski, 1995), which showed that cortical neurons respond highly reliably to repeated injections of the same fluctuating current, this is a fairly strong assumption. The other alternative is that the output rate depends on higher statistical moments of the total input (e.g. its variance, through the central limit theorem) and not only on its mean. But in this case the inputs must be independent, for otherwise the output rate r(t) could not be expressed as a function of the presynaptic rates alone: the transformation would also depend on higher-order quantities (correlations). A numerical sketch of this point is given below.
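In this sketch (parameters are mine), the mean of the total input is set by the presynaptic rates alone, but its variance changes drastically when the inputs are correlated, so an output rate that depends on moments beyond the mean can only be a function of the presynaptic rates if the inputs are independent:

```python
import numpy as np

# Sketch: total input summed over many presynaptic Poisson trains, either
# independent or correlated (correlations induced by a shared source, with
# the single-neuron rates kept identical in the two cases).
rng = np.random.default_rng(2)

dt, T, rate, N = 0.001, 50.0, 10.0, 100     # bin (s), duration (s), rate (Hz), inputs
nbins = int(T / dt)
p = rate * dt                               # spike probability per bin

# Case 1: independent inputs.
indep = rng.random((N, nbins)) < p
total_indep = indep.sum(axis=0)

# Case 2: same rates, but each input copies a shared train half of the time,
# which correlates the inputs without changing their individual rates.
shared = rng.random(nbins) < p
private = rng.random((N, nbins)) < p
use_shared = rng.random((N, nbins)) < 0.5
corr = np.where(use_shared, shared, private)
total_corr = corr.sum(axis=0)

# The mean total input is set by the rates alone...
print(total_indep.mean(), total_corr.mean())   # both close to N * rate * dt
# ...but the fluctuations are not: correlations inflate the variance.
print(total_indep.var(), total_corr.var())
```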

In other words, the assumptions of rate-based theories mean that spike trains are realizations of independent random processes, with a source of stochasticity entirely intrinsic to the neuron. This is a strong assumption that has little to do with the description timescale.

This assumption is also known to be inconsistent in general in spiking neural network theory. Indeed, it is possible to derive self-consistent equations describing the transformation between the rates of independent input spike trains and the output rate of an integrate-and-fire model (Brunel 2001), but these equations only hold if one postulates that connections between neurons are sparse and random. This postulate ensures that there are no short cycles in the connectivity graph, so that the inputs to a neuron are effectively independent. Otherwise, the assumption of independent outputs is inconsistent with the overlap in inputs between neurons. Unfortunately, neural networks in the brain are known to be non-random, with short cycles (Song et al. 2005).
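For concreteness, the self-consistent description referred to here is, in the diffusion approximation for a leaky integrate-and-fire neuron, roughly of the following form (the notation is mine, not the post's, and this is only a sketch of that line of work):

\[
\frac{1}{\nu} \;=\; \tau_{\mathrm{rp}} \;+\; \tau_m \sqrt{\pi} \int_{(V_r-\mu)/\sigma}^{(\theta-\mu)/\sigma} e^{u^2}\,\bigl(1+\operatorname{erf}(u)\bigr)\,du,
\qquad
\mu = C J \nu_{\mathrm{in}} \tau_m, \quad \sigma^2 = C J^2 \nu_{\mathrm{in}} \tau_m,
\]

where θ is the spike threshold, V_r the reset potential, τ_rp the refractory period, C the number of synapses per neuron, J their strength and ν_in the presynaptic rate. In a recurrent network ν_in includes ν itself, so this is a fixed-point equation; crucially, describing the total input through μ and σ alone is only valid if the inputs can be treated as independent, which is where the sparse random connectivity postulate enters.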

To be fair, it is still possible that neurons sharing inputs have weakly correlated outputs, if inhibition precisely tracks excitation (Renart et al. 2010). But it should be stressed that it is the assumptions of rate-based theories that require a specific non-trivial mechanism, rather than those of spike-based theories. It is ironic that spike-based theories are sometimes depicted as exotic by proponents of rate-based theories, while the burden of proof should in fact rest on the latter.

To summarize this post: the rate vs. timing debate is not about the timescale of the description, but about the notion that neural activity and computation can be entirely and consistently described by the time-varying rates r(t) in the network. This boils down to whether neurons spike in a stochastic, independent manner, conditionally on the input rates. It is worth noting that this is a very strong assumption, with currently very little evidence in its favor and a lot of evidence against it.
