Rate vs. timing (VII): Marr's levels of analysis

In summary, the rate vs. timing debate is not about the timescale of description, but about whether neural activity and computation can be entirely and consistently defined in terms of the time-varying rates r(t) in the network. In fact, it is interesting to cast this debate in the analysis framework proposed by David Marr. I have discussed this framework in other posts, but it is worth explaining it again here. Marr proposed that information processing systems can be analyzed at three levels:

1) The computational level: what does the system do? (for example: estimating the location of a sound source)

2) The algorithmic/representational level: how does it do it? (for example: by finding the maximum of the cross-correlation between the two monaural signals; see the sketch after this list)

3) The physical level: how is it physically realized? (for example: with axonal delay lines and coincidence detectors)
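To make the algorithmic level concrete, here is a minimal sketch (my own illustration in Python/NumPy, not taken from any specific model; the sampling rate, signals, and delay are made up): the interaural time difference is estimated as the lag that maximizes the cross-correlation between the two monaural signals.

```python
import numpy as np

def estimate_itd(left, right, fs, max_lag_s=1e-3):
    """Estimate the interaural time difference as the lag (in seconds)
    maximizing the cross-correlation between the two monaural signals."""
    max_lag = int(max_lag_s * fs)
    lags = np.arange(-max_lag, max_lag + 1)
    # Cross-correlation evaluated over a restricted range of lags
    corr = np.array([np.sum(left * np.roll(right, lag)) for lag in lags])
    return lags[np.argmax(corr)] / fs

# Toy example: the right signal is a delayed copy of the left one
fs = 44100.0
left = np.random.randn(4410)          # 100 ms of noise
delay = int(0.0003 * fs)              # 300 microsecond delay
right = np.roll(left, delay)
print(estimate_itd(left, right, fs))  # close to -0.0003 s under this sign convention
```

At the physical level, the same lags could be realized by axonal delay lines feeding coincidence detectors, as in the example above.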

Rate-based theories postulate that algorithms and representations can be defined independently of the spikes that instantiate them. Indeed, as I argued in previous posts, the instantaneous firing rate is not a physical quantity but an abstraction (in general, a probability of firing), and it is postulated that all algorithms can be defined at the level of rates, without loss. The conversion between spikes and rates is seen as independent of that level. In other words, the rate-based hypothesis is the postulate that the algorithmic and physical levels are independent. In contrast, spike-based theories consider that these levels are not independent, i.e., that algorithms are defined at the level of spikes.
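To illustrate the point that the instantaneous rate is an abstraction (a probability of firing), here is a small sketch (my own illustrative code, with made-up numbers): the underlying rate r(t) only reappears by averaging over many hypothetical repetitions of a Poisson process, whereas any single trial is just a discrete set of spikes.

```python
import numpy as np

dt = 1e-3                                   # time bin (s)
t = np.arange(0.0, 1.0, dt)
rate = 20 + 15 * np.sin(2 * np.pi * 3 * t)  # underlying rate r(t), in Hz

# Many hypothetical repetitions of a Poisson process with instantaneous rate r(t)
n_trials = 1000
spikes = np.random.rand(n_trials, t.size) < rate * dt

# A single trial is just a set of spike times, not an analog signal
print("spikes in trial 0:", t[spikes[0]])

# r(t) only reappears as an average across trials (a probability of firing per bin)
estimated_rate = spikes.mean(axis=0) / dt
print("mean rate, true vs. estimated:", rate.mean(), estimated_rate.mean())
```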

In the sound localization example I used to describe the three levels, the binaural neuron implements the cross-correlation between the two monaural signals. This is possible if one assumes that the monaural signals are transduced into spikes through a Poisson process whose rate equals these signals, that the binaural neuron responds to coincidences, and that the result is the spike count of the binaural neuron. This is rate-based theory (even though it is based on coincidence detection). Alternatively, in Goodman & Brette (2010), signals are transduced into spikes through an essentially deterministic process, and the binaural neuron spikes to signal the similarity between the transduced signals (note that a single spike is meaningful here; see my post on the difference between correlation and synchrony). This is spike-based theory. It also makes a functional difference in the example I just described, because in the spike-based version the neuron is also sensitive to interaural intensity differences.
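For concreteness, here is a minimal sketch of the rate-based version of this example (my own illustration with made-up parameters, not the model of Goodman & Brette, 2010): the monaural signals are transduced into Poisson spike trains whose rates follow the signals, and the binaural neuron's output is a coincidence count, whose expected value reflects the cross-correlation of the two rates.

```python
import numpy as np

dt = 1e-4                          # time bin (s)
t = np.arange(0.0, 10.0, dt)       # 10 s of signal

def poisson_spikes(rate, dt):
    """Transduce a non-negative analog signal (a rate, in Hz) into a
    Poisson spike train, one Bernoulli draw per time bin."""
    return np.random.rand(rate.size) < rate * dt

def coincidence_count(spikes_a, spikes_b, window=1):
    """Count coincidences within +/- `window` bins; this plays the role of
    the binaural neuron's spike count in the rate-based version."""
    return sum(np.sum(spikes_a & np.roll(spikes_b, s))
               for s in range(-window, window + 1))

# Two monaural signals with correlated vs. anti-correlated rate fluctuations
modulation = np.sin(2 * np.pi * 5 * t)
left = poisson_spikes(200 * (1 + modulation), dt)
right_corr = poisson_spikes(200 * (1 + modulation), dt)
right_anti = poisson_spikes(200 * (1 - modulation), dt)

# The expected coincidence count is larger when the underlying rates are correlated
print(coincidence_count(left, right_corr), coincidence_count(left, right_anti))
```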

When expressed as the independence between the algorithmic level and the spike level, the rate-based hypothesis seems like an ad hoc postulate. Why would evolution make it such that neural algorithms can be described in terms of rates? What would be the advantage from the organism's point of view? This is why I see the rate-based hypothesis as a methodological postulate rather than a true scientific hypothesis. That is, it is a postulate that makes it simpler for us, external observers, to describe and understand what neurons are doing. This is so because most of our calculus is based on operations on analog signals rather than on discrete entities (spikes). It is then hoped that this level of description is adequate, but there is no strong biological reason why it should be so. It just seems adequate enough to defenders of rate-based theories, and they endorse it because it is methodologically convenient.

This reminds me of discussions I have had with strong advocates of rate-based theories, who are also reasonable scientists. When faced with evidence and arguments that strongly suggest that rates cannot fully describe neural activity, they may agree. But they remain unconvinced, because they do not see why they should abandon a seemingly working theory (rate-based calculus) for a hypothesis that does not help them understand the system, even though it is more empirically valid (neural computation is based on spikes, but how exactly?). In other words, why bother with the extra complication of spikes? This is what I mean by a "methodological postulate": it is not that there are empirical reasons to think that neurons discard all information about spike timing, but rather that it seems conceptually more convenient to think in terms of analog quantities than in terms of spikes.

This means that this debate will not be resolved by accumulating empirical evidence for or against either alternative. For defenders of spike-based theories, it can only be resolved by providing a convincing theory of spike-based computation that could replace rate-based calculus. For defenders of rate-based theories, the challenge is rather to find mechanisms by which neural activity can truly be reduced to calculus with analog signals – a difficult task, as I will show in the next posts.
