I have identified the rate-based hypothesis as a methodological postulate, according to which neural activity can be entirely described by underlying rates, which are abstract variables. Individual spikes are then seen as instantiations of a random point process with the given rates. In David Marr’s terminology, this can also be seen as a hypothesis of independence between the algorithmic and physical levels. By contrast, spike-based theories consider that algorithms are defined at the spike level.
What is the empirical evidence? I will start by showing that most arguments that have been used in this debate do not actually help us distinguish between the two alternatives.
Variability. Perhaps the most frequently used argument against spike-based theories is the fact that spike trains in vivo are variable, both temporally and across trials, and yet this may well be the least relevant argument. I addressed this point in detail in my third post, so I will only briefly summarize. Within one recording, inter-spike intervals (ISIs) are highly variable. This has been taken as a sign that spike trains are instantiations of random point processes. But variability of ISIs is also a requirement of spike-based theories, because it increases the amount of information available in spike timing (mathematically, among ISI distributions with a given mean, the exponential distribution of a Poisson process has maximal entropy). More generally, temporal or spatial variability can never distinguish between random and deterministic schemes, because the entropy of a distribution reflects either the information content (if it is used for the code) or the amount of randomness (if it cannot be used). This brings us to the argument of variability across trials, that is, the lack of reproducibility of neural responses to the same stimulus. But this is a category error: these observations tell us that responses are stochastic, not that the activity can be fully described by rates. It is therefore an argument in the stochastic vs. deterministic debate, not in the rate vs. spike debate. It is also a weak argument, because only the stimulus is controlled. The state of the brain (e.g. due to attention, or any other aspect of the network dynamics) is not. In some cases, the sensory inputs themselves are not fully controlled (e.g. eye movements in awake animals). Therefore, the lack of reproducibility may reflect either true stochasticity or simply uncertainty about uncontrolled variables, which may still be accessible to the nervous system. The lack of reproducibility itself is also contentious, at least in subcortical and primary cortical areas.
But since I pointed out that this is not a very relevant argument anyway, I will not comment on this evidence (although I should note that strong reproducibility at the spike level would be an argument against rate-based theories).
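The entropy claim above can be checked numerically. The sketch below (Python; the 20 ms mean ISI and the gamma shape of 4 are arbitrary illustrative choices, not values from the text) compares a histogram estimate of ISI entropy for a Poisson spike train against a more regular train with the same mean rate:

```python
import math
import random

random.seed(0)

def entropy_estimate(samples, bins=80):
    """Histogram estimate of differential entropy, in nats."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / bins
    counts = [0] * bins
    for x in samples:
        counts[min(int((x - lo) / width), bins - 1)] += 1
    n = len(samples)
    h = 0.0
    for c in counts:
        if c:
            p = c / n
            h -= p * math.log(p / width)
    return h

mean_isi = 0.02  # 20 ms mean interval (50 Hz), an illustrative value

# Poisson process: exponential ISIs; more regular train: gamma ISIs, same mean
exp_isis = [random.expovariate(1 / mean_isi) for _ in range(200_000)]
gam_isis = [random.gammavariate(4, mean_isi / 4) for _ in range(200_000)]

h_exp = entropy_estimate(exp_isis)
h_gam = entropy_estimate(gam_isis)
print(f"entropy, exponential ISIs: {h_exp:.3f} nats")
print(f"entropy, gamma(4) ISIs:    {h_gam:.3f} nats")
# The exponential (Poisson-like) train has the higher ISI entropy, i.e. the
# larger capacity for carrying information in spike timing, at the same rate.
```

The point of the comparison is that the two trains have the same mean rate, so a rate description cannot tell them apart; the difference is entirely in the ISI distribution.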
Chaos. Related to the variability arguments is the chaos argument (see my related post). It has been claimed that neural networks are chaotic. This is an interesting point, because it has been used to argue in favor of rate-based theories when in fact it is an argument against them. What chaos implies is an absence of reproducibility of neural responses to a given stimulus. As I argued in the previous paragraph, by itself this has no value in the rate vs. spike debate. But if the lack of reproducibility is indeed due to chaotic dynamics, then this goes against the rate-based hypothesis. Indeed, chaotic systems are deterministic; they cannot be described as random processes. In particular, the variables are not independent, and trajectories live in lower-dimensional spaces (attractors). I am not convinced that network dynamics are truly chaotic (although they might be), but if they are, then it is the defenders of rate-based theories who should be worried.
Selectivity curves. The concept of the selectivity curve, or tuning curve, has been used extensively in research on sensory systems (e.g. the visual cortex). It has been found, for example, that many cells in the primary visual cortex fire more in response to a moving bar or grating with a specific orientation. This observation is often reported as the statement that these neurons code for orientation. Implicitly, this means that the firing rate of these neurons contains information about orientation, and that this is the information used by the rest of the system. However, this is not what these experiments tell us. They only tell us that the firing rate covaries with stimulus orientation, nothing more. This cannot be an argument for rate-based theories, because in spike-based theories the firing rate also varies with stimuli (see my specific post). Indeed, processing stimuli with spikes requires producing spikes, and so stimulus-dependent variations in firing rate are a necessary correlate of spike-based computation. It is useful to interpret spike counts in terms of energy consumption, and with this notion in mind, what orientation selectivity curves tell us is not that the cells code for orientation, but rather that they care about orientation (or about a specific orientation). This is still quite an informative statement, but it does not tell us whether the firing rate is the right quantity to look at.
Fast processing. To be fair, I will now critically examine an argument that has been used to contradict rate-based theories. It has been shown with psychophysical experiments that complex visual tasks can be performed by humans in very little time, so little time that any neuron along the processing chain may only fire once or not at all. This observation contradicts any scheme based on counting spikes over time, but it does not contradict views based on rate as a firing probability or as a spatial average – however, it does impose constraints on these views. It also rules out schemes based on interspike intervals. In other words, it discards computing schemes based on information obtained within single neurons (interspike interval or spike count) rather than across neurons (relative timing or population firing rate).
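This constraint can be made concrete with a back-of-envelope calculation. The numbers below (a 150 ms reaction-time budget, 10 processing stages, a 10 Hz typical rate) are illustrative assumptions in the spirit of these psychophysical studies, not values taken from the text:

```python
# Back-of-envelope for the fast-processing argument; all numbers are
# illustrative assumptions, not measurements.
total_time_s = 0.150      # assumed time budget for the whole visual task
n_stages = 10             # assumed number of processing stages, retina to decision
typical_rate_hz = 10.0    # assumed typical cortical firing rate

time_per_stage_s = total_time_s / n_stages
expected_spikes = typical_rate_hz * time_per_stage_s

print(f"time available per stage: {time_per_stage_s * 1e3:.0f} ms")
print(f"expected spikes per neuron per stage: {expected_spikes:.2f}")
# With well under one expected spike, a neuron cannot count its own spikes
# or measure an interspike interval within its stage: the information must
# be carried across neurons (relative timing or population activity).
```

Under these assumptions each neuron has roughly 15 ms, i.e. a fraction of a spike on average, which is exactly the point of the argument.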
High correlations. A number of studies claim that there are, in some cases, significant correlations between neural responses. For example, neurons of the LGN that share a presynaptic retinal ganglion cell tend to fire synchronously, at a short timescale. This contradicts one of the claims of rate-based theories, namely that firing between neurons is independent, conditionally on the underlying rates. Other studies have shown oscillations that organize spiking at different timescales (e.g. in the visual cortex, and in the hippocampus). These observations may be seen as contradicting rate-based theories (especially the former), but one could object that 1) these correlations may still not have a big impact on neural dynamics, and 2) even if they do, it is only a minor modification to rate-based theory if they do not depend systematically on the stimulus. For example, opponents of oscillation-based theories would argue that oscillations are a by-product of the fact that networks are recurrent: as feedback systems they can develop oscillations, which bear no functional significance. In the same way, fine-scale correlations between neighboring LGN neurons may result from anatomical factors, but they may merely amplify the thalamic input to the cortex – not a fundamental change in rate-based theory. But there are now a number of studies, in the cortex (e.g. from Singer’s lab) and in the hippocampus (e.g. from Buzsaki’s lab), that show a systematic relationship between functional aspects and oscillatory properties. Fine-scale correlations have not been studied as extensively in relationship to stimulus properties, but a recent study showed that the correlation between two neighboring LGN neurons in response to oriented gratings is tuned to orientation (Stanley et al. 2012). These cells project to cortical neurons in V1, whose firing rate is tuned to orientation. Thus, there is pretty clear evidence that correlations can be stimulus-dependent.
The main question, then, is whether these correlations actually make a difference. That is, does the firing rate of a neuron depend mainly on the underlying rates of the presynaptic neurons, or can fine scale correlations (or, say, a few individual spikes) make a difference? I will come back to this question in more detail below.
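To see why such fine-scale correlations escape a description by underlying rates, here is a minimal simulation of the LGN example above. All rates and the recording duration are made-up illustrative values, not figures from any of the cited studies:

```python
import random

random.seed(1)

dt_bins = 0.001       # 1 ms bins, a fine timescale
duration = 200.0      # seconds of simulated recording (illustrative)
n_bins = int(duration / dt_bins)

p_shared = 0.01       # shared presynaptic source: 10 Hz (assumed)
p_own = 0.01          # independent drive per neuron: 10 Hz (assumed)

def corr(a, b):
    """Pearson correlation of two binned spike trains."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
    va = sum((x - ma) ** 2 for x in a) / n
    vb = sum((x - mb) ** 2 for x in b) / n
    return cov / (va * vb) ** 0.5

# Two LGN-like neurons driven by one shared input plus private input
shared = [random.random() < p_shared for _ in range(n_bins)]
n1 = [int(s or random.random() < p_own) for s in shared]
n2 = [int(s or random.random() < p_own) for s in shared]

# Control pair: (nearly) the same firing rates, no shared input
c1 = [int(random.random() < p_shared + p_own) for _ in range(n_bins)]
c2 = [int(random.random() < p_shared + p_own) for _ in range(n_bins)]

print(f"shared-input pair, 1 ms correlation: {corr(n1, n2):.3f}")
print(f"independent pair,  1 ms correlation: {corr(c1, c2):.3f}")
# The two pairs have essentially identical rates, so a rate description is
# the same for both; only the fine-timescale correlation tells them apart.
```

The design choice here is deliberate: both pairs are indistinguishable at the rate level, which is exactly the situation where the conditional-independence claim of rate-based theories does real work, and fails.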
Low correlations. Before I discuss the impact of correlations on neural firing, I will also comment on the opposite line of argument. A few recent studies have claimed that correlations between cortical neurons are weak. First of all, the term “weak” is vague: weak compared to what? Is 0.1 a weak or a strong correlation? Such unqualified statements are subjective. One would intuitively think that 0.01 is a very weak correlation, in the sense that it is probably as if it were zero. But this is mere speculation. Another statement might be that correlations are not statistically significant. This statement is objective, but not conclusive. It only means that nonzero correlations could not be detected given the duration of the recordings, which amounts to saying that correlations are smaller than the smallest amount that could be detected. This is no more informative than saying that there is (say) a 0.1 correlation – it is even less informative, if this detection threshold is not stated. So is 0.1 a weak or a strong pairwise correlation? The answer, in general, is that it is a huge correlation. As argued in Rossant et al. (2011), correlations make a huge difference to postsynaptic firing unless they are negligible compared to 1/N, where N is the number of synapses of the cell. For a typical cortical neuron, this would mean negligible compared to 0.0001. The argument is very simple: independent inputs contribute to the membrane potential variance in proportion to N, but correlated inputs in proportion to c·N², where c is the pairwise correlation. The real question, in fact, is rather how to deal with such huge correlations (more on this below).
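The N versus c·N² scaling is easy to verify by simulation. In the sketch below (illustrative parameters: 1,000 synapses rather than 10,000, to keep it fast), correlated inputs are built by letting each synapse copy a common source with probability √c, which yields a pairwise correlation of c between any two inputs:

```python
import random

random.seed(2)

N = 1000          # number of synapses (illustrative; a cortical cell has ~10,000)
p = 0.02          # probability that a given synapse is active in one time bin
c = 0.1           # pairwise correlation between inputs
trials = 4000

def total_input(pairwise_corr):
    """Number of active synapses in one bin, with the given pairwise correlation."""
    a = pairwise_corr ** 0.5   # copy a common source with prob a -> correlation a**2
    s = random.random() < p    # the common source
    count = 0
    for _ in range(N):
        if random.random() < a:
            count += s
        else:
            count += random.random() < p
    return count

def variance(samples):
    m = sum(samples) / len(samples)
    return sum((x - m) ** 2 for x in samples) / len(samples)

var_indep = variance([total_input(0.0) for _ in range(trials)])
var_corr = variance([total_input(c) for _ in range(trials)])

print(f"variance, independent inputs: {var_indep:.0f}")  # ~ N p (1-p), about 20
print(f"variance, c = 0.1:            {var_corr:.0f}")   # ~ N p (1-p) (1 + (N-1) c)
print(f"ratio: {var_corr / var_indep:.0f}")              # ~ 1 + (N-1) c, about 100
```

With only a 0.1 pairwise correlation, the input variance grows by roughly two orders of magnitude for N = 1000, which is the quantitative content of the "huge correlation" claim.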
Below I will discuss in more detail the impact of correlations on postsynaptic firing, but before that, I would first like to stress two important facts: 1) the presence of pairwise correlations is not critical to all spike-based theories, and 2) in other spike-based theories, relative spike timing is stimulus-dependent and possibly transient. Indeed, there are prominent spike-based theories based on asynchrony rather than synchrony. For example, the rank-order theory (e.g. from Thorpe’s lab) proposes that information is encoded in the relative activation order of neurons, with no particular role for synchrony. The theory does not predict a high amount of correlations. However, this rank-order information may still manifest itself in cross-correlograms, as a stimulus-dependent asymmetry. Another example is the predictive spike coding theory defended by Sophie Denève, in which neurons fire when a specific criterion is fulfilled, so as to minimize an error. This predicts that neurons fire asynchronously, in fact in a slightly anti-correlated way. Finally, even in theories based on synchrony, such as the one I presented recently (Brette 2012), neurons are not correlated in general. In the theory I proposed, synchrony is an unlikely event, which is detected by neurons. It is precisely because it is unlikely that it is meaningful – in this case, it signals some structure that is unlikely to be observed by chance. I must concede, however, that when a structured stimulus is presented, specific neuron groups fire in synchrony throughout the duration of the stimulus. I actually do not think that this should necessarily be the case (except for the binaural system). Pushing the theory further, I would argue that once the stimulus structure is established and recognized, it is no longer unlikely, and therefore only the onset of the synchrony event is meaningful and required by the theory.
Therefore, the prediction of the theory is rather that there are transient synchrony events, associated with specific stimulus properties, which have an impact on target neurons. To summarize, spike-based theories do not generally predict strong correlations, and none of these theories predicts correlations in spontaneous activity.
This post is already long, so I will finish with a brief discussion of the impact of correlations on postsynaptic firing – a longer one in the next post. As I mentioned above, very small pairwise correlations have a huge impact on postsynaptic firing. To be negligible, they should be small compared to 1/N, where N is the number of synapses of the postsynaptic neuron. Another way to look at it, which is discussed in detail in Rossant et al. (2011), is that changing the timing of a few spikes (on the order of 10 synapses, out of 10,000) has a dramatic effect on postsynaptic firing (i.e., from silent to strongly firing). This point was already made in the 1980s by Abeles. The phenomenon occurs specifically in the fluctuation-driven regime, so in the next post I will describe this regime and what it means for the debate.
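As a rough illustration of this sensitivity, here is a minimal integrate-and-fire simulation in the diffusion approximation. All parameters are illustrative assumptions, not values from Rossant et al. (2011): the mean input is set below threshold so that the neuron is in the fluctuation-driven regime, and a small volley of about 10 synchronized input spikes is added periodically.

```python
import math
import random

random.seed(3)

# Leaky integrate-and-fire neuron, diffusion approximation of many inputs.
# All parameters below are illustrative assumptions.
tau = 0.010          # membrane time constant: 10 ms
dt = 0.0001          # 0.1 ms time step
mu = 0.8             # mean input, below threshold: fluctuation-driven regime
sigma = 0.05         # stationary std of membrane potential fluctuations
theta = 1.0          # spike threshold (membrane potential resets to 0)
duration = 10.0      # simulated time, in seconds

w = 0.03             # single EPSP size (3% of threshold)
volley_size = 10     # extra synchronized input spikes, out of thousands
volley_period = 0.1  # one synchronous volley every 100 ms

def simulate(with_volleys):
    """Return the output firing rate in Hz."""
    v, spikes = mu, 0
    steps = int(duration / dt)
    volley_step = int(volley_period / dt)
    noise = sigma * math.sqrt(2 * dt / tau)
    for i in range(steps):
        v += (mu - v) * dt / tau + noise * random.gauss(0.0, 1.0)
        if with_volleys and i % volley_step == 0:
            v += volley_size * w   # ~10 input spikes arriving together
        if v >= theta:
            spikes += 1
            v = 0.0
    return spikes / duration

r_without = simulate(False)
r_with = simulate(True)
print(f"rate without volleys: {r_without:.1f} Hz")
print(f"rate with volleys:    {r_with:.1f} Hz")
# Moving a handful of input spikes into synchrony turns a nearly silent
# neuron into one firing at roughly the volley rate.
```

Under these assumed parameters the background fluctuations almost never reach threshold, while each 10-spike volley almost always does, which is the "from silent to strongly firing" effect described above. A full treatment would of course use explicit synaptic inputs rather than the diffusion approximation.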