It is sometimes stated that rate-based computing is like analog computing while spike-based computing is like digital computing. The analogy comes, of course, from the fact that spikes are discrete whereas rates are continuous. But like any analogy, it has its limits. First of all, spikes are not discrete in the way digital numbers are discrete. In digital computing, the input is a stream of binary digits, coming one after another in a cadenced sequence. The digits are gathered in blocks, say of 16 or 32, to form words that stand for instructions or numbers. Let us examine these two facts with respect to spikes. Spikes do not arrive in a cadenced sequence. Spikes arrive at irregular times, and time is continuous, not digital. What is presumably meant by digital is that there can be a spike or there can be no spike, but there is nothing in between. However, given that a continuous time is also associated with the occurrence of a spike, a spike is better described as a timed event than as a binary digit. Of course, one could decide to divide the time axis into small time bins, and assign a digit 0 when there is no spike and 1 when there is a spike. This is certainly true, but as the bins are made finer and finer to approximate the real spike train, one finds very few 1s drowned in a sea of 0s. This is what is meant by “event”: information is carried by the occurrence of 1s at specific times rather than by specific combinations of 0s and 1s, as in digital computing. So in this sense, spike-based computing is not very similar to digital computing.
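To make this concrete, here is a minimal sketch (in Python, with made-up spike times) of the binning exercise described above: even with fine 1 ms bins, the resulting binary word is almost entirely 0s, and what carries the information is where the rare 1s fall.

```python
import numpy as np

# Hypothetical spike train: a handful of spike times (in seconds) over one second.
spike_times = np.array([0.0132, 0.2415, 0.2589, 0.7031])

dt = 1e-3                                # 1 ms bins; make them finer to approximate the train better
n_bins = int(round(1.0 / dt))            # 1000 bins over one second
edges = np.linspace(0.0, 1.0, n_bins + 1)

counts, _ = np.histogram(spike_times, bins=edges)
binary_word = (counts > 0).astype(int)   # 1 where a spike fell, 0 elsewhere

print(binary_word.sum(), "ones out of", binary_word.size, "bins")
# -> 4 ones out of 1000 bins: the 1s are rare events whose timing carries the information,
#    unlike a digital word where every bit position matters.
```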
The second aspect of the analogy is that digits are gathered into words (of, say, 32 digits), and these words are assigned a meaning as either an instruction or a number. Transposed to spikes, these “words” could be the temporal pattern of spikes of a single neuron, or perhaps more meaningfully a pattern of spikes across neurons, as in synchrony-based schemes, or across neurons and time, as in polychronization. Now there are two ways of understanding the analogy. Either a spike pattern stands for a number, in which case the analogy is not very interesting, since it pretty much says that spikes implement an underlying continuous value; in other words, this is the rate-based view of neural computation. Or a spike pattern stands for a symbol. This case is more interesting, and it may apply to some proposed spike-based schemes (like polychronization). It emphasizes the idea that, unlike rate-based theories, spike-based theories are not (necessarily) related to the usual mathematical notion of calculus (e.g., adding numbers), but possibly to more symbolic manipulations.
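To illustrate the “symbol” reading, here is a toy sketch in Python. The patterns are invented and it is not meant to reproduce any specific published model (polychronization included); the point is only that a “word” can be a set of (neuron, relative spike time) pairs, and reading it out means matching the pattern, not adding up numbers.

```python
# Two hypothetical "symbols", each defined by which neurons fire and with which relative delays (ms).
SYMBOLS = {
    "A": {(0, 0.0), (3, 2.0), (5, 7.0)},
    "B": {(1, 0.0), (3, 4.0), (4, 4.0)},
}

def decode(observed, tolerance=0.5):
    """Return the symbols whose (neuron, delay) pattern matches the observed spikes."""
    # Align the observed pattern to its first spike so that only relative timing matters.
    t0 = min(t for _, t in observed)
    aligned = {(n, t - t0) for n, t in observed}
    matches = []
    for name, pattern in SYMBOLS.items():
        if all(any(n == m and abs(t - s) <= tolerance for m, s in aligned) for n, t in pattern):
            matches.append(name)
    return matches

# Observed spikes as (neuron, absolute time in ms): this realizes pattern "A" starting at t = 10 ms.
observed = {(0, 10.0), (3, 12.1), (5, 16.9)}
print(decode(observed))   # -> ['A']
```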
However, this does not apply to all spike-based theories. For example, in Sophie Denève’s spike-based theory of inference (which I will describe in a future post), spike-based computation actually implements some form of calculus. But in her theory, analog signals are reconstructed from spikes, in the same way as the membrane potential results from the action of incoming spikes, rather than the other way around as in rate-based theories (i.e., a rate description is postulated, then spikes are randomly produced to implement that description). So in this case the theory describes some form of calculus, but based on timed events.
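To make the contrast between the two directions concrete, here is a minimal sketch in Python. The sinusoidal rate, the Poisson-like sampling and the exponential kernel are my own illustrative choices, not Denève’s actual model: one direction postulates a rate and draws spikes from it, the other reconstructs an analog signal from the spikes, the way a membrane potential is built up by incoming spikes.

```python
import numpy as np

dt = 1e-3                       # time step (s)
t = np.arange(0.0, 1.0, dt)

# Rate-based direction: an analog rate is postulated, then spikes are drawn randomly from it.
rate = 20.0 * (1.0 + np.sin(2 * np.pi * 2 * t))   # time-varying rate (Hz), arbitrary choice
spikes = np.random.rand(t.size) < rate * dt       # one Bernoulli draw per bin (Poisson-like)

# Opposite direction: an analog signal is reconstructed from the spikes,
# each spike adding a decaying exponential (a leaky filter, like a membrane potential).
tau = 0.05                                        # illustrative time constant (s)
reconstruction = np.zeros_like(t)
reconstruction[0] = spikes[0] / tau
for i in range(1, t.size):
    reconstruction[i] = reconstruction[i - 1] * np.exp(-dt / tau) + spikes[i] / tau

# In the first direction the spikes merely implement a postulated rate;
# in the second, the analog quantity is whatever the timed events reconstruct.
```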
This brings me to the fact that neurons do not always interact through spikes. For example, in the retina, many neurons do not spike. There are also gap junctions, through which the membrane potentials of neighboring neurons interact directly. There are also ephaptic interactions (through the extracellular field potential). There is also evidence that the shape of action potentials can influence downstream synapses (see a recent review by Dominique Debanne). In these cases, we may speak of analog computation. But this does not bring us closer to rate-based theories. In fact, quite the opposite: rates are abstracted from spikes, and stereotypical spikes are themselves an approximation of what really goes on, which may involve other physical quantities. The point here is that the firing rate is not a physical quantity like the membrane potential, for example; it is an abstract variable. In this sense, spike-based theories, because they are based on actual biophysical quantities in neurons, might be closer to what we could call “analog descriptions” of computation than rate-based theories are.