In two companion papers in Neural Computation (followed by a related paper on working memory), Sophie Denève developed a spike-based theory of Bayesian inference. It can be categorized as a representational spike-based theory, in the sense that spikes collectively represent some objective variable of the world, for which there is some uncertainty. It follows a typical Marr-ian approach, in which the function of the neural network (level 1) is first postulated, in terms of external properties of the world, and the properties of the network (dynamics, connectivity) are then derived. But unlike Marr’s approach, the algorithmic and physical levels are not considered independent: the algorithm is defined directly at the level of spikes. In the first paper, it is assumed that a neuron codes for a hidden binary variable, corresponding to an external property of the world. The neuron must infer that variable from a set of observations, which are independent Poisson inputs whose rates depend on the value of the hidden variable. The neuron codes for other neurons (as opposed to the external observer), that is, it is postulated that a target neuron estimates the log odds of the hidden variable from the spike train produced by the neuron, as a sum of PSPs. Thus, the decoding process is fixed, and the dynamics of the neuron can then be deduced from an optimization principle, namely that the decoded quantity should be as close as possible to the true quantity.
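To make the scenario concrete, here is a minimal simulation sketch of the inference stage: a hidden binary variable that switches at random, Poisson inputs whose rates depend on its value, and the log-odds update driven by those inputs. All parameter values, and the discrete-time Euler scheme, are my own illustrative choices rather than values from the papers.

```python
# Sketch of log-odds inference from Poisson observations (illustrative
# parameters; not the values used in the original papers).
import numpy as np

rng = np.random.default_rng(0)

dt, T = 0.001, 5.0        # time step and duration (s)
n = int(T / dt)
r_on, r_off = 1.0, 1.0    # switching rates of the hidden variable (Hz)
q_on, q_off = 50.0, 10.0  # input Poisson rates for hidden state 1 / 0 (Hz)
w = np.log(q_on / q_off)  # log-likelihood ratio carried by one input spike

# Hidden binary variable: a two-state Markov chain
x = np.zeros(n, dtype=int)
for t in range(1, n):
    p_switch = (r_on if x[t - 1] == 0 else r_off) * dt
    x[t] = 1 - x[t - 1] if rng.random() < p_switch else x[t - 1]

# Poisson observations whose rate depends on the hidden state
spikes = rng.random(n) < np.where(x == 1, q_on, q_off) * dt

# Log-odds update: prior drift + evidence carried by input spikes
L = np.zeros(n)
for t in range(1, n):
    drift = r_on * (1 + np.exp(-L[t - 1])) - r_off * (1 + np.exp(L[t - 1]))
    L[t] = L[t - 1] + dt * (drift - (q_on - q_off)) + w * spikes[t]
    L[t] = np.clip(L[t], -8, 8)  # numerical safeguard for the Euler step

p = 1 / (1 + np.exp(-L))         # posterior probability that x = 1
print(np.mean(np.abs(p - x)))    # tracking error (chance level would be ~0.5)
```

Note that the posterior saturates at an equilibrium set by the input statistics (roughly ±3.5 in log-odds with these rates), so the neuron's belief tracks the hidden state with a short lag after each switch.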

One can write a differential equation that describes how the log-odds evolve with the inputs. At any time, the neuron can also compute the estimate that a target neuron would decode from its output spike train. A spike is then produced if it brings this estimate closer to the true value of the log-odds, calculated from the inputs.
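This spiking rule can be sketched as follows: the neuron runs the log-odds computation on its inputs, while a hypothetical reader reconstructs an estimate from the output train (prior drift plus a fixed jump g0 per output spike), and the neuron fires exactly when firing brings that estimate closer to the truth. This is a deliberately simplified, one-sided version (spikes can only push the estimate up; downward corrections are left to the prior drift), with illustrative parameters.

```python
# Sketch of the "spike as a timed decision" rule. The readout jump g0
# and all rates are illustrative assumptions, not values from the papers.
import numpy as np

rng = np.random.default_rng(1)
dt, T = 0.001, 5.0
n = int(T / dt)
r_on, r_off = 1.0, 1.0      # switching rates of the hidden variable (Hz)
q_on, q_off = 50.0, 10.0    # input Poisson rates for hidden state 1 / 0 (Hz)
w = np.log(q_on / q_off)    # log-likelihood ratio of one input spike
g0 = 1.0                    # log-odds jump decoded from one output spike

# Hidden binary variable and Poisson observations (as in the sketch above)
x = np.zeros(n, dtype=int)
for t in range(1, n):
    p_switch = (r_on if x[t - 1] == 0 else r_off) * dt
    x[t] = 1 - x[t - 1] if rng.random() < p_switch else x[t - 1]
obs = rng.random(n) < np.where(x == 1, q_on, q_off) * dt

L = np.zeros(n)                # true log-odds, computed from the inputs
G = np.zeros(n)                # estimate decoded from the output train
out = np.zeros(n, dtype=bool)  # output spikes
for t in range(1, n):
    driftL = r_on * (1 + np.exp(-L[t - 1])) - r_off * (1 + np.exp(L[t - 1]))
    L[t] = np.clip(L[t - 1] + dt * (driftL - (q_on - q_off)) + w * obs[t], -8, 8)
    # The reader only knows the prior dynamics and the output spikes
    driftG = r_on * (1 + np.exp(-G[t - 1])) - r_off * (1 + np.exp(G[t - 1]))
    G[t] = np.clip(G[t - 1] + dt * driftG, -8, 8)
    if L[t] - G[t] > g0 / 2:   # fire only if firing improves the estimate
        out[t] = True
        G[t] += g0

print(out.sum(), np.mean(np.maximum(L - G, 0)))
```

With this rule, each output spike corrects the downstream estimate by a fixed amount, so the output firing rate adapts to how fast the true log-odds outrun the reader's prior expectations.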

In this proposition, the spiking process is deterministic and spiking is seen as a *timed decision*. This is completely different from rate-based theory, in which spikes are random and instantiate a time-varying quantity. Although this is a rather abstract sensory scenario, the idea that spikes could be corrective signals is powerful. It connects to the point I made in the previous post: the key point in spike-based theories is not temporal precision or reproducibility, as is sometimes wrongly claimed, but the fact that the spikes of different neurons are coordinated. When this theory is extended to a population of neurons, an immediate consequence is that spiking decisions depend on the decisions made by other neurons. It follows that although spiking is deterministic at the cellular level and precise at the functional level (estimation of the hidden variable), it may not be reproducible between trials. In fact, even the relative timing of neurons may not be reproducible, for exactly the same reason as in sparse coding theory.
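The population point can be illustrated with a toy model: several identical neurons share a single readout, and any of them may issue the next corrective spike. The randomized tie-breaking and the fixed sinusoidal log-odds trace below are purely illustrative assumptions (in the actual theory the choice is settled by network dynamics), but they show the degeneracy: the decoded estimate and the total spike count are reproducible, while the individual spike trains are not.

```python
# Toy illustration of population degeneracy: the shared readout is
# reproducible, the per-neuron spike trains are not. Parameters and the
# random tie-breaking are illustrative assumptions.
import numpy as np

def run_trial(L, seed, n_neurons=2, g0=1.0, dt=0.001, r_on=1.0, r_off=1.0):
    """Greedy shared-readout spiking for a precomputed log-odds trace L."""
    rng = np.random.default_rng(seed)
    n = len(L)
    G = np.zeros(n)
    out = np.zeros((n_neurons, n), dtype=bool)
    for t in range(1, n):
        drift = r_on * (1 + np.exp(-G[t - 1])) - r_off * (1 + np.exp(G[t - 1]))
        G[t] = G[t - 1] + dt * drift
        if L[t] - G[t] > g0 / 2:
            k = rng.integers(n_neurons)  # any neuron can issue the correction
            out[k, t] = True
            G[t] += g0
    return G, out

# A fixed, arbitrary log-odds trace standing in for the inference stage
t = np.arange(0, 5, 0.001)
L = 3.5 * np.sin(2 * np.pi * 0.5 * t)

G1, out1 = run_trial(L, seed=0)
G2, out2 = run_trial(L, seed=1)
# Decoded estimate and total spike count across trials, then whether the
# first neuron's spike train differs between the two trials
print(np.allclose(G1, G2), out1.sum() == out2.sum())
print((out1 != out2).any())
```

The readout update does not depend on which neuron fired, so `G` is identical across trials; only the assignment of spikes to neurons changes, which is the sense in which single-neuron spike trains fail to be reproducible even though the code is deterministic and functionally precise.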