Rate vs. timing (IX) The fluctuation-driven regime and the Softky-Shadlen debate

In the 1990s, there was a famous published exchange in the rate vs. timing debate, between Softky and Koch on one side and Shadlen and Newsome on the other. Softky and Koch argued that if spike trains were random, as they seemed to be in single-unit recordings, and cortical neurons sum many inputs, then by the law of large numbers their output should be regular, since the total input would be approximately constant. Therefore, so they argued, the two hypotheses (independence of inputs and integration) are inconsistent. They proposed to resolve the inconsistency by postulating that neurons do not sum their inputs but rather detect coincidences at a millisecond timescale, using dendritic nonlinearities. Shadlen and Newsome demonstrated that the two hypotheses are in fact not contradictory, if one postulates that the total mean input is subthreshold, so that spikes only occur when the total input fluctuates above its average. This is called the “fluctuation-driven regime”, and it is a fairly well-accepted hypothesis nowadays. With many inputs, this can happen when excitation is balanced by inhibition, hence the other standard name, the “balanced regime” (note that balanced implies fluctuation-driven, but not the other way round). An electrophysiological signature of this regime is a distribution of membrane potential that peaks well below threshold (instead of increasing monotonically towards threshold).
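
To make this concrete, here is a minimal simulation sketch (all parameter values are illustrative assumptions, not taken from the papers discussed): a leaky integrate-and-fire neuron receives many excitatory and inhibitory Poisson inputs whose mean contributions cancel, so that it spikes only on fluctuations, and its membrane potential distribution peaks well below threshold.

```python
# A minimal sketch of the fluctuation-driven regime (illustrative parameters):
# a leaky integrate-and-fire neuron driven by balanced excitatory and
# inhibitory Poisson inputs.
import numpy as np

rng = np.random.default_rng(0)

dt, T = 1e-4, 20.0                 # time step and duration (s)
tau = 10e-3                        # membrane time constant (s)
v_rest, v_thresh = -70.0, -60.0    # resting and threshold potentials (mV)
Ne, Ni, rate = 800, 200, 5.0       # input numbers and input rate (Hz)
we, wi = 0.5, -2.0                 # synaptic jumps (mV); Ne*rate*we + Ni*rate*wi = 0

n_steps = int(T / dt)
v, n_out = v_rest, 0
v_trace = np.empty(n_steps)
for t in range(n_steps):
    # Poisson counts of excitatory and inhibitory input spikes in this step
    drive = rng.poisson(Ne * rate * dt) * we + rng.poisson(Ni * rate * dt) * wi
    v += (v_rest - v) * dt / tau + drive
    if v > v_thresh:               # threshold crossing: spike and reset
        n_out += 1
        v = v_rest
    v_trace[t] = v

hist, edges = np.histogram(v_trace, bins=40)
print(f"output rate: {n_out / T:.1f} Hz")
print(f"Vm histogram peaks at {edges[hist.argmax()]:.1f} mV; threshold at {v_thresh:.1f} mV")
```

Because the mean input is balanced, the membrane potential hovers around rest (here 10 mV below threshold), and output spikes are irregular and rare relative to the input rate.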

In the fluctuation-driven regime, output spikes occur irregularly, because the neuron only spikes when there is a fluctuation of the summed input. Thus the two hypotheses are not contradictory: it is entirely possible that a neuron receives independent Poisson inputs, integrates them, and fires in a quasi-Poisson way. This argument indeed makes the submillisecond coincidence detection hypothesis unnecessary. However, Softky then correctly argued that even so, output spikes are still determined by input spikes, so they cannot be seen as random. To be more precise: input spike trains are independent Poisson processes, the output spike train is (approximately) a Poisson process, but inputs and outputs are not independent. In their reply, Shadlen and Newsome missed this argument. They showed that if they replayed the same pattern of spikes that led to an output spike, but with a different history of inputs, then the neuron might not spike. This happened in their model for two reasons: 1) they used a variation of the perfect integrator, a very particular kind of model that is known to be unreliable, unlike almost every other spiking neuron model and unlike actual neurons (Brette & Guigon 2003); 2) they considered a pattern of input spikes restricted to a window much shorter than the integration time constant of the neuron. If they had played a pattern covering one integration time window to a standard integrate-and-fire model (or any other model), they would have seen output spikes. But perhaps more importantly, even if the input pattern is restricted either temporally or to a subset of synapses, the probability that the neuron fires is much higher than chance. In other words, the output spike train is not independent of any of the input spike trains. This would appear in a cross-correlogram between any input and the output as an extra firing probability at positive lags, on the timescale of the integration time constant, with a correlation of order 1/N (since there is one output spike for N input spikes, assuming identical rates).
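
The following sketch illustrates the cross-correlogram argument (again with illustrative parameters; a deliberately small number of inputs is used so that the ~1/N effect is visible in a short simulation). One excitatory input is "tagged", and we histogram the lags between its spikes and the output spikes.

```python
# Cross-correlogram between one tagged Poisson input and the output of a
# leaky integrate-and-fire neuron in the fluctuation-driven regime.
import numpy as np

rng = np.random.default_rng(1)

dt, T = 2e-4, 200.0                # time step and duration (s)
tau = 10e-3                        # membrane time constant (s)
v_rest, v_thresh = -70.0, -52.0    # mV
Ne, Ni, rate = 40, 10, 5.0         # small N = 50 inputs, all at 5 Hz
we, wi = 4.0, -16.0                # balanced: Ne*rate*we + Ni*rate*wi = 0

n_steps = int(T / dt)
tagged = rng.random(n_steps) < rate * dt              # spikes of the tagged input
other_e = rng.poisson((Ne - 1) * rate * dt, n_steps)  # remaining excitatory inputs
inh = rng.poisson(Ni * rate * dt, n_steps)            # inhibitory inputs

v, out_steps = v_rest, []
for t in range(n_steps):
    v += (v_rest - v) * dt / tau + (tagged[t] + other_e[t]) * we + inh[t] * wi
    if v > v_thresh:
        out_steps.append(t)
        v = v_rest

in_times = np.flatnonzero(tagged) * dt
out_times = np.array(out_steps) * dt
# Histogram of lags (output minus tagged-input spike times) in 2 ms bins
lags = (out_times[None, :] - in_times[:, None]).ravel()
counts, _ = np.histogram(lags, bins=np.arange(-0.03, 0.0301, 0.002))
print("coincidences at negative lags:", counts[:15].sum(),
      " at positive lags:", counts[15:].sum())
# Positive lags (input before output) show an excess of coincidences:
# the two trains are correlated, although both are Poisson-like.
```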

Note that this is a trivial mathematical fact if the output depends deterministically on the inputs. Yet it is a critical point in the debate. Here we have an elementary example in which all inputs are independent Poisson processes with the same constant firing rate, and the output is also a (quasi-) Poisson process with constant rate; yet the fact that one input neuron spikes is informative about whether the output neuron will spike shortly after, even given knowledge of that neuron's firing rate. In other words, rates do not fully describe the (joint) activity of the network. This is a direct contradiction of the rate-based postulate.

Even though this means that the rate-based hypothesis is mathematically wrong (at least in this case), it might still be a good enough approximation. If one input spike is known, one gets a little extra information about whether the output neuron spikes, compared to the sole knowledge of the rates. Maybe this is only a slight discrepancy. But consider: if all input spikes are known, one gets full information about the output spikes, since the process is deterministic and reliable. That is a very strong discrepancy with the rate-based hypothesis. One may then ask: if I observe p input spikes occurring together, how much can I predict about output spiking? This is the question we tried to answer in Rossant et al. (2011), following an argument proposed by Abeles in the 1980s. In the fluctuation-driven regime, if one observes just one input spike, chances are that the membrane potential is far from threshold, and the neuron is very unlikely to fire. But if, say, 10 spikes are observed, each producing a 1 mV depolarization, and the threshold is about 10 mV above the mean potential, then there is a 50% chance of observing an output spike. Abeles called the ratio between the extra firing produced by 10 coincident spikes and that produced by 10 independent spikes the “coincidence advantage”, and it is a huge number. Consider again: if you only know the input rates, then there is a 5% chance of observing a spike in a 10 ms window, for an output neuron firing at 5 Hz; if you additionally know that 10 coincident spikes have just been fired, then there is a 50% chance of observing an output spike. This is a huge change, obtained by observing just 0.1% of all synapses (assuming 10,000 synapses).
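
Here is this arithmetic worked out, under a common Gaussian approximation of the membrane potential in the fluctuation-driven regime. The EPSP size, threshold distance and baseline rate are the numbers quoted above; the fluctuation amplitude sigma is an illustrative assumption.

```python
# Coincidence advantage under a Gaussian approximation of the membrane potential.
from scipy.stats import norm

sigma = 2.5              # std of membrane potential fluctuations (mV), assumed
theta = 10.0             # threshold distance above the mean potential (mV)
epsp = 1.0               # depolarization per input spike (mV)
baseline = 5.0 * 0.010   # P(spike in a 10 ms window) at 5 Hz = 5%

for p in (1, 10):
    # p coincident spikes shift the potential by p*epsp towards threshold
    p_spike = norm.sf(theta - p * epsp, scale=sigma)
    print(f"{p:2d} coincident spike(s): P(output spike) = {p_spike:.4f}")
print(f"rates only: P(output spike in 10 ms) = {baseline:.2f}")
# 10 coincident spikes bring the mean potential to threshold, so P = 0.5,
# ten times the 5% predicted from rates alone; a single spike adds
# almost nothing, which is why the coincidence advantage is huge.
```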

Thus, it is difficult to argue here that rates entirely determine the activity of the network. Simply put, the fact that the input-output function of neurons is essentially deterministic introduces strong correlations between input and output spike trains. This is a simple fact, and it is well known in the theoretical literature on neural network dynamics. For example, one line of research, initiated mainly by Nicolas Brunel, tries to determine the firing rates (average and time-varying) of networks of spiking models using a self-consistent analysis. This is notably difficult to do in general in the fluctuation-driven regime, because of the correlations introduced by the spiking process. To make the problem tractable, the standard hypothesis is to consider sparse networks with random connectivity. This ensures that there are no short cycles in the connectivity graph, and therefore that the inputs to a given neuron are approximately independent. But the theoretical predictions break down when this hypothesis is not satisfied. It is in fact a challenge in theoretical neuroscience to extend this type of analysis to networks with realistic connectivity – i.e., with short cycles and non-random connectivity.
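
As an illustration of this type of analysis, here is a minimal sketch of the self-consistent rate calculation for a sparse random network of integrate-and-fire neurons, in the spirit of Brunel (2000); the parameter values are illustrative assumptions.

```python
# Self-consistent stationary rate of a sparse random network of LIF neurons:
# the rate nu solves nu = phi(mu(nu), sigma(nu)), where phi is the LIF
# transfer function under the diffusion approximation (Siegert formula).
import numpy as np
from scipy.integrate import quad
from scipy.special import erfcx   # erfcx(-u) = exp(u^2)*(1 + erf(u)), overflow-safe

tau, t_ref = 20e-3, 2e-3          # membrane time constant, refractory period (s)
theta, v_reset = 20.0, 10.0       # threshold and reset, in mV above rest
Ce, Ci, g, J = 800, 200, 5.0, 0.1 # exc/inh connections, inh/exc weight ratio, weight (mV)
nu_ext = 15.0                     # rate of each of the Ce external synapses (Hz)

def phi(nu):
    """Stationary output rate when all presynaptic neurons fire at rate nu."""
    mu = tau * J * (Ce * nu_ext + (Ce - g * Ci) * nu)
    sigma = np.sqrt(tau * J**2 * (Ce * nu_ext + (Ce + g**2 * Ci) * nu))
    integral, _ = quad(lambda u: erfcx(-u), (v_reset - mu) / sigma, (theta - mu) / sigma)
    return 1.0 / (t_ref + tau * np.sqrt(np.pi) * integral)

# Damped fixed-point iteration on the self-consistency equation
nu = 10.0                         # initial guess (Hz)
for _ in range(100):
    nu = 0.5 * nu + 0.5 * phi(nu)
print(f"self-consistent network rate: {nu:.2f} Hz")
```

Note how the sparse random connectivity assumption enters implicitly: the inputs to each neuron are treated as independent Poisson trains at the network rate, which is precisely what fails when the connectivity graph has short cycles.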

It is interesting to note that the concept of the balanced or fluctuation-driven regime was proposed in the 1990s as a way to support rate-based theories. In fact, analysis shows that it is specifically in this regime, and not in the mean-driven regime, that 1) neurons are essentially deterministic, 2) neurons are highly sensitive to the relative timing of input spikes, and 3) there is strong coordination between input and output spikes. The rate-based postulate is not valid at all in this regime.
