Rate vs. timing (XXI) Rate coding in motor control

Motor control is sometimes presented as the prototypical example of rate coding. That is, muscle contraction is determined by the firing rate of motoneurons, so ultimately the “output” of the nervous system follows a rate code. This is a very interesting example, precisely because it is actually not an example of coding, which I previously argued is a problematic concept.

I will briefly recapitulate what “neural coding” means and why it is a problematic concept. “Coding” means representing some property of things in the world (the orientation of a bar, or an image) in another form (spikes, rates). That a neuron “codes” for something means nothing more than that its activity co-varies with that thing. For example, pupillary diameter encodes the amount of light captured by the retina (because of the pupillary contraction reflex). Or blood flow in the primary visual cortex encodes local visual orientation (this is what is actually measured by intrinsic optical imaging). So coding is really about observations made by an external observer; it does not tell us much about how the system works. It is a common source of confusion because when one speaks of neural coding, there is generally the implicit assumption that the nervous system “decodes” it somehow. But presumably the brain does not “read out” blood flow to infer local visual orientation. The coding perspective leaves the interesting part (what is the “representation” for?) largely unspecified, which is the essence of the homunculus fallacy.

The control of muscles by motoneurons does not fit this framework, because each spike produced by a motoneuron has a causal impact on muscle contraction: its activity does not simply co-vary with muscle contraction, it causes it. So first of all, motor control is not an example of rate coding because it is not really an example of coding. But still, we might consider that it conforms to rate-based theories of neural computation. I examine this statement now.

I will now summarize a few facts about muscle control by motoneurons, which can be found in neuroscience textbooks. First of all, a motoneuron controls a number of muscle fibers, and each fiber is contacted by a single motoneuron (I will only discuss α motoneurons here). There is indeed a clear correlation between muscle force and the firing rate of motoneurons. In fact, each action potential produces a “muscle twitch”, i.e., the force increases for some time. There is also some amount of temporal summation, in the same way as temporal summation of postsynaptic potentials, so there is a direct relationship between the number of spikes produced by the motoneurons and muscle force.

Up to this point, it seems fair to say that firing rate is what determines muscle force. But what do we mean by that exactly? If we look at muscle tension as a function of time, resulting from a spike train produced by a motoneuron, what we see is a time-varying function that is determined by the timing of every spike. The rate-based view would be that the precise timing of spikes does not make a significant difference to that function. But it does make a difference, although perhaps a small one: for example, the variability of muscle tension is not the same if the spike train is regular (small variability) or random, e.g. Poisson (larger variability). Now this gets interesting: during stationary muscle contraction (no movement), motoneurons generate constant muscle tension and they fire regularly, unlike cortical neurons (for example). Two remarks: 1) this does not at all conform to standard rate-based theory, in which the rate is the intensity of a Poisson process, since there is little stochasticity here; 2) firing regularly is exactly what motoneurons should be doing to minimize variability in muscle tension. This latter remark is particularly significant. It means that, beyond the average firing rate, spikes occur at precise times that minimize tension variability, and so spikes do matter. Thus motor control seems rather to support spike-based theories.
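The point about variability can be checked with a toy simulation (the twitch kernel, rate and parameters below are invented for illustration, not taken from any study): model tension as a superposition of identical twitches, one per spike, and deliver the same mean rate either as a regular or as a Poisson spike train.

```python
import numpy as np

# Toy illustration (invented kernel and parameters): muscle tension as a
# superposition of twitches, one per motoneuron spike. A regular and a
# Poisson spike train with the same mean rate give a similar mean tension
# but very different tension variability.
rng = np.random.default_rng(0)
dt, T, rate, tau = 0.001, 20.0, 20.0, 0.08   # step (s), duration (s), rate (Hz), twitch decay (s)

t = np.arange(0.0, 0.5, dt)
twitch = (t / tau) * np.exp(1.0 - t / tau)   # alpha-function twitch, peak amplitude 1

def tension(spike_times):
    """Superpose one twitch per spike on a regular time grid."""
    x = np.zeros(int(T / dt) + len(twitch))
    for s in spike_times:
        i = int(s / dt)
        x[i:i + len(twitch)] += twitch
    return x[len(twitch):int(T / dt)]        # discard the initial transient

regular = np.arange(0.0, T, 1.0 / rate)      # regular train at 20 Hz
poisson = np.cumsum(rng.exponential(1.0 / rate, int(2 * rate * T)))
poisson = poisson[poisson < T]               # Poisson train, same mean rate

reg, poi = tension(regular), tension(poisson)
print(reg.mean(), poi.mean())                # similar mean tension
print(reg.std(), poi.std())                  # regular train: much smaller variability
```

With these (arbitrary) parameters the two mean tensions are close, while the standard deviation of tension for the regular train is many times smaller than for the Poisson train, which is exactly the point of the remark above.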

Exact simulation of integrate-and-fire models with synaptic conductances

Brette, R. (2006). Exact simulation of integrate-and-fire models with synaptic conductances. Neural Comput 18(8): 2004-2027.

Abstract. Computational neuroscience relies heavily on the simulation of large networks of neuron models. There are essentially two simulation strategies: 1) using an approximation method (e.g. Runge-Kutta) with spike times binned to the time step; 2) calculating spike times exactly in an event-driven fashion. In large networks, the computation time of the best algorithm for either strategy scales linearly with the number of synapses, but each strategy has its own assets and constraints: approximation methods can be applied to any model but are inexact; exact simulation avoids numerical artefacts but is limited to simple models. Previous work has focused on improving the accuracy of approximation methods. In this paper we extend the range of models that can be simulated exactly to a more realistic model, namely an integrate-and-fire model with exponential synaptic conductances.

Download code:

  • Functions for exact simulation: IFSC.h, IFSC.c (to include in your favourite event-driven simulator)
  • Scilab implementation of functions for exact simulation and an example script for a random network: ExpCondIF(Brette).zip. The archive includes a generic event-driven simulator written in Scilab for networks without delays (this is for pedagogical purposes and is not intended to be efficient at all). N.B.: Scilab is free scientific software (similar to Matlab).
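As a minimal illustration of the two simulation strategies contrasted in the abstract (this is not the paper's algorithm, just a leaky integrate-and-fire neuron with constant input and made-up parameters), one can compare a clock-driven Euler simulation, in which the spike time is binned to the time step, with the exact spike time from the closed-form solution:

```python
import math

# Toy comparison (not the paper's code): a leaky integrate-and-fire neuron
# with constant suprathreshold input, simulated (1) with forward Euler and
# spike times binned to the step dt, and (2) exactly, from the closed-form
# solution of the membrane equation tau dV/dt = -V + drive.
tau, drive, theta = 0.020, 1.5, 1.0        # membrane time constant (s), input, threshold

def euler_spike_time(dt):
    """Clock-driven simulation: first threshold crossing, binned to dt."""
    v, t = 0.0, 0.0
    while v < theta:
        v += dt * (drive - v) / tau        # forward Euler step
        t += dt
    return t

# Exact (event-driven) spike time: solve drive * (1 - exp(-t/tau)) = theta
exact = tau * math.log(drive / (drive - theta))

for dt in (1e-3, 1e-4):
    print(dt, abs(euler_spike_time(dt) - exact))   # error bounded by the time step
```

The clock-driven spike time is only known up to the time step, while the event-driven value is exact up to machine precision; this is the trade-off the abstract describes.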

Exact simulation of integrate-and-fire models with exponential currents

Brette, R. (2007). Exact simulation of integrate-and-fire models with exponential currents. Neural Comput 19(10): 2604-2609.

Abstract. Neural networks can be simulated exactly using event-driven strategies, in which the algorithm advances directly from one spike to the next. This applies to neuron models for which we have 1) an explicit expression for the evolution of the state variables between spikes and 2) an explicit test on the state variables which predicts whether and when a spike will be emitted. In previous work, we proposed a method which allows exact simulation of an integrate-and-fire model with exponential conductances, with the constraint of a single synaptic time constant. In this note we propose a method, based on polynomial root finding, which applies to integrate-and-fire models with exponential currents, with possibly many different synaptic time constants. Models can include biexponential synaptic currents and spike-triggered adaptation currents.

Download code:

  • Scilab implementation of functions for exact simulation and an example script for a random network: ScilabExpIF.zip. The archive includes an event-driven simulator written in Scilab for networks with random external events and without delays (this is for pedagogical purposes and is not intended to be efficient at all). N.B.: Scilab is free scientific software (similar to Matlab).
  • C++ implementation of the same example: EventDrivenExpIF.zip (with the algorithm from the paper) and ClockDrivenExpIF.zip (with a standard clock-driven algorithm).
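To give a flavour of the root-finding idea, here is a sketch (mine, not the paper's code) of the special case of a single exponential synaptic current with τm = 2τs, so that the threshold condition becomes a quadratic in z = exp(-t/τm); the parameters and names are illustrative.

```python
import numpy as np

# Sketch of exact spike timing by polynomial root finding, for a leaky
# integrate-and-fire neuron with a single exponential synaptic current
# and tau_m = 2 * tau_s (a special case; parameters are illustrative).
TAU_M, TAU_S, VT = 20.0, 10.0, 1.0     # membrane and synaptic time constants (ms), threshold

def next_spike_time(v0, i0):
    """Exact time (ms) at which V first reaches VT, or None (assumes v0 < VT).

    Between spikes, V(t) = (v0 - c) * z + c * z**2 with z = exp(-t / TAU_M),
    where c is the amplitude of the particular solution driven by the
    synaptic current i0 * exp(-t / TAU_S).
    """
    c = i0 * TAU_S / (TAU_S - TAU_M)
    roots = np.roots([c, v0 - c, -VT])               # quadratic in z
    zs = [r.real for r in roots
          if abs(r.imag) < 1e-12 and 0.0 < r.real <= 1.0]
    # z decreases with t, so the largest admissible z is the earliest crossing
    return -TAU_M * np.log(max(zs)) if zs else None
```

For example, with v0 = 0 and i0 = 5 the membrane reaches threshold after about 6.5 ms, whereas with i0 = 3 the synaptic current decays before threshold is reached and the function returns None. The paper generalizes this idea to many synaptic time constants.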

Neural development of binaural tuning through Hebbian learning predicts frequency-dependent best delays

Fontaine, B. and Brette, R. (2011). Neural development of binaural tuning through Hebbian learning predicts frequency-dependent best delays. J Neurosci 31(32):11692–11696

Abstract. Birds use microsecond differences in the arrival times of the sounds at the two ears to infer the location of a sound source in the horizontal plane. These interaural time differences (ITDs) are encoded by binaural neurons which fire more when the ITD matches their "best delay". In the textbook model of sound localization, the best delays of binaural neurons reflect the differences in axonal delays of their monaural inputs, but recent observations have cast doubts on this classical view because best delays were found to depend on preferred frequency. Here we show that these observations are in fact consistent with the notion that best delays are created by differences in axonal delays, provided ITD tuning is created during development through spike-timing-dependent plasticity: basilar membrane filtering results in correlations between inputs to binaural neurons, which impact the selection of synapses during development, leading to the observed distribution of best delays.

Movie 1. Evolution of the synaptic weights of 3 neurons with CF = 2, 4 and 6 kHz, when presented with a binaurally delayed white noise with ITD = 167 µs.

Movie 2. Evolution of the best delays of 160 neurons with CF distributed between 2 kHz and 8 kHz, when presented with uncorrelated binaural noise.

Movie 3. Evolution of the best delays of 160 neurons with CF distributed between 2 kHz and 8 kHz, when presented with natural stereo recordings.
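The “textbook model” mentioned in the abstract can be illustrated with a toy coincidence detector (entirely invented spike trains and parameters; this is not the paper's model, which involves basilar membrane filtering and spike-timing-dependent plasticity): a binaural neuron whose internal axonal delay compensates the ITD receives the most coincident input.

```python
import numpy as np

# Toy Jeffress-style sketch (invented parameters, not the paper's model):
# the right ear receives the left-ear spike train delayed by the ITD, plus
# jitter; the internal delay maximizing coincidences is the "best delay".
rng = np.random.default_rng(1)
itd = 300e-6                                    # interaural time difference (s)
left = np.sort(rng.uniform(0.0, 1.0, 2000))     # left-ear spike times (s)
right = np.sort(left + itd + rng.normal(0.0, 20e-6, left.size))

def coincidences(delay, window=50e-6):
    """Count left spikes that, after the internal delay, fall within
    `window` of some right-ear spike."""
    shifted = left + delay
    idx = np.clip(np.searchsorted(right, shifted), 1, len(right) - 1)
    nearest = np.minimum(np.abs(right[idx] - shifted),
                         np.abs(right[idx - 1] - shifted))
    return int(np.sum(nearest < window))

delays = np.arange(0.0, 600e-6, 25e-6)
best = delays[np.argmax([coincidences(d) for d in delays])]
print(best)          # close to 300 µs: the best delay matches the ITD
```

In this pure delay-line picture the best delay simply equals the imposed ITD; the point of the paper is that, with cochlear filtering and Hebbian development, the best delays selected by learning become frequency dependent.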

Spatial perception of pain (IV) Empirical evidence

In this post I confront the propositions I previously described about where it hurts with experimental evidence. There is a recent review about spatial perception of pain, which contains a lot of relevant information (Haggard et al., 2013).

First of all, proposition A (spatial information is independently provided by tactile receptors) is quite clearly ruled out by empirical evidence. There are three types of nerve fibers innervating the skin. Aβ fibers mediate tactile sensations, while Aδ and C fibers mediate pain and temperature sensations. There is a lot of evidence that these are clearly separated, and the type of sensation does not depend on stimulation frequency (i.e., stimulating Aβ fibers never elicits pain). In addition, spatial localization does not seem to rely on Aβ fibers. For example, it is possible to block conduction in Aβ and Aδ fibers, leaving only C fibers. In this case, noxious stimuli can be localized on the skin with about the same resolution as when nerve conduction is normal (Koltzenburg et al., 1993). This implies that the pattern of activations of nociceptors conveys all the necessary information for spatial localization.

Now these patterns could be given a spatial meaning in different ways. One is learned association with tactile or visual stimuli (proposition C). In primary somatosensory cortex (S1), there are nociceptive somatotopic maps of single digits that are highly aligned with maps of responses to Aβ stimuli; there are also neurons that are sensitive to both mechanical stimulation and temperature. But this is only suggestive of a common spatial frame for both modalities. More specifically, if spatial information in pain-specific fibers were acquired from the tactile modality, then the spatial resolution of pain could never be better than that of touch: we would expect it to be similar and perhaps slightly worse. A systematic mapping of the whole body surface shows that this is the case in most locations, but not all (Mancini et al., 2014). Specifically, spatial resolution for pain is more accurate than for touch at the shoulder. In addition, the gradients of spatial resolution from shoulder to hand are opposite for touch and pain, in accordance with the gradients of innervation density of the corresponding fibers. Finally, there was one case of a subject completely lacking Aβ fibers who had normal spatial resolution. These observations rule out the proposition that spatial information in pain is acquired from the tactile modality (at least that it is entirely acquired from it).

It could be that spatial information in pain is acquired from vision. Although I have not seen such an experimental study, I would be very surprised to learn that blind people cannot localize pain. We could then postulate that spatial localization of pain is acquired from either touch or vision, whichever is present. The best way to test this would be to map the spatial resolution of a blind subject lacking Aβ fibers. Even without this test, the possibility is quite implausible. Indeed, the subject lacking Aβ fibers had similar spatial resolution to the other subjects. But the map includes the lower back, which cannot be seen directly (however, the data is not shown for that specific location; I am assuming it follows the same pattern since the authors don't comment on it). Therefore, in that region there is neither vision nor touch for that subject. All these observations tend to reject proposition C.

There is an interesting observation about the relation between innervation density and spatial resolution. As I mentioned above, along the arm the gradients of spatial resolution differ between pain and touch, and they agree with the gradients of innervation density of the corresponding fibers. But in the fingertips, the relation does not hold: spatial resolution for pain is high but innervation density of pain-related fibers is low (Mancini et al., 2013). How is this possible? One possibility is cognitive factors: we use our hands a lot, so perhaps attention is focused on the fingers, or extensive experience grasping objects has developed our spatial discrimination abilities there. Another possibility (which I haven't seen mentioned in these studies) is that the patterns of activation may be intrinsically more discriminable, because of the shape, tissue composition or the presence of fingerprints.

We are then left with proposition B (and variation B2): you feel pain at a particular location because specific movements that you make produce that pain. I noted earlier that this proposition raises a problem: it implies that you cannot localize pain that you have not produced yourself in the past, which seems implausible when we think, for example, of toothache. I argued then that to solve this problem, one would need to postulate that nociceptors can be activated at a “subthreshold” level that does not produce pain. In this case, feeling pain at a particular location requires having previously produced specific movements that generate a similar (but possibly less intense) pattern of activation of the pain receptors. The subthreshold activity of these fibers should reach the central nervous system and induce plastic changes supporting future localization of noxious stimuli, without producing any conscious sensation. Finally, I note that there is a potential problem in the fact that intensity and spatial information are carried through the same channel (pain-related fibers), and therefore the intensity of pain changes the pattern of activation from which spatial information is extracted. If spatial localization is learned at subthreshold levels, then there is a potential issue in generalizing to pain-inducing levels, with possibilities for biases in pain localization.

Haggard P, Iannetti GD, Longo MR (2013) Spatial sensory organization and body representation in pain perception. Curr Biol CB 23:R164–176.
Koltzenburg M, Handwerker HO, Torebjörk HE (1993) The ability of humans to localise noxious stimuli. Neurosci Lett 150:219–222.
Mancini F, Bauleo A, Cole J, Lui F, Porro CA, Haggard P, Iannetti GD (2014) Whole-body mapping of spatial acuity for pain and touch. Ann Neurol 75:917–924.
Mancini F, Sambo CF, Ramirez JD, Bennett DLH, Haggard P, Iannetti GD (2013) A Fovea for Pain at the Fingertips. Curr Biol 23:496–500.