Is cancer due to bad luck?

A few weeks ago, a Science paper about cancer was covered by all the newspapers as a study showing that most cancers are due to random factors rather than to genes or environmental factors. I was highly skeptical. Many cancer scientists were apparently critical too. But I'm not a cancer scientist, and the reason for my skepticism was primarily epistemological: how do you prove that something is due to chance rather than to some unknown factor? By definition, you cannot quantify what you don't know. So when I read that two cancers out of three are due to pure chance, I frowned. But again, I'm not a cancer scientist; maybe I was just missing some key element.

There was so much press coverage of this story that I finally decided to read the original paper. You do not need to be a cancer scientist to understand the study and its methodology; it is actually quite simple (conceptually). It is known that different types of tissues (e.g. brain, stomach) have widely different risks of cancer. Why? This could be, for example, because environmental factors target some tissues more than others. The authors' hypothesis was simply that cells (specifically stem cells) in different tissues have different division rates. Indeed, cancer-causing mutations are introduced during cell division, so you would expect more cancers in tissues where cells divide more. And that's exactly what they found: a correlation of 0.8 between cell division rate and cancer risk across different types of tissues.

I suppose this is quite an interesting result for cancer research. But how did journalists interpret it as showing that cancers are mostly due to bad luck? Well, this time it seems that the authors are to blame. I quote the closing sentence of the paper: “For [a subset of] tumors, primary prevention measures are not likely to be effective”. It is pretty clear what this means, and it goes way beyond the results of the study.

Let me restate: the study shows that differences in average risk between types of tissues are highly correlated with stem cell division rate. In other words: statistically, the variation in average risk between different tissues can be explained essentially by different division rates. This has nothing to do with the variation in risk between different people! Let me give a simple example. Imagine that the risk of developing a cancer is the product of the cell division rate, which is tissue-specific, and of an environmental factor, which (to simplify) impacts all tissues in the same way. To make the point even simpler, imagine this factor is either 0 or some fixed quantity, depending on whether the person smokes or not. In this case, the average risk in a tissue is completely explained by the tissue's cell division rate (the correlation is 1), since the environmental factor affects all tissues equally. Does this mean that cancers are only due to chance? Of course not, since the risk for a person is either 0 for a non-smoker or a fixed value for a smoker. Now, in the light of this example, I find the article's conclusion that “primary prevention measures are not likely to be effective” very worrying, since in this example primary prevention would entirely eradicate the disease.
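
To make the thought experiment concrete, here is a minimal simulation sketch of the toy model above (in Python, with entirely made-up numbers: 20 tissues, a lognormal spread of division rates, 30% smokers). The between-tissue correlation comes out as exactly 1, yet removing the environmental factor eliminates every case.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model from the example above (all numbers are made up):
# individual risk = tissue-specific division rate * environmental factor,
# where the factor is 0 for non-smokers and a fixed value for smokers.
n_tissues = 20
n_people = 100_000
division_rate = rng.lognormal(mean=0.0, sigma=1.0, size=n_tissues)
smoker = rng.random(n_people) < 0.3              # 30% of people smoke (arbitrary)
env_factor = np.where(smoker, 1e-3, 0.0)         # 0 for non-smokers, fixed value otherwise

risk = division_rate[:, None] * env_factor[None, :]    # shape (tissues, people)
avg_risk_per_tissue = risk.mean(axis=1)

# Between-tissue correlation is perfect, because the environmental factor
# affects all tissues in the same way...
r = np.corrcoef(division_rate, avg_risk_per_tissue)[0, 1]
print(f"correlation across tissues: {r:.3f}")          # 1.000

# ...yet primary prevention (nobody smokes) removes all risk.
no_smoking_risk = division_rate[:, None] * np.zeros(n_people)[None, :]
print("maximum individual risk if nobody smokes:", no_smoking_risk.max())
```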

A quick summary: the study shows that the risk of cancer, averaged across all individuals, varies between types of tissues mainly because those tissues have different average cell division rates (I'm being optimistic here, since the study showed correlation and not causality). That's all. It says exactly zero about whether prevention is helpful or not, or about the influence of genes. I have to say the interpretation made by the authors is rather appalling.

Notes on consciousness. (III) Modern panpsychism: about the integrated information theory of consciousness

In the last decade, a number of neuroscientists have become interested in the question of consciousness, for example Christof Koch, Stanislas Dehaene, Gerald Edelman, and many others. There have been a number of interesting new insights on this old subject, mostly focused on the so-called “neural correlates of consciousness”, that is, the properties of neural activity that are associated with conscious states, as opposed to, say, coma. However, to my mind there is no convincing theory that explains what consciousness is, why we are conscious at all and why we feel anything at all (phenomenal consciousness). But there have been attempts. A recent one is the integrated information theory (IIT) proposed by Tononi, which proposes that consciousness is a property of all systems that have a high level of “integrated information”. In a nutshell, such a system is a dynamical system that cannot be divided into smaller independent (or weakly dependent) systems. The term “information” should be understood in the sense of information theory: how much information (uncertainty reduction) there is in the state of a subsystem about the future of another subsystem. In short, the problem with this theory is that it is as much about consciousness as information theory is about information.
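
To give a flavor of the kind of quantity involved, here is a toy sketch of mine (not Tononi's actual phi, which involves searching over all partitions of a system): for a two-unit binary system with deterministic dynamics, we can compute how much the current state of one unit tells us about the future state of the other, and compare a coupled system with a disconnected one.

```python
import itertools
import numpy as np

def next_state(state, coupled):
    # Deterministic toy dynamics on two binary units: if coupled, each unit
    # copies the other at the next time step; if uncoupled, each keeps its own state.
    a, b = state
    return (b, a) if coupled else (a, b)

def mutual_information(pairs):
    # Mutual information (in bits) between the two coordinates of `pairs`,
    # computed from their empirical joint distribution (binary values).
    joint = np.zeros((2, 2))
    for x, y in pairs:
        joint[x, y] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nonzero = joint > 0
    return float((joint[nonzero] * np.log2(joint[nonzero] / (px @ py)[nonzero])).sum())

for coupled in (True, False):
    # How much does the current state of unit A tell us about the next state of
    # unit B, with all four joint states equally likely?
    samples = [(a, next_state((a, b), coupled)[1])
               for a, b in itertools.product((0, 1), repeat=2)]
    label = "coupled" if coupled else "uncoupled"
    print(f"{label}: I(A_now ; B_next) = {mutual_information(samples):.2f} bits")
```

The coupled system yields 1 bit (each unit's state fully determines the other's future), the uncoupled one 0 bits; high integrated information roughly means that this kind of dependence cannot be removed by any way of cutting the system in two.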

Christof Koch is a well-known advocate of IIT. He describes the theory in a popular science article entitled “Ubiquitous minds” (available on his web page). I should point out that it is not an academic paper, so perhaps my criticisms will seem unfair. Let us say, then, that what follows is a criticism of the arguments in that article, but perhaps not of Koch's thought in general (admittedly I have not read his book yet).

Koch correctly presents IIT as a modern form of panpsychism, that is, the idea that lots of things are conscious to some degree: animals, of course, but also any kind of system, living or not, that has high “integrated information” (named “phi”). On his blog, Scott Aaronson, a theoretical computer scientist, gives the example of a matrix multiplication system that has this property and therefore should be highly conscious according to IIT if it were physically implemented. Now, Tononi and Koch do not see this counter-intuitive implication as a problem with the theory; on the contrary, they embrace it as a highly interesting consequence. Koch speculates, for example, that the internet might be conscious.

Koch starts by describing the naïve version of panpsychism, which indeed can easily be dismissed. Naïve panpsychism states that everything is conscious, to different degrees: a brain, a tree, a rock. This immediately raises a big problem (referred to as the “problem of aggregates” in the article): you might claim that everything is conscious, but then you need to define what a “thing” is. Is half a rock conscious? Then which half? Is any set of 1000 particles randomly chosen in the universe conscious? Is half of my brain plus half of your stomach a conscious entity?

IIT is more restricted than naïve panpsychism, but it suffers from the same problem: how do you define a “system”? Wouldn't a subsystem of a conscious system also be conscious, according to the theory? As Koch writes, the theory offers no intrinsic solution to this problem; it must be augmented by an ad hoc postulate (“that only “local maxima” of integrated information exist”). What puzzles me is that the paper ends on the claim that IIT offers an “elegant explanation for [the existence of] subjective experience”. What I have read here is an interesting theory of interdependence in systems, and then a claim that systems made of interdependent parts are conscious. Where is the explanation in that? A word (“consciousness”) was arbitrarily attached to this particular property of systems, but no hint was provided at any point about a connection between the meaning of that word and the property of those systems. Why would this property produce consciousness? No explanation is given by the theory.

If it is not an explanation, then it must simply be a hypothesis: the hypothesis that systems with high integrated information are conscious. That is, it is a hypothesis about which systems are conscious and which are not. As noted above, this hypothesis assigns consciousness to non-living things, possibly including the internet, and definitely including some rather stupid machines that no one would consider conscious. I would consider this a problem, but proponents of IIT would simply adopt panpsychism and consider that, counter-intuitively, those things are actually conscious. But then this means admitting that no observation whatsoever can give us any hint about which systems are conscious (contrary to the first pages of Koch's article, where he argues that animals are conscious on such observational grounds); in other words, that the hypothesis is metaphysical and not testable. So the hypothesis is either unscientific or wrong.

Now, I am not saying that the theory is uninteresting. I simply think that it is a theory about consciousness and not of consciousness. What is it about, exactly? Let us go back to what integrated information is supposed to mean. Essentially, high integrated information means that the system cannot be subdivided into two independent systems – the future state of subsystem A depends on the current state of subsystem B and conversely. This corresponds to an important property of consciousness: the unity of consciousness. You experience a single stream of consciousness that integrates sound, vision, etc. Sound and vision are not experienced by two separate minds but by a single one. Yet this is what should happen if there were two unconnected brain areas dealing with sound and light. Thus a necessary condition for a unified conscious experience is that the substrate of consciousness cannot be divided into causally independent subsets. This is an important requirement, and therefore I do think that the theory has interesting things to say about consciousness, in particular about what its substrate is, but it explains nothing about why there is a conscious experience at all. It provides a necessary condition for consciousness – and that's already quite good for a theory about consciousness.

But that's it. It does not explain why an interdependent system should be conscious – and in fact, given some examples of such systems, it seems unlikely that this is the case. What is missing in the theory? I hinted at it in my introduction: the problem with integrated information theory is that it is as much about consciousness as information theory is about information. The word “information” in information theory has little to do with information in the common sense of the word, that is, something that carries meaning for the receiver. Information theory is actually better described as a theory of communication. In fact, one should remember that Shannon's seminal paper was entitled “A Mathematical Theory of Communication”, not of information. In a communication channel, A is encoded into B by a dictionary, and B carries information about A insofar as one can recover A from B. But of course this only makes sense for the person sitting at the receiving end of the communication channel if 1) she has the dictionary, and 2) A already makes sense to her. “Information” theory says nothing about how A acquires any meaning at all; it is only about the communication of information. For this reason, “integrated information” fails to address another important aspect of consciousness, which in philosophy is named “intentionality”: the idea that one is always conscious of something, i.e., consciousness has a “content” – not just a “quantity of consciousness”. Any theory that is solely based on information in Shannon's sense (dictionary) cannot say much about phenomenal consciousness (what it feels like).

For the end of this post, I will simply quote Scott Aaronson:
“But let me end on a positive note. In my opinion, the fact that Integrated Information Theory is wrong—demonstrably wrong, for reasons that go to its core—puts it in something like the top 2% of all mathematical theories of consciousness ever proposed. Almost all competing theories of consciousness, it seems to me, have been so vague, fluffy, and malleable that they can only aspire to wrongness.”

Why does a constant stimulus feel constant? (I)

What is the relationship between neural activity and perception? That is, how does the quality of experience (qualia) relate to neural activity? For any scientist working in sensory neuroscience, this should be a central question – perhaps THE question. Unfortunately, the community's obsession with the question of “neural coding”, which is about relating neural activity to externally defined properties of sensory stimuli, does not help much in this regard. In fact, very naïve philosophical assumptions about this question seem to pervade the field. A popular one is that the perceived intensity of a stimulus corresponds to the firing rate of the neurons that are sensitive to it, in particular sensory receptor neurons. This idea was indeed proposed by Lord Adrian in the 1920s, and the basic argument is that the firing rate of sensory neurons generally increases when the strength of the stimulus is increased. Clearly the argument is very weak (only an observed correlation), but I will try to refute it explicitly, because doing so raises some interesting remarks. To refute it, I will turn the perspective around: why is it that a constant stimulus feels constant? In fact, what is a constant stimulus? This is a very basic question about qualia, but it turns out to be a surprisingly deep one.

Let us start by listing a few sensory stimuli that feel constant, in terms of perceptual experience. A pure tone (e.g. the sound produced by a tuning fork or a phone) feels constant. In particular, its intensity and pitch seem to be constant. Another example could be a clear blue sky, or any object that you fixate. In the tactile modality, a constant pressure on a finger. For these stimuli, there is a perceived constancy in their qualities, i.e., the color of the sky does not seem to change, the frequency of the tone does not seem to change. Attention to the stimulus might fade, but one does not perceive that the stimulus changes. In contrast, neural activity is not constant at all. For a start, neurons fire spikes, which means that their membrane potential is constantly changing, yet we do not feel this change. Secondly, in sensory receptor neurons, but also in most sensory neurons in general, the frequency of those spikes changes in response to a constant stimulus: it tends to decrease (“adaptation”). But again, a blue sky still feels the same blue (and not darker), and the pitch and intensity of a pure tone do not seem to decrease. There appears to be no such simple connection between neural activity and the perceived intensity of a stimulus. Why is it that the intensity of a pure tone feels constant when the firing rate of every auditory nerve fiber decreases?
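
To illustrate the adaptation point, here is a minimal sketch: a generic leaky integrate-and-fire model with a spike-triggered adaptation current (made-up parameters, not a model of any particular sensory neuron). The input is perfectly constant, yet the firing rate clearly decreases over time.

```python
import numpy as np

# Leaky integrate-and-fire neuron with a spike-triggered adaptation variable,
# driven by a constant input (all parameters are illustrative).
dt = 0.1e-3                 # time step (s)
T = 2.0                     # duration (s)
tau_m, tau_a = 20e-3, 0.3   # membrane and adaptation time constants (s)
v_thresh, v_reset = 1.0, 0.0
drive = 2.0                 # constant input
a_jump = 0.1                # adaptation increment per spike

v, a = 0.0, 0.0
spike_times = []
for i in range(int(round(T / dt))):
    v += dt / tau_m * (drive - a - v)    # membrane equation
    a += dt / tau_a * (-a)               # adaptation decays between spikes...
    if v >= v_thresh:
        v = v_reset
        a += a_jump                      # ...and increases at each spike
        spike_times.append(i * dt)

spike_times = np.array(spike_times)
early = np.sum(spike_times < 0.2) / 0.2
late = np.sum(spike_times >= T - 0.2) / 0.2
print(f"firing rate, first 200 ms: {early:.0f} Hz; last 200 ms: {late:.0f} Hz")
```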

In response to this question, one might be tempted to propose a homunculus-type argument: the brain analyzes the information in the responses of the sensory neurons and “reconstructs” the true stimulus, which is constant. In other words, it feels constant because the brain represents the outside world and can therefore observe that the stimulus is constant. As I have noted a number of times on this blog, there is a big conceptual problem with this kind of argument (which is widely used in “neural coding” approaches), and that is circular logic: since the output of the reconstruction process is in the external world (the stimulus), how can the brain know what that output might be, when it is precisely the aim of the reconstruction process to discover it? But in this case, the fallacy of the argument is particularly obvious, for two reasons: 1) wherever the representation is supposed to be in the brain, neural activity is still not constant (in particular, it is made of spikes); 2) more importantly, what I called a “constant stimulus” is in fact physically not constant at all.

Physically, a pure tone is certainly not constant: the air vibrates at a relatively high rate, and the acoustic pressure at the ear fluctuates. The firing of auditory nerve fibers actually follows those fluctuations, at least if the frequency is low enough (a phenomenon called phase locking), but it certainly doesn't feel this way. Visual stimuli are never constant either, because of eye movements – even when one fixates an object (these are called fixational eye movements). In fact, if the retinal image is stabilized, visual perception fades away quickly. In summary: the sensory stimulus is not constant, neurons adapt, and in general neural activity is dynamic for any stimulus. So the question one should ask is not how the brain knows that the stimulus is constant, but rather what it is that makes those dynamic stimuli feel perceptually constant.

Rate vs. timing (XXII) What Robert Rosen would say about rates and spikes

Robert Rosen was an influential theoretical biologist who worked in particular on the nature of life. He made the point that living organisms are very special kinds of natural systems, in that they are anticipatory systems – which is also the title of one of his most important books. He spends a substantial part of that book on epistemological issues, in particular on what a model is. The following figure is, in my opinion, a brilliant illustration of what a model is:

[Figure: Rosen's modeling relation – a natural system and a formal system linked by encoding and decoding, with causality on one side and implication on the other]

The natural system is what is being modeled, for example the brain or the solar system. The formal system is the model, for example a set of differential equations describing Newtonian mechanics. The model has state variables that represent observables of the natural system. The mapping from the natural system to those state variables is called “encoding” here – it corresponds to measurement. Decoding is the converse process. Causality describes changes occurring in the natural system, while implication describes changes in the formal system. A good model is one for which this diagram commutes, that is, causality in the natural system corresponds to implication in the formal system.

Let us apply this framework to the debate at hand: rate-based vs. spike-based theories of the brain. The question then is: can there be a good rate-based model of the brain, i.e., a model in which the observables are rates, or is it necessary to include spikes in the set of observables? The question has little to do with the question of coding (how much information there is in either spike timing or rates about some other observable). It has to do with whether rates, as observables, sufficiently characterize the natural system so that the evolution of a formal system based on them can be mapped to the evolution of the natural system. In other words: do rates have a causal role in the dynamics of neural networks? It is easy to imagine how spikes might, because neurons (mainly) communicate with spikes and there are biophysical descriptions of the effect of a single spike on various biophysical quantities. It is not so easy for rates. The problem is that in our current biophysical understanding of neurons, spikes are observables that have a causal role (e.g. the notion of a postsynaptic potential), whereas rates are derived observables (averages of some sort) with no specified causal role. To support rate-based theories is to demonstrate the causal nature of rates. As far as I know, this has not been done, and I have heard no convincing reason why rates might have a causal nature.

In fact, given that a rate is an observable that is defined on top of another observable, spikes, the question reduces to a more formal question about the relation between two formal systems: can a spike-based model of the brain be approximated by a rate model (in the same sense as depicted in the figure above)? This is an interesting remark, because now the question is not primarily empirical but formal, and therefore it can be addressed theoretically. In fact, this question has already been addressed: this is precisely the goal of all the studies trying to derive mean-field descriptions of spiking neural networks. So far, the results of those studies are that 1) it is not possible in the general case; 2) it is possible under some specific assumptions about the structure of the spiking model, which are known not to be empirically valid (typically: random sparse connectivity and independent external noise to all neurons).
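
As a toy illustration of why such assumptions matter (my own deliberately simplistic example, not one of the published derivations): in an unconnected population of integrate-and-fire neurons, the population rate behaves like a smooth, well-defined variable when the noise is independent across neurons, but not when the noise is shared.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy illustration: a population firing rate behaves like a smooth, well-defined
# variable only if the noise is independent across neurons (one of the usual
# assumptions of mean-field reductions). With shared noise, the population rate
# is dominated by synchronous fluctuations. All parameters are made up.
N = 2000                    # neurons (unconnected, identical)
dt = 0.1e-3                 # time step (s)
T = 2.0                     # duration (s)
tau = 20e-3                 # membrane time constant (s)
drive = 1.2                 # constant suprathreshold drive (threshold = 1)
sigma = 0.5                 # noise amplitude
steps = int(round(T / dt))
bin_steps = int(round(10e-3 / dt))   # 10 ms bins for the population rate

def population_rate(shared_noise):
    v = rng.random(N)                          # random initial potentials
    counts = np.zeros(steps)
    for t in range(steps):
        if shared_noise:
            noise = sigma * np.sqrt(dt / tau) * rng.standard_normal() * np.ones(N)
        else:
            noise = sigma * np.sqrt(dt / tau) * rng.standard_normal(N)
        v += dt / tau * (drive - v) + noise
        spiking = v >= 1.0
        v[spiking] = 0.0
        counts[t] = spiking.sum()
    # population rate in 10 ms bins (spikes per neuron per second)
    usable = steps // bin_steps * bin_steps
    return counts[:usable].reshape(-1, bin_steps).sum(axis=1) / (N * bin_steps * dt)

for shared in (False, True):
    r = population_rate(shared)
    label = "shared noise:     " if shared else "independent noise:"
    print(f"{label} mean rate {r.mean():.1f} Hz, "
          f"coefficient of variation of the binned rate {r.std() / r.mean():.2f}")
```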

Spontaneous firing in auditory nerve fibers might reflect mechanical noise

One thing has always puzzled me in auditory nerve physiology. Most auditory nerve fibers saturate quickly when sound level is increased. This implies that, at least for tones, firing rate carries no information at high levels. But it is also known that above a few kHz (depending on species), neural firing in response to tones becomes imprecise (loss of phase locking), so that spike timing does not carry information either. In other words: at high levels, high-frequency auditory nerve fibers spend a lot of energy firing spikes that carry close to zero information. It is odd.

Now, in a recent study in J Neurosci, Temchin and Ruggero show data suggesting (although it remains a bit speculative) that the spontaneous firing of fibers may reflect mechanical vibrations rather than neural noise. To my mind, this implies that there is shared variability in the firing of neighboring fibers, and therefore the loss of spike timing precision at high frequencies may hide precise relative timing between neighboring fibers, which would carry information about the temporal fine structure of sounds.
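
The implication can be illustrated with a toy sketch (entirely hypothetical numbers; two “fibers” modelled as inhomogeneous Poisson processes): if both fibers are driven by a shared fluctuating signal, their spike trains show many more fine-timescale coincidences than if they are driven by two independent signals with the same statistics, even though each train looks equally irregular on its own.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two "fibers" modelled as inhomogeneous Poisson processes driven either by a
# shared fluctuating signal (standing in for mechanical noise) or by two
# independent signals with the same statistics. All numbers are illustrative.
dt = 0.1e-3
T = 20.0
steps = int(round(T / dt))
mean_rate = 60.0       # spikes/s
mod_depth = 0.9

def fluctuating_signal():
    # low-pass filtered Gaussian noise, normalized to unit variance
    x = rng.standard_normal(steps)
    tau = 1e-3
    y = np.zeros(steps)
    for t in range(1, steps):
        y[t] = y[t - 1] + dt / tau * (x[t] * np.sqrt(tau / dt) - y[t - 1])
    return y / y.std()

def spike_train(signal):
    rate = mean_rate * np.maximum(0.0, 1.0 + mod_depth * signal)
    return rng.random(steps) < rate * dt       # boolean array: spike per time bin

def coincidences(a, b, window=1e-3):
    # count spikes of train a that have a spike of train b within +/- window
    w = int(round(window / dt))
    smeared_b = np.convolve(b, np.ones(2 * w + 1), mode="same") > 0
    return int(np.sum(a & smeared_b))

shared = fluctuating_signal()
a_shared, b_shared = spike_train(shared), spike_train(shared)
a_indep, b_indep = spike_train(fluctuating_signal()), spike_train(fluctuating_signal())

print("coincidences (shared drive):     ", coincidences(a_shared, b_shared))
print("coincidences (independent drive):", coincidences(a_indep, b_indep))
```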

A neurological case supporting the sensorimotor theory of perception

I just read chapter 5 of Oliver Sacks' “The man who mistook his wife for a hat”, “Hands”. He describes the case of a 60-year-old lady with cerebral palsy who could not do anything with her hands and, more interestingly, had a profound impairment of manual perception – an inability to recognize any object by touch. Yet her tactile senses were intact, and she actually could move her hands – although she did not use them. Sacks reasoned that perhaps she had never developed her manual abilities through exploration, as a child would, because she had been looked after so much due to her congenital disease. Then, as a therapeutic experiment, he tricked her into grasping things with her hands (e.g. by putting food slightly out of reach). In a matter of weeks, she developed complete functional use of her hands (after 60 years!), and even started to make remarkable sculptures. Simultaneously, she also developed normal tactile perception (instant object recognition).

This is quite an interesting case for psychological theories of perception. As the sensorimotor theory of perception predicts, it shows that normal perception requires action. Normal sensory neural structures do not by themselves give rise to perception.

On the expanding volume of experimental data – (I) So... where is the data?

One of the major motivations of the HBP, as I have heard from Henry Markram, is that we already produce tons of experimental data, and yet we still don't understand much about the brain. So what we need is not more data, but rather to do something with all that data. I don't think that the specific way in which the HBP proposes to deal with the data necessarily follows from this remark, but I do think that the remark is generally correct. There are many more papers than one can read, and much more data than one can analyze or comprehend. I would like to give my view on this problem as a theoretical neuroscientist.

First of all, as a theoretical neuroscientist, I have to ask the question that many of my colleagues probably have in mind: yes, there is an enormous amount of data, but where is it? Most theoreticians crave data. They want data. Where is it? On the hard drives of the scientists who produced it, and that is usually where it stays. So there is a very simple reason why this enormous amount of data is not exploited: it is not shared in the first place. In my view, this is the first problem to solve. How is it that the data are not shared? If you carefully read the instructions to authors of journals, you generally find that they explicitly ask the authors to share all the data analyzed in the paper with anyone who asks. And authors have to sign that statement. I can hear some theoreticians laughing in the back! Let's face it: it almost never happens, unless you personally know the authors, and even then it is complicated. Why? I have heard mainly two reasons. There is the “I have to dig out the data” type of reason. That is, the data lie in a disorganized constellation of files and folders, with custom formats, etc.: basically, it's a mess. This is probably the same reason why many modelling papers don't share the code for their models either: the authors are not proud of their code! There are some efforts underway to address this issue by developing standardized formats – for example NeuroML for neuron models, and a similar effort for experimental data led by the Allen Institute. I doubt that this will entirely solve the problem, but it is something.

In any case, I think this first issue is solvable (see below). The second reason why people don't share their data is much more profound: they want to own the data they have produced. They want to own it because it has value. In the academic world of neuroscience, data is wealth. It takes effort and resources to produce data, and you don't want someone taking your data, analyzing it in a different way and publishing it. Many scientists have in mind a few ideas to use their data in different ways, and so they want to protect it. Yet this attitude directly contradicts the statement they have signed when submitting their papers.

There is an additional factor that has more to do with the epistemological biases of the community. Compare these two abstracts. (1) “We wanted to test hypothesis A, and so we did the following experiment: it works”. Here the authors used data that they previously acquired and confronted it with some hypothesis. (2) “We wanted to test hypothesis A, and so we checked previously published data: it works”. Here the authors used data that other authors previously acquired and confronted it with some hypothesis. Exactly the same methodology. But you certainly know that abstract (2) will probably get rejected in the highest-impact journals. It is not “new data”. It has happened to me a number of times that reviewers complained that we used “old data”. Apparently the timestamp of the data was a problem. This might mean two things. One is that the value of a paper is essentially in the experimental data, so if the raw data is the same, the paper has no additional value. I find this statement philosophically inept, but in any case it is rather hypocritical, as it does not actually distinguish between abstracts (1) and (2). A second interpretation is that data is only meaningful if it was acquired after, rather than before, a hypothesis was expressed. From a logical standpoint, this is absurd. But I think it stems from the widespread Popperian fantasy that science is about testing theories with critical experiments. That is, at a given moment in time, there are a number of theories that are consistent with current facts, and you distinguish between them by a carefully chosen experiment. This is certainly not true from a historical point of view (see Thomas Kuhn), and it is naïve from a philosophical point of view (see Imre Lakatos). I have discussed this elsewhere in detail. More trivially, there is the notion in the minds of many scientists (reviewers) that having the data beforehand is sort of “cheating”, because then it is trivial to come up with a hypothesis or theory that is consistent with it. I would argue that it is trivial only if a small amount of data is considered and the explanation is allowed to be as complex as the data; otherwise it is not trivial at all – rather, it is actually what science is all about. But in any case, it is again hypocritical, because you don't know when you read abstract (1) whether the data was actually produced after or before the hypothesis was expressed.

So what can we do about it? First, there is a trivial way to make a lot of progress on data sharing. Instructions to authors already specify that the data should be public. So let's just stop being hypocritical and make the submission of data mandatory at the time of paper submission, or acceptance. This could be dealt with by journals, or by archiving initiatives (such as arXiv for preprints or ModelDB for models). I believe this would partly solve the “I have to dig out the data” issue, because authors would know in advance that they will have to submit the data. So they will do what is necessary – let's be honest for a minute: this is probably not the most complicated thing scientists are supposed to deal with. Quite possibly, standardized formats might help, but the first significant step is really to make the data accessible.

For the epistemological issue, the problem is more profound. Scientists have to accept that producing data is not the only valuable thing in science, and that making sense of the data is also valuable. I will address this specific point in the next post of this short series.

Update (20.08.2014). I just realized Neuron actually makes it mandatory that some types of datasets are submitted along with the paper (but not all types).

The challenge of retrograde amnesia to theories of memory

I am reading Oliver Sacks' “The man who mistook his wife for a hat”. In chapter 2, he describes a case of retrograde amnesia. Around 1970, the patient went through an episode of alcoholism, which resulted in the loss of 25 years of his most recent memories (declarative memory). As a result, the patient thought he was 19 and lived in 1945, as if time had stopped. So not only could he not transfer short-term memory to long-term memory, but a large part of his previously stored memory was erased. In addition, it was not a random fraction of his memories that was erased: it was exactly the most recent ones. He seemed to remember old events perfectly, and to have absolutely no memory of the more recent ones.

This is quite a challenge for current neural theories of memory. The main theoretical concept about memory is the notion of a neural assembly supporting associative memory. Imagine that a memory is made of a number of elements that are associated together; then the substrate of this memory is a connected network of neurons that “code” for those elements, in some structure in the brain, with connections to relevant parts of the brain (say, the visual cortex for visual features, etc.). This conceptual framework can be extended with sequential activation of neurons. Now, in this framework, how do you erase the most recent memories? Note that by “most recent”, I am talking about 25 years, not about short-term memory.

One trivial possibility would be that each memory has a timestamp, encoded as part of the neural assembly supporting that memory. Then some mechanism erases all memories that have a timestamp more recent than a particular date. Why and how this would happen is mysterious. In addition, “reading the timestamp” would entail activating those memories (all of them), which would then need to exist at that time, and then erasing them. It simply sounds absurd. A more plausible explanation is that, for some reason, recent memories are more fragile than old ones. But why is that?

This is a very interesting point, because in current neural theories of memory, it is the old memories that are more fragile than the recent ones. The reason is that memories are imprinted by modifications of synaptic connections according to a Hebbian mechanism (neurons that are co-activated strengthen their connections), and then these connections get degraded over time because of the activation of the same neurons in other contexts, by ongoing activity. So in current theories of memory, memory traces decay over time. But what retrograde amnesia implies is exactly the opposite: memory traces should strengthen over time. How is it possible that memories strengthen over time?
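
Here is a minimal sketch of that standard picture: a toy Hopfield-style network in which ongoing plasticity is summarized by a decay of older synaptic traces (all details are illustrative). Memories stored earlier are recalled less well than recent ones, which is the exact opposite of the temporal gradient of retrograde amnesia.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy Hopfield-style associative memory: binary patterns are stored one after
# the other with a Hebbian rule, while older traces decay, standing in for the
# degradation caused by ongoing activity in other contexts.
N = 200           # neurons
P = 20            # stored memories, from oldest (#0) to most recent (#19)
decay = 0.85      # per-memory decay of existing synaptic traces

patterns = rng.choice([-1, 1], size=(P, N))
W = np.zeros((N, N))
for p in patterns:
    W = decay * W + np.outer(p, p) / N      # Hebbian storage + decay of old traces
np.fill_diagonal(W, 0)

def recall(cue, steps=20):
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

# Cue each memory with a corrupted version (20% of units flipped) and measure
# the overlap between the retrieved state and the original pattern (chance = 0.5).
for i in (0, P // 4, P // 2, P - 1):
    cue = patterns[i] * np.where(rng.random(N) < 0.2, -1, 1)
    overlap = np.mean(recall(cue) == patterns[i])
    print(f"memory #{i:2d} (0 = oldest): recall overlap {overlap:.2f}")
```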

One possibility is that memories are replayed. If you recall a memory, the neurons supporting that memory activate and so the corresponding connections strengthen. But conscious recollection will probably not do the trick, because then there would not be a strict temporal cut-off: i.e., some recent memories might be recalled more often than some older ones. So what seems to be necessary is a continuous subconscious replay of memories, independent of emotional or attentional states. Clearly, this is quite a departure from current neural theories of memory.

Rate vs. timing (XXI) Rate coding in motor control

Motor control is sometimes presented as the prototypical example of rate coding. That is, muscle contraction is determined by the firing rate of motoneurons, so ultimately the “output” of the nervous system follows a rate code. This is a very interesting example, precisely because it is actually not an example of coding, which I previously argued is a problematic concept.

I will briefly recapitulate what “neural coding” means and why it is a problematic concept. “Coding” means representing some property of things in the world (the orientation of a bar, or an image) in another form (spikes, rates). That a neuron “codes” for something means nothing more than that its activity co-varies with that thing. For example, pupillary diameter encodes the amount of light captured by the retina (because of the pupillary contraction reflex). Or blood flow in the primary visual cortex encodes local visual orientation (this is what is actually measured by intrinsic optical imaging). So coding is really about observations made by an external observer; it does not tell us much about how the system works. It is a common source of confusion, because when one speaks of neural coding, there is generally the implicit assumption that the nervous system somehow “decodes” it. But presumably the brain does not “read out” blood flow to infer local visual orientation. The coding perspective leaves the interesting part (what is the “representation” for?) largely unspecified, which is the essence of the homunculus fallacy.

The control of muscles by motoneurons does not fit this framework, because each spike produced by a motoneuron has a causal impact on muscle contraction: its activity does not simply co-vary with muscle contraction, it causes it. So first of all, motor control is not an example of rate coding because it is not really an example of coding. But still, we might consider that it conforms to rate-based theories of neural computation. I examine this statement now.

I will now summarize a few facts about muscle control by motoneurons, which are found in neuroscience textbooks. First of all, a motoneuron controls a number of muscle fibers and one fiber is contacted by a single motoneuron (I will only discuss α motoneurons here). There is indeed a clear correlation between muscle force and firing rate of the motoneurons. In fact, each single action potential produces a “muscle twitch”, i.e., the force increases for some time. There is also some amount of temporal summation, in the same way as temporal summation of postsynaptic potentials, so there is a direct relationship between the number of spikes produced by the motoneurons and muscle force.

Up to this point, it seems fair to say that firing rate is what determines muscle force. But what do we mean by that exactly? If we look at muscle tension as a function of time, resulting from a spike train produced by a motoneuron, what we see is a time-varying function that is determined by the timing of every spike. The rate-based view would be that the precise timing of spikes does not make a significant difference to that function. But it does make a difference, although perhaps a small one: for example, the variability of muscle tension is not the same if the spike train is regular (small variability) or random, e.g. Poisson (larger variability). Now this gets interesting: during stationary muscle contraction (no movement), motoneurons generate constant muscle tension and they fire regularly, unlike cortical neurons (for example). Two remarks: 1) this does not at all conform to standard rate-based theory, where the rate is the intensity of a Poisson process – regular firing has little stochasticity; 2) firing regularly is exactly what motoneurons should be doing to minimize the variability of muscle tension. This latter remark is particularly significant. It means that, beyond the average firing rate, spikes occur at precise times that minimize tension variability, and so spikes do matter. Thus motor control rather seems to support spike-based theories.
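
As a quick check of the claim about variability, here is a minimal sketch with a generic twitch model (each spike adds an alpha-function force transient; all parameters are made up): at the same mean rate, a regular spike train produces much steadier tension than a Poisson train.

```python
import numpy as np

rng = np.random.default_rng(4)

# Each motoneuron spike produces a "twitch": a transient force modelled here as
# an alpha function. Total tension is the sum of twitches. We compare a regular
# and a Poisson spike train with the same mean rate (20 Hz).
dt = 1e-3
T = 10.0
steps = int(round(T / dt))
rate = 20.0                 # spikes/s
tau_twitch = 60e-3          # twitch time constant (s)

t_kernel = np.arange(0, 0.5, dt)
twitch = (t_kernel / tau_twitch) * np.exp(1 - t_kernel / tau_twitch)   # alpha function

def tension(spike_train):
    return np.convolve(spike_train, twitch)[:steps]

# Regular train: one spike every 1/rate seconds
regular = np.zeros(steps)
regular[::int(round(1 / (rate * dt)))] = 1.0

# Poisson train with the same mean rate
poisson = (rng.random(steps) < rate * dt).astype(float)

for name, train in (("regular", regular), ("Poisson", poisson)):
    f = tension(train)[steps // 2:]        # discard the initial transient
    print(f"{name}: mean tension {f.mean():.2f}, "
          f"coefficient of variation {f.std() / f.mean():.2f}")
```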

Exact simulation of integrate-and-fire models with synaptic conductances

Brette, R. (2006). Exact simulation of integrate-and-fire models with synaptic conductances. Neural Comput 18(8): 2004-2027.

Abstract. Computational neuroscience relies heavily on the simulation of large networks of neuron models. There are essentially two simulation strategies: 1) using an approximation method (e.g. Runge-Kutta) with spike times binned to the time step; 2) calculating spike times exactly in an event-driven fashion. In large networks, the computation time of the best algorithm for either strategy scales linearly with the number of synapses, but each strategy has its own assets and constraints: approximation methods can be applied to any model but are inexact; exact simulation avoids numerical artefacts but is limited to simple models. Previous work has focused on improving the accuracy of approximation methods. In this paper we extend the range of models that can be simulated exactly to a more realistic model, namely an integrate-and-fire model with exponential synaptic conductances.
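
To give a flavor of the event-driven strategy, here is a deliberately simplified sketch (a single neuron with constant suprathreshold drive and instantaneous synaptic pulses, for which everything has a closed form; the paper's contribution is to extend exact simulation to exponential synaptic conductances, which is harder):

```python
import math

# Minimal event-driven simulation of a single leaky integrate-and-fire neuron
# receiving instantaneous synaptic pulses (voltage jumps) on top of a constant
# suprathreshold drive. Between events the solution is analytical, so spike
# times are computed exactly rather than on a time grid. This is only an
# illustration, not the algorithm of the paper.
tau = 20e-3          # membrane time constant (s)
I = 1.2              # constant drive, above threshold
theta, v_reset = 1.0, 0.0
w = -0.4             # inhibitory synaptic weight (voltage jump per input spike)
T = 0.2              # simulated duration (s)

input_spikes = [0.013, 0.025, 0.026, 0.080, 0.15]   # made-up input times (s)

def advance(v, dt):
    # exact solution of tau dv/dt = I - v after dt seconds
    return I + (v - I) * math.exp(-dt / tau)

def time_to_threshold(v):
    # exact time at which v reaches theta under the constant drive (valid for I > theta)
    if v >= theta:
        return 0.0
    return tau * math.log((I - v) / (I - theta))

v, t = v_reset, 0.0
output_spikes = []
for t_in in input_spikes + [T]:             # process inputs in order, then stop at T
    # emit all threshold crossings occurring before the next event
    while t + time_to_threshold(v) < t_in:
        t += time_to_threshold(v)
        output_spikes.append(t)
        v = v_reset
    v = advance(v, t_in - t)                # advance exactly to the event time
    t = t_in
    if t < T:
        v += w                              # apply the synaptic jump
print("output spike times:", [round(s, 4) for s in output_spikes])
```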

Download code:

  • Functions for exact simulation: IFSC.h, IFSC.c (to include in your favourite event-driven simulator)
  • Scilab implementation of functions for exact simulation and an example script for a random network: ExpCondIF(Brette).zip. The archive includes a generic event-driven simulator written in Scilab for networks without delays (this is for pedagogical purposes and is not intended to be efficient at all). N.B.: Scilab is free scientific computing software (resembling Matlab).