This week's paper selection (6 – 13 Jan 2016)


What is computational neuroscience? (XXIV) - The magic of Darwin

Darwin’s theory of evolution is possibly the most important and influential theory in biology. I am not going to argue against that claim, as I do believe that it is a fine piece of theoretical work, and a great conceptual advance in biology. However, I also find that the explanatory power of Darwin’s theory is often overstated. I recently visited a public exhibition in a museum about Darwin. Nice exhibition overall, but I was a bit bothered by the claim that Darwin’s theory explains the origin, diversity and adaptedness of species, case solved. I have the same feeling when I read in many articles, or hear in conversations with many scientists, that such and such observed feature of living organisms is “explained by evolution”. The reasoning generally goes like this: a given biological structure is apparently beneficial to the organism, and therefore the existence of that structure is explained by evolution. As if the emergence of that structure directly followed from Darwin’s account of evolution.

To me, the Darwinian argument is often used as magic, and is mostly void of any content. Replace “evolution” by “God” and you will notice no difference in the logical structure of the argument. Indeed, what the argument actually contains is 1) the empirical observation that the biological organism is apparently adapted to its environment, thanks to the biological feature under scrutiny; 2) the theoretical claim that organisms are adapted to their environment. Note that there is nothing in the argument that actually involves evolution, i.e., the change of biological organisms through some particular process. Darwin is only invoked to back up the theoretical claim that organisms are adapted, but there is nothing specifically about Darwinian evolution that is involved in the argument. It could well be replaced by God, Lamarck or aliens.

What makes me uneasy is that many people seem to think that Darwin’s theory fully explains how biological organisms get to be adapted to their environment. But even in its modern DNA form, it doesn’t. It describes some of the important mechanisms of adaptation, but there is an obvious gap. I am not saying that Darwin’s theory is wrong, but simply that it only addresses part of the problem.

What is Darwin’s theory of evolution? It is based on three simple steps: variation, heredity and selection. 1) Individuals of a given species vary in different respects. 2) Those differences are inherited. In the modern version, new variations occur randomly at this step, and so variations are introduced gradually over generations. 3) Individuals with adapted features survive and reproduce more than others (by definition of “adapted feature”), and therefore spread those features in the population. There is ample empirical evidence for these three claims, and that was the great achievement of Darwin.
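The three steps are easy to simulate. Here is a minimal sketch in Python (the string "environment", the population size and the mutation rate are arbitrary illustrative choices, not a model of real genetics):

```python
import random

# Hypothetical illustration: fitness is the number of characters
# matching an arbitrary "environment" string.
TARGET = "adapted"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(genome):
    # Adapted features, by definition, favor survival and reproduction
    return sum(a == b for a, b in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # Steps 1-2: heritable variation, introduced randomly at reproduction
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in genome)

random.seed(0)
population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(50)]

for generation in range(200):
    # Step 3: selection - the fitter half survives and reproduces
    population.sort(key=fitness, reverse=True)
    parents = population[:25]
    population = parents + [mutate(random.choice(parents)) for _ in range(25)]

best = max(population, key=fitness)
print(best, fitness(best))
```

The population typically converges to the target within a few dozen generations. But note that this works only because a single-letter change moves fitness by at most one step: variation here is well-behaved by construction, which is precisely the assumption questioned below.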

The gap in the theory is the nature and distribution of variations. In the space of all possible small variations in structure that one might imagine, which ones do we actually see in a biological population? Well for one, there are a substantial number of individuals that actually survive for a certain time, so a large number of those variations are not destructive. Since the metaphor of the day is to see the genome as a code for a program, let us consider computer programs. Take a functional program and randomly change 1% of all the bits. What is the probability that 1) the program doesn't crash, 2) it produces something remotely useful? I would guess that the probability is vanishingly small. You will note that this is not a very popular technique in software engineering. Another way to put it: consider the species of programs that calculate combinatorial functions (say, factorials, binomial coefficients and the like). Surely one might argue that individuals vary by small changes, but conversely, would a small random change in the code typically produce a new combinatorial function?
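This thought experiment is cheap to run. The sketch below (a hypothetical illustration; it mutates source characters rather than bits, which is if anything more forgiving) mutates 1% of a small factorial program, a thousand times, and counts how many mutants still work:

```python
import random

# A small functional "organism": a program computing factorials
SOURCE = (
    "def f(n):\n"
    "    r = 1\n"
    "    for i in range(2, n + 1):\n"
    "        r = r * i\n"
    "    return r\n"
)

def mutate(source, fraction=0.01):
    # Randomly replace about 1% of the characters with printable junk
    chars = list(source)
    n = max(1, int(len(chars) * fraction))
    for pos in random.sample(range(len(chars)), n):
        chars[pos] = chr(random.randrange(32, 127))
    return "".join(chars)

random.seed(0)
trials = 1000
viable = 0
for _ in range(trials):
    namespace = {}
    try:
        exec(mutate(SOURCE), namespace)       # does it still run?
        if namespace["f"](5) == 120:          # is it still functional?
            viable += 1
    except Exception:
        pass

print(f"{viable}/{trials} mutants still compute 5! correctly")
```

In runs of this kind only a small fraction of mutants survive, and many of those are mutations that happen to be strictly neutral (e.g. the replacement character equals the original one).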

So it doesn’t follow logically from the three steps of Darwin’s theory that biological organisms should be adapted to and survive in changing environments. There is a critical ingredient that is missing: to explain how, in sharp contrast with programs, a substantial fraction of new variations are constructive rather than destructive. In more modern terms, how is it that completely random genetic mutations result in variations in phenotypes that are not arbitrary?

Again I am not saying that Darwin is wrong, but simply that his theory only addresses part of the problem, and that it is not correct to claim that Darwin’s theory fully explains how biological organisms are adapted to their environment (i.e., perpetuate themselves). A key point, and a very important research question, is to understand how new variations can be constructive. This can be addressed within the Darwinian framework, as I outlined in a previous post. It leads to a view that departs quite substantially from the program metaphor. A simple remark: the physical elements that are subject to random variation cannot be mapped to the physical elements of structure (e.g. molecules) that define the phenotype, for otherwise those random variations would lead to random (i.e., mostly destructive) phenotypes. Rather, the structure of the organism must be the result of a self-regulatory process that can be steered by the elements subject to random variation. This is consistent with the modern view of the genome as a self-regulated network of genes, and with Darwin’s theory. But it departs quite substantially from the magic view of evolution theory that is widespread in the biological literature (at least in neuroscience), and instead points to self-regulation and optimization processes operating at the scale of the individual (not of generations).

Is peer review a good thing?

Everyone knows peer review is a good thing for science. It's almost what makes it science: papers are critically examined by scientists and so only (or mostly) the good stuff gets published, or at least errors are generally corrected in the process. Of course no system is perfect, but it certainly raises the quality of science.

Well for someone interested in epistemology like myself, this is quite interesting. Here are a bunch of scientists, possibly most scientists, who are claiming that the claims of their fellow scientists should be scrutinized for adequate empirical evidence (especially p-values!) and filtered so as to improve the quality of science. And how about that claim? Is it backed up by any empirical evidence?

So the situation is interesting, because the widespread belief in peer review is purely theoretical and based on no evidence at all. Sounds a bit like the liar paradox! Some people have looked for evidence, though, and as far as I know what they found is mostly negative (see for example Smith (2010), Classical peer review: an empty gun).

What is peer review meant to be good for?

  1. to remove errors (verification);
  2. to identify the most significant work and increase its visibility (publicity);
  3. to give proper credit to good work (a label of quality used for grant applications and careers);
  4. to scare scientists into producing better papers (prevention).

1) We all know that any study can be published, whether it's flawed or not, significant or not. There are just so many journals that it ought to get published somewhere. We also know that errors often get into even the most visible journals. Why wouldn't they? Whether prestigious or not, the journal still relies on a couple of scientists giving an opinion on a paper. Private peer review is constitutively incapable of eliminating errors: if the paper is rejected by the peers, no one will ever know about those errors, and the paper will simply be published in another journal. A more useful alternative would be that scientists who spot errors or disagree with the interpretations publicly write their comments under their name, to which the authors can respond, all of this linked to the original publication. If this were constitutive of the publication process, it could perhaps be done in a civilized way (as opposed to the occasional angry letters to the editor).

2) In the most prestigious journals, the selection is not done by peers anyway. So the publicity argument doesn't work. More importantly, there appears to be no reason to apply this editorial filter before peer review. Since the paper is going to be peer reviewed anyway, why not do an editorial selection afterwards? This would then take the form of a reader's digest. Editorial boards could invite selected authors to write a shortened version of their work, for example. One could imagine many methods of increasing the visibility of selected papers, which do not need to be centralized by journals. For example, each scientist could make a selection of papers he liked most; someone who values the choices of that scientist could then have a look. An organized online social network might do the job.

3) My feeling is that this is what scientists are actually expecting of the peer-review system: credit. That is, to identify the best scientists out there. Careers are made and grants are distributed based on the outcomes of the publication system. Note: those are not based on peer review itself (in particular on the criticisms that may have been raised in the process), but on which peer-reviewed journal the paper lands in. So it's actually not exactly based on peer-review, since as noted above the most stringent filters are not applied by peers but by editors. But in any case, if what we are expecting of this process is a “label of quality”, then there appears to be no reason why it should be applied before publication rather than after, and that could be done as outlined above.

4) I think the only vaguely valuable argument in favor of pre-publication peer review is that it influences the way scientists write their papers. In particular, one might argue that even though peer review does not do a great job at spotting errors, it may nevertheless reduce the flaws in publications by its preventive action: scientists are more careful because they want their paper published. This might be true, but why would it be different if peers commented on the paper after publication? I would actually think the opposite: having flaws spotted in the private circle of journals and peer reviewers is less damaging to one's reputation than having them spotted in the public sphere.

So what is the conclusion? My feeling is that the current peer review system is based on an outdated publication system, with printed journals having to make a selection of papers for economic reasons. Now that publication is actually not the limiting factor anymore (e.g. you can publish on arXiv), there appears to be no reason to apply filters before publication. Papers could be reviewed, commented and selected by scientific communities, after online publication. This would save a lot of effort and money, produce a richer scientific debate and possibly reduce some biases.

This week's paper selection (23 Dec – 6 Jan 2016)

This week I decided to make a subselection because my lists tend to be quite long!

This week's paper selection (16-23 Dec 2015)

This week, action potentials (surprising!), in particular in unicellular organisms.


This week's paper selection (2 Dec 2015 – 16 Dec 2015)

I missed last week's paper selection, so this is actually two weeks.


Why do neurons spike?

Why do neurons produce those all-or-none electrical events named action potentials?

One theory, based on the coding paradigm, is that the production of action potentials is like analog-to-digital conversion, which is necessary if a cell wants to communicate with a distant cell. It would not be necessary if neurons were only communicating with their neighbors. For example, in the retina, most neurons do not spike but interact through graded potentials, and only retinal ganglion cells produce spikes, which travel over long distances (note that there is actually some evidence of spikes in bipolar cells). In converting graded signals into discrete events, some information is lost, but that is the price to pay in order to transmit any signal at all over a long distance. There is some theoretical work on this trade-off by Manwani and Koch (1999).
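The trade-off can be illustrated with a toy cable model (all parameters below — attenuation, noise level, thresholds — are arbitrary illustrative choices, not from Manwani and Koch):

```python
import random

def transmit(samples, stages=20, noise=0.05, attenuation=0.7, regenerate=False):
    # Pass a signal through a chain of attenuating, noisy cable segments.
    # With regenerate=True, each stage restores the signal to all-or-none,
    # a crude stand-in for active spike propagation.
    out = list(samples)
    for _ in range(stages):
        out = [attenuation * x + random.gauss(0, noise) for x in out]
        if regenerate:
            out = [1.0 if x > 0.2 else 0.0 for x in out]
    return out

random.seed(1)
signal = [random.random() for _ in range(200)]       # graded amplitudes
spikes = [1.0 if x > 0.5 else 0.0 for x in signal]   # all-or-none version

graded_out = transmit(signal)
spike_out = transmit(spikes, regenerate=True)

# How much of the input survives the trip?
graded_loss = sum((a - b) ** 2 for a, b in zip(signal, graded_out)) / len(signal)
spike_errors = sum(a != b for a, b in zip(spikes, spike_out))
print(graded_loss, spike_errors)
```

The graded signal is essentially destroyed by accumulated attenuation and noise, while the regenerated all-or-none signal arrives almost intact — at the cost of having discarded the graded amplitudes at the first threshold.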

Incidentally, this theory is sometimes (wrongly) used to argue that spike timing does not matter because spikes are only used as a proxy for an analog signal, which is reflected by the firing rate. This theory is probably not correct, or at least incomplete.

First, neurons start spiking before they make any synaptic contact, and that activity is important for normal development (Pineda and Ribera, 2009). Apparently, normal morphology and mature properties of ionic channels depend on the production of spikes. In many neuron types, those early spikes are long calcium spikes.

A more convincing argument to me is the fact that a number of unicellular organisms produce spikes. For example, in Paramecium, calcium spikes are triggered in response to various sensory stimuli and trigger an avoidance reaction, where the cell swims backward (reversing the beating direction of its cilia). An interesting point here is that those sensory stimuli produce graded depolarizations in the cell, so from a pure coding perspective, the conversion of that signal to an all-or-none spike in the same cell seems very weird, since it reduces information about the stimuli. Clearly, coding is the wrong perspective here (as I have tried to argue in my recent review on the spike vs. rate debate). The spike should not be seen as a code for the stimulus, but rather as a decision or action, in this case to reverse the beating direction. This argues for another theory, that action potentials mediate decisions, which are by definition all-or-none.

Action potentials are also found in plants. For example, Mimosa pudica produces spikes in response to various stimuli, for example if it is touched, and those spikes mediate an avoidance reaction where the leaves fold. Those are long spikes, mostly mediated by chloride (an outward current, instead of the usual inward current). Again the spike mediates a timed action. It also propagates along the plant. Here spike propagation allows organism-wide coordination of responses.

It is also interesting to take an evolutionary perspective. I have read two related propositions that I found quite interesting (and neither is about coding). Andrew Goldsworthy proposed that spikes started as an aid to repair a damaged membrane. There is a lot of calcium in the extracellular space, and so when the membrane is ruptured, calcium ions rush into the cell, and they are toxic. Goldsworthy argues that the flow of ions can be reduced by depolarizing the cell, while repair takes place. We can immediately make two objections: 1) if the depolarization is itself mediated by calcium entry, then it is obviously of little help; 2) to stop calcium ions from flowing in, one needs to raise the potential to the reversal potential of calcium, which is very high (above 100 mV). I can think of two possible solutions. One is to trigger a sodium spike, but it doesn't really solve problem #2. Another might be to consider evenly distributed calcium channels on the membrane, perhaps together with calcium buffers/stores near them. When the membrane is ruptured, lots of calcium ions enter through the hole, and the concentration increases locally by a large amount, which probably immediately starts damaging the cell and invading it. But if the depolarization quickly triggers the opening of calcium channels all over the membrane, then the membrane potential would increase quickly with relatively small changes in concentration, distributed over the membrane. The electrical field then reduces the ion flow through the hole. It's an idea, but I'm not sure the mechanism would be so efficient in protecting the cell.
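The claim about the calcium reversal potential can be checked with the Nernst equation (the concentrations below are typical textbook values, not from a specific measurement):

```python
import math

R = 8.314     # gas constant, J/(mol K)
F = 96485.0   # Faraday constant, C/mol
T = 310.0     # roughly body temperature, K

def nernst(z, c_out, c_in):
    # Nernst (reversal) potential in mV for an ion of valence z
    return 1000.0 * (R * T) / (z * F) * math.log(c_out / c_in)

# Typical values: ~2 mM calcium outside, ~100 nM free calcium inside
e_ca = nernst(2, 2e-3, 100e-9)
print(f"Calcium reversal potential: {e_ca:.0f} mV")
```

With these concentrations the reversal potential comes out around +130 mV, so keeping calcium out electrically does indeed require a very large depolarization.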

Another related idea was proposed in a recent review. When the cell is ruptured, cellular events are triggered to repair the membrane. Brunet and Arendt propose that calcium channels sensitive to stretch have evolved to anticipate damage: when the membrane is stretched, calcium enters through the channels to trigger the repair mechanisms before the damage actually happens. In this theory, it is the high toxicity of calcium that makes it a universal cellular signal. The theory doesn't directly explain why the response should be all-or-none, however. An important aspect, maybe, is cell-wide coordination: the opening of local channels must trigger a strong enough depolarization so as to make other calcium channels open all over the membrane of the cell (or at least around the stretched point). If the stretch is very local, then this requires an active amplification of the signal, which at a distance is only electrical. In other words, fast coordination at the cell-wide level requires a positive electrical feedback, aka an action potential. Channels must also close (inactivate) once the cellular response has taken place, since calcium ions are toxic.

Why would there be sodium channels? It's actually obvious: sodium ions are not as toxic as calcium and therefore it is advantageous to use sodium rather than calcium. However, this is not an entirely convincing response since in the end, calcium is still the intracellular signal. But a possible theory is the following: sodium channels appear whenever amplification is necessary but no cellular response is required at that cellular location. In other words, sodium channels are useful for quickly propagating signals across the cell. It is interesting to note that developing neurons generally produce calcium spikes, which are then converted to sodium spikes when the neurons start to grow axons and make synaptic contacts.

These ideas lead us to the following view: the primary function of action potentials is cell-wide coordination of timed cellular decisions, which is more general than fast intercellular communication.

This week's paper selection (25 Nov – 2 Dec 2015)

I have not necessarily read these papers yet, so do not take these as recommendations, but simply papers I am curious to read. Most of them are not recent.

This week it has come to my attention that a number of unicellular organisms (such as Paramecium) produce action potentials, which are triggered by sensory events and produce behavioral responses.


This week's paper selection (18-25 Nov 2015)

I have decided to post once a week the list of papers I have noticed.

I have not necessarily read these papers yet, so do not take these as recommendations, but simply papers I am curious to read. Some are not recent papers, in which case I mention the year of publication.


Some propositions for future spatial hearing research (III) - The coding problem

In the previous posts, I have proposed that we should look at the ecological problem of sound localization, and that in terms of physiology we should go beyond tuning curves. However, if all of this is addressed, we still have a big problem. We are looking for “neural representations” or “codes”, but neural representations are observer-centric concepts that make little sense from the viewpoint of the organism, as I have discussed a few times before (for example here). Neural responses are not there to be read by some little homunculus; they are just neurons exciting other neurons, which you are not recording. Those other neurons are not “reading the code”, you are. Those neurons are just reacting instantly to the electrical stimulation of the neurons that constitute what we like to call “neural representation”.

Not everyone is receptive to the philosophical points, so let me just give one example. You could look at the responses of lots of binaural neurons and realize they have lots of different tunings. So you could suggest: maybe sound location is represented by the most active neurons. But someone else realizes that the average response of all those neurons varies gradually with sound location, so maybe sound location is actually encoded in the average response? Wait a minute: why throw all this information away? Maybe sound location is represented by the entire pattern of activity? The problem we are facing here is not that we don't know how to determine which one is true, but rather that all of these are true (the last one being trivially true). Yes, sound location is represented by the identity of the most active neurons, the average response and the pattern of activity: there is a mapping between sound location and those different features. That is, you, the external observer, can look at those features and guess what the sound location was. What is this supposed to prove?
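To make the point concrete, here is a toy population of tuned neurons (the preferred directions, the Gaussian tuning and its width are hypothetical choices): all three candidate "codes" recover the sound location, so finding any one of them in a recording proves little.

```python
import math

# Hypothetical population of binaural neurons with preferred directions
PREFERRED = [-90 + 15 * i for i in range(13)]   # degrees, -90 to 90

def responses(direction):
    # Gaussian tuning curves (width 40 degrees, an arbitrary choice)
    return [math.exp(-((direction - p) / 40.0) ** 2) for p in PREFERRED]

def decode_peak(r):
    # "Most active neuron" code
    return PREFERRED[r.index(max(r))]

def decode_average(r):
    # "Average response" code: centre of mass of the activity
    return sum(p * x for p, x in zip(PREFERRED, r)) / sum(r)

def decode_pattern(r, grid=range(-90, 91)):
    # "Pattern" code: nearest response template over candidate directions
    return min(grid, key=lambda d: sum((a - b) ** 2
                                       for a, b in zip(r, responses(d))))

r = responses(37.0)
print(decode_peak(r), decode_average(r), decode_pattern(r))
```

All three decoders land close to the true direction of 37 degrees; the mapping exists for each of them, which is exactly why its existence tells us nothing about what downstream neurons do.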

By focusing on neural representations, we are not looking at the right problem. What we want to know in the end is not so much how neural activity varies with various parameters of an experiment, but how neural activity constitutes the spatial percept, or perhaps more modestly, how it drives behavioral orientation responses. Now certainly looking at neural responses is a necessary first step, but we can't answer the interesting question if we stop there. So how can we answer the interesting question?

Well, I won't claim that I have a good answer because I think that's one of the major conceptual problems in systems neuroscience today. But one proposition that I think goes in the right direction is to do stimulations instead of, or in conjunction with, recordings. Ideally, those stimulations should be such as to trigger behaviors. Is average activity the important feature? Stimulate neurons at different places and you should see the same orientation response. Is the identity of active neurons important? With the same experiment, you should see different responses, varying systematically with stimulated neurons.

It's possible: it was actually done 45 years ago (Syka and Straschill, 1970). Electrical stimulation of the inferior colliculus with microelectrodes can trigger specific orienting responses. These days one could also probably do optogenetic stimulation. It's not going to be simple, but I think it's worth it.