Some propositions for future spatial hearing research (II) - Tuning curves

In the previous post, I proposed to look at the ecological problem of sound localization, rather than the artificial and computationally trivial problem that is generally addressed. As regards physiology, this means that a neural representation of sound location is a property of collective neural responses that is unchanged for the class of stimuli that produce the same spatial percept. This is not a property that you will find at the single neuron level. To give a sense of what kind of property I am talking about, consider the Jeffress model, a classic model of sound localization. It goes as follows: each neuron is tuned to a particular location, and there are a bunch of neurons with different tunings. When a sound is presented, you identify the most active neuron, and that tells you where the sound comes from. If it is the same neuron that is most active for different sounds coming from the same location, then you have the kind of representation I am talking about: the identity of the maximally active neuron is a representation of (specifically) sound location.
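
To make this concrete, here is a minimal sketch (my own toy illustration, not a model from the literature) of decoding by maximally active neuron, with tuning curves that are identical up to translation. In that idealized case, the winner is the same for different sounds at the same location:

```python
import numpy as np

preferred = np.linspace(-90, 90, 50)   # preferred azimuths of 50 neurons (deg)

def population_response(location, gain):
    """Bell-shaped tuning curves, identical up to translation; `gain`
    stands in for non-spatial properties of the sound (e.g. level)."""
    return gain * np.exp(-(location - preferred)**2 / (2 * 30.0**2))

def decode(rates):
    # read out the preferred location of the maximally active neuron
    return preferred[np.argmax(rates)]

# Two different sounds (different gains) from the same location:
# the same neuron wins, so the decoded location is unchanged.
for gain in (1.0, 3.7):
    print(decode(population_response(20.0, gain)))   # ~20 deg both times
```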

The Jeffress model actually has this kind of nice property (unlike competitors), but only when you see it as a signal processing model (cross-correlation) applied to an idealized acoustical situation where you have no head (ie two mics with just air between them). What we pointed out in a recent paper in eLife is that it loses that property when you consider sound diffraction introduced by the head; quite intriguingly, it seems that binaural neurons actually compensate for that (ie their tunings are frequency-dependent in the same way as interaural time differences are frequency-dependent).
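
For reference, the signal processing version is just cross-correlation of the two ear signals. Here is a minimal sketch for the idealized no-head case (a pure delay between two microphones); the sampling rate and delay values are arbitrary choices:

```python
import numpy as np

fs = 44100                     # sampling rate in Hz (arbitrary)
d = 12                         # true ITD in samples (~0.27 ms at 44.1 kHz)
x = np.random.default_rng(1).standard_normal(fs)  # 1 s of source noise
left, right = x[d:], x[:-d]    # right ear delayed by d samples: source on the left

max_lag = 44                   # ~1 ms, roughly the physiological ITD range
lags = np.arange(-max_lag, max_lag + 1)
w = slice(max_lag, len(left) - max_lag)           # analysis window, avoids edges
# cross-correlation: how well does left(t) match right(t + lag)?
cc = [np.dot(left[w], np.roll(right, -lag)[w]) for lag in lags]
print(lags[int(np.argmax(cc))])                   # estimated ITD in samples -> 12
```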

But I want to discuss a more fundamental point that has to do with tuning curves. By “tuning curve”, I am referring to a measurement of how the firing rate of a neuron varies when one stimulus dimension is varied. Suppose that indeed you do have neurons that are tuned to different sound locations. Then you present a stimulus (of the same kind) and you look for the maximally active neuron. The tuning of that neuron should match the location of the presented stimulus. Right? Well, actually no. At least not in principle. That would be true if all tuning curves had exactly the same shape and peak value and only differed by a translation, or at least if shape and magnitude were not correlated with tuning. But otherwise it's just an incorrect inference. If you don't see what I mean, look at this paper on auditory nerve responses. Usually one would show selectivity curves of auditory nerve fibers, ie firing rate vs. sound frequency for a bunch of fibers (note that auditory scientists also use “tuning curve” to mean something else: the minimum sound level that elicits a response, as a function of frequency). Here the authors show the data differently, in Fig. 1: the responses of all fibers along the cochlea, for a bunch of frequencies. I bet that it is not what you would expect from reading textbooks on hearing. Individually, fibers are tuned to frequency. Yet you can't really pick the most active fiber and tell what sound frequency was presented. Actually, there are different frequencies at which the response peaks at the same place. It's basically a mess. But that is what the auditory system gets when you present a sound: the response of the entire cochlea to one sound, not the response of one neuron to lots of different stimuli.
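
Here is a toy illustration of that inference failure (hypothetical tuning curves, not the data of that paper): if the height of the tuning curves covaries with the preferred location, the maximally active neuron no longer points at the stimulus location:

```python
import numpy as np

preferred = np.linspace(-90, 90, 50)       # preferred azimuths (deg)

def population_response(location):
    # Hypothetical tuning curves whose height covaries with tuning:
    # neurons preferring rightward locations respond more strongly overall.
    gain = 1.0 + (preferred + 90) / 90.0   # gain grows with preferred azimuth
    return gain * np.exp(-(location - preferred)**2 / (2 * 40.0**2))

rates = population_response(0.0)           # sound straight ahead
print(preferred[np.argmax(rates)])         # decoded location is biased rightward (~7 deg)
```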

So, what about sound localization and binaural neurons: do we have this kind of problem or not? Well, I don't know for sure, because no one actually shows whether the shape of tuning curves varies systematically with tuning or not. Most of the time, one shows a few normalized responses and then extracts a couple of features of the tuning curves for each cell (ie the tuning in frequency and ITD) and shows some trends. The problem is that we can't infer the population response from tunings unless we know quite precisely how the tuning curves depend on tuning. That is particularly problematic when tuning curves are broad, which is the case for the rodents used in many physiological studies.

I see two ways to solve this problem. One is to prove that there is no problem. You look at tuning curves, and you show that there is no correlation between tuning and any other characteristic of the tuning curves (for example, average the tuning curves of cells with the same tuning, and compare across tunings). That would be quite reassuring. My intuition: that will work at high frequency, maybe, or in the barn owl perhaps (quite narrow curves), but not at low frequency, and not for most cells in rodents (guinea pigs and gerbils).
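
A sketch of that analysis, assuming the measured curves are available as a matrix (the data format and the choice of shape features are mine, for illustration): group cells by tuning and compare simple shape features across groups. If shape and magnitude are uncorrelated with tuning, the features should be flat across groups:

```python
import numpy as np

def compare_shape_across_tunings(curves, best_itd, n_groups=5):
    """curves: (n_cells, n_itds) firing rates; best_itd: (n_cells,) tunings.
    Group cells by tuning, then compare shape features across groups."""
    order = np.argsort(best_itd)
    for g in np.array_split(order, n_groups):
        peak = curves[g].max(axis=1)                           # peak responses
        width = (curves[g] > 0.5 * peak[:, None]).sum(axis=1)  # half-height width (bins)
        print(f"tuning {best_itd[g].mean():+.2f}: "
              f"peak {peak.mean():.2f}, width {width.mean():.1f}")

# Synthetic demo: identical shapes, just translated -> features are flat
# across groups, i.e. no correlation between tuning and shape.
itd = np.linspace(-1, 1, 81)
best = np.random.default_rng(2).uniform(-0.5, 0.5, 100)
curves = np.exp(-(itd[None, :] - best[:, None])**2 / 0.02)
compare_shape_across_tunings(curves, best)
```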

If it doesn't work and there are correlations, then the problem will get quite complicated. You could think of looking for a parametric representation of the responses. It's a possibility and one might make some progress this way, but it might become quite difficult to do when you add extra stimulus dimensions (level etc). There is also the issue of gathering data from several animals, which will introduce extra variability.

The only clean way I see of dealing with this problem is to actually record the entire population response (or a large part of the structure). It sounds very challenging, but large-scale recording techniques are really progressing quite fast these days. Very dense electrode arrays, various types of imaging techniques; it's difficult but probably possible at some point.

Some propositions for future spatial hearing research (I) – The ecological situation and the computational problem

In these few posts, I will be describing my personal view of the kind of developments I would like to see in spatial hearing research. You might wonder: if this is any good, then why would I put it on my blog rather than in a grant proposal? Well, I hesitated for a while, but there are only so many things you can do in your life, and in the end I would just be glad if someone picked up some of these ideas and made some progress in an interesting direction. Some of them are pretty demanding, both in terms of effort and expertise, which is also a reason why I am not likely to pursue all of them myself. And finally, I believe in open science, and it would be interesting to read some comments or have some discussions. All this being said, I am open to collaboration on these subjects if someone is motivated enough.

The basic question is: how do we (or animals) localize sounds in space? (this does not cover all of spatial hearing)

My personal feeling is that the field has made some real progress on this question but has now exploited all there is to exploit in the current approaches. In a nutshell, those approaches are: consider a restricted set of lab stimuli, typically a set of sounds that are varied in one spatial dimension, and look at how physiological and behavioral responses change when you vary that spatial parameter (the “coding” approach).

Let us start with what I think is the most fundamental point: the stimuli. For practical reasons, scientists want to use nice clean reproducible sounds in their experiments, for example tones and bursts of white noise. There are very good reasons for that. One is that if you want to make your results reproducible by your peers, then it's simpler to write that you used a 70 dB pure tone of frequency 1000 Hz than the sound of a mouse scratching the ground, even though the latter is clearly a more ecologically relevant sound for a cat. Another reason is that you want a clean, non-noisy signal, both for reproducibility and because you don't want to do lots of experiments. Finally, you typically vary just one stimulus parameter (e.g. azimuthal angle of the source) because that already makes a lot of experiments.

All of this is very sensible, but it means that in terms of the computational task of localizing a sound, we are actually looking at a really trivial task. Think about it as if you were to design a sound localization algorithm. Suppose all sounds are going to be picked up from a set of tones that vary along a spatial dimension, say azimuth. How would you do it? I will tell you how I would do it: measure the average intensity at the left ear, and use a table to map it to sound direction. Works perfectly. Obviously that's not what actual signal processing techniques do, and probably that's not what the auditory system does. Why not? Because in real life, you have confounding factors. With my algorithm, you would think loud sounds come from the left and soft sounds from the right. Not a good algorithm. The difficulty of the sound localization problem is precisely to locate sounds despite all the possible confounding factors, ie all the non-spatial properties of sounds. There are many of them: level, spectrum, envelope, duration, source size, source directivity, early reflections, reverberation, noise, etc. That's why it's actually hard and algorithms are not that good in ecological conditions. That is the ecological problem, but there is actually very little research on it (in biology). As I argued in two papers (one about the general problem and one applied to binaural neurons), the problem that is generally addressed is not the ecological problem of sound localization, but the problem of sensitivity to sound location, a much simpler problem.
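
To make the point concrete, here is a toy version of that left-ear algorithm (the acoustics are invented for the sake of illustration): it works perfectly on the restricted stimulus set, and fails exactly as described once the level varies:

```python
import numpy as np

rng = np.random.default_rng(3)
azimuths = np.linspace(-90, 90, 7)

def left_ear_level(azimuth, source_level=70.0):
    """Toy acoustics: the left-ear level (dB) decreases as the source
    moves to the right, plus a little measurement noise."""
    return source_level - 0.05 * azimuth + 0.1 * rng.standard_normal()

# "Training": tabulate the left-ear level for each azimuth, at a fixed 70 dB
table = np.array([left_ear_level(az) for az in azimuths])

def decode(level):
    return azimuths[np.argmin(abs(table - level))]   # nearest table entry wins

print(decode(left_ear_level(30.0)))         # same level as training: correct (~30)
print(decode(left_ear_level(30.0, 75.0)))   # louder source: decoded far to the left
```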

This state of affairs is very problematic in my opinion when it comes to understanding “neural representations” of sound location, or more generally, how the auditory system deals with sound location. For example, many studies have looked at the information content of neural responses and connected it with behavioral measurements. There are claims such as: this neuron's firing contains as much information about sound location as the entire organism. Other studies have claimed to have identified optimal codes for sound location, all based on the non-ecological approach I have just described. Sorry to be blunt, but: this is nonsense. Such claims would have been meaningful if we actually lived in a world of entirely identical sounds coming from different directions. And so in that world my little algorithm based on left ear intensity would probably be optimal. But we don't live in that world, and I would still not use the left-ear algorithm even if I encountered one of those sounds. I would use the algorithm that works in general, and not care so much about algorithms that are optimal for imaginary worlds.

What do we mean when we say that “neurons encode sound location”? Certainly we can't mean that neural responses are sensitive to location, ie that they vary when you vary sound location, because that would be true of basically all neurons that respond to sounds. If this is what we mean, then we are just saying that a sizeable portion of the brain is sensitive to auditory stimuli. Not that interesting. I think we mean, or at least we should mean, that neurons encode sound location specifically, that is, there is something in the collective response of the neurons that varies with sound location and not with other things. This something is the “representation”, and its most basic property is that it does not change if the sound location percept does not change. Unfortunately, that property cannot be assessed if all you ever vary in your stimulus is the spatial dimension, and so, in a nutshell: current approaches based on restricted stimulus sets cannot, by construction, address the question of neural representations of sound location. They address the question of sensitivity – a prerequisite, but really quite far from the actual ecological problem.

So I think the first thing to do would be to start actually addressing the ecological problem. This means essentially inverting the current paradigm: instead of looking at how responses (physiological/behavioral) change when a spatial dimension is varied, look at how they change (or at what doesn't change) when non-spatial dimensions are varied. I would proceed in 3 steps:

1) Acoustics. First of all, what are the ecological signals? Perhaps surprisingly, no one has measured that systematically (as far as I know). That is, for an actual physical source at a given location, not in a lab (say in a quiet field, to simplify things), what do the binaural signals look like? What is the structure of the noise? How do the signals vary over repetitions, or if you use a different source? One would need to do lots of recordings with different sound sources and different acoustic configurations (we have started to do that a little bit in the lab). Then we would start to have a reasonable idea of what the sound localization problem really is.
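
As a sketch of the kind of analysis this calls for (the data format and the `recordings` variable are hypothetical), one would extract binaural cues from each recording and look at their spread across repetitions and sources:

```python
import numpy as np

def binaural_cues(left, right, fs, max_itd=1e-3):
    """Estimate ITD (s, by cross-correlation) and broadband ILD (dB)
    from one binaural recording."""
    max_lag = int(max_itd * fs)
    lags = np.arange(-max_lag, max_lag + 1)
    w = slice(max_lag, len(left) - max_lag)
    cc = [np.dot(left[w], np.roll(right, -lag)[w]) for lag in lags]
    itd = lags[int(np.argmax(cc))] / fs
    ild = 10 * np.log10(np.mean(left**2) / np.mean(right**2))
    return itd, ild

# For repeated presentations from the same physical source, the spread of
# these cues across repetitions is (part of) the "noise" structure of the
# ecological localization problem (`recordings` is hypothetical):
# itds, ilds = zip(*(binaural_cues(l, r, fs) for l, r in recordings))
# print(np.std(itds), np.std(ilds))
```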

2) Behavior. The ecological problem of sound localization is difficult, but are we actually good at it? So far, I have not seen this question addressed in the literature. Usually, there is a restricted set of sounds with high signal-to-noise ratio, often noise bursts or clicks. So actually, we don't know how good we (or animals) are at localizing sounds in ecological situations. Animal behavior experiments are difficult, but a lot could be done with humans. There is some psychophysical research that tends to show that humans are generally not much affected by confounding factors (eg level); that's a good starting point.

3) Physiology. As mentioned above, the point is to identify what in neural responses is specifically about sound location (or more precisely, perceived sound location), as opposed to other things. That implies varying not only the spatial dimension but also other dimensions. That's a problem because you need more experiments, but you could start with one non-spatial dimension that is particularly salient. There is another problem, which is that you are looking for stable properties of neural responses, but it's unlikely that you will find them in one or a few neurons. So probably, you would need to record from many neurons (next post), and this gets quite challenging.

Next post is a criticism of tuning curves; and I'll end on stimulating vs. recording.

 

Update (6 Jan 2021): I am sharing a grant proposal on this subject. I am unlikely to do it myself, so feel free to reuse the ideas. I am happy to help if useful.

Why the textbook account of spike initiation is wrong

In previous posts, I have tried to explain why spike initiation is “sharp” or abrupt, basically as in an integrate-and-fire model. The explanation relies on the fact that spikes are initiated in the axon, next to the soma. What may not be so obvious is how this differs from the textbook account of spike initiation. That textbook account goes as follows: first, the spike is initiated at a distal site in the axon, through the interplay between transmembrane currents; then the spike is propagated to the soma. In the course of its propagation, its shape changes and becomes sharper (see the explanation in this paper for example).

This account is wrong on several levels. But fundamentally, it has to do with the inadequacy of the transportation metaphor. Let us take a step back. What do we mean when we say that a spike propagates along an axon? In fact, not much actually moves physically along the axon. There is very little ion movement in the longitudinal direction. Ions move mostly through the membrane, in the radial direction. It produces a spatiotemporal pattern of electrical activity, and the reason we say a spike propagates is that in certain conditions that pattern is a solitary wave for which we can define a velocity. This is actually how Hodgkin and Huxley predicted spike velocity in the squid giant axon, by postulating that the potential V(x,t) can be written as U(x − vt), and then looking for the velocity v that was consistent with their equations. This velocity has actually nothing to do with the velocity of the ions that carry the electrical charges. So basically we often describe electrical activity in axons in terms of transportation of spikes, but one should keep in mind that this is just a metaphor.
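
To make this concrete (this is just Hodgkin and Huxley's 1952 argument restated): the cable equation for an axon of radius a and axial resistivity R reads

(a/2R) ∂²V/∂x² = C ∂V/∂t + I_ion(V, m, h, n).

Substituting V(x,t) = U(x − vt) gives ∂²V/∂x² = (1/v²) ∂²V/∂t², so the partial differential equation becomes an ordinary differential equation in time alone:

(a/2Rv²) d²V/dt² = C dV/dt + I_ion(V, m, h, n).

Only the right value of v yields a solution that does not blow up, and that is how they computed the conduction velocity.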

Now when we say that there is a spike that moves and gets transformed on its way, we should realize that the metaphor has reached its limits. As nothing actually gets transported, we are not saying anything more than that there is an irregular spatiotemporal pattern of activity. That is, we are not saying anything at all. That is all the more true when we are talking of time scales shorter than the duration of a spike (as in the backpropagation from initiation site to soma).

The transportation metaphor, unfortunately, is highly misleading. It leads to incorrect reasoning in this case. Here is the reasoning in question. The onset of a spike at the axonal initiation site is smooth. The onset of a spike at the soma is sharp. Therefore, the spike onset gets sharper through the propagation from initiation site to soma. On the surface, the reasoning seems flawless. But now here is a disturbing fact: from the spike shape at the axonal initiation site, I can make a quantitative prediction of onset rapidness at the soma, without knowing anything of what's in between (that's in a paper being submitted). Therefore, onset rapidness at the soma is determined by properties of spike initiation, not of propagation. How can that be?

The flaw is this: in the transportation metaphor, somatic and axonal spikes are implicitly seen as local events both in space and time, which can then be related by the transportation metaphor (object at place A and time t1 travels to place B at time t2). As mentioned above, the relevance of the transportation metaphor is questionable at this spatial and temporal scale, all the more when what is being transported changes. What I showed in my previous paper and review is precisely that spike initiation is not local. It is determined both by the properties of local Na channels and by resistive properties of the coupling between soma and axonal initiation site, which are not local. For example, if you moved the Na channels away from the soma, the neuron would become more excitable, even though local properties at the initiation site would be identical. Spike initiation is not a local phenomenon because of the proximity of the axonal initial segment to the big somatodendritic compartment.

Thus the sharpness of somatic spikes is actually determined not by propagation properties, contrary to what was claimed before, but by spike initiation properties. The catch is that those properties are not local but rather encompass the whole soma-initial segment system.

I had previously called this biophysical explanation the “compartmentalization hypothesis”. I realize now that this can be very misleading because physiologists tend to think in terms of initiation in one compartment and transportation from one compartment to the other. I will now use a different terminology: “critical resistive coupling”, which emphasizes that spike initiation relies on a system of coupled compartments.

Back from a panel discussion on connectomics

I just came back from a panel discussion on connectomics between Moritz Helmstaedter and myself, in the Donders summer school on neural metrics. I will share my presentation when I figure out how to upload a 50 MB file on this website! (there are a few videos) In my presentation, I essentially point out that connectivity, especially anatomical connectivity (as opposed to functional connectivity), generally tells you very little about function. In particular, it doesn't help you distinguish between general theories of nervous function (say, liquid state machines or attractor networks), because those theories could easily accommodate very different types of connectivity (as long as, say, network connectivity is recurrent).

What came up in reaction to that remark is classical Popperianism. That is, the notion that a scientific theory should aim for critical experiments, experiments that could immediately disqualify that theory. So theories of brain function ought to make falsifiable predictions about connectivity, and if they don't they hardly qualify as scientific theories (the term “laziness” was used).

I have two remarks. First of all, the idea that scientific theories are theories that are falsifiable by critical experiments, and that scientific progress is essentially the performing of those experiments, is an idea that dates back to Popper's 1934 book, and a few things have been written since then. Apparently many scientists are philosophically stuck in the 1930s. Thomas Kuhn's historical analysis shows that it is rarely the case that science progresses in this way. Of course it happens sometimes, but it's not the generic case. There are good reasons for that, which have been analyzed by philosophers such as Imre Lakatos. The basic remark is that a scientific theory is one that is falsifiable (the “demarcation criterion”), yes, but in practice it is also one that is falsified. There are always countless observations that do not fit the theoretical framework, and those are just ignored, or the theory is amended with ad hoc assumptions, which might later be explained in a more satisfactory way (e.g. the feather falls more slowly than the hammer because of some other force, let's call it “friction”). So it is very rare that a single experiment can discard a broad theory, because the outcome can often be accommodated by the theory. This can seem like a flaw in the scientific discovery process, but it's not: it's unavoidable if we are dealing with the complexity of nature; an experimental outcome can be negative because the theory is wrong, or because, say, there might be a new planet that we didn't know about (“let's call it Pluto”). This is why science progresses through the long-term interaction of competing theories (what Lakatos calls “research programs”), and this is why insisting that scientific theories should produce critical experiments is a fundamental epistemological error. Anyone who has spent a little time in research must have noticed that most hypothesis-driven papers actually test positive predictions of theories, the success of which they interpret as support for those theories.

The second remark is that, nonetheless, there is a bit of truth in the claim that theories of neural network function are difficult to confront with experiments. Certainly they are not very mature. I wouldn't say it is out of laziness, though. It is simply a very difficult task to build meaningful theories of the brain! But it is absolutely not true that they are not constrained enough because of the lack of data. Not only are they constrained, but I do not know of any such theory that is not immediately falsified by countless observations. There is not a single model of brain function that comes close to accounting for the complexity of animal behavior, let alone of physiological properties. How many theories in systems neuroscience are actually about systems, i.e. about how an organism might interact with an ecological environment, as opposed to describing responses of some neurons to some stimuli, interpreted as “code”? The biggest challenge is not to distinguish between different theories that would all account for current data (none does), but to build at least one that could qualify as a quantitative theory of brain function.

Importantly, if this diagnosis is correct, then our efforts should rather be spent on developing theories (by this I mean broad, ambitious, theories) than on producing yet more data when we have no theoretical framework to make use of them. This will be difficult as long as the field lives in the 1930s when it comes to epistemology, because any step towards an ambitious theory will be a theory that is falsified by current data, especially if we produce much more data. Can you make a scientific career by publishing theories that are empirically wrong (but interesting)? As provocative as it might sound, I believe you should be able to, if we ever want to make progress on the theory of brain function – isn't that the goal of neuroscience?

Sharpness of spike initiation explained with photos of mountain roads

In a previous post, I tried to explain the idea that spike initiation is “sharp” using a few drawings. The drawings represent a ball on a hilly landscape. The position of the ball represents the state of the system (neuron), and the altitude represents its energy. The ball tends to reside in positions of local minima of the energy. Spiking occurs when some energy is added to the system (stimulation with electrode or synaptic input) so that the ball is moved past a hill and then falls down:

Sharp-initiation-2

What is not seen in this drawing is what happens next. The membrane potential is reset and the neuron can fire again, in principle, but in the drawing the ball ends up in a lower valley than at the start. Biophysically, this corresponds to the fact that part of the energy stored in the electrochemical gradients of ion concentrations across the membrane has been released by the spike, and so indeed the energy is now lower. Spiking again would mean falling down to an even lower valley. But since energy is finite, this has to stop at some point. In reality it doesn't stop, because the neuron slowly rebuilds its electrochemical energy by moving ions against their concentration gradients with the sodium/potassium pump, which requires external energy (provided by ATP). In the drawing, this would correspond to slowly moving up from the low valley to the high valley. Unfortunately you can't represent that in two dimensions; you need a road spiraling up around a hill. This is the best photographic illustration I have found:

Tianmen-road

This is a road in China (Tianmen mountain). The neuron starts on top (red ball). With some external energy, the ball can be moved up past the barrier and then it falls. It lands on the road, but on a lower level. It can be made to spike again. Slowly, continuously, some energy is added to the neuron to rebuild its electrochemical gradients: the ball moves up the road. As long as spikes are rare enough, the neuron can continue to spike at any time (ideally, the road should spiral around the hill but it's close enough!).

Smooth spike initiation would correspond to something like that (a road in the Alps):

Alps-road

Are there alternatives to animal research?

Recently, Nikos Logothetis announced that he quit primate research because of lack of support against animal activism. Logothetis is a highly respected scientist; one of his contributions is the understanding of the relation between fMRI, a non-invasive brain imaging technique, and neural electrical activity, as recorded with electrodes. In other words, his work allows scientists to use non-invasive recording techniques in experiments instead of implanting electrodes into animal brains; in particular, it allows scientists to use humans instead of animals in some experiments. Quite paradoxically, animal activists chose him as a target in September 2014, when an animal rights activist infiltrated his lab to shoot a movie that was broadcast on German television. The movie showed a rare emergency situation following surgery as if it were the typical situation in the lab, and it showed stress behaviors deliberately provoked by the activist (yes: the activist himself intentionally induced stress in the animal and then blamed the lab!).

This case raises at least three different types of questions: political, moral and epistemological. Here I mainly want to address the epistemological question, which is the role of animal experiments in knowledge, but let me first very briefly address the two other questions.

First, the political question: it's anybody's right to think that animals should not be used for research, but it's an entirely different thing to use manipulation, slander and terrorism rather than collective social debate for that end. There is ongoing debate about the conditions of animal experiments in research and there are strict regulations enforced by governmental bodies. As far as I know, Logothetis followed those regulations. Maybe regulations should be stricter (the moral question), but in any case I don't think it's right to let minority groups impose their own rules by fear.

Second, the moral question: is it right to use animals (either kill them or inflict pain or suffering) for the good of our own species? Like all moral questions (see Michael Sandel's excellent online lectures), this is a very difficult question and I'm not going to answer it here. But it's useful to put it in context. About 10 billion land animals (mostly chicken) are killed every year in the US for food. We are all aware that the living conditions of animals in industrial farms are not great, to say the least. According to official data, there are about 1 million warm-blooded animals used for research every year in the US (of which a fraction are killed), excluding mice and rats, which might account for about 10 times more. Animal use is much more tightly regulated in research than in industrial farming (for example, any research project needs to be approved by an ethical committee). So animals used in research represent no more than about 0.1% of all animals that are killed by humans. In other words, the effect of banning animal research would be equally obtained by having Americans eat about 66.9 grams of chicken every day instead of their 67 gram daily consumption. This does not answer the general moral question, but it certainly puts the specific question of animal experimentation in perspective.

Finally, the epistemological question: what kind of science can we do without animal experiments? A number of animal rights activists seem to think that there are alternatives to animal experiments. On the other hand, many (if not most) scientists claim that virtually every major discovery, especially in medicine, has relied on animal experimentation. Since I'm a theoretical neuroscientist, I certainly think that a lot of science can be done without experiments. But it is definitely true that our civilization would be entirely different if we had not done animal experiments. And I'm not just talking about medicine or even biology. To give just one example, the invention of the electrical battery by Volta was triggered by experiments on frogs done by Galvani in the 18th century.

Probably the failure to recognize this fact comes from a general misconception that the public has about the nature of research, the idea that discoveries come from specific projects aimed at making those discoveries, as if research projects were engineering projects. According to that misconception, there is medical or applied research, which is useful, and there is basic research, which does not seem to pursue any useful goal, apart from satisfying the curiosity of scientists. But the fact is: it is inaccurate to speak of “applied research”; the right term would be applications of research. How can you invent a light bulb if you have never heard about electricity? Similarly, a lot of what we know about cancer comes from basic research on cells.

So it's clear that a large part of our knowledge, and consequently our current welfare, is linked to previous animal experimentation. Now there is a different question, which is: what can we do now without animal experimentation? First of all, there can be no new pharmaceutical drugs without animal experimentation. Any drug put on the market must pass a series of tests: first test the drug on an animal model, then test on humans for safety, then test on humans for efficacy. Most drugs do not pass those tests. If we ban animal tests, then we have to accept that an equivalent number of humans will face side effects or die. I don't think society will ever value animal life more than human life, so that means no more drugs would ever be tested.

This does not mean that all aspects of medicine rely on animal experimentation. For example, it would certainly be quite useful to develop prevention, and this could be done with epidemiological studies (although they are certainly quite limited), or with intervention studies in humans (e.g. experimenting with the diet of human volunteers). So prevention could progress without animal experimentation (although animal experiments would certainly help), but progress of cures, probably not. By pointing this out, I am not making a moral or political statement: you could well argue that, after all, better prevention might save more lives than better cures and so perhaps we should focus on the development of prevention anyway and get rid of animal experimentation. But one has to keep in mind that it means essentially giving up on curing diseases, including new infectious diseases (and those happen: in 1918, Spanish flu killed 50-100 million people).

How about basic research, which, as I pointed out above, is the basis of any kind of application? There is of course a lot of interesting research that does not rely on animal experimentation. But if you want to understand biology, you need at some point to look at and interact with biological organisms. What kind of alternative can there be to animal experiments in biology? First of all, let me make a very elementary observation: to demonstrate that something can be used as an alternative to animal experiments, you first need to compare that thing with animal experiments and so you need to do animal experiments at least initially. This is basically what Logothetis did when comparing fMRI signals with electrophysiological signals in monkeys.

I can think of only two possible alternatives: human experiments and computer models. Let's start with human experiments. I have read somewhere that we could use human stem cells for biological experiments. This may well apply to a number of studies in cellular biology. But a cell is not an organism (except for unicellular organisms), so you can't understand human biology by just studying stem cells. Human stem cells won't help you understand how the brain recovers after a stroke (or at least not without other types of experiments), or how epileptic seizures develop. You need at some point to use living organisms, and so to do experiments on living humans, not just human cells. But the kind of experiments you can do on living humans is rather limited. For example in neuroscience, which is my field, you cannot directly record the activity of single neurons, except during surgery on epileptic patients. These days a lot of human experiments are done using fMRI, which is an imaging technique that measures metabolic activity (basically oxygen in the blood vessels of the brain) at a gross spatial and temporal scale. Then to interact with a human brain, there are not many ways. Essentially, you can present something through normal sensory stimulation, for example images or sounds. The kind of information we get with that is: that brain area is involved in language. Useful maybe, but that won't tell us much about how we speak, and certainly not very much about Alzheimer's and Parkinson's diseases. A lot of modern biology uses mice with altered genomes to understand the function of genes (e.g. you suppress a gene and see what difference it makes to the organism, or to the development of pathologies). Obviously this can't be done in humans. In fact, even if it were ethically right, it would not even be practical because human development is too long. So, in most cases, animal experiments cannot be replaced by human experiments. Again this is not a political or moral statement: it's anybody's right to think that animal experiments should be banned, but I'm simply pointing out that human experiments are not an alternative.

The other potential alternative is to use computer models instead of animals to do “virtual experiments”. As I am a computational neuroscientist, I can tell you with very high confidence that there is no way that a computer model might be useful for that purpose either now or in the foreseeable future. It is unfortunate that some scientists have occasionally claimed otherwise, perhaps exaggerating their claims to get large funding. We still have no idea how a genome is mapped to an organism. In fact, even when we have the complete sequence of a gene, and therefore the complete specification of the composition of the protein it encodes, we don't know what the protein will look like in a real cell, how it will fold - let alone the way it interacts with other constituents of the cell. And so we are far, very very far, from having something that remotely looks like a functional computer model of a cell, let alone of an organism. Today the only way to know the function of a gene is to express it in a living organism. The same is obviously true of the brain. No one has ever reported that a simulated neural network was conscious, so something quite important must be missing. We have some good (but still incomplete) knowledge of the electrophysiology of neurons, but little knowledge of how they wire, adapt and coordinate to produce behavior and mind. Computer models are not used to simulate actual organisms – not one computer model has ever been reported to live or think. They are used to support scientific theories, which are built through theoretical work combined with empirical (experimental) work (you can have a look at this series of posts on the epistemology of computational neuroscience, starting with this one).

So no, animal experiments cannot (for most of them) be replaced by human experiments or computer simulations. A lot of research can be done without animal experimentation, but it is a distinct type of research and it cannot answer the same questions. Of course, one can consider that those questions are not worth answering, but this is a moral and political debate, which I have just scratched in this post. But as far as epistemological questions are concerned, it is pretty clear that there cannot be much research in biology without animal experiments.

We may agree that research in biology is important enough (say, at least as important as 0.1% of the chicken we eat) and still care about the welfare of animals, try to reduce the number of animals used in research, and reduce their suffering. This is of course partly the job of ethical committees. Personally, I think one relatively simple way in which we could make animal experiments either less frequent or more useful is to impose that labs make all the experimental data they acquire publicly available, so that other scientists can use them (possibly after some embargo period). Today, for a number of reasons, the only data that come out of a lab are in the publications, generally in the form of plots, while tons of potentially useful data remain on hard drives, hidden from other scientists (see my previous post on this question). I think there is increasing recognition of that issue and potential, as seen in the emergence of data sharing infrastructures (e.g. CRCNS or the Allen Brain Atlas) and in the new data sharing policies of some journals (e.g. Neuron or PLoS).

Sharpness of spike initiation explained with a few drawings

In cortical neurons, and probably in most vertebrate neurons, spikes recorded at the soma look “sharp”, ie, the voltage suddenly rises, unlike the more gradual increase seen in standard biophysical models (eg the Hodgkin-Huxley model of the squid giant axon). Recently I reviewed that phenomenon and current explanations. My own explanation, which I called the “compartmentalization of spike initiation”, is related to the fact that the main outward current at spike initiation (where “outward” is relative to the initiation site) is not the transmembrane K+ current but the axial current flowing between the axonal initiation site and the soma (see the original paper where I explain it). An important consequence is that the proportion of open Na+ channels is a discontinuous function of somatic voltage. In other words, spikes are initiated as in an integrate-and-fire model.
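
One way to picture quantitatively what “sharp as in an integrate-and-fire model” means (my own illustration, using the standard exponential integrate-and-fire model rather than the model of the paper): the slope factor deltaT sets how abruptly the sodium current turns on, and deltaT -> 0 is the strict-threshold limit:

```python
import numpy as np

# Exponential integrate-and-fire: tau*dV/dt = -(V-EL) + deltaT*exp((V-VT)/deltaT)
# deltaT measures onset sharpness; deltaT -> 0 is the hard-threshold limit.
EL, VT = -70.0, -50.0                    # resting and threshold parameters (mV)
V = np.linspace(-60, -40, 9)             # voltages around threshold
for deltaT in (3.0, 0.3):                # broad vs sharp sodium activation
    f = -(V - EL) + deltaT * np.exp((V - VT) / deltaT)   # this is tau*dV/dt
    print(f"deltaT={deltaT}:", np.round(f, 1))
# With deltaT = 0.3 mV, the sodium term is negligible below VT and explodes
# just above it: the spike turns on almost discontinuously.
```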

I came up with a few simple illustrations to explain what “sharp” initiation means. Imagine that the somatic membrane potential is represented by a ball moving on a landscape:

Spike-initiation-1
Synaptic inputs can make the ball move to the right (red, excitation) or to the left (blue, inhibition). If the input moves the ball past the top of the hill, then the ball will continue going down the hill without any additional excitation: it's a spike.

Spike-initiation-2
This is more or less the standard account of spike initiation. Note that if the ball is just past the hill, then it is possible to make it move back with an appropriate amount of inhibition. Sharpness here is represented by the steepness of the slope: a steeper slope will give faster spikes, and will also make it more difficult to go back once the hill has been passed.

What I proposed is that the correct drawing is rather this:

Sharp-initiation-1
When the ball goes past the hill, the ball falls:

Sharp-initiation-2
This occurs quickly, and importantly, there is a point after which there is no possible coming back. So the situation is qualitatively different from the standard account of spike initiation: it's not just that the slope is more or less steep, but rather there is a ravine.

You will note that there is a short distance between the top of the hill and the ravine, where the ball is pushed towards the ravine but not irreversibly. This corresponds to the persistent sodium current near spike initiation, which comes from the axon.

A comment on neural representations

I noticed by chance that my previous blog post on the metaphor of neural representations has been discussed on reddit. It appeared that my argument that the representation metaphor is often misused in neuroscience was not fully understood. I will try to respond to those comments here. Here is one comment:

“The brain is a representational system because it takes stimuli from the world, transduces it to a neural signal, and then acts on it.”

What is meant here is that perception is indirect, in the sense that it is mediated by the activity of neurons. This is obviously true if perception arises from the activity of the nervous system. It is also adequate to call, say, retinal activity a representation, but only in the sense that it is a representation for an external observer. For the brain, that activity is everything it will ever “see” of the world, so it is not a representation, it is the actual input. The problem is that the case for neural representations is generally made (as in the above quote) from the point of view of the external observer, where it is a trivial statement (the outside world and neural firing are different things), but then there is a semantic shift in which neural activity is assumed to form representations for the brain, which is an entirely different claim, and a much more difficult one to back up or even make sense of.

Another comment actually illustrates this point:

“Suppose I'm looking at dots on a radar screen for things which are underwater. If I can never actually go underwater to compare the dots with the original stimuli, are the dots merely a ‘presentation’ rather than a ‘representation’? I don't think so...”

Well actually: if all you ever had the chance to see in your life were those dots, then indeed they would not be representations for you, they would just be dots on the screen. They become representations once you know they can stand for submarines or whales.

There is another sense of representations that is a bit less trivial, and which was posted as a comment to my post:

“Abilities like speech perception would be impossible without representation, as each instantiation of a word is unique (noisy).”

What is meant here is that representations are needed for the formation of perceptual categories. But here the term “representation” is inadequate. A sculpture of a man is not a category of man, it's just a piece of stone that looks like a man. What is meant here is rather abstraction, not representation.

The compartmentalization of spike initiation

A couple of years ago, I proposed a new view on the initiation of spikes, which explains why spike initiation is “sharp” - i.e., spikes seem to rise suddenly rather than gradually. I reviewed that hypothesis recently along with two other hypotheses. I have found it quite difficult to explain it in a simple way, without relying on the equations. After reading the description of spike initiation in a textbook (Purves), I came up with a possibly simpler explanation.

In Purves et al. “Neuroscience”, you find the following description, which is based mostly on the work of Hodgkin and Huxley on squid giant axons:

The threshold is that value of membrane potential, in depolarizing from the resting potential, at which the current carried by Na+ entering the neuron is exactly equal to the K+ current that is flowing out. Once the triggering event depolarizes the membrane beyond this point, the positive feedback loop of Na+ entry on membrane potential closes and the action potential “fires”.

This description corresponds to an isopotential model of a neuron. There is an ambiguity in it: initiation actually occurs when the sum of the Na+ current and the stimulation current (electrode or synaptic) equals the K+ current, so the description is correct only if the stimulation current is a short pulse (otherwise, for a current step, the threshold would be lower).
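
In equations: for an isopotential membrane, C dV/dt = I_Na(V) + I_stim − I_K(V) (counting currents as positive when they depolarize the membrane), so the threshold, the unstable point where dV/dt = 0, satisfies I_Na(V*) + I_stim = I_K(V*). This reduces to the textbook condition I_Na = I_K only when I_stim = 0 at the crossing, ie after a short pulse; with a sustained step, the balance occurs at a lower voltage.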

The active backpropagation hypothesis, put forward by David McCormick to explain the sharpness of spike initiation, goes as follows: a spike is initiated as described above in the axon, and is then actively backpropagated (that is, with Na channels) to the soma. On its way to the soma, its shape becomes sharper. I discussed in my review why I think this explanation is implausible, but that is not the point of this post.

The compartmentalization hypothesis that I proposed differs in an important way from the textbook explanation above. The site of initiation is very close to the soma, which is big, and the axonal initial segment is very small. This implies that the soma is a “current sink” for the initiation site: this means that when the axon is depolarized at the initiation site, the main outgoing current is not the K+ current (through the axonal membrane) but the resistive current to the soma. So the textbook description is amended as follows:

The threshold is that value of membrane potential, in depolarizing the soma from the resting potential, at which the current carried by Na+ entering the axonal initial segment is exactly equal to the resistive current that is flowing out to the soma. Once the triggering event depolarizes the membrane beyond this point, the positive feedback loop of Na+ entry on membrane potential closes and the action potential “fires”.

The difference is subtle but has at least two important consequences. The first is that the voltage threshold does not depend on the stimulation current, so the concept of a voltage threshold really does make sense. The second is that the positive feedback is much faster with compartmentalized initiation. The reason is that in the isopotential case (the textbook description above), the charging time is the product of membrane resistance and membrane capacitance, which is a few tens of milliseconds, while in the compartmentalized case, it is the product of the axial resistance and the membrane capacitance of the axon initial segment. That capacitance is small because the surface of the initial segment is small (and the axial resistance is not proportionally larger). This makes the charging time several orders of magnitude smaller in the compartmentalized case.
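
An order-of-magnitude sketch of that comparison; the numerical values below are illustrative assumptions, not measurements:

```python
# Order-of-magnitude comparison of the two charging times. The values below
# are illustrative assumptions, not measurements.
Rm   = 100e6    # somatic membrane resistance: ~100 MOhm
Cm   = 200e-12  # somatic membrane capacitance: ~200 pF
Ra   = 400e3    # axial resistance between soma and initiation site: ~400 kOhm
Cais = 2e-12    # capacitance of the small axon initial segment: ~2 pF

print(Rm * Cm)    # isopotential case: 0.02 s, i.e. tens of milliseconds
print(Ra * Cais)  # compartmentalized case: 8e-07 s, i.e. below a microsecond
```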

Metaphors in neuroscience (IV) - Plasticity

The next metaphor I examine is the brain is plastic. In particular, synapses are plastic: they change with activity. But this is not the same thing as saying that synapses are dynamic. Saying that synapses are plastic means that synapses are objects that can change shape while keeping the same substance; specifically, they can be manipulated into different shapes. Plasticity is a possibility for change that is 1) limited, in that only the shape and not the substance is changed, 2) persistent, 3) reversible, and 4) mediated by an external actor. For example, cell death is a change but it is not plasticity; developmental changes are also not considered plasticity even though they can be activity-dependent. These two examples are irreversible changes and therefore not cases of plasticity. Internal changes entirely mediated by intrinsic events would not normally be called plasticity either. Transient changes would also not be called plasticity: for example, a change in spike threshold after firing is called adaptation or accommodation, not plasticity.

This is quite clearly a metaphor, which carries a particular view of how neural structures change. For example, part of what we describe as synaptic plasticity actually corresponds to the elimination of synapses or of receptors (synaptic pruning), and therefore might be better described by the sculpting metaphor. The metaphor also hides the fact that the substance that makes up all those structures is continually renewed (protein turnover), which is quite different from a plastic object. The persistence of shape (e.g. of a synapse) is mediated by active processes (which involve gene expression), as opposed to the passive persistence of a plastic object. Changes of shape then involve interaction with those processes, rather than direct manipulation.