This week's paper selection (6 October 2016)

 

Comments on the soliton model of action potential

Recently a radically new biophysical theory of action potentials has been proposed, which I will call here the “soliton theory”, according to which the action potential is a travelling pulse (soliton) of lateral density in the lipid membrane, i.e., a sound wave along the axon membrane (Heimburg and Jackson, 2005; Andersen et al., 2009). In this theory, the lipid membrane undergoes a phase transition, from liquid to gel, which propagates along the axon. The electrical spike is then attributed to piezoelectric effects (mechanical changes inducing electrical potential changes). No role is attributed to proteins (ionic channels).

I start with some positive comments. Usually only the electrochemical aspects of neural excitability are considered in the field. But it is known that the electrical phenomenon is accompanied by mechanical, optical and thermal effects, which are the main focus of the soliton theory. The theory thus has the merit of bringing attention to non-electrical phenomena in biological membranes, such as structural changes and mechanical effects, which are typically ignored.

The theory appears to be motivated by the observation of reversible heat release during the action potential, i.e., heat is released in the rising phase of the spike and absorbed in the falling phase. Quantitatively, release and absorption have the same magnitude (experimental precision being in the range of 10-20% according to the original papers). This is not explained by HH theory. It does not mean, however, that it is contradictory with HH theory; rather, it would require some additional mechanism that is not part of the theory (there are some speculations in the literature, by Hodgkin, Keynes, Tasaki, and probably others). HH theory does not directly address mechanical changes accompanying the spike, but it does imply mechanical changes by at least two plausible mechanisms: 1) osmotic effects, i.e., water enters the cell along with the Na+ influx and exits with the K+ outflux, leading to a diameter variation in phase with the spike (see e.g. Kim et al., 2007); 2) an electrical field applied on the membrane can change the curvature of the membrane. The appeal of the soliton theory is that a sound wave produces reversible heat and mechanical variations; electrical variation is attributed to piezoelectric effects and therefore all these effects should be in phase. Thus in a way, it has some theoretical elegance. Of course, the actual biophysical mechanisms are not necessarily elegant.

Let us now examine the premises and predictions of the theory. First, it is assumed that the lipid membrane is close to a melting transition, which the authors claim occurs slightly below the body temperature of 37°C. It is rather surprising to read this starting point when the theory is meant to address the shortcomings of the HH model. Let us recall that the HH model is a model of the giant axon of the squid, which is a cold-blooded animal living in the ocean. Its body temperature is thus much colder and more variable. But this is not the most problematic aspect of the theory; let us assume for now that the squid membrane does have the required property.

The main quantitative prediction of the theory is conduction velocity, which follows from membrane properties, and it is calculated to be around 100 m/s. The conclusion is that there is “a minimum velocity of the solitons that is close to the propagation velocity in myelinated nerves”. First, the squid axon is not myelinated (and it is not clear why the theory should apply to myelinated nerves rather than unmyelinated ones), and its conduction velocity is around 20 m/s. In any case, 100 m/s is not the propagation velocity in myelinated nerves. It is the upper bound of conduction velocity in nerve, which varies over several orders of magnitude and is much smaller for most axons. It actually varies with diameter, quite in line with predictions from HH and cable theory (scaling with the square root of diameter for unmyelinated axons; with diameter for myelinated axons). One of the main predictions of the 1952 HH paper where the model is described is conduction velocity, which is accurate within 20% (Hodgkin and Huxley, 1952); to be compared with a 500% error in the soliton theory. The prediction was calculated as follows (see chapter 3 of my book in progress): the HH model was built and fitted on a space-clamped (isopotential) squid axon; then it was extended to a model of a propagating spike with the cable formalism, i.e., by adding the axial current term (based on measurements of diameter and intracellular resistivity); then the model was run and a conduction velocity of around 20 m/s was found.
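To make the diameter scaling concrete, here is a toy sketch of the two scaling laws. The proportionality constants are rough illustrative values (the myelinated factor of ~6 m/s per µm is a common rule of thumb; the unmyelinated constant is simply chosen so that a 500 µm squid axon gives roughly 20 m/s), not fitted parameters:

```python
import math

def conduction_velocity(diameter_um, myelinated,
                        k_unmyelinated=1.0, k_myelinated=6.0):
    """Illustrative scaling of conduction velocity (m/s) with axon diameter.

    Unmyelinated axons: v ~ k * sqrt(d); myelinated axons: v ~ k * d.
    The constants k are illustrative, not fitted to data.
    """
    if myelinated:
        return k_myelinated * diameter_um
    return k_unmyelinated * math.sqrt(diameter_um)

# A 500 um unmyelinated squid giant axon vs a 10 um myelinated fibre:
v_squid = conduction_velocity(500, myelinated=False)  # ~22 m/s with k=1
v_myel = conduction_velocity(10, myelinated=True)     # 60 m/s with k=6
```

The point is only that velocity spans orders of magnitude with diameter, which is incompatible with a theory predicting a fixed minimum velocity of ~100 m/s.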

If one of the main predictions of the soliton theory is a minimum conduction velocity of around 100 m/s, then it is definitely wrong. There are of course many other aspects of the theory that are very problematic. HH theory is essentially the ionic hypothesis, i.e., the idea that changes in membrane potential are due to ionic and capacitive transmembrane currents. There have been numerous quantitative tests of this hypothesis: the peak of the spike is well predicted by the Nernst potential of Na+; the influx of Na+ and outflux of K+ per spike (measured with radioactive tracers) are predicted by the HH model; fluorescence imaging now shows influxes of Na+ in phase with spikes. Early work by HH and colleagues showed that the squid axon is inexcitable when extracellular Na+ is replaced by choline. All this body of work, which includes many accurate quantitative predictions of HH theory, contradicts the soliton theory. The authors seem to deny the existence of ionic channels, which is extremely strange. The detailed molecular and genetic structure of ionic channels is known, as well as their electrophysiological properties (see e.g. Hille, 2001); drugs targeting Na+ channels (for which there is huge empirical evidence) block action potentials. There is also the Na/K pump, a major contributor to energy consumption in neurons, which maintains the Na+/K+ concentration gradients required by the ionic hypothesis, and which makes no sense in the soliton theory.
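The first of these tests is a one-line computation. The Nernst potential follows from the concentration gradient alone; the concentrations below are typical textbook values for the squid axon, used here purely for illustration:

```python
import math

def nernst_potential_mV(c_out, c_in, temperature_C=20.0, z=1):
    """Nernst (equilibrium) potential of an ion, in mV.

    E = (RT / zF) * ln(c_out / c_in)
    """
    R = 8.314    # gas constant, J/(mol K)
    F = 96485.0  # Faraday constant, C/mol
    T = temperature_C + 273.15
    return 1000.0 * (R * T) / (z * F) * math.log(c_out / c_in)

# Typical squid axon Na+ concentrations (mM): ~440 outside, ~50 inside
E_Na = nernst_potential_mV(440, 50)  # about +55 mV, close to the spike peak
```

The measured spike peak overshoots 0 mV and approaches this value, as the ionic hypothesis predicts; the soliton theory offers no comparable quantitative account.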

As it stands, the soliton theory of spikes is contradictory with an extremely large body of experimental evidence, which is explained by HH theory (i.e., the ionic hypothesis) (note that there have been a number of alternative theories, e.g. by Ling and Tasaki).

 

Update (21.7.2016): A great resource about the evidence in favor of the ionic hypothesis is (Hodgkin, 1951). There is in fact a section dealing with heat production, where it is noted, incidentally, that overall nerve activity does produce heat, but a quite small amount. It is also clear in the text that heat production in the ionic hypothesis is not that of the equivalent electrical circuit that is used to present the HH model – the circuit is only equivalent in terms of the mathematical equations describing the currents, not physically. That is, the axial current does follow the expectation from the electrical circuit, since in the theory it is due to the electrical field in an electrolyte, but not the transmembrane current, which corresponds to a mixing of the extra- and intracellular solutions in addition to field effects (plus the then-unknown mechanisms of permeability changes).

 

Andersen SSL, Jackson AD, Heimburg T (2009) Towards a thermodynamic theory of nerve pulse propagation. Prog Neurobiol 88:104–113.

Heimburg T, Jackson AD (2005) On soliton propagation in biomembranes and nerves. Proc Natl Acad Sci U S A 102:9790–9795.

Hille B (2001) Ion Channels of Excitable Membranes. Sinauer Associates.

Hodgkin A, Huxley A (1952) A quantitative description of membrane current and its application to conduction and excitation in nerve. J Physiol Lond 117:500–544.

Hodgkin AL (1951) The Ionic Basis of Electrical Activity in Nerve and Muscle. Biol Rev 26:339–409.

Kim GH, Kosterin P, Obaid AL, Salzberg BM (2007) A mechanical spike accompanies the action potential in Mammalian nerve terminals. Biophys J 92:3122–3129.

Perceiving and knowing

Perceiving space is knowing where things are in the world. Or is it?

I am sitting in my living room, and there are big windows on a courtyard. The windows are sound-proof and so if I open just one, acoustical waves mostly enter the room through that window. Now someone enters the courtyard on the right, walks across it and arrives at the door on the left. If I close my eyes, I know that the person is walking from right to left. However, what I hear is the sound of someone walking, always coming from the same direction, that of the window. If someone asks me where the person is at a given moment in time, I could point to the more or less correct direction, by inference. But this is not what I perceive. I always perceive the sound coming from the same direction. There is a difference between perceiving (phenomenological) and knowing (conceptual). And there is a difference between phenomenology and behavior.

Another striking example is referred pain. Referred pain is pain that one feels at a location away from the site of the injury. For example, in a heart attack, one may feel pain in the arm rather than in the chest. This is a known phenomenon and if you know it, you may correctly identify the location of injury in the heart when you feel pain in the arm. But it doesn't change the fact that you feel pain in the arm. You may entirely convince yourself that the injury is in the heart, and all your behavior might be consistent with that belief, but still you will feel the pain in the arm.

There are several interesting conclusions we can draw from these remarks. First, perception is not entirely reducible to behavior. Here we are touching the hard problem of consciousness (qualia): you could observe a cat turning its head to a sound source and you would think that the cat perceives that the sound came from the source, but in reality you don't know. Maybe the cat perceives it somewhere else but it corrects its movement because it knows its perception tends to be biased. With humans, you could perhaps distinguish between these possibilities because humans speak. But without this option, a purely functionalist approach to perception (in terms of relationships between sensory stimuli and behavior) misses part of the phenomenon.

Second, inference is not the same as perception. Spatial perception is not just the process of inferring where something is from sensory inputs. There is also the experience of perception, which is not captured by the objectivist view.

Spatial perception of pain (III) How can we feel pain inside the body?

I will first start with a summary of the different propositions I made in the previous post about where it hurts.

- Proposition A (independent channels): there are two independent channels, one that provides pain information (intensity or quality of pain, through pain receptors) and another one that provides spatial information (through tactile receptors or vision). The two channels are bound by co-occurrence.
- Proposition B (sensorimotor): you feel pain at a particular location because specific movements that you make produce that pain.
- Proposition B2 (sensorimotor): you feel pain at a particular location because whenever this particular activation pattern of pain receptors is present, you can manipulate this pattern or the intensity of pain by specific movements or actions.
- Proposition C (learned association): the localization of pain is inferred from the activation pattern of pain receptors (which must be spatially selective), by association with another channel that carries spatial information (e.g. tactile receptors).

Note that in A and C, I have moved the problem of spatial information to another modality, either touch or vision. We may consider that spatial information in touch and vision is constituted by sensorimotor contingencies, but it is not an important assumption here. The puzzle is the following: we can only touch our skin, the surface of our body, and we cannot see inside our body. If touch is central to the spatial perception of pain, then how is it possible that we can feel pain inside the body (say, in the stomach or in the head)?

I have discussed a similar example in spatial perception: when one hears music or speech through headphones, it usually feels like the sound comes from “inside the head”. First of all, there is a simple argument why sounds should feel as coming from your body in this case: when you move the head, the sound is unaffected, which means the source is part of your head – either on the surface (skin) or inside the head. The same argument applies to pain felt inside the body: rigid displacements of the body do not change the pain or any information associated with it. Therefore the pain is in you, not in the external world. However, this remark does not explain why pain feels inside the body and not on the skin.

I mentioned another possibility for sounds, inside as a default hypothesis: if you cannot identify the source as coming from somewhere outside, then the sound feels located inside. The default hypothesis raises a question: why does it feel located inside rather than not located at all? There is also another problem here: pain does not simply feel inside, it feels at a particular place inside the body (e.g. the stomach).

A first answer is proposition B2. Perhaps you feel a headache in the head and not in the stomach because the pain is only affected by movements of the head. In the same way, touching your stomach may alter the intensity of pain but not touching other parts. This explanation is a combination of default hypothesis (it's not on the skin so it's inside) and sensorimotor theory (B2). It is appealing but let's see how it applies to the perception of sounds inside the head. Here again, sounds do not simply feel inside the head, but at a particular place inside the head (say on the left or on the right). But no movement that you make has any impact on the sound, and so proposition B2 only explains why the sound is inside the head, but not where in the head it is.

Let us formalize the problem more precisely. Your stomach hurts. There is a pattern of activation of receptors that is characteristic of this condition, but no movement that you can make generates this pattern. In addition, in the case of auditory perception inside the head, no movement may alter this pattern. The default hypothesis is logical inference: since it is a new pattern, it must be located where I cannot produce it: in my body. But as we saw, this is not sufficiently precise. To make some progress, I will start with a thought experiment.

Imagine that in your life, you have touched only two points on your skin, points A and B. When something touches point A, you feel it located at A because you recognize the activation pattern of the tactile receptors. But what if something touches a point between A and B? One possibility would be that you don't feel it located at all, you just feel that something touches you. But it contradicts the fact that you feel sounds inside the head or pain inside the body. Another possibility is the default hypothesis: since you have never encountered the activation pattern, then you know it is neither A nor B, so you feel the touch somewhere outside of A and B. But this logical inference does not produce anything more precise. It seems to contradict the fact that we can hear sounds in our head on the left or on the right. To feel the touch somewhere between A and B requires some form of interpolation: if the new activation pattern resembles the pattern that is characteristic of A, then the touch was probably located somewhere near A; if it resembles both A and B, then it was probably located between A and B.

More generally, we can only have a finite number of experiences, and so it is unlikely that the exact activation pattern of receptors is encountered twice. Even if physical stimuli were identical, the body changes over time. Thus, it appears that we could not have any perceptual experience at all unless there is some form of interpolation. A natural proposition is then that detailed perception inside our body results from perceptual interpolation. This is not the same as logical inference, as in the case of the default hypothesis, because it necessarily involves some arbitrariness: there is no way you can logically know where exactly between A and B your skin was touched if you have never encountered the activation pattern before, so the perceived location is a guess.
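The interpolation idea can be made concrete with a toy computation: estimate a location by weighting known locations by the similarity of their characteristic activation patterns to the new pattern. This is only a sketch of the idea (the similarity function and the one-dimensional locations are arbitrary choices), not a claim about the actual mechanism:

```python
import math

def interpolate_location(new_pattern, known):
    """Similarity-weighted interpolation of a perceived location.

    `known` maps a location (a float) to its characteristic receptor
    activation pattern (a list of floats). The new pattern's location is
    estimated as the similarity-weighted average of the known locations.
    """
    def similarity(p, q):
        # Gaussian similarity in pattern space (a toy choice)
        d2 = sum((a - b) ** 2 for a, b in zip(p, q))
        return math.exp(-d2)

    weights = {loc: similarity(new_pattern, pat) for loc, pat in known.items()}
    total = sum(weights.values())
    return sum(loc * w for loc, w in weights.items()) / total

# Two known points: A at location 0.0 and B at location 1.0, with distinct patterns
known = {0.0: [1.0, 0.0], 1.0: [0.0, 1.0]}
# A pattern resembling both equally is perceived midway between A and B:
loc = interpolate_location([0.5, 0.5], known)  # 0.5
```

A pattern closer to A's (say [0.9, 0.1]) yields a location near A; the guess is determined by resemblance, not by logical inference.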

Now let us go back to our specific problem. How can pain be located inside our body? The idea of interpolation seems to imply that the pattern of receptor activation induced by such pains should resemble that of pains induced on the skin at opposite locations on the body. For example, pain in a joint, say the knee, should produce activation patterns resembling those of pains induced at the skin all around the knee.

There are two interesting points to note about the interpolation idea:
1) Sounds and pains located inside the body tend to be less precisely localized; the location is “vague”. This means that the concept of interpolation as in picking a particular point between two points is incorrect: somehow the process of perceptual interpolation also affects the uncertainty of the location, or perhaps the perceived size.
2) How specifically are perceptual locations interpolated? In other words, what is the topology of spatial perception?

Spatial perception of pain (II)

Where does it hurt? A common answer to this question is: it hurts at the location of the pain receptors in the body. I will discuss three counter-arguments to this proposition, starting with the simple one. The simple argument is that there is a discrete set of pain receptors on the skin, but the spatial perception of pain is not discrete (the same argument applies to touch): we do not feel pain at discrete locations on our skin. It seems that the phenomenal space of pain is continuous, and this observation does not match anatomy. Then there are the anecdotal arguments: we have headaches but there is no pain receptor in the head; when you lose a limb (say a hand), the non-existent limb can hurt (phantom limb pain); there are systematic mislocalizations of the causes of pain, for example heart pains are felt in the arm. Finally there is a more sophisticated argument. Let us assume that we do feel pain at the location of our pain receptors. But then how do you know where your pain receptors are? One answer would be: we know it because it is somehow encoded in our genes. The “somehow” would deserve some precise explanation, but this is not necessary for this argument. This proposition requires that there is a systematic mapping between our genes and our body at a fine level of detail. That is, the precise location of pain receptors should depend only on our genes. But we know that this is not true. For example, the size of our body depends on the diet we had when we were kids, and therefore so does the location of nerves. Therefore, even if we felt pain at the location of the receptors, we would still need to find out where these receptors are.

Another common answer to the question “where does it hurt?” is the objectivist answer: it hurts where the cause of the pain is, or where the injury is. An important difference with the previous one is that it does not imply discrete spatial perception. From our experience, this proposition seems to be correct most of the time but a simple objection is that there are cases of mislocalizations of pain (e.g. heart pains). The same argument as above leads us to the same question: how do you know where the injury or cause of pain is?

If genes are not sufficient, then it must be based on experience. Let us imagine that you hit your knee against a wall. You can see the knee hitting the wall; you also have a tactile experience; you feel an intense pain at the moment of contact, which perhaps gradually fades out. I start with proposition A: there are two independent channels, one that provides pain information (intensity of pain, through pain receptors) and another one that provides spatial information (through tactile receptors or vision). The same question now applies to the spatial channel: how do you know where something touches you? This is simpler to answer because you can touch yourself: you can associate your movements with activation patterns of tactile receptors when you touch your skin. You know where a tactile stimulus is in the sense that you know how to make movements to touch it. An objection to proposition A is: what if there is no external stimulus that activates the tactile receptors? For example, your stomach could hurt because of acidity or a tooth could hurt because of bacteria. There is nothing you can see, and all the tactile receptors are on the skin, so there is no independent source of spatial information, and yet the pain feels precisely localized. The only way to save proposition A is to assume that there actually is another source of spatial information. For example, in the case of tooth pain, maybe tactile receptors (or the nerves) are actually activated. In the case of stomach ache, it is harder to imagine that these receptors on the skin are activated (but I leave it as an open question), and in this case you would need to hypothesize that there are other types of receptors, perhaps acidity receptors, that carry spatial information. But then we are back to the same problem as before: how do these receptors get to carry any spatial information at all? (how do you know where these neurons are?) 
You would then need to assume that these receptors inside your body can also be activated with your own movements. I leave this as an open possibility. There is still one difficulty, which I will address later because it is shared with other propositions: how can pain be localized inside the body?

I will now discuss two other related propositions. Proposition B is purely sensorimotor: you feel pain at a particular location because specific movements that you make produce that pain. This explanation only requires pain receptors, but these receptors must be activated in a way that is spatially specific (i.e., which varies systematically with the location of the pain stimulus). For example, by pinching yourself, you associate the pattern of pain receptor activation with the location of the skin where you pinched yourself. This proposition implies that you cannot feel any localized pain unless you have previously produced it yourself. But what about when a tooth aches? It seems that you could feel tooth ache without having hurt your tooth yourself before. To save proposition B, it seems necessary to assume that the pain receptors can be activated at a “subthreshold” level that does not produce pain. In this case, to feel pain at a particular location requires previously producing specific movements that produce a similar (but possibly less intense) pattern of activation of the pain receptors.

There is a variation of B, which I will call proposition B2, which goes as follows. You feel the pain at a particular location because whenever this particular activation pattern of pain receptors is present, you can manipulate this pattern or the intensity of pain by specific movements or actions. For example, you hurt your knee and then you know you will feel a bit better if you put your hand on it, maybe because of the heat. Proposition B2 is slightly different from B by the fact that it is how you can manipulate pain, rather than how you can cause pain, that provides spatial information. The example of the tooth would then be: your tooth aches, and you know where it aches because by moving your tongue on your teeth you alter the intensity of pain.

Proposition C is learned association: the localization of pain is inferred from the activation pattern of pain receptors (which must be spatially selective), by association with another channel that carries spatial information (e.g. tactile receptors). For example: you hit your knee against the wall, the tactile receptors carry information about the location of pain, which you associate with the activation pattern of pain receptors. Later, your knee hurts but there is no mechanical contact with anything: you still feel the pain in the knee because it is the same activation pattern as when it was hit by the wall. In proposition C, you could not experience the location of a pain unless you have previously experienced the same pain in conjunction with an independent cue of location. So we have the same problem as in proposition A: what if a tooth aches for the first time? The proposition can be saved in the same way by assuming that pain receptors can be activated at a subthreshold level that does not induce pain.

There are now two questions I want to address: 1) How can we feel pain inside the body? 2) Why do we make systematic errors in localizing pain?

Spatial perception of pain (I)

Pain is a great example of many important themes in philosophy of perception. Here I want to focus on spatial perception, but I will start with a few general comments.

First of all, why do we feel pain in the first place? Scientists tend to offer two types of explanations. One is expressed in terms of efficient causes: you hit your knee against a wall, it activates receptors in your skin, these receptors make some neurons in a particular region of the brain fire, and then these neurons produce a substance (molecules) that is characteristic of pain experience (material cause). Such an explanation has some value, for example it might suggest pharmacological targets for pain relief. However, it does not explain the experience of pain at all. Why is it that a particular molecule induces the experience of pain? This does not seem to be a much better explanation than to say that the contact of the wall induces pain (it is somewhat better because it is more universally associated with pain – i.e., pain caused by other events). This problem is what philosophers call the problem of “qualia”: to explain what pain feels like, why pain hurts rather than just being some information about the state of your body. It is notoriously difficult to explain qualia in terms of efficient causes (see Thomas Nagel's famous paper, “What is it like to be a bat?”). It is much easier to explain the informative content of pain (what is going on in your body) than to explain its phenomenal content (what it feels like).

A second type of explanation is in terms of final cause: it hurts because it is an experience you should avoid. It is useful for you to feel pain because then you will learn to avoid dangerous stimuli: pain has a survival value, which is why it has been selected by evolution. But again this type of explanation fails to address the phenomenal content of pain, because what it requires is a modification of behavior towards dangerous stimuli, not necessarily an emotional experience. You could well imagine that when your knee is hit, you get the information that this is something that you should avoid in the future, without any unpleasant feeling. You could also imagine that the event triggers a series of cognitive responses (e.g. negative conditioning) without producing any feeling at all. You could imagine that you hit your knee while sleeping, without being conscious of the event, and that your body reacts to it, perhaps even with negative conditioning (e.g. avoiding turning in the same direction again), without you actually experiencing pain. So why does it hurt?

I do not know why it hurts. So in this series I want to address another question: where does it hurt? This is also quite an interesting question, because although it sounds obvious to us what is meant by the location of pain, we are really asking about the perceived location in the world of a feeling. What kind of weird question is this?

What is computational neuroscience? (XXII) The whole is greater than the sum of its parts

In this post, I want to come back to methodological reductionism, the idea that the right way, or the only way, to understand the whole is to understand the elements that compose it. A classical rebuttal of methodological reductionism is that the “whole is greater than the sum of its parts” (Aristotle). I feel that this argument is often misunderstood, so I have thought of a simple example from biology.

Cells are enclosed by membranes, which are made of lipids. A membrane is a closed surface that defines an interior and an exterior. No part of a membrane is a membrane, because it is not a closed surface. You could study every single lipid molecule that forms a membrane in detail, and you would still have no understanding of what a membrane is, despite the fact that these molecules are all there is in the membrane (ontological reductionism), and that you have a deep understanding of every single one of them. This is because a membrane is defined as a particular relationship between the molecules, and therefore is not contained in or explained by any of them individually.

There is another important epistemological point in this example. You might want to take a “bottom-up” approach to understanding what a membrane is. You would start by looking at a single lipid molecule. Then you could take a larger patch of membrane and study it, building on the knowledge you have learned from the single molecule. Then you could look at larger patches of membrane to understand how they differ from smaller patches; and so on. However, at no stage in this incremental process do you approach a better understanding of what a membrane is, because the membrane only exists in the whole, not in a part of it, even a big part. “Almost a membrane” is not a membrane. In terms of models, a simple model of a cell membrane consisting of only a small number of lipid molecules arranged as a closed surface captures what a membrane is much better than a large-scale model consisting of almost all molecules of the original cell membrane.

This criticism applies in particular to purely data-driven strategies to understand the brain. You could think that the best model of the brain is the one that includes as much detailed empirical information about it as possible. The fallacy here is that no part of the brain is a brain. An isolated cortex in a box, for example, does not think or behave. A slice of brain is also not a brain. Something “close to the brain” is still not a brain. A mouse is a better model of a human than half a human, which is bigger and physically more similar but dead. This is the same problem as for understanding a membrane (a much simpler system!): the methodologically reductionist strategy misses that it is not the elements themselves that make the whole, it is the relationship between the elements. So the key to understanding such systems is not to increase the level of detail or similarity, but to capture relevant higher-order principles.

What is computational neuroscience? (XXI) Lewis Carroll and Norbert Wiener on detailed models

The last published novel of Lewis Carroll, Sylvie and Bruno (1893 for the second volume), contains a passage that explains that a high level of detail is not necessarily what you want from a model. I quote it in full:

“What a useful thing a pocket-map is!” I remarked.

“That’s another thing we’ve learned from your Nation,” said Mein Herr, “map-making. But we’ve carried it much further than you. What do you consider the largest map that would be really useful?”

“About six inches to the mile.”

“Only six inches!” exclaimed Mein Herr. “We very soon got to six yards to the mile. Then we tried a hundred yards to the mile. And then came the grandest idea of all! We actually made a map of the country, on the scale of a mile to the mile!”

“Have you used it much?” I enquired.

“It has never been spread out, yet,” said Mein Herr: “the farmers objected: they said it would cover the whole country, and shut out the sunlight! So we now use the country itself, as its own map, and I assure you it does nearly as well.”

In other words: if the model is nearly as complex as the thing it applies to, then it is no more useful than the thing itself. This theme also appears in a 1945 essay by Arturo Rosenblueth and Norbert Wiener, “The Role of Models in Science”:

“The best material model for a cat is another, or preferably the same cat. In other words, should a material model thoroughly realize its purpose, the original situation could be grasped in its entirety and a model would be unnecessary. […] This ideal theoretical model cannot probably be achieved. Partial models, imperfect as they may be, are the only means developed by science for understanding the universe. This statement does not imply an attitude of defeatism but the recognition that the main tool of science is the human mind and that the human mind is finite.”

The last sentence is the most important: a model is not something that is meant to mimic reality; it is something that is constructed by and for the human mind to help it grasp complex aspects of reality.

Subjective physics

I just finished writing a text about "subjective physics": a term I made up to designate the description of the laws that govern sensory signals and their relationships with actions. It is relevant to systems computational neuroscience, embodiment theories and psychological theories of perception (in particular Gibson's ecological theory and the sensorimotor theory). Here is the abstract:

Imagine a naive organism who does not know anything about the world. It can capture signals through its sensors and it can make actions. What kind of knowledge about the world is accessible to the organism? This situation is analogous to that of a physicist trying to understand the world through observations and experiments. Just as physics describes the laws of the world obtained in this way by the scientist, I propose to name subjective physics the description of the laws that govern sensory signals and their relationships with actions, as observed from the perspective of the perceptual system of the organism. In this text, I present the main concepts of subjective physics, illustrated with concrete examples.

What is computational neuroscience? (XX) What is a realistic model?

What is a realistic neuron model? There is a hierarchy among neuron models, which goes like this: the least realistic model is the integrate-and-fire model, which is phenomenological; then the single-compartment Hodgkin-Huxley model; then multicompartmental Hodgkin-Huxley models (this hierarchy is questioned by a recently accepted paper of mine, which I will discuss when the paper is out).
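To make the bottom of this hierarchy concrete, a leaky integrate-and-fire neuron can be simulated in a few lines. This is only an illustrative sketch: the parameter values and the constant input drive below are arbitrary choices, not fitted to any real neuron.

```python
# Minimal leaky integrate-and-fire neuron, integrated with forward Euler.
# All parameter values are illustrative, not fitted to any real neuron.
tau = 0.02         # membrane time constant (s)
v_rest = -0.070    # resting potential (V)
v_thresh = -0.050  # spike threshold (V)
v_reset = -0.070   # reset potential (V)
dt = 0.0001        # integration time step (s)

def simulate(drive, duration=0.1):
    """Return spike times (s) for a constant input drive (in volts, i.e. R*I)."""
    v = v_rest
    spikes = []
    for step in range(int(duration / dt)):
        # dv/dt = (v_rest - v + drive) / tau
        v += dt * (v_rest - v + drive) / tau
        if v >= v_thresh:           # threshold crossing: emit a spike and reset
            spikes.append(step * dt)
            v = v_reset
    return spikes

print(simulate(0.030))  # suprathreshold drive: regular spiking
print(simulate(0.010))  # subthreshold drive: no spikes
```

Note how little structure the model has: a single state variable, a threshold and a reset. This is exactly why its status in the “realism” hierarchy is worth questioning rather than assuming.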

But what is meant exactly by “realistic”? Take two models of a plane: a toy plane made of wood, and a simple paper plane. The first model certainly looks more like a plane. It has various recognizable elements of a plane: wings, a propeller, a cockpit. One might say that this model is more realistic. The second model doesn’t have a cockpit, and in fact doesn’t really look like a plane. However, unlike the first model, it flies – definitely an important characteristic of planes. So which one is more realistic?

There are generally two types of answers to justify the fact that the Hodgkin-Huxley model (HH) is more realistic than the integrate-and-fire model (IF). One is: the HH model has ionic channels, the IF model doesn’t. Another one is: the HH model has been proven right with experiments.

Let us start with the first type of answer. Strictly speaking, the HH model does not have ionic channels. Ionic channels are proteins. The HH model is a set of equations. There are parts of these equations that we identify with properties of proteins, but they are not the real things. Saying that the HH model has ionic channels is like saying that the wooden plane has a propeller: there is something we call a “propeller”, yes, but functionally it is not a propeller, it is a nice-looking piece of wood. Specifically, in the HH model, the sodium gating variable (m) has no biophysical counterpart in the actual sodium channel. The sodium current in the HH model corresponds to something that can be physically measured, but it is described as proportional to the third power of the gating variable m only because exponent 3 gave the best fit to their data. We call it a “gating” variable only because it is part of a story in which it is a gating variable: the story that there are three independent gates that must all be open for the channel to be open. It is an attractive story, but we now know that this is not what happens with the sodium channel. So the model is consistent with a story in which there is a neuron with sodium channels, but the story is not an accurate description of reality. We might call this “wooden plane realism”.
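For reference, the sodium current in the original Hodgkin-Huxley equations, with the cubic exponent discussed above, reads:

```latex
I_{\mathrm{Na}} = \bar{g}_{\mathrm{Na}}\, m^3 h \,(V - E_{\mathrm{Na}}),
\qquad
\frac{dm}{dt} = \alpha_m(V)\,(1 - m) - \beta_m(V)\, m
```

The fictional “three independent gates” story corresponds to the factor m^3: each gate open with probability m, all three needed for the channel to conduct. The equation fits the measured current, but the story behind the exponent is not a faithful picture of the channel protein.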

The second type of answer is more scientific in its expression. However, it is a bit ambiguous. What Hodgkin and Huxley proved is that their model was an accurate description of the electrical behavior of a giant squid axon, which was space-clamped with a metal wire. But when we claim that the HH model is realistic, we mean something more general than that: that the same “kind” of model would successfully account for the electrical behavior of other neurons. It would not be exactly the same model, because the parameters and ionic channels would be different, and would have to be properly adjusted. So in fact it is rather the HH theory or formalism that is meant to be more realistic. However, for a given neuron, the HH “model” is only more realistic if the structure and parameters of the model are properly adjusted for that given neuron.

These remarks touch on several epistemological concepts that have been described by Karl Popper (The logic of scientific discovery, 1935). The first one is the notion of “empirical content” of a theory, which is defined as the set of possible falsifiers of the theory. In short, for a model, it is the type of (non-tautological) predictions that the model can make. For example, the integrate-and-fire model can make predictions about the membrane potential and the spike times, as a function of the input current. The HH model can additionally make predictions about the sodium and potassium currents. This is just about the logical structure of the models, in their articulation with empirical data, not about whether the models are accurate or not. Greater empirical content is thus a more satisfying way to rephrase the idea that the HH model is more realistic because it “has” ionic channels. But it is a mistake to identify realism with empirical content: a theory can have a very large empirical content and make predictions that turn out to be all completely wrong.

Related to this is the notion of “levels of universality”. Consider these two statements (taken from Popper): all orbits of planets are ellipses; all orbits of heavenly bodies are ellipses. The second statement is more universal, because planets are heavenly bodies. So in this sense it is a better theory. HH theory has this quality of being quite universal: it is meant to apply to both spiking and non-spiking neurons, for example.

Finally, a theory can be characterized by its “degree of precision”. Taking again an example from Popper: all orbits of planets are circles; all orbits of planets are ellipses. Independently of the empirical validity of these two statements, the first one is more precise than the second one, because all circles are ellipses. Applied to models, this is related to the number of parameters that are left unspecified. For example, multicompartmental models have a greater empirical content than single-compartment models, because they can make predictions about membrane potential at different locations on the dendritic tree. However, they are not necessarily more realistic because they are less precise: there are many unspecified parameters, and the additional empirical content is only accurate if these parameters are properly set.

So in fact there are two aspects of realism that can be discussed about models. One has to do with the logical structure of the model: what cases it is meant to apply to (empirical content, universality), how precise it is in its predictions (precision); in other words, the ambition of the model. On this dimension, one seeks models with greater universality, greater empirical content, greater precision. Another way to phrase it is to say that a useful model is one that has many opportunities to be wrong. It is less easy than we might think to compare the HH and IF models on this dimension: on the one hand, the HH model is more universal, but on the other hand it is less precise than the IF model (for example, an HH model does not necessarily spike).

This first aspect has nothing to do with how accurate the model is, with respect to empirical observations. It only has to do with the logical structure of the model. The second aspect has to do with empirical validity: how accurate the model predictions are. For example, we could well imagine that a phenomenological model produces more accurate predictions than a biophysical model, which has a greater empirical content. In this case the biophysical model makes more predictions, but they do not match empirical observations as well as the phenomenological model. Which model is more realistic?