What is computational neuroscience? (XX) What is a realistic model?

What is a realistic neuron model? There is a hierarchy among neuron models, which goes like this: the least realistic is the integrate-and-fire model, which is phenomenological; then comes the single-compartment Hodgkin-Huxley model; then multicompartmental Hodgkin-Huxley models (this hierarchy is questioned by a recently accepted paper that I wrote, but I will discuss it when the paper is out).
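
Concretely, here is a minimal sketch of the integrate-and-fire model at the bottom of this hierarchy (the parameter values are illustrative, not fitted to any particular neuron):

```python
# Leaky integrate-and-fire neuron driven by a constant current.
# All parameter values below are illustrative.
dt = 0.1e-3              # time step (s)
tau = 20e-3              # membrane time constant (s)
v_rest = -70e-3          # resting potential (V)
v_threshold = -50e-3     # spike threshold (V)
v_reset = -70e-3         # reset potential (V)
R = 100e6                # membrane resistance (ohm)
I = 0.25e-9              # constant input current (A)

v = v_rest
spike_times = []
for step in range(int(0.5 / dt)):          # simulate 500 ms
    v += dt / tau * (v_rest - v + R * I)   # leaky integration of the input
    if v >= v_threshold:                   # threshold crossing counts as a spike
        spike_times.append(step * dt)
        v = v_reset                        # instantaneous reset
print(f"{len(spike_times)} spikes in 0.5 s")
```

Note that the spike itself is not described by the equations: it is a recorded event followed by a reset, which is one sense in which the model is phenomenological.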

But what is meant exactly by “realistic”? Take two models of a plane: a toy plane made of wood, and a simple paper plane. The first model certainly looks more like a plane. It has several recognizable elements of a plane: wings, a propeller, a cockpit. One might say that this model is more realistic. The second model doesn’t have a cockpit, and in fact doesn’t really look like a plane. However, unlike the first model, it flies, which is definitely an important characteristic of planes. So which one is more realistic?

There are generally two types of answer given to justify the claim that the Hodgkin-Huxley model (HH) is more realistic than the integrate-and-fire model (IF). One is: the HH model has ionic channels, while the IF model doesn’t. The other is: the HH model has been proven right by experiments.

Let us start with the first type of answer. Strictly speaking, the HH model does not have ionic channels. Ionic channels are proteins; the HH model is a set of equations. There are parts of these equations that we identify with properties of proteins, but they are not the real things. Saying that the HH model has ionic channels is like saying that the wooden plane has a propeller: there is something we call a “propeller”, yes, but functionally it is not a propeller, it is a nice-looking piece of wood. Specifically, in the HH model, the sodium gating variable (m) has no biophysical counterpart in the actual sodium channel. The sodium current in the HH model corresponds to something that can be physically measured, but it is described as proportional to the third power of the gating variable m only because an exponent of 3 gave the best fit to Hodgkin and Huxley’s data. We call m a “gating” variable only because it is part of a story in which it is a gating variable: the story that there are three independent gates that must all be open for the channel to be open. It is an attractive story, but we now know that this is not what happens in the sodium channel. So the model is consistent with a story in which there is a neuron with sodium channels, but the story is not an accurate description of reality. We might call this “wooden plane realism”.
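
For reference, in modern notation the sodium current of the HH model reads

$$I_{\mathrm{Na}} = \bar{g}_{\mathrm{Na}}\, m^3 h\, (V - E_{\mathrm{Na}}), \qquad \frac{dm}{dt} = \alpha_m(V)\,(1 - m) - \beta_m(V)\, m,$$

where h is the inactivation variable, g̅_Na the maximal conductance and E_Na the sodium reversal potential. The exponent 3 enters only as a fitted power, not as a count of physical gates.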

The second type of answer is more scientific in its expression. However, it is a bit ambiguous. What Hodgkin and Huxley proved is that their model was an accurate description of the electrical behavior of a giant squid axon, space-clamped with a metal wire. But when we claim that the HH model is realistic, we mean something more general than that. We mean that the same “kind” of model would successfully account for the electrical behavior of other neurons. It would not be exactly the same model, because the parameters and ionic channels would be different and would have to be properly adjusted. So in fact it is rather the HH theory, or formalism, that is meant to be more realistic. However, for a given neuron, the HH “model” is only more realistic if the structure and parameters of the model are properly adjusted for that neuron.

These remarks touch on several epistemological concepts described by Karl Popper (The Logic of Scientific Discovery, 1935). The first one is the notion of “empirical content” of a theory, which is defined as the set of possible falsifiers of the theory. In short, for a model, it is the type of (non-tautological) predictions that the model can make. For example, the integrate-and-fire model can make predictions about the membrane potential and the spike times, as a function of the input current. The HH model can additionally make predictions about the sodium and potassium currents. This is just about the logical structure of the models, in their articulation with empirical data, not about whether the models are accurate. Greater empirical content is a more satisfying way to rephrase the idea that the HH model is more realistic because it “has” ionic channels. But it is a mistake to identify realism with empirical content: a theory can have a very large empirical content and make predictions that turn out to be completely wrong.

Related to this notion are Popper’s “levels of universality”. Consider these two statements (taken from Popper): all orbits of planets are ellipses; all orbits of heavenly bodies are ellipses. The second statement is more universal, because planets are heavenly bodies. So in this sense it is a better theory. HH theory has this quality of being quite universal: it is meant to apply to spiking and non-spiking neurons, for example.

Finally, a theory can be characterized by its “degree of precision”. Taking again an example from Popper: all orbits of planets are circles; all orbits of planets are ellipses. Independently of the empirical validity of these two statements, the first one is more precise than the second one, because all circles are ellipses. Applied to models, this is related to the number of parameters that are left unspecified. For example, multicompartmental models have a greater empirical content than single-compartment models, because they can make predictions about membrane potential at different locations on the dendritic tree. However, they are not necessarily more realistic because they are less precise: there are many unspecified parameters, and the additional empirical content is only accurate if these parameters are properly set.
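
To make the point about unspecified parameters concrete: multicompartmental models are built on the passive cable equation (given here in a textbook form, not taken from any particular published model),

$$\tau \frac{\partial V}{\partial t} = \lambda^2 \frac{\partial^2 V}{\partial x^2} - (V - E_L),$$

where the time constant τ and the electrotonic length constant λ depend on the local membrane properties and axial resistance, and therefore on the diameter of each section of the dendritic tree. All of these must be set before the model can predict the membrane potential at a given dendritic location.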

So in fact there are two aspects of realism that can be discussed about models. One has to do with the logical structure of the model: what cases it is meant to apply to (empirical content, universality), and how precise it is in its predictions (precision); in other words, the ambition of the model. On this dimension, one seeks models with greater universality, greater empirical content, greater precision. Another way to phrase it is to say that a useful model is one that has many opportunities to be wrong. It is less easy than one might think to compare the HH and IF models on this dimension: on one hand the HH model is more universal, but on the other hand it is less precise than the IF model (for example, an HH model does not necessarily spike).

This first aspect has nothing to do with how accurate the model is with respect to empirical observations; it only has to do with the logical structure of the model. The second aspect has to do with empirical validity: how accurate the model’s predictions are. For example, we could well imagine that a phenomenological model produces more accurate predictions than a biophysical model that has a greater empirical content. In this case the biophysical model makes more kinds of predictions, but they do not match empirical observations as well as those of the phenomenological model. Which model is more realistic?

4 thoughts on “What is computational neuroscience? (XX) What is a realistic model?”

  1. Hi Romain,

    interesting read! A related question is whether getting “the most realistic” model is what we should strive for in the first place. I don’t know whether you are aware of it, but Alain de Cheveigné wrote a short section “What is a model?” in his “Pitch perception models” book chapter. I like this paragraph in it: “Norbert Wiener is quoted as saying: ‘The best material model of a cat is another, or preferably the same, cat.’ I disagree: a cat is no easier to handle than itself, and thus not a useful model. Model and world must differ. Faithfulness is not sufficient.” And of course George Box’s famous quote always applies: “Essentially, all models are wrong, but some are useful.”

    Marcel

  2. Hi Marcel,

    Absolutely, it's essentially the difference between explaining and predicting. So I also don't think we generally want the most realistic model (although in practice this point might be partly captured by the notion of universality: very "faithful" models or theories are also generally not very universal). By the way, note that actually biologists do use the term "model" for animals, e.g. a species is taken as a model for a given human disease. I'm not sure Wiener's quote is correct in any case, because while a cat can be modeled by another cat, it cannot be modeled by the same cat: if you have to perform the experiment in order to make the prediction, it's not really a prediction anymore...

    cheers
    Romain

    • Actually I’ve found the original paper by Wiener. He does write that sentence, but then he says pretty much the same thing, i.e. that models are made for the human mind to grasp, and so are not meant to fully reproduce reality.

  3. My two cents. Taking the idea from this paper (http://genesdev.cshlp.org/content/21/9/1010.long), I would consider a model to be “more realistic” than another based on the topology of the model, whereas accuracy of prediction depends on the weights of the parameters and is not per se correlated with topology. After all, you can reproduce a certain behaviour with completely different topologies, and each topology in turn can give you different behaviours. Hubs here could be the concentration of the main ions, the total current for a given ion, the voltage, etc. Simple nodes could be ion channels, or morphological compartments.

    I agree with the cat-model-of-a-cat point, since the “wooden plane approximation” is in my opinion a blurry thing. Is a plane with a broken wing a good model of a plane that flies?
    And given that a Fokker biplane flies, if you make a model of it and try to get a sonic boom, you’re in for some frustration. Then you look at what sort of engine would give you more speed. I mean (sorry, I am still new at this, so these ideas may be naïve; if that is the case please correct me), when you’re in codimension 1 you can happily know what’s happening, when bifurcations arise, and what they are going to do. But when you have dozens of parameters, the presence of a certain behaviour is no longer necessarily correlated with the presence of another, and exploring parameter spaces becomes a headache. At least for me, the very naïve idea behind “realistic models” is something like: “when I find some other behaviour to reproduce, I will already be close enough in the parameter space that it should be possible for the realistic model to also reproduce these other behaviours, and if not, I can look at the topology and try to devise experiments to test for flaws in it” (given that it still means quite a bit of head banging on the wall 😉).

    Does this make any sense?
