The notion of optimality of biological structures is both quite popular as a type of explanation and highly criticized by many scientists. It is worth understanding why exactly.
In a previous post, I observed that there are different types of explanations, one of which is final cause. Final causes would be, for example, the minimization of energy in physics or survival and reproduction in biology. Evolutionary theory makes final causes very important in biology. However, I find that such explanations are often rather weak. They generally take the form: we observe such a biological structure because it is optimal in some sense. What exactly is explained or meant here is not always so clear. That a biological structure is optimal means that we consider a set of possible structures, and among that set the observed structure maximizes some criterion. But what is this set of possible structures? Surely not all possible molecular structures. Indeed, evolutionary theory does not say that biological organisms are optimal. It says that changes in structure that occur from one generation to the next tend to increase the “adaptability” of the organism (there are variations around this theme, such as gene-centered theories). Evolution is a process and biological organisms result from an evolutionary history: they are not absolute optima among all possible molecular structures (otherwise there would not be many species).
To see this, consider the following analogy. From the postulate that people tend to maximize their own interest, I propose the following explanation of social structure: rich people are rich because they want to be rich. Why is this explanation not satisfying? Because both poor and rich people tend to maximize their own interest (by assumption), and yet only the latter are rich. The problem is that we have specified a process with a particular trend (acting in one’s own interest), but there is no necessity that this process reaches a general optimum of some sort. It is only optimal within a particular individual history. Maybe the poor people have always acted in their own interest, and maybe they are richer than they would otherwise be, but that doesn’t mean they end up rich. In the same way, evolution is a process and it only predicts an optimum within a particular evolutionary history.
Thus, the first remark is that optimality must be understood in the context of a process, both phylogenetic (species history) and ontogenetic (development), not as a global property. Optimality can only be local with respect to that process – after all, there are many species, not a single “optimal” one. That is to say, the fitness criterion (which has to be defined more precisely, see below) tends to increase along the process, so that, at equilibrium (assuming there is such a thing – see the theory of punctuated equilibria), the criterion is locally maximized with respect to that process (i.e., it cannot be increased further by the process).
This is the first qualification. There are at least two other types of criticisms that have been raised, which I want to address now, one empirical and one theoretical. The empirical criticism is that biological organisms are not always optimal. The theoretical criticism is that biological organisms do not need to be optimal but only “good enough”, and there might be no evolutionary pressure when organisms are good enough.
I will first address the empirical criticism: biological organisms are not always optimal. First, they are not expected to be, because of the above qualification. But this is not the deep point. This criticism raises the problem of specification: optimal with respect to what? The Darwinian argument only specifies (local) optimality with respect to survival and reproduction. But optimality is generally discussed with respect to some particular aspect of structure or behavior. The problem is that it is generally not obvious at all how the evolutionary fitness criterion should translate into structural terms. This is the problem of specification.
For example, I have heard the argument that “people are not optimal”. I take it to mean that people are not rational. This is indeed a very well-established fact of human psychology. If you haven’t read it yet, I invite you to read “Thinking, Fast and Slow” by Daniel Kahneman. There are all sorts of cognitive biases that make us humans not very rational in general. To give a random example, take the “planning fallacy”: if you try to plan the duration of a substantial project (say, building a house or writing a book), then you will almost always underestimate it by an order of magnitude. The reason is that when planning, you imagine the series of steps that are necessary to achieve the project, but you don’t imagine all the possible accidents that might happen (say, the contractor dies). Any specific accident is very unlikely, so you don’t or can’t think about it, but it is very likely that one accident of this type happens, and so you seriously underestimate the completion time. Annoyingly, you still do it even when you know about the fallacy (at least I still do). This is the problem of epistemic uncertainty (events that are not part of your probabilistic model, as opposed to probabilistic uncertainty, as in rolling a die – see e.g. “The Black Swan” by Taleb). So humans are not optimal with respect to the rationality criterion. Why is that? Perhaps rationality does not give you an evolutionary advantage. Or perhaps it would by itself, but it would also come with a very large cost in terms of maintaining the adequate structure. Or perhaps it would require such a different brain structure from what humans currently have that no evolutionary step could possibly take us there. Or perhaps it is just impossible to be rational, because the problem of epistemic uncertainty is so fundamental. I am not trying to give an answer, but simply pointing out that the evolutionary argument does not imply that structure and behavior should be optimal with respect to all criteria that seem desirable. Evolutionary “fitness” is a complex notion that encompasses a set of contradicting subcriteria and history effects.
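As a side note, the combinatorial part of the planning fallacy argument is easy to make concrete with a small numerical sketch (the probabilities below are made up): even when every individual accident is unlikely, the chance that at least one of them happens can be large.

```python
# Toy illustration of the "many rare accidents" point (made-up numbers):
# each specific accident is unlikely, but the chance that at least one
# of them occurs can still be large.
p_each = 0.02        # assumed probability of any single specific accident
n_accidents = 50     # assumed number of distinct possible accidents

p_none = (1 - p_each) ** n_accidents
p_at_least_one = 1 - p_none
print(f"P(any single accident)                 = {p_each:.2f}")
print(f"P(at least one of {n_accidents} accidents) = {p_at_least_one:.2f}")  # ~0.64
```

Of course, this sketch only captures the counting part; the deeper problem of epistemic uncertainty is that the relevant accidents are not even on the list, which no such calculation can fix.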
With this important qualification in mind, it should be noted that there are many aspects of biological structure and behavior that have been shown quite convincingly to be optimal or near-optimal with respect to appropriately chosen criteria. It would be sad to discard them, because those explanations give parsimonious accounts of large sets of empirical data. For example, while people are generally not rational or consistent in their reasoning and decisions, when it comes to perceptual or motor tasks it is well documented that humans tend to be near optimal, as accounted for by the Bayesian framework. There are of course important qualifications, but it is the case that many aspects of perception are well predicted by the Bayesian framework, at a quantitative (not just qualitative) level (note that I don’t mean perception in the phenomenological sense, but simply in the sense of sensory-related tasks). One big difference from the preceding example is that there is no epistemic uncertainty in these tasks; that is, when perceptual systems have a good statistical model of reality, they seem to use it in a Bayesian-optimal way.
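To give the flavor of what “Bayesian-optimal” means quantitatively, here is a textbook toy example of cue combination (a generic illustration, not a specific study discussed here): two noisy estimates of the same quantity are optimally combined by weighting each in inverse proportion to its variance, and the combined estimate is more reliable than either cue alone.

```python
import numpy as np

rng = np.random.default_rng(0)

# Textbook toy model of Bayesian cue combination with Gaussian noise
# (illustrative values, not tied to a specific experiment in the post).
true_value = 10.0            # e.g. position of an object
sigma_a, sigma_b = 1.0, 2.0  # assumed noise levels of two sensory cues
n_trials = 100_000

cue_a = true_value + sigma_a * rng.standard_normal(n_trials)
cue_b = true_value + sigma_b * rng.standard_normal(n_trials)

# Bayesian-optimal combination: weights inversely proportional to variance
w_a = (1 / sigma_a**2) / (1 / sigma_a**2 + 1 / sigma_b**2)
estimate = w_a * cue_a + (1 - w_a) * cue_b

print("std of cue A alone   :", cue_a.std())       # ~1.0
print("std of combined cue  :", estimate.std())    # ~0.89, better than either cue
print("predicted optimal std:", (1 / (1 / sigma_a**2 + 1 / sigma_b**2)) ** 0.5)
```

Experimental tests of this kind compare human weights and variabilities against these predicted values, which is what is meant by quantitative (not just qualitative) agreement.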
There are also convincing cases of optimality for biological structures. Robert Rosen discusses some of them in his book “Optimality Principles in Biology”, in particular the structure of the vascular system (which also seems to apply to the lungs). Many geometrical aspects of the vascular system, such as the angles and diameters at branching points, and even the number of blood vessels, can be accounted for by optimality principles with respect to appropriately chosen (but, importantly, simple) criteria. This latter point (simplicity) is critical, as is pointed out in that book. Two criteria are simultaneously considered in this case: maximizing the surface of contact and minimizing the resistance to flow (and thus the required energy).
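To make the flavor of such analyses concrete, here is a numerical sketch of a classical cost function for a single vessel: viscous dissipation (Poiseuille flow) plus a maintenance cost proportional to blood volume. This is essentially Murray’s classical argument, written with arbitrary parameter values, and not necessarily the exact treatment in Rosen’s book.

```python
import numpy as np

# Toy vessel cost: Poiseuille dissipation + maintenance proportional to volume.
# Arbitrary units and parameters; a sketch of the classical (Murray-type)
# argument, not necessarily the exact analysis in Rosen's book.
mu = 1.0   # blood viscosity (arbitrary units)
m = 1.0    # metabolic cost per unit blood volume
L = 1.0    # vessel length

def cost(r, Q):
    dissipation = 8 * mu * L * Q**2 / (np.pi * r**4)  # Poiseuille flow
    maintenance = m * np.pi * r**2 * L                # blood volume cost
    return dissipation + maintenance

radii = np.linspace(0.05, 5.0, 20_000)
flows = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
best_r = np.array([radii[np.argmin(cost(radii, Q))] for Q in flows])

# The optimal radius scales as Q**(1/3), i.e. flow ~ radius**3 (Murray's law)
slope = np.polyfit(np.log(flows), np.log(best_r), 1)[0]
print("fitted exponent:", round(slope, 2))  # ~0.33
```

At a branching point, conservation of flow then implies that the cube of the parent radius equals the sum of the cubes of the daughter radii, the kind of quantitative prediction that can be checked against anatomy.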
Another well-documented case, this time in neurophysiology, concerns the geometrical and molecular properties of axons. There is a short paper by Hodgkin that, in my opinion, shows pretty good optimality reasoning, including some of the qualifications I have mentioned: “The Optimum Density of Sodium Channels in an Unmyelinated Nerve” (1975). He starts by noting that the squid giant axon mediates the escape reflex, and it is critical for survival that this reflex is fast. Therefore the speed of conduction along the axon is a good candidate for an optimality analysis: it makes sense, from an evolutionary viewpoint, that the structure of the axon is optimized for speed. Then he tries to predict the density of sodium channels that would maximize speed. As it turns out, this simple question is itself quite complex. He argues that each channel also increases the membrane capacitance, in a way that depends on voltage because the geometrical conformation of the channels is voltage-dependent. Nonetheless he manages to estimate that effect and derives an optimal channel density, which turns out to be of the right order of magnitude (compared with measurements). He also notes that the relation between channel density and velocity has a “flat maximum”, so the exact value might also depend on aspects other than conduction speed.
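The shape of that trade-off can be caricatured as follows (an illustration with made-up numbers, not Hodgkin’s actual calculation): assume, as a rough cable-theory scaling, that velocity grows like the square root of the sodium conductance and decreases with membrane capacitance, and that each channel adds both conductance and a small amount of capacitance.

```python
import numpy as np

# Caricature of Hodgkin's trade-off (illustrative numbers, not his calculation):
# each channel adds sodium conductance (helping speed) but also gating
# capacitance (hurting speed). Rough scaling assumed: v ~ sqrt(g_Na) / C_m.
C_lipid = 1.0      # bare membrane capacitance, uF/cm^2 (typical textbook value)
c_channel = 0.002  # assumed extra capacitance per channel per um^2 (made up)
gamma = 1.0        # conductance contributed per channel (arbitrary units)

N = np.linspace(1, 3000, 3000)  # channel density (channels/um^2)
velocity = np.sqrt(gamma * N) / (C_lipid + c_channel * N)

N_opt = N[np.argmax(velocity)]
print("optimal density:", round(N_opt))  # analytically C_lipid / c_channel = 500

# The maximum is flat: a wide range of densities gives near-maximal speed.
near_optimal = N[velocity > 0.95 * velocity.max()]
print("within 5% of max between", round(near_optimal.min()), "and", round(near_optimal.max()))
```

In this toy model the optimum sits where the capacitance contributed by the channels becomes comparable to that of the bare membrane, and the maximum is indeed flat, which is the point Hodgkin makes about other criteria possibly setting the exact value.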
He then discusses those other aspects at the end of the text. He notes that in other cases (different axons and species), the prediction based on speed does not work so well. His argument then is that speed may simply not be the main relevant criterion in those other cases. It was in the case of the squid axon because it mediates a time-critical escape reflex, but in other cases speed may not be so important and instead energy consumption might be more relevant. Because the squid axon mediates an escape reflex, it very rarely spikes and so energy consumption is presumably not a big issue – compared to being eaten alive because you are too slow. But energy consumption might be a more important criterion for axons that fire more often (say, cortical neurons in mammals). There is indeed a large body of evidence that tends to show that many properties of spike initiation and propagation are adapted for energy efficiency (again, with some qualifications; for example, fast-spiking cells are thought to be less efficient, apparently a necessary cost of firing at high rates). There are other structures where axon properties seem to be tuned for isochrony, yet another type of criterion. Isochrony means that spikes produced by different neurons arrive at the same time at a common projection. This seems to be the case in the optic tract (Stanford 1987, “Conduction Velocity Variations Minimize Conduction Time Differences Among Retinal Ganglion Cell Axons”) and in many other structures, for example the binaural system of birds. Thus many aspects of axon structure seem to show a large degree of adaptation, but to a diversity of functional criteria, and this often involves trade-offs. This concludes the discussion of the problem of specification.
The second criticism is not empirical but theoretical: biological organisms do not need to be optimal but only “good enough”, and there might be no evolutionary pressure when organisms are good enough. There is an important sense in which this is true. This is highlighted by Hodgkin in the paper I mentioned: there is a broad range of values of channel density that leads to near-optimal (“good enough”) velocities, and so the exact value might depend on other, less important, criteria, such as energy consumption. But note that this reasoning is still about optimality; simply, it is acknowledged that organisms are not expected to be optimal with respect to any single criterion, since survival depends on many aspects. A related point is that of robustness and redundancy. It appears that there is a lot of redundancy in biological systems. You could lose a kidney and still be fine, and this is also true at the cellular level. This again can be thought of in terms of epistemic uncertainty: you could build something that is optimal with respect to a particular model of the world, but that might make it very fragile to unexpected perturbations, events that are not predicted by the model. Thus redundancy, or more generally robustness, is desirable, even though it makes organisms suboptimal with respect to any specific model.
But note that we have not left the framework of evolutionary fitness, since we have described redundancy as a desirable feature (as opposed to a random feature). We have simply refined the concept of optimality, which, it should be clear by now, is quite complex, as it must be understood with respect to a constellation of possibly contradictory subcriteria as well as with respect to epistemic uncertainty. But we are still making all those qualifications within the framework of adaptive fitness of organisms. This does not mean that biological organisms can be suboptimal simply because of a lack of strong evolutionary pressure. More precisely, it means that they can be suboptimal with respect to a particular criterion for which there is a lack of strong evolutionary pressure, if the same structure is also subjected to evolutionary pressure on another criterion. These two criteria could be, for example, conduction speed and energy consumption.
Yet it could be (and has been) argued that even if a structure were subjected to a single criterion, it might still not be optimal with respect to that criterion if evolutionary pressure is weak. For example, it is often stated that spikes of the squid giant axon are not efficient, as in the Hodgkin-Huxley model they are about 3-4 times more energetically expensive than strictly necessary. Because those axons fire very rarely, it makes little difference whether spikes are efficient or not. Considering this fact, spike efficiency is “good enough”.
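For reference, a factor of this kind comes from a back-of-the-envelope comparison: the theoretical minimum sodium entry per spike is just the charge needed to charge the membrane capacitance across the spike amplitude, and the overlap of sodium and potassium currents makes the actual entry several times larger. The “measured” entry below is a round illustrative number, not a quoted measurement.

```python
# Back-of-the-envelope estimate of the minimal sodium entry per spike,
# compared with an assumed measured value (round illustrative numbers).
C_m = 1e-6      # membrane capacitance, F/cm^2 (typical textbook value)
delta_V = 0.1   # spike amplitude, V (~100 mV)
F = 96485.0     # Faraday constant, C/mol

q_min = C_m * delta_V    # minimal charge per cm^2 to depolarize the membrane
na_min = q_min / F       # minimal sodium entry, mol/cm^2
na_measured = 4e-12      # assumed Na entry per impulse, mol/cm^2 (illustrative)

print(f"minimal Na entry : {na_min:.2e} mol/cm^2")  # ~1e-12
print(f"assumed Na entry : {na_measured:.2e} mol/cm^2")
print(f"excess factor    : {na_measured / na_min:.1f}")  # ~4: Na+ and K+ currents overlap
```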
I find this theoretical argument quite weak. First, let me note that the 3-4 factor applies to the Hodgkin-Huxley model, which was calibrated mainly for action potential shape; the model has since been refined and the factor is actually smaller (see e.g. work by Bezanilla). But this is not the important point. From a theoretical viewpoint, even if evolutionary pressure is weak, it is not nonexistent. By a simple argument that I presented before, biological organisms must live in environments where resources are scarce, and so there is strong pressure for efficient use of energy and resources in general. Thus even if the giant axon’s spike is a small proportion of that use, there is still some pressure on its efficiency. Squids are a very old lineage and there seems to be no reason why that pressure could not have applied at some point. But again this is not the most important point. In my view, the most important point is that evolutionary pressure does not apply at the level of individual elements of structure (e.g., on each axon). It applies at the level of genes, which have an impact on the entire organism, or at least a large part of it. So the question is not whether the squid giant axon is energy efficient, but rather whether spike conduction along axons is efficient in general. It is also quite possible that mechanisms related to metabolism are more generic. Thus, while there might be little evolutionary pressure on that particular axon, there certainly is on the set of all axons.
Why then is the squid giant axon inefficient? (I’m assuming it actually is, although to a lesser degree than usually stated.) Here is a possible explanation. Efficiency of spike propagation and initiation depends on the properties of ionic channels. In particular, to have spikes that are both fast and efficient, you need sodium channels to inactivate very fast. There is likely a limit to how fast inactivation can be, since proteins are discrete structures (which might explain why fast-spiking cortical neurons are relatively inefficient). In mammals, fast inactivation is conveyed not by the main protein of the sodium channel (the alpha subunit) but by so-called beta subunits, which are distinct proteins that modulate channel properties. This comes with a cost, since all those proteins need to be actively maintained (the resting cost). If the neuron spikes often, most of the energetic cost is incurred by spikes. If it spikes very rarely, most of the energetic cost is the resting cost. When that resting cost is taken into account, it might well be that spikes in the squid giant axon are actually quite efficient. The same line of reasoning might explain why such a big axon is not myelinated (or does not show a similar kind of adaptation): myelination decreases the energetic cost of spike propagation for large-diameter axons, but it increases the resting cost (you need glial cells producing myelin).
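The resting-cost argument can be made concrete with entirely hypothetical numbers: with a fixed maintenance cost per second and a fixed cost per spike, the share of the budget attributable to spikes depends strongly on firing rate.

```python
# Hypothetical numbers illustrating the resting-cost argument: at very low
# firing rates the maintenance cost dominates, so per-spike efficiency
# contributes little to the total energy budget.
resting_cost_per_s = 100.0  # assumed maintenance cost (arbitrary energy units/s)
cost_per_spike = 10.0       # assumed cost of one spike (same arbitrary units)

for rate_hz in [0.05, 1.0, 10.0, 100.0]:  # spikes per second
    spike_cost_per_s = rate_hz * cost_per_spike
    total = resting_cost_per_s + spike_cost_per_s
    fraction = spike_cost_per_s / total
    print(f"rate {rate_hz:6.2f} Hz: spikes are {100 * fraction:5.1f}% of the budget")
```

At a few spikes per minute, as for the squid giant axon, improving per-spike efficiency would change the total budget by a negligible amount, whereas for a neuron firing tens of spikes per second it would matter a great deal.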
To conclude: optimality principles are important in biology because they are principles that are specific to living organisms (i.e., organisms are somehow adapted) and they do explain a large body of empirical data. However, they must be applied with care, keeping in mind the problem of specification (optimality with respect to what, i.e. what is actually important for survival), the problem of history effects (optimality is local relative to phylogenetic and ontogenetic processes) and the problem of epistemic uncertainty (leading to robustness principles).
Update: I noticed that in Hodgkin’s “Sherrington Lectures” (1964), the author estimates that the mean firing frequency of the giant axon over the life of a squid does not exceed a few impulses per minute, which would produce a sodium influx due to spikes of about 1/300 of the influx at rest (leak). Thus the cost of spikes is indeed a negligible proportion of the total cost of axon maintenance.