What is computational neuroscience? (XXVI) Is optimization a good metaphor for evolution?

Is the brain the result of optimization, and if so, what is the optimization criterion? The popular argument in favor of the optimization view goes as follows. The brain is the result of Darwinian evolution, and therefore is optimally adapted to its environment, ensuring maximum survival and reproduction rates. In this view, to understand the brain is primarily to understand what “adapted” means for a brain, that is, what criterion is to be optimized.

Previously, I have pointed out a few difficulties in optimality arguments used in neuroscience, in particular the problem of specification (what is being optimized) and the fact that evolution is a history-dependent process, unlike a global optimization procedure. An example of this history dependence is the fascinating case of mitochondria. Mitochondria are organelles in all eukaryotic cells that produce most of the cellular energy in the form of ATP. To date, the main view is that these organelles are a case of symbiosis: mitochondria were once prokaryotic cells that were captured and farmed. This symbiosis has been selected and conserved through evolution, but optimization does not seem to be the most appropriate metaphor in this case.

Nonetheless, the optimization metaphor can be useful when applied to circumscribed problems that a biological organism might face, for example the energy consumption of action potential propagation. We can claim for example that, everything else being equal, an efficient axon is better than an inefficient one (with the caveat that in practice, not everything else can be made equal). But when applied at the scale of an entire organism, the optimization metaphor starts facing more serious difficulties, which I will discuss now.

When considering an entire organism, or perhaps an organ like the brain, what criterion can we possibly choose? Recently, I started reading “Guitar Zero” by Gary Marcus. The author points out that learning music is difficult, and argues that the brain has evolved for language, not music. This statement is deeply problematic. What does it mean that the brain has evolved for language? Language does not preexist its speakers, so language cannot have been an evolutionary (“optimization”) criterion for the brain, unless we hold a rather religious view of evolution. Rather, evolutionary change can create opportunities, which might be beneficial for the survival of the species, but there is no predetermined optimization criterion.

Another example is the color vision of bees (see for example Ways of Coloring by Thompson et al.). A case can be made that the visual system of bees is adapted to the colors of the flowers they are interested in. But conversely, the colors of flowers are adapted to the visual system of bees. This is a case of co-evolution, where the “optimization criterion” changes during the evolutionary process.

Thus, the optimization criterion does not preexist the optimization process, and this makes the optimization metaphor weak.

A possible objection is that there is a preexisting optimization criterion, namely survival or reproduction rate. While this might be correct, it makes the optimization metaphor not very useful, because it applies equally to all living species. The point is, there are many species and they are all different, even though the optimization criterion is the same. Not all of them have a brain; thus, optimization does not explain why we have a brain. And species that do have a brain have different brains: the nervous system of a nematode is not the same as that of a human, even though both are equally well adapted and have evolved for exactly the same amount of time. Therefore, the optimization view does not explain why we speak and nematodes don’t, for example.

The problem is that “fitness” is a completely contextual notion, which depends both on the environment and on the species itself. In a previous post where I discussed an “existentialist” view of evolution, I proposed the following thought experiment. Imagine a very ancient Earth with a bunch of living organisms that do not reproduce but can survive for an indefinite amount of time. By definition, they are adapted, since they exist. Then at some point, an accident occurs such that one organism starts multiplying. It multiplies until it occupies the entire Earth and resources become scarce. At this point of saturation, organisms start dying. Since the probability of dying is the same for non-reproducing and reproducing organisms, at some point there will be only reproducing organisms left. Thus, in this new environment, reproducing organisms are adapted, whereas non-reproducing ones are not. If we look at the history of evolution, we note that the world of species constantly changes. Species do not appear to converge to some optimal state, because as they evolve, the environment changes and so does the notion of fitness.

In summary, the optimization criterion does not preexist the optimization process, unless we consider a broad existentialist criterion such as survival, but then the optimization metaphor loses its usefulness.

The magic of Popper vs. the magic of Darwin

In a previous post, I pointed out that Darwin's theory of evolution is incomplete, because it does not explain why random variations are not arbitrary. The emergence of complex adapted living beings does not follow from the logical structure of the explanation: introduce random variations and select the best ones. I gave the example of binary programs: introduce random variations in the code and select the best ones. It doesn't work. Yet the Darwinian explanation has something self-evident in it: if you select the best variations around something, then you should end up with something better. The fallacy is that a variation does not necessarily result in better and worse outcomes with comparable probabilities. For a program, changing one bit generally results in a faulty program. This is why no one in software engineering uses the Darwinian method to write any program.

For the record, and even though it should be self-evident, I am not advocating for “intelligent design” of any sort. I find the debate between educated creationists (i.e., intelligent design advocates) and neo-Darwinians generally quite disappointing. Creationists would point out some mysterious aspect of life and evolution, and assume that anything mysterious must be divine. Neo-Darwinians would respond that there is nothing mysterious. I would think that a scientific attitude is rather to point out that mysterious does not imply divine, and to try to understand that mysterious thing.

Later it occurred to me that the same fallacy occurs in epistemology, namely in Karl Popper's view of science, probably the most influential epistemological theory among scientists. Popper proposed that a scientific statement, as opposed to a metaphysical statement, is one that can be falsified by an observation. A scientific statement is nothing other than a logical proposition, and you can make an infinite number of such propositions. To distinguish between them, you need to do experiments that falsify some of them. Hence many scientists seem to think that the scientific process consists in designing experiments that can distinguish between different theories. This explains the focus on tools that I have talked about before.

There is a rather direct analogy with Darwin's theory of evolution. The focus is on selection, but this neglects a critical aspect of the process, namely the creative part: how do you come up with candidate theories in the first place? I discussed this problem in a previous post. For any given set of observations, there is an infinite number of logical statements that are consistent with it. Therefore, you cannot deduce theories from observations; this is the problem of induction. How then do scientists propose theories, and why do we test some theories and not others that would have the same degree of logical validity (e.g., theory = existing set of observations + random prediction of a new observation)? This is what we might call the hard problem of epistemology, in reference to the hard problem of consciousness. Popper doesn't address that problem, yet it is critical to the scientific process. How about this epistemological process (sketched in code below):

  • Consider a set of theories, which are logical propositions.
  • Select the best ones using Popper's falsificationism.
  • Allow those theories to reproduce, kill the other ones.
  • Introduce random variations, e.g. randomly add/remove quantifiers or variables.
  • Repeat.
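How literally could this be implemented? Here is a toy sketch in Python, under a deliberately favorable assumption: theories are polynomial laws, observations come from an unknown linear law, and “falsification” is just prediction error on the data (the representation and parameters are mine, for illustration only).

```python
import random

# Toy "world": observations generated by an unknown law, y = 2x + 1
data = [(x, 2 * x + 1) for x in range(-5, 6)]

def misfit(theory):
    # Degree-2 polynomial "theories": (a, b, c) means y = a*x^2 + b*x + c.
    # Misfit plays the role of falsification: high misfit = falsified.
    a, b, c = theory
    return sum((a * x * x + b * x + c - y) ** 2 for x, y in data)

# A population of random theories
pop = [tuple(random.uniform(-3, 3) for _ in range(3)) for _ in range(50)]
for generation in range(200):
    pop.sort(key=misfit)                  # Popper-style selection:
    survivors = pop[:10]                  # keep the least falsified, kill the rest
    pop = [tuple(g + random.gauss(0, 0.1) for g in t)   # reproduce with
           for t in survivors for _ in range(5)]        # random variation
best = min(pop, key=misfit)
print(best, misfit(best))                 # converges near (0, 2, 1)
```

Note that this sketch converges only because the theory space was chosen to be smooth: a small random change to a coefficient makes a small change to the predictions. Randomly adding or removing quantifiers in a logical proposition has no such property.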

How well would that work? Science does seem to follow a sort of evolutionary process, but selection itself is not sufficient to explain it; one also needs to explain the creative process.

It is true that Popper has correctly identified experiment as a central aspect of science, just as Darwin has correctly identified selection as a central aspect of evolution. But Popper does not address the hard problem of epistemology, just as Darwin does not address the hard problem of evolution, and Tononi does not address the hard problem of consciousness.

What is computational neuroscience? (XXIV) - The magic of Darwin

Darwin’s theory of evolution is possibly the most important and influential theory in biology. I am not going to argue against that claim, as I do believe that it is a fine piece of theoretical work, and a great conceptual advance in biology. However, I also find that the explanatory power of Darwin’s theory is often overstated. I recently visited a public exhibition in a museum about Darwin. Nice exhibition overall, but I was a bit bothered by the claim that Darwin’s theory explains the origin, diversity and adaptedness of species, case solved. I have the same feeling when I read in many articles, or hear in conversations with many scientists, that such and such observed feature of living organisms is “explained by evolution”. The reasoning generally goes like this: a given biological structure is apparently beneficial to the organism, and therefore the existence of that structure is explained by evolution. As if the emergence of that structure directly followed from Darwin’s account of evolution.

To me, the Darwinian argument is often used as magic, and is mostly void of any content. Replace “evolution” by “God” and you will notice no difference in the logical structure of the argument. Indeed, what the argument actually contains is 1) the empirical observation that the biological organism is apparently adapted to its environment, thanks to the biological feature under scrutiny; 2) the theoretical claim that organisms are adapted to their environment. Note that there is nothing in the argument that actually involves evolution, i.e., the change of biological organisms through some particular process. Darwin is only invoked to back up the theoretical claim that organisms are adapted, but there is nothing specifically about Darwinian evolution that is involved in the argument. It could well be replaced by God, Lamarck or aliens.

What makes me uneasy is that many people seem to think that Darwin’s theory fully explains how biological organisms get to be adapted to their environment. But even in its modern DNA form, it doesn’t. It describes some of the important mechanisms of adaptation, but there is an obvious gap. I am not saying that Darwin’s theory is wrong, but simply that it only addresses part of the problem.

What is Darwin’s theory of evolution? It is based on three simple steps: variation, heredity and selection. 1) Individuals of a given species vary in different respects. 2) Those differences are inherited. In the modern version, new variations occur randomly at this step, and so variations are introduced gradually over generations. 3) Individuals with adapted features survive and reproduce more than others (by definition of “adapted feature”), and therefore spread those features in the population. There is ample empirical evidence for these three claims, and that was the great achievement of Darwin.
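As a piece of logic, the three steps are easy to state in code. Here is a minimal sketch in the spirit of Dawkins' well-known “weasel” demonstration, on a toy problem where the environment is a fixed target string (all names and parameters are illustrative):

```python
import random
import string

TARGET = "methinks it is like a weasel"
ALPHABET = string.ascii_lowercase + " "

def fitness(ind):
    # "Adapted" = number of characters matching the target phenotype
    return sum(a == b for a, b in zip(ind, TARGET))

def mutate(ind, rate=0.02):
    # Steps 1-2: inherited copies with occasional random variation
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in ind)

pop = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(100)]
generation = 0
while max(fitness(ind) for ind in pop) < len(TARGET):
    # Step 3: the best-adapted individuals reproduce more than the others
    parents = sorted(pop, key=fitness, reverse=True)[:20]
    pop = [mutate(random.choice(parents)) for _ in range(100)]
    generation += 1
print(generation, max(pop, key=fitness))
```

This typically converges in a few hundred generations, but notice the hidden assumption that the next paragraph questions: in this toy, a one-character variation changes fitness by at most one unit, so variations are never catastrophic.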

The gap in the theory is the nature and distribution of variations. Of all the possible small variations in structure that one might imagine, which ones do we actually see in a biological population? Well, for one, a substantial number of individuals actually survive for a certain time, so a large number of those variations are not destructive. Since the metaphor of the day is to see the genome as a code for a program, let us consider computer programs. Take a functional program and randomly change 1% of all the bits. What is the probability that 1) the program doesn’t crash, 2) it produces something remotely useful? I would guess that the probability is vanishingly small. You will note that this is not a very popular technique in software engineering. Another way to put it: consider the species of programs that calculate combinatorial functions (say, factorials, binomial coefficients and the like). Surely one might argue that individuals vary by small changes, but conversely, would a small random change in the code typically produce a new combinatorial function?
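This thought experiment is easy to run. A minimal sketch, with the source bytes of a small Python function standing in for a binary program (the 1% rate is from the text; everything else is illustrative):

```python
import random

SRC = ("def fact(n):\n"
       "    r = 1\n"
       "    for i in range(2, n + 1):\n"
       "        r *= i\n"
       "    return r\n")

def mutate(src, rate=0.01):
    # Flip about 1% of all the bits of the source code
    data = bytearray(src.encode())
    nbits = len(data) * 8
    for _ in range(max(1, int(rate * nbits))):
        pos = random.randrange(nbits)
        data[pos // 8] ^= 1 << (pos % 8)
    return data

def survives(data):
    # A mutant "survives" if it still computes the right factorial
    try:
        env = {}
        exec(compile(data.decode(), "<mutant>", "exec"), env)
        return env["fact"](5) == 120
    except Exception:
        return False

trials = 10_000
alive = sum(survives(mutate(SRC)) for _ in range(trials))
print(f"survival rate: {alive / trials:.4%}")
```

One should expect essentially no mutant to survive: almost every bit flip yields a syntax error, a crash, or a wrong result. The distribution of variations matters, not just their selection.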

So it doesn’t follow logically from the three steps of Darwin’s theory that biological organisms should be adapted and survive changing environments. There is a critical ingredient missing: an explanation of how, in sharp contrast with programs, a substantial fraction of new variations are constructive rather than destructive. In more modern terms: how is it that completely random genetic mutations result in variations in phenotypes that are not arbitrary?

Again, I am not saying that Darwin is wrong, but simply that his theory only addresses part of the problem, and that it is not correct to claim that Darwin’s theory fully explains how biological organisms are adapted to their environment (i.e., perpetuate themselves). A key point, and a very important research question, is to understand how new variations can be constructive. This can be addressed within the Darwinian framework, as I outlined in a previous post. It leads to a view that departs quite substantially from the program metaphor. A simple remark: the physical elements that are subject to random variation cannot map directly onto the physical elements of structure (e.g. molecules) that define the phenotype, for otherwise those random variations would lead to random (i.e., mostly destructive) phenotypes. Rather, the structure of the organism must be the result of a self-regulatory process that can be steered by the elements subject to random variation. This is consistent with the modern view of the genome as a self-regulated network of genes, and with Darwin’s theory. But it departs quite substantially from the magic view of evolution theory that is widespread in the biological literature (at least in neuroscience), and instead points to self-regulation and optimization processes operating at the scale of the individual (not of generations).

What is computational neuroscience? (XXIII) On optimality principles in neuroscience

The notion of optimality of biological structures is both quite popular as a type of explanation and highly criticized by many scientists. It is worth understanding why exactly.

In a previous post, I observed that there are different types of explanations, one of which is final cause. A final cause would be, for example, the minimization of energy in physics, or survival and reproduction in biology. Evolutionary theory makes final causes very important in biology. However, I find that such explanations are often rather weak. They generally take the form: we observe such a biological structure because it is optimal in some sense. What exactly is explained or meant here is not always clear. That a biological structure is optimal means that we consider a set of possible structures, and that among this set the observed structure maximizes some criterion. But what is this set of possible structures? Surely not all possible molecular structures. Indeed, evolutionary theory does not say that biological organisms are optimal. It says that changes in structure that occur from one generation to the next tend to increase the “adaptability” of the organism (there are variations around this theme, such as gene-centered theories). Evolution is a process and biological organisms result from an evolutionary history: they are not absolute optima among all possible molecular structures (otherwise there would not be many species).

To see this, consider the following analogy. From the postulate that people tend to maximize their own interest, I propose the following explanation of social structure: rich people are rich because they want to be rich. Why is this explanation not satisfying? Because both poor and rich people tend to maximize their own interest (by assumption), and yet only the latter are rich. The problem is that we have specified a process with a particular trend (increasing self-interest), but there is no necessity that this process reaches a global optimum of some sort. It is only optimal within a particular individual history. Maybe the poor people have always acted in their own interest, and maybe they are richer than they would otherwise be, but that doesn’t mean they end up rich. In the same way, evolution is a process and it only predicts an optimum within a particular evolutionary history.

Thus, the first remark is that optimality must be understood in the context of a process, both phylogenetic (species history) and ontogenetic (development), not as a global property. Optimality can only be local with respect to that process – after all, there are many species, not a single “optimal” one. That is to say, the fitness criterion (which has to be defined more precisely, see below) tends to increase along the process, so that, at equilibrium (assuming there is such a thing – see the theory of punctuated equilibria), the criterion is locally maximized with respect to that process (i.e., it cannot be increased by the process).

This is the first qualification. There are at least two other types of criticisms that have been raised, which I want to address now, one empirical and one theoretical. The empirical criticism is that biological organisms are not always optimal. The theoretical criticism is that biological organisms do not need to be optimal but only “good enough”, and there might be no evolutionary pressure when organisms are good enough.

I will first address the empirical criticism: biological organisms are not always optimal. First, they are not expected to be, because of the above qualification. But this is not the deep point. This criticism raises the problem of specification: optimal with respect to what? The Darwinian argument only specifies (local) optimality with respect to survival and reproduction. But optimality is generally discussed with respect to some particular aspect of structure or behavior, and it is generally not at all obvious what the evolutionary fitness criterion translates to in terms of structure. This is the problem of specification.

For example, I have heard the argument that “people are not optimal”. I take it to mean that people are not rational. This is indeed a very well established fact of human psychology. If you haven’t read it yet, I invite you to read “Thinking, Fast and Slow” by Daniel Kahneman. There are all sorts of cognitive biases that make us humans not very rational in general. To give a random example, take the “planning fallacy”: if you try to plan the duration of a substantial project (say, building a house or writing a book), you will almost always underestimate it by an order of magnitude. The reason is that when planning, you imagine the series of steps necessary to achieve the project, but you do not imagine all the possible accidents that might happen (say, the contractor dies). Any specific accident is very unlikely, so you don’t or can’t think about it, but it is very likely that some accident of this type happens, and so you seriously underestimate the completion time. Annoyingly, you still do even if you know about the fallacy (at least I still do). This is the problem of epistemic uncertainty (events that are not part of your probabilistic model, as opposed to probabilistic uncertainty, as in rolling a die – see e.g. The Black Swan by Taleb). So humans are not optimal with respect to the rationality criterion. Why is that? Perhaps rationality does not give you an evolutionary advantage. Or perhaps it would by itself, but it comes with a very large cost in terms of maintaining the adequate structure. Or perhaps it would require such a different brain structure from what humans currently have that no evolutionary step could possibly take us there. Or perhaps it is just impossible to be rational, because the problem of epistemic uncertainty is so fundamental. I am not trying to give an answer, but simply pointing out that the evolutionary argument does not imply that structure and behavior should be optimal with respect to every criterion that seems desirable. Evolutionary “fitness” is a complex notion that encompasses a set of conflicting subcriteria and history effects.

With this important qualification in mind, it should be noted that many aspects of biological structure and behavior have been shown quite convincingly to be optimal or near-optimal with respect to appropriately chosen criteria. It would be sad to discard these results, because they give parsimonious accounts of large sets of empirical data. For example, while people are generally not rational or consistent in their reasoning and decisions, when it comes to perceptual or motor tasks it is well documented that humans tend to be near optimal, as accounted for by the Bayesian framework. There are of course important qualifications, but it is the case that many aspects of perception are well predicted by the Bayesian framework, at a quantitative (not just qualitative) level (note that I don’t mean perception in the phenomenological sense, but simply in the sense of sensory-related tasks). One big difference with the preceding example is that there is no epistemic uncertainty in these tasks; that is, when perceptual systems have a good statistical model of reality, they seem to use it in a Bayesian-optimal way.
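To make “Bayesian-optimal” concrete, consider the classic cue-combination setting (e.g., Ernst & Banks, 2002, for visual-haptic size judgments). Assuming two independent measurements $s_1$ and $s_2$ of the same quantity, with Gaussian noise of variances $\sigma_1^2$ and $\sigma_2^2$ and a flat prior, the optimal estimate is the precision-weighted average:

$$\hat{s} = \frac{\sigma_2^2\, s_1 + \sigma_1^2\, s_2}{\sigma_1^2 + \sigma_2^2}, \qquad \sigma_{\hat{s}}^2 = \frac{\sigma_1^2\, \sigma_2^2}{\sigma_1^2 + \sigma_2^2} \le \min(\sigma_1^2, \sigma_2^2).$$

Human judgments have been found to track both the weighting and the variance reduction quantitatively, which is the sense of “near optimal” meant here.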

There are also convincing cases of optimality for biological structures. Robert Rosen discusses some of them in his book “Optimality Principles in Biology”, in particular the structure of the vascular system (a similar analysis seems to apply to the lungs). Many geometrical aspects of the vascular system, such as the angles and diameters at branching points and even the number of blood vessels, can be accounted for by optimality principles with respect to appropriately chosen (but, importantly, simple) criteria. This latter point is critical, as is pointed out in that book. Two criteria are simultaneously considered in this case: maximizing the surface of contact and minimizing the resistance to flow (and thus the required energy).
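The best-known derivation in this family is Murray's law (not spelled out in Rosen's terms here, but a standard result). Assuming Poiseuille flow and a metabolic cost proportional to blood volume, the power expended by a vessel segment of radius $r$ and length $l$ carrying flow $Q$ is

$$P(r) = \frac{8 \eta l\, Q^2}{\pi r^4} + b\, \pi r^2 l,$$

where the first term is viscous dissipation and the second the metabolic cost of maintaining the blood volume ($\eta$ is the viscosity, $b$ a maintenance cost per unit volume). Setting $dP/dr = 0$ gives $Q \propto r^3$; combined with conservation of flow at a branch point ($Q_0 = Q_1 + Q_2$), this predicts $r_0^3 = r_1^3 + r_2^3$ for the parent and daughter radii, a relation that fits measured vasculature reasonably well.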

Another well-documented case, this time in neurophysiology, concerns the geometrical and molecular properties of axons. There is a short paper by Hodgkin that in my opinion shows pretty good optimality reasoning, including some of the qualifications I have mentioned: “The Optimum Density of Sodium Channels in an Unmyelinated Nerve” (1975). He starts by noting that the squid giant axon mediates the escape reflex, and it is critical for survival that this reflex is fast. Therefore the speed of conduction along the axon is a good candidate for an optimality analysis: it makes sense, from an evolutionary viewpoint, that the structure of the axon is optimized for speed. Then he tries to predict the density of sodium channels that would maximize speed. As it turns out, this simple question is itself quite complex. He argues that each channel also increases the membrane capacitance, in a way that depends on voltage because the geometrical conformation of the channels is voltage-dependent. Nonetheless he manages to estimate that effect and derives an optimal channel density, which turns out to be of the right order of magnitude (compared with measurements). He also notes that the relation between channel density and velocity has a “flat maximum”, so the exact value might also depend on aspects other than conduction speed.
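A drastic caricature of the trade-off (not Hodgkin's actual calculation, and with made-up numbers) captures the logic: in simplified cable models, conduction velocity grows like the square root of the sodium conductance but is penalized by membrane capacitance, and each channel adds a little of both. The optimum then falls where channel-associated capacitance matches the baseline capacitance, and the maximum is indeed flat:

```python
import numpy as np

C0 = 1.0       # baseline membrane capacitance (arbitrary units)
c_chan = 1e-4  # capacitance added per channel -- hypothetical value
g_chan = 1e-3  # conductance added per channel -- hypothetical value

# In simplified cable models, velocity scales as sqrt(conductance)/capacitance
N = np.linspace(1, 50_000, 100_000)          # candidate channel densities
v = np.sqrt(g_chan * N) / (C0 + c_chan * N)  # velocity, up to a constant factor

print(N[np.argmax(v)], C0 / c_chan)  # numerical optimum matches analytic N* = C0/c_chan
good = N[v >= 0.95 * v.max()]        # Hodgkin's "flat maximum":
print(good.min(), good.max())        # a several-fold range of densities is within 5% of the best
```

Because the maximum is so flat, the exact density is then free to be set by secondary criteria such as energy consumption, which is exactly the move Hodgkin makes next.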

He then discusses those other aspects at the end of the text. He notes that in other cases (different axons and species), the prediction based on speed does not work so well. His argument is then that speed may simply not be the main relevant criterion in those other cases. It was in the case of the squid axon because it mediates a time-critical escape reflex, but in other cases speed may not be so important and instead energy consumption might be more relevant. Because the squid axon mediates an escape reflex, it spikes very rarely, and so energy consumption is presumably not a big issue – compared to being eaten alive because you are too slow. But energy consumption might be a more important criterion for axons that fire more often (say, cortical neurons in mammals). There is indeed a large body of evidence that tends to show that many properties of spike initiation and propagation are adapted for energy efficiency (again, with some qualifications; e.g., fast-spiking cells are thought to be less efficient, presumably as a cost of sustaining high firing rates). In other structures, axon properties seem to be tuned for isochrony, yet another type of criterion. Isochrony means that spikes produced by different neurons arrive at the same time at a common projection. This seems to be the case in the optic tract (Stanford 1987, “Conduction velocity variations minimize conduction time differences among retinal ganglion cell axons”) and in many other structures, for example the binaural system of birds. Thus many aspects of axon structure seem to show a large degree of adaptation, but to a diversity of functional criteria, and this often involves trade-offs. This concludes the discussion of the problem of specification.

The second criticism is not empirical but theoretical: biological organisms do not need to be optimal but only “good enough”, and there might be no evolutionary pressure when organisms are good enough. There is an important sense in which this is true. This is highlighted by Hodgkin in the paper I mentioned: there is a broad range of values of channel density that leads to near-optimal (“good enough”) velocities, and so the exact value might depend on other, less important criteria, such as energy consumption. But note that this reasoning is still about optimality; it simply acknowledges that organisms are not expected to be optimal with respect to any single criterion, since survival depends on many aspects. A related point is that of robustness and redundancy. There appears to be a lot of redundancy in biological systems. You can lose a kidney and still be fine, and the same is true at the cellular level. This again can be thought of in terms of epistemic uncertainty: something built to be optimal with respect to a particular model of the world might be very fragile to unexpected perturbations, events that are not predicted by the model. Thus redundancy, or more generally robustness, is desirable, even though it makes organisms suboptimal with respect to any specific model.

But note that we have not left the framework of evolutionary fitness, since we have described redundancy as a desirable feature (as opposed to a random one). We have simply refined the concept of optimality, which, it should now be clear, is quite complex, as it must be understood with respect to a constellation of possibly contradictory subcriteria as well as with respect to epistemic uncertainty. But we are still making all those qualifications within the framework of the adaptive fitness of organisms. This does not mean that biological organisms can be suboptimal simply because of a lack of strong evolutionary pressure. More precisely, it means that they can be suboptimal with respect to a particular criterion for which evolutionary pressure is weak, if the same structure is also subjected to evolutionary pressure on another criterion. These two criteria could be, for example, conduction speed and energy consumption.

Yet it could be (and has been) argued that even if a structure were subjected to a single criterion, it might still not be optimal with respect to that criterion if evolutionary pressure is weak. For example, it is often stated that spikes of the squid giant axon are not efficient: in the Hodgkin-Huxley model they are about 3-4 times more energetically expensive than strictly necessary. Because those axons fire very rarely, it makes little difference whether spikes are efficient or not. On this view, spike efficiency is “good enough”.
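Where does a factor like 3-4 come from? A back-of-envelope sketch, with standard textbook values: the theoretical minimum sodium entry per spike is just the charge needed to load the membrane capacitance over the voltage excursion; the actual entry is larger because sodium and potassium currents overlap in time during the spike.

```python
C_m = 1e-6   # membrane capacitance, F/cm^2 (standard textbook value)
dV = 0.1     # spike amplitude, V (roughly 100 mV)
F = 96485.0  # Faraday constant, C/mol

Q_min = C_m * dV      # minimal charge per spike, C/cm^2
na_min = Q_min / F    # minimal sodium entry, mol/cm^2
print(na_min * 1e12, "pmol/cm^2 minimum per spike")   # about 1 pmol/cm^2

# Classic measurements and the Hodgkin-Huxley model give roughly 3-4 pmol/cm^2
# per impulse in the squid giant axon, hence the quoted factor of 3-4.
```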

I find this theoretical argument quite weak. First, let me note that the 3-4 factor applies to the Hodgkin-Huxley model, which was calibrated mainly to reproduce the action potential shape; the model has since been refined and the factor is actually smaller (see e.g. work by Bezanilla). But this is not the important point. From a theoretical viewpoint, even if evolutionary pressure is weak, it is not nonexistent. By a simple argument I presented before, biological organisms must live in environments where resources are scarce, and so there is strong pressure for efficient use of energy and resources in general. Thus even if the giant axon’s spikes account for a small proportion of that use, there is still some pressure on their efficiency. Squids are a very old lineage, and there seems to be no reason why that pressure could not have applied at some point. But again, this is not the most important point. In my view, the most important point is that evolutionary pressure does not apply at the level of individual elements of structure (e.g., each axon). It applies at the level of genes, which have an impact on the entire organism, or at least a large part of it. So the question is not whether the squid giant axon is energy efficient, but rather whether spike conduction along axons in general is efficient. It is also quite possible that mechanisms related to metabolism are more generic still. Thus, while there might be little evolutionary pressure on that particular axon, there certainly is on the set of all axons.

Why then is the squid giant axon inefficient? (I’m assuming it actually is, although to a lesser degree than usually stated.) Here is a possible explanation. The efficiency of spike propagation and initiation depends on the properties of ionic channels. In particular, to have spikes that are both fast and efficient, you need sodium channels that inactivate very fast. There is likely a limit to how fast inactivation can be, since proteins are discrete structures (which might explain why fast-spiking cortical neurons are relatively inefficient). In mammals, fast inactivation is conveyed not by the main protein of the sodium channel (the alpha subunit) but by so-called beta subunits, which are distinct proteins that modulate channel properties. This comes with a cost, since all those proteins need to be actively maintained (the resting cost). If a neuron spikes often, most of the energetic cost is incurred by spikes. If it spikes very rarely, most of the energetic cost is the resting cost. When that resting cost is taken into account, it might well be that spikes in the squid giant axon are actually quite efficient. The same line of reasoning might explain why such a big axon is not myelinated (or doesn’t show a similar kind of adaptation): myelination decreases the energetic cost of spike propagation for large-diameter axons, but it increases the resting cost (you need glial cells producing myelin).

To conclude: optimality principles are important in biology because they are principles that are specific to living organisms (i.e., living organisms are somehow adapted) and they do explain a large body of empirical data. However, they must be applied with care, keeping in mind the problem of specification (optimality with respect to what, i.e., what is actually important for survival), the problem of history effects (optimality is local, relative to phylogenetic and ontogenetic processes) and the problem of epistemic uncertainty (leading to robustness principles).

 

Update: I noticed that in Hodgkin’s “Sherrington Lectures” (1964), the author estimates that the mean firing frequency of the giant axon over the life of a squid does not exceed a few impulses per minute, which should produce a sodium intake due to spikes of about 1/300 of the resting (leak) intake. Thus the cost of spikes is indeed a negligible proportion of the total cost of axon maintenance.

Outline of an existentialist theory of evolution

In previous posts, I pointed out that there is a critical epistemological difference between the study of biological things and that of physical things, due to the fact that living things have a fundamental “teleonomic project”. I described this project as “reproductive invariance”, following the words of Jacques Monod, which is simply to say that what characterizes a living being is that it can reproduce itself. I believe many scientists would propose this definition. However, it is not very satisfying. For example, isn’t a sterile individual alive? One might amend the definition by proposing that life can only be defined at the level of species, but this does not seem very satisfying either. Mules, for example, are generally infertile, but they are certainly living beings. As a thought experiment, we may imagine a species of immortal humans that do not reproduce. Certainly we would consider them alive too.

So even though reproduction is a salient characteristic of life, it might not be the best way to define it. Here is another possible definition: a living being is something with the teleonomic project of living. That is, a living being is a set of self-sustaining processes, subject to the forces of nature (this is similar to Varela’s autopoiesis). Thus an important feature of living beings is that they can die, which distinguishes them from other stable forms of existence such as rocks. The fact that they can die is critical, because it implies that there are constraints on their conditions of existence. For example, when removed from its natural environment (for example, put in a vacuum), a living being dies; a living being is thus contingent on a given environment. So let us say that a living being is a stable form of existence that requires some energy, specific processes and environmental conditions to sustain itself. This could be called the existentialist view of life.

Many things could be said about this definition (such as the notion of multicellular organism), but this was just to introduce a discussion about evolution. Life is a stable form of existence, but evolution means change. The question of evolution, then, is what changes in forms of existence can occur while yielding stable forms (i.e., not dying). Here I will not discuss evolution from an empirical point of view, but simply develop the existentialist perspective.

Evolution is often presented (at least in popular science writing) as an improvement process. Genetic algorithms follow this view: there is an objective criterion to be maximized, and by mutation and selective reproduction, the criterion increases over generations. One obvious problem is that this presupposes an objective criterion, independent of the organism itself and defined externally. But in reality, there is no objective criterion other than the existence of the organism in its environment. The criterion for survival is defined jointly by the organism and its environment, which is itself made partly of living beings. For example, the survival of carnivores is contingent on the existence of herbivores (the notion of ecosystem). If carnivores exist, then a criterion for the existence of a herbivore is its ability to escape carnivores. This is not an externally defined criterion. The existences of various organisms are interrelated, and the existence of a specific organism is determined by its sustainability as a stable form in the environment (stable at some timescale). Therefore the notion of “fixed point” is a better mathematical analogy than optimization. Potential changes, either external (environment) or internal (mutations), lead either to quick death or to a new existential fixed point.

Let us start with a fixed, stable environment. Imagine there are only unicellular organisms, and they do not reproduce. It is possible that some of them die, because of radiation for example. Those that do not die are (by definition) robust to this radiation. These cells, perhaps, would live for a very long time – let us say they live eternally. But suppose now that, by some accident, one cell acquires the ability to reproduce itself. When this happens, the cell initially multiplies exponentially. But in turn, this implies that the environment of each cell changes as the cells multiply. In particular, since each cell requires energy and the energy supply is limited, the population cannot grow indefinitely. At the saturation point, resources become scarce and some cells die. All cells are equally affected by this process, both reproductive and non-reproductive ones. When cells die, there are more resources again, and so cells that can reproduce themselves occupy the freed space. Soon enough, the eternal non-reproductive cells are replaced by short-lived reproductive cells, where reproduction only compensates for deaths. This saturation point is reached extremely fast, because growth is exponential. Thus, the living world evolves from one stable fixed point, the small population of eternal non-reproducing cells, to a new stable fixed point, the short-lived reproducing cells that saturate the environment. This evolution is very quick, and perhaps it can be described as a “punctuated equilibrium”, as proposed by Stephen Jay Gould.
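These dynamics are easy to simulate; here is a minimal sketch with arbitrary numbers (the capacity, initial counts and doubling rate are all illustrative):

```python
import random

CAPACITY = 1_000_000   # resource limit of the environment
nonrep = 1000          # immortal, non-reproducing cells
rep = 1                # the single accidental reproducer

for step in range(1, 100):
    rep *= 2                           # reproduction: exponential growth
    total = nonrep + rep
    if total > CAPACITY:               # saturation: resources become scarce
        p = CAPACITY / total           # death hits both kinds with equal probability
        nonrep = sum(random.random() < p for _ in range(nonrep))
        rep = round(rep * p)
    if nonrep == 0:
        print(f"non-reproducers extinct after {step} steps")
        break
```

Saturation is reached after a few dozen doublings regardless of the capacity, and from then on the expected number of non-reproducers is roughly halved at every step: the population moves from one fixed point to the other without anything being “improved”.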

What is interesting in this example is that the old cells are not replaced by “better” ones in any teleonomic sense. It is simply that non-reproductive cells cannot co-exist with reproductive cells.

Let us consider now the notion of “adaptation”. Assume that sometimes, reproduction is imperfect and the new cell is slightly different from the original cell (a “mutation”). It could be that the cell dies, or cannot reproduce, and then that mutation does not yield a stable form of existence. Or it could be that the cell has a higher chance of survival, or reproduces itself at a higher rate than the other cells. If this is so, this cell and its descendants will quickly occupy (at an exponential rate) the entire environment. It could be said that the “species” adapts to its environment. This is essentially Darwinism, which is compatible with the existentialist view.

What kind of mutations would occur in this situation? At this stage, one may speculate that most mutations lead to death. Therefore, the mutations that are most likely to be reproduced in the long run are those that 1) reduce the rate of mutations, 2) fix all sorts of mutations that would otherwise lead to death or a lower reproduction rate, 3) protect those very mutations from subsequent mutations. Note that there may actually be a trade-off in the rate of mutations, because mutations are (at least initially) necessary to produce the repair mechanisms. Thus cells with a higher rate of mutations would be more fragile but would also reach the self-repairing stage faster. These remarks suggest that, in such a stable environment, the population of cells should evolve sophisticated repair mechanisms. They are sophisticated for two reasons: they affect all mechanisms important for the existence and reproduction of the cell, and they are recursive (they affect the repair mechanisms themselves). Thus it could be said that the primary mechanisms developed by cells are homeostatic mechanisms. One very important corollary is that, at this stage, most mutations should now yield functional changes rather than death or dysfunction. This makes the mutation process creative (or without effect) rather than destructive.

An interesting point is that, even though living organisms are defined as fixed points, the evolutionary process has a direction. That is, it is not possible to evolve “backwards” towards non-reproductive cells. The fundamental reason is that evolution occurred because the two forms of existence were mutually exclusive. That is, reproductive cells and non-reproductive cells cannot co-exist, except for a transient time, because reproductive cells almost immediately occupy the whole environment and alter the conditions of existence of the non-reproductive cells. It is not that reproductive cells are “better” in some teleonomic sense, but simply that the other cells cannot exist simultaneously with them. Note that it is also possible for evolution to occur without the previous form of life disappearing – otherwise there would be no unicellular organisms anymore.

One could object that if the co-existence of A and B necessarily leads to the disappearance of A and the persistence of B, this defines an order on the set of living things, and that it would therefore be right to claim that B is “better” than A with respect to that order relation. But in fact, this is not an order relation in the mathematical sense. First of all, there are things that can co-exist, i.e., that cannot be ordered with respect to each other. Thus, if it is an order at all, it is only a partial order. But more importantly, an order is transitive, which means that if B is better than A and C is better than B, then C is better than A. This is certainly not true in general for life. For example, imagine that bacterium B produces substances that are toxic for organism A, so B is “better” than A. But then imagine that some mutation of B yields a new bacterium C that is more efficient than B (e.g. reproduces at a higher rate) while producing different substances that are not toxic for A. Then C is “better” than B, yet C can co-exist with A, so C is not “better” than A. One may also imagine that A is “better” than C in some way (e.g. it eats something that is necessary for C). It follows that the dominance relation I described above is not an order relation. It implies in particular that it is not possible to define a general external criterion that is maximized through evolution.
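The failure of transitivity can be checked mechanically; a tiny sketch, with labels as in the example above:

```python
def is_transitive(rel):
    # An order relation requires: (a, b) and (b, c) together imply (a, c)
    return all((a, c) in rel
               for (a, b1) in rel for (b2, c) in rel if b1 == b2)

# From the example: B drives A extinct (toxins), C displaces B (faster
# reproduction), but C produces no toxins and so co-exists with A.
dominance = {("B", "A"), ("C", "B")}
print(is_transitive(dominance))   # False: ("C", "A") is absent, so no order
```

Since dominance is neither total nor transitive, no single “fitness score” can be read off from it, which is the point.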

But this does not mean that there are no optimality principles in evolution. On the contrary, consider again the previous example of the reproductive cells. Such cells always live under conditions of scarce resources, since they grow as long as there are enough resources. This means that cells die because of the scarcity of resources. Therefore, any mutation that improves the efficiency with which resources are used quickly spreads in the population. It follows that resource efficiency is optimal in a stable form of life. Here “optimal” should not be understood as “best” with respect to a specific criterion, but rather as something that cannot be improved through the evolutionary process – or, to remove any notion of goodness, something that is stable through the evolutionary process. So, processes tend to be “optimal” because non-optimal ones are not stable.

Let us refine this notion of optimality, which is pervasive in neuroscience in particular. For example, it might be stated that the wiring between neurons is optimal in some way. Two remarks are in order. First, this does not mean that, if one were to design a wiring plan from scratch, this is how it would look. Optimality must be understood in the sense of something that cannot be incrementally improved through the process of evolution. For example, there may be all sorts of oddities due to the specific evolutionary history of the species. Second, and this is perhaps more important, this does not mean that the organism has a plan (say, in its DNA), in the way an architect has a plan, and that this plan is optimal. It only means that there are processes that develop the wiring, and that these processes cannot be improved. These processes do not need a “plan” in the sense of a representation. They could take the form of rules of the kind “avoid such molecules” or “follow such gradient”.

This takes us to a refinement of the notion of optimality. Recall the previous remarks about homeostatic mechanisms. I noted that the first mechanisms developed by a cell should be those that minimize the harmful impact of mutations, so that mutations are creative (or without effect) rather than destructive. Perhaps the same could be said of efficiency mechanisms. It seems that a general mechanism that regulates processes to make them maximally efficient in their present context (for example, in terms of metabolism) would be much more stable through evolution than specific mechanisms each evolving towards maximum efficiency. Therefore, I postulate that there are early mechanisms, shared by all species, which regulate the other processes of the organism so as to maximize their efficiency.

This idea can be taken one step further. Such regulation mechanisms can be seen as implementing the creative function of evolution within the time course of a single individual’s life. Clearly such meta-evolutionary mechanisms would be more stable than any particular mechanism (specific to a species) since, by construction, they are robust to all sorts of changes. There is at least one well-known example: the immune system. The immune system develops antibodies against specific foreign objects such as bacteria, through what can be described as a Darwinian process (selection and cloning). Obviously this mechanism is much more stable through evolution than any mechanism targeted at a specific antigen. Note how this is “optimal” in the existential sense discussed above: once it exists, any mechanism for a specific antigen loses its evolutionary value (although this may not be true of serious epidemic diseases). Thus it should be expected that any stable form of life contains such generic mechanisms, rather than a “plan” for all the specific actions that an organism must take. In the same way, it could be argued that the nervous system is such a generic system. In fact, the analogy with the immune system was explicitly made by Edelman (a Nobel laureate in immunology who later moved to neuroscience).

These remarks suggest that some of the first and most universally shared mechanisms possessed by living beings should be meta-evolutionary or adaptive mechanisms, by which the organism can adapt and maximize its efficiency in the face of changing conditions both in its environment (environmental change) and in the body itself (evolutionary change). This has an important consequence. When there is a discrepancy between empirical evidence and optimality principles, for example when some mechanism does not seem metabolically efficient, it is sometimes argued that this does not contradict the theory of evolution because there may be little evolutionary pressure on that specific mechanism. But implicitly, this argument assumes that evolutionary pressure applies to that specific mechanism in isolation, whereas I would argue that there may be no such specific mechanisms. Instead, it is more likely that there are generic efficiency-maximizing mechanisms, and to these mechanisms evolutionary pressure fully applies. Such a discrepancy is therefore more likely an indication that part of the interpretation is wrong, or that pieces of the puzzle are missing.

To summarize, the conclusion of this existentialist reasoning about evolution is that all organisms, starting with unicellular organisms, must have developed 1) a large range of sophisticated homeostatic and repair mechanisms, starting with those that make mutations creative rather than destructive, 2) general-purpose regulating mechanisms that maximize the efficiency of other processes, 3) adaptive or “meta-evolutionary” mechanisms that adapt the organism’s processes to changes, both external and internal, that occur within the time course of the individual’s life.