Outline of an existentialist theory of evolution

In previous posts, I pointed out that there is a critical epistemological difference between the study of biological things and that of physical things, due to the fact that living things have a fundamental “teleonomic project”. Following Jacques Monod, I described this project as “reproductive invariance”, which is simply to say that what characterizes a living being is that it can reproduce itself. I believe many scientists would propose this definition. However, it is not very satisfying. For example, isn’t a sterile individual alive? One might amend the definition by proposing that life can only be defined at the level of the species, but this is not very satisfying either. Mules, for example, are generally infertile, but they are certainly living beings. As a thought experiment, we may imagine a species of immortal humans that do not reproduce. Certainly we would consider them alive too.

So even though reproduction is a salient characteristic of life, it might not be the best way to define it. Here is another possible definition: a living being is something with the teleonomic project of living. That is, a living being is a set of self-sustaining processes, subject to the forces of nature (this is similar to Varela’s autopoiesis). Thus an important feature of living beings is that they can die, which distinguishes them from other stable forms of existence such as rocks. The fact that they can die is critical, because it implies that there are constraints on their conditions of existence. For example, when removed from its natural environment (for example, placed in a vacuum), a living being dies; a living being is therefore contingent on a given environment. So let us say that a living being is a stable form of existence that requires energy, specific processes and environmental conditions to sustain itself. This could be called the existentialist view of life.

Many things could be said about this definition (such as the notion of the multicellular organism), but it was introduced here to open a discussion about evolution. Life is a stable form of existence, but evolution means change. The question of evolution, then, is which changes in forms of existence can occur while yielding stable forms (i.e., without dying). Here I will not discuss evolution from an empirical point of view, but simply develop the existentialist perspective.

Evolution is often presented (at least in popular science writing) as a process of improvement. Genetic algorithms follow this view: there is an objective criterion to be maximized, and through mutations and selective reproduction, the criterion increases over generations. One obvious problem is that this presupposes an objective criterion, independent of the organism itself and defined externally. But in reality, there is no objective criterion other than the existence of the organism in its environment. The criterion for survival is defined jointly by the organism and its environment, which is itself made partly of living beings. For example, the survival of carnivores is contingent on the existence of herbivores (the notion of ecosystem). If carnivores exist, then a criterion for the existence of a herbivore is its ability to escape carnivores. This is not an externally defined criterion. The existences of various organisms are interrelated, and the existence of a specific organism is determined by its sustainability as a stable form in the environment (stable at some timescale). Therefore the notion of “fixed point” is a better mathematical analogy than optimization. Potential changes, either external (environment) or internal (mutations), lead either to quick death or to a new existential fixed point.
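To make the contrast concrete, here is a minimal genetic algorithm, as a sketch (the bit-string genomes and the sum-of-genes fitness are arbitrary illustrative choices). Notice that the criterion, the `fitness` function, is written down before and independently of the “organisms”, which is precisely what has no equivalent in the existentialist picture:

```python
import random

def fitness(genome):
    # The externally defined criterion: here, simply the sum of the genes.
    # It exists outside and independently of the "organism".
    return sum(genome)

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

# A population of random bit-string "organisms".
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]

for generation in range(100):
    # Selective reproduction: the fittest half parents the next generation.
    population.sort(key=fitness, reverse=True)
    parents = population[:25]
    population = [mutate(random.choice(parents)) for _ in range(50)]

print(max(fitness(g) for g in population))  # the criterion increases over generations
```

In the existentialist picture there is no such function to write down: whether a form persists is decided jointly by the form and everything it co-exists with.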

Let us start with a fixed, stable environment. Imagine there are only unicellular organisms, and they do not reproduce. It is possible that some of them die, because of radiation for example. Those that do not die are (by definition) robust to this radiation. These cells, perhaps, would live for a very long time; let us say they live eternally. But suppose now that, by some accident, one cell acquires the ability to reproduce itself. When this happens, the cell initially multiplies exponentially. But in turn, this means that the environment of each cell changes as the cells multiply. In particular, since each cell requires energy and the energy supply is limited, the population cannot grow indefinitely. At that saturation point, resources become scarce and some cells die. All cells are equally affected by this process, both reproductive and non-reproductive ones. When cells die, there are resources again, and cells that can reproduce themselves occupy this space. Soon enough, the eternal non-reproductive cells are replaced by short-lived reproductive cells, whose reproduction only compensates for deaths. This saturation point is reached extremely fast, because growth is exponential. Thus, the living world evolves from one stable fixed point, the small population of eternal non-reproducing cells, to a new stable fixed point, the short-lived reproducing cells that saturate the environment. This evolution is very quick, and perhaps it can be described as “punctuated equilibria”, as proposed by Stephen Jay Gould.
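This scenario is simple enough to simulate. In the sketch below (with arbitrary parameters: a fixed carrying capacity, doubling at each step), scarcity kills cells of both kinds indiscriminately once the population saturates, but only the reproductive cells refill the vacated space:

```python
import random

CAPACITY = 1000          # the environment sustains at most this many cells
non_reproductive = 100   # "immortal" cells: they only die from scarcity
reproductive = 1         # one cell acquires the ability to reproduce

for step in range(25):
    reproductive *= 2    # exponential growth while resources last
    total = non_reproductive + reproductive
    if total > CAPACITY:
        # Scarcity kills cells of both kinds with equal probability.
        survival = CAPACITY / total
        non_reproductive = sum(random.random() < survival
                               for _ in range(non_reproductive))
        reproductive = int(reproductive * survival)
    print(step, non_reproductive, reproductive)
```

Within a few dozen steps, the non-reproductive population decays geometrically while the reproductive cells hold the environment at capacity.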

What is interesting in this example is that the old cells are not replaced by “better” ones, in a teleonomic sense. Simply, non-reproductive cells cannot co-exist with reproductive cells.

Let us now consider the notion of “adaptation”. Assume that reproduction is sometimes imperfect, so that the new cell is slightly different from the original cell (a “mutation”). It could be that the cell dies, or cannot reproduce, in which case the mutation does not yield a stable form of existence. Or it could be that the cell has a higher chance of survival, or reproduces at a higher rate than the other cells. If so, this cell and its descendants will quickly occupy (at an exponential rate) the entire environment. It could be said that the “species” adapts to its environment. This is essentially Darwinism, which is compatible with the existentialist view.

What kind of mutations would occur in this situation? At this stage, one may speculate that most mutations lead to death. Therefore, the mutations that are most likely to be reproduced in the long run are those that 1) reduce the rate of mutations, 2) fix all sorts of mutations that would otherwise lead to death or a lower reproduction rate, 3) protect these mutations from subsequent mutations. Note that there may actually be a trade-off in the rate of mutations, because mutations are (at least initially) necessary to produce the repair mechanisms in the first place. Thus cells with a higher rate of mutations would be more fragile, but would also reach the self-repairing stage faster. These remarks suggest that, in such a stable environment, the population of cells should evolve sophisticated repair mechanisms. They are sophisticated for two reasons: they affect all mechanisms important for the existence and reproduction of the cell, and they are recursive (they affect the repair mechanisms themselves). Thus it could be said that the primary mechanisms developed by cells are homeostatic mechanisms. One very important corollary is that, at this stage, most mutations should now yield functional changes rather than death or dysfunction. This makes the mutation process creative (or without effect) rather than destructive.

An interesting point is that, even though living organisms are defined as fixed points, the evolutionary process has a direction. That is, it is not possible to evolve “backwards” towards non-reproductive cells. The fundamental reason is that evolution occurred because the two forms of existence were mutually exclusive. That is, reproductive cells and non-reproductive cells cannot co-exist, except for a transient time, because reproductive cells almost immediately occupy the whole environment and alter the conditions of existence of the non-reproductive cells. It is not that reproductive cells are “better” in some teleonomic sense, but simply that the other cells cannot exist simultaneously with them. Note that it is also possible for evolution to occur without the previous form of life disappearing; otherwise there would be no unicellular organisms anymore.

One could object that if the co-existence of A and B necessarily leads to the disappearance of A and the survival of B, this defines an order on the set of living things, so that it would actually be right to claim that B is “better” than A with respect to that order relation. But in fact, this is not an order relation in the mathematical sense. First of all, there are things that can co-exist, i.e., that cannot be ordered with respect to each other. Thus, if it is an order, it is only a partial order. But more importantly, an order is transitive: if B is better than A and C is better than B, then C is better than A. This is certainly not true in general for life. For example, imagine that bacterium B produces substances that are toxic for organism A, so B is “better” than A. But then imagine that some mutation of B yields a new bacterium C that is more efficient than B (e.g., reproduces at a higher rate) while producing different substances that are not toxic for A. Then C is “better” than B, yet C can co-exist with A, so C is not “better” than A. One may also imagine that A is “better” than C in some way (e.g., it eats something that C needs). It follows that the dominance relation I described above is not an order relation. In particular, it is not possible to define a general external criterion that is maximized through evolution.
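The counter-example can be written down explicitly. In the toy sketch below, the dominance relation is encoded as a set of ordered pairs (a hypothetical encoding of the A/B/C story), and checking the three pairs shows that transitivity fails:

```python
# Pairwise exclusion ("X drives Y out when they share an environment"),
# encoding the A/B/C story above.
excludes = {
    ("B", "A"),  # B's toxins kill A
    ("C", "B"),  # C out-reproduces B
    # ("C", "A") is absent: C's substances are not toxic for A, so they co-exist
}

def dominates(x, y):
    return (x, y) in excludes

# Transitivity would require: B > A and C > B implies C > A.
print(dominates("B", "A"), dominates("C", "B"), dominates("C", "A"))
# -> True True False: the relation is not transitive, hence not an order
```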

But this does not mean that there are no optimality principles in evolution. On the contrary, consider again the previous example of the reproductive cells. Such cells always live under conditions of scarce resources, since they multiply as long as there are enough resources. This means that cells die because of the scarcity of resources. Therefore, any mutation that improves the efficiency with which resources are used quickly spreads through the population. It follows that resource efficiency is optimal in a stable form of life. Here “optimal” should not be understood as “best” with respect to a specific criterion, but rather as something that cannot be improved through the evolutionary process; or, to remove any notion of goodness, something that is stable through the evolutionary process. So, processes tend to be “optimal” because non-optimal ones are not stable.

Let us refine this notion of optimality, which is pervasive in neuroscience in particular. For example, it might be stated that the wiring between neurons is optimal in some way. Two remarks are in order. First, this does not mean that, if one were to design a wiring plan from scratch, this is how it would look. Optimality must be understood in the sense of something that cannot be incrementally improved through the process of evolution; for example, there may be all sorts of oddities due to the specific evolutionary history of the species. Second, and this is perhaps more important, it does not mean that the organism has a plan (say, in its DNA), in the way an architect has a plan, and that this plan is optimal. It only means that there are processes that develop the wiring, and that these processes cannot be improved. These processes do not need a “plan” in the sense of a representation. They could take the form of rules of the kind “avoid such molecules” or “follow such gradient”.

This takes us to a refinement of the notion of optimality. Recall the previous remarks about homeostatic mechanisms. I noted that the first mechanisms developed by a cell should be those that minimize the harmful impact of mutations, so that mutations are creative (or without effect) rather than destructive. Perhaps the same could be said of efficiency mechanisms. A general mechanism that regulates processes to make them maximally efficient in their present context (for example, in terms of metabolism) should be much more stable through evolution than specific mechanisms each evolving towards maximum efficiency. Therefore, I postulate that there are early mechanisms, shared by all species, which regulate the other processes of the organism so as to maximize their efficiency.

This idea can be taken one step further. Such regulation mechanisms can be seen as implementing the creative function of evolution within the time course of a single individual’s life. Clearly such meta-evolutionary mechanisms would be more stable than any particular mechanism (specific to a species) since, by construction, they are robust to all sorts of changes. There is at least one well-known example: the immune system. The immune system develops antibodies against specific foreign objects such as bacteria, through what can be described as a Darwinian process (selection and cloning). Obviously this mechanism is much more stable through evolution than any mechanism targeted at a specific antigen. Note how this is “optimal” in the existential sense discussed above: once it exists, any mechanism for a specific antigen loses its evolutionary value (although this may not be true of serious epidemic diseases). Thus it should be expected that any stable form of life contains such generic mechanisms, rather than a “plan” for all specific actions that an organism must take. In the same way, it could be argued that the nervous system is such a generic system. In fact, the analogy with the immune system was explicitly made by Edelman (a Nobel laureate in immunology who then moved to neuroscience).
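As an illustration of such a generic mechanism, here is a toy version of clonal selection (the bit-string “antibodies”, the matching-based affinity and the mutation rate are all illustrative assumptions, not immunology). The point is that the very same loop works for any antigen; nothing in the mechanism is specific to one of them:

```python
import random

ANTIGEN_LEN = 16

def affinity(antibody, antigen):
    # Toy affinity: number of matching positions.
    return sum(a == b for a, b in zip(antibody, antigen))

def hypermutate(antibody, rate=0.05):
    # Clone with random changes (a stand-in for somatic hypermutation).
    return [1 - g if random.random() < rate else g for g in antibody]

antigen = [random.randint(0, 1) for _ in range(ANTIGEN_LEN)]
repertoire = [[random.randint(0, 1) for _ in range(ANTIGEN_LEN)]
              for _ in range(30)]

for _ in range(50):
    # Selection: the best-binding antibodies are kept and cloned with mutation.
    repertoire.sort(key=lambda ab: affinity(ab, antigen), reverse=True)
    best = repertoire[:10]
    clones = [hypermutate(ab) for ab in best for _ in range(2)]
    repertoire = best + clones

print(affinity(repertoire[0], antigen), "/", ANTIGEN_LEN)
```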

These remarks suggest that some of the first and most universally shared mechanisms possessed by living beings should be meta-evolutionary or adaptive mechanisms, by which the organism can adapt and maximize its efficiency in the face of changing conditions, both in its environment (environmental change) and in the body itself (evolutionary change). This has an important consequence. When there is a discrepancy between empirical evidence and optimality principles, for example when some mechanism does not seem metabolically efficient, it is sometimes argued that this does not contradict the theory of evolution because there may be little evolutionary pressure on that specific mechanism. But implicitly, this argument assumes that evolutionary pressure applies to that specific mechanism, whereas I would argue that there may be no such specific mechanisms. Instead, it is more likely that there are generic efficiency-maximizing mechanisms, and to these mechanisms evolutionary pressure fully applies. Therefore, such a discrepancy is more likely an indication that part of the interpretation is wrong, or that pieces of the puzzle are missing.

To summarize, the conclusion of this existentialist reasoning about evolution is that all organisms, starting with unicellular organisms, must have developed 1) a large range of sophisticated homeostatic and repair mechanisms, starting with those that make mutations creative rather than destructive, 2) general-purpose regulating mechanisms that maximize the efficiency of other processes, 3) adaptive or “meta-evolutionary” mechanisms that adjust the organism’s processes to changes, both external and internal, that occur within the time course of the individual’s life.

What is sound? (VIII) Sounds: objects or events?

In my first post in this series, I described the differences between seeing and hearing. I noted that what characterizes sounds is that they are not persistent. One may say that sounds are “events”, as opposed to “objects”. I avoided this term because it implies that an event has a definite start and end. Although this is sometimes true (for example, speech), many sounds actually do not have a definite end. For example, the sound produced by striking an object has a definite start (the impact) but not a definite end (the energy decays exponentially). This is not to say that we hear these sounds as lasting forever, but simply that it is somewhat arbitrary to define a clear ending time. Worse, a number of sounds have no start and no end either: for example, the sound made by a river, or by wind. So what characterizes sounds is not exactly that they have a clear start and end, but rather that they are not persistent: they change through time. So it could be said that sounds are events, but in the sense that they “happen”. When a sound is heard, the acoustical wave responsible for it is actually no longer there (this is related to Husserl’s phenomenological description of time).

Now it could be argued that, if one could repeat a sound (with a recording, for example, or less accurately by physically producing the sound several times), then perhaps it could qualify as an object. The notion of a “repeatable object” is discussed by Jérôme Dokic (“Two ontologies of sound”), where there is an interesting remark about the notion of depiction. When seeing a painting, one sees both the content of the painting and the painting itself. At first sight, it seems that sounds are not like this: the reproduction of a sound is like the original sound, possibly altered, but it is not a representation of the sound. But in fact there is an interesting auditory counter-example: when a loud voice is heard through a phone at low volume, you actually hear a loud sound (the voice) inside a soft sound (the sound coming out of the phone).

Nevertheless, I think that even in this case, describing the sound as a sort of “object” is misleading. An object is something that can be manipulated. For example, if you are looking at a box on a table, you can change your perspective on it, walk around it, see a face disappear behind an edge, and so on. You can do this exploration because the object is persistent. In the same way, you could touch it, hold it, turn it, etc. So it makes sense to say that visual or tactile experience is about objects. But the same does not hold for sounds, because they are transient: you cannot explore them. If you have read my post on spatial hearing, you could object that you actually can: some properties of a sound change when you move around the source. This is true, but precisely, you do not hear these changes as changes in the sound, but as changes in the localization of the sound. You hear the same sound, coming from another direction. How about being able to repeat the sound with a recording? The point is that repeating is not manipulating. To manipulate, you need to change your perspective on the object, and this change of perspective tells you something about the object that you could not know before the manipulation (for example, looking behind it); to be more precise, it can be said that visual shape is isomorphic to the relationship between viewing angle and visual field. If you repeat a recording exactly as in the original production, there is no manipulation at all. If you repeat it but, say, filter it in some way, you change it, but this does not reveal anything about the sound, so it is not a change in perspective: you just produce a different sound, or possibly a depiction of the sound, as in the phone example. The right visual analogy would be inserting a colored filter in front of your eyes, which does not reveal anything about visual shape. Finally, it could be objected that a sound can be repeatedly produced, for example by hitting the box several times, and manipulated by hitting the box with different strengths. But this is not accurate: when the box is hit with a different strength, a different sound is produced, not a new perspective on the same sound. Here the object, what is persistent and can be manipulated, is not the sound: it is the material that produces the sound.

In fact, there is a well-known example in which the environment is probed using acoustical waves: the ultrasonic hearing of bats. Bats produce loud ultrasonic clicks or chirps and use the echoes to navigate in caves or localize small insects. In this case, acoustical waves are used to construct some sort of object (the detailed shape of the cave), but I think this is really not what we usually mean by “hearing”; it seems closer to what we mean by seeing. I can of course only speculate about the phenomenological experience of bats, but I would guess that their experience is one of seeing, not of hearing.

To summarize: sounds are not like objects, which you can physically manipulate, i.e., exert some control over the sensory inputs, in a way that is specific to the object. One possibility, perhaps, is to consider sounds as mental objects: things that you can manipulate in your mind, using your memory. But this is quite different from the notion of a visual or tactile object.

On imitation

How is it possible to learn by imitation? Consider, for example, a child learning to speak. She reproduces a word produced by an adult, say “Mom”. How is this possible? At first sight, there seems to be an obvious answer: the child tries to activate her muscles so that the sound she produces is similar. But that’s the thing: the sound is not similar at all. A child is much smaller than an adult, which implies that 1) the pitch is higher, and 2) the whole spectrum of the sound is shifted towards higher frequencies (the “acoustic scale” is smaller). So if one were to compare the two acoustic waves directly, one would find little similarity, both in the time domain and in the spectral domain. Therefore, learning by imitation must be based on a notion of similarity that resides at a rather conceptual level, not at all on a direct comparison of sensory signals. Note that the sensorimotor account of perception (in this case, the motor theory of speech) does not really help here, because it still requires explaining why two vastly different acoustic waves should relate to similar motor programs. To be more precise: the two acoustic waves actually do relate to similar motor programs, but the adult’s motor program cannot be observed by the child; the child has to relate the acoustic result of the adult’s motor program to her own motor program, even though the latter does not produce the same acoustic result. Could there be something in the acoustic wave that directly suggests the motor program?
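This claim is easy to check numerically. In the sketch below, the two “voices” are crude harmonic series at different pitches (a toy stand-in for an adult and a child producing the same utterance; all parameters are arbitrary). The waveforms are nearly orthogonal, and their spectra barely overlap, since the harmonics sit at different frequencies:

```python
import numpy as np

fs = 16000                      # sampling rate (Hz)
t = np.arange(int(0.5 * fs)) / fs

def harmonic_voice(f0, n_harmonics=10):
    # A crude periodic "voice": a sum of harmonics of the pitch f0.
    return sum(np.sin(2 * np.pi * f0 * k * t) / k
               for k in range(1, n_harmonics + 1))

adult = harmonic_voice(120.0)   # low pitch
child = harmonic_voice(300.0)   # the "same" utterance at a higher pitch

# Normalized correlation between the two waveforms.
r = np.dot(adult, child) / (np.linalg.norm(adult) * np.linalg.norm(child))
print(f"waveform correlation: {r:.3f}")  # small: nearly orthogonal signals
```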

That was the easy problem of imitation. Here is a harder one: how can you imitate a smile? In this case, you can only see the smile you want to imitate on the teacher’s face; you cannot see your own smile. In addition, it seems unlikely that the ability is based on prior practice in front of a mirror. Thus, somehow, there is something in the visual signals that suggests the motor program. These are two completely different physical signals, so the resemblance must lie somewhere in the higher-order structure of the signals. This means that the perceptual system is able to extract an amodal notion of structure, and to compare two structures independently of their sensory origin.

Memory as an inside world

A number of thinkers oppose the notion of pictorial representations, or even of any sort of representation, in the brain. In robotics, Rodney Brooks is often quoted for his famous statement: “the world is its own best model”. In a previous post, I commented on the fact that slime molds can solve complex spatial navigation problems without an internal representation of space (in fact, without a brain!). The slime mold relies on the world as a sort of outside memory: it leaves an extracellular trace on the ground where it has previously been, so that it avoids revisiting places it has already explored.

This idea is also central to the sensorimotor theory of perception, and in fact Kevin O’Regan argued for “the world as an outside memory” in an early paper. This is related to a number of psychological findings about change blindness, but I will rephrase the argument from a more computational perspective. Imagine you are building a robot with a movable eye that has a fovea. At any given moment, you only have a limited view of the world. You could obtain a detailed representation of the visual scene by scanning it with your eye and storing the images in memory. This memory would then be a highly detailed pictorial representation of the world. When you want information about an object in any part of the visual scene, you can then look at the right place in the memory. But why look at the memory if you can directly look at the scene? If moving the eye is very fast, which is the case for humans, then from an operational point of view there is no difference between the two. It is then simply useless and inefficient to store the information in memory when it is immediately available in the world. What might need to be stored, however, is information about how to find the relevant information (what eye movements to produce), but this is not a pictorial representation of the visual scene.
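The argument can be phrased as a design choice. In the toy sketch below (the class names and the scene are hypothetical), the snapshot memory is strictly redundant if a saccade is about as fast as a memory access:

```python
# Two strategies for answering "what is at location (x, y)?"
class WorldCamera:
    """The world itself: a fast saccade plus fixation reads any location."""
    def __init__(self, scene):
        self.scene = scene
    def look_at(self, x, y):
        return self.scene[y][x]

class SnapshotMemory:
    """Pictorial strategy: scan the whole scene up front and store it."""
    def __init__(self, camera, width, height):
        self.image = [[camera.look_at(x, y) for x in range(width)]
                      for y in range(height)]
    def look_at(self, x, y):
        return self.image[y][x]

camera = WorldCamera([["sky"] * 4, ["tree", "tree", "house", "sky"]])
memory = SnapshotMemory(camera, width=4, height=2)
assert camera.look_at(2, 1) == memory.look_at(2, 1)
# If look_at on the camera is as fast as a memory access (saccades are fast),
# the snapshot buys nothing: it merely duplicates the world.
```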

Despite what the title of this post might suggest, I am not going to contradict this view. But we also know that visual memory exists: for example, we can remember a face, or we can remember what is behind us if we have seen it before (although not in great detail). Now let me throw out an idea, or perhaps an analogy, which might initially sound a bit crazy: what if memory were like an inside world? In other words, what if we interpreted the metaphor “looking at something in memory” literally?

The idea of the world as an external memory implicitly relies on a boundary between mind and world, placed at the interface of our sensors (say, the retina). But this boundary is conceptual. The brain interacts with the environment through an interface (sensors and muscles), but we could equally say that any part of the brain interacts with its own environment, made of everything outside it, including the other parts of the brain. So let us imagine for a while that we draw the mind-world boundary in such a way that the hippocampus, which is involved in memory (working memory and spatial memory), lies outside it. Then the mind can request information about the world from the sensory system (moving the eyes, observing the visual inputs), or in the same way from the hippocampus (performing some form of action on the hippocampus, observing the hippocampal inputs).

This might seem somewhat like a homunculus thought experiment, but I think there is something interesting in this perspective. In particular, it puts memory and perception at the same level of description, in terms of sensorimotor interaction. This is interesting because, from a phenomenological point of view, there is a similarity between memory and perception: the memory of an image feels (a bit) like an image, and one can say that one “hears a melody in one’s head”. At the same time, memory has distinct phenomenal properties: one cannot interact with memory in the same way as with the physical world; it is less detailed; and there are no “events” in memory (nothing unpredictable happens).

In other words, this view may suggest a sensorimotor account of memory (where “sensorimotor” must be understood in a very broad sense).

Robots and jobs

Are robots going to free us from the slavery of work, or are they going to steal people’s jobs?

As a computational neuroscientist, this is a question I sometimes think about. For a long time, I followed a self-reassuring line of reasoning, which seems to make sense from a logical point of view: having robots do the work for us means that either we get more products for the same amount of work, or each person works less for the same quantity of products. So it has to be a good thing: ideally, robots would do the work we don’t want to do, and we would just do what we are interested in doing: travel, write books, see our friends or play music.

This is a fine theoretical argument, but unfortunately it ignores the economy we live in. Maybe we could (or should) think of an economy in which this would work, but what about our current capitalist economy? Very concretely, if robots arrive on the market that can do, more cheaply, the jobs that people previously did, then these people will simply lose their jobs. If work can be outsourced to poorer countries, then in the same way it can be outsourced to robots.

One counter-argument, of course, is that in a free market economy, people would temporarily lose their jobs but would then be reassigned to other jobs, and the whole economy would be more productive. This is a classical free-market fundamentalist argument. But there are at least two problems with it. The first is that it makes the mistake of thinking of the economy as a quasi-static system: one that changes, but is always in equilibrium. It is implicitly assumed that changing jobs is easy and has a negligible cost, and that large-scale changes in the labor market have no significant impact on the rest of the economy (think, for example, of the effect on the financial system of thousands of laid-off people being unable to pay their mortgages). Now if we think of continuous progress, in which innovations regularly arrive and continuously change the structure of the labor market, then it is clear that the economy can never be in the ideal equilibrium state in which jobs are perfectly allocated. At any given time, a large fraction of the population would be unemployed. In addition, everyone would then face a high risk of going through such a crisis in the course of their working life. This would have major consequences for the financial system, as it would make loans and insurance riskier, and therefore more expensive. These additional costs to society (the cost of unemployment and retraining, financial risk, etc.) are what economists call “externalities”: costs that have to be paid by society, but that are not borne by those who make the decisions responsible for them. For the company that replaces a human with a robot, the decision is based on the salary of the human versus the cost of the robot; it does not include the cost of the negative externalities. For this reason, companies may make decisions that seem beneficial for each of them individually, and yet have a negative impact on the global economy (not even considering the human factor).

A second problem is that the argument neglects a critical aspect of capitalist systems: the division between capital and labor. When a human is replaced by a robot, what was previously the product of work becomes the product of capital (the investment in buying the robot); see this blog post by Paul Krugman. Very concretely, this means that a larger share of the wealth goes to the owners rather than to the workers. As a thought experiment, we could imagine that the workforce is completely replaced by robots, so that the owner simply buys the robots and then collects money from customers without doing anything. Wealth would then be distributed according to how many robots one owns. This might seem far-fetched, but if you think about it, this is pretty much how real estate works.

So concretely, introducing robots in a capitalist economy increases productivity, but it also means that owners get an increasingly large share of the pie. In such an economy, the ideal robotic world is a dystopia in which wealth is distributed exclusively in proportion to what people own.

This thought is very troubling for scientists like me, who are more or less trying to make this ideal robotic world happen, with the utopia of a no-forced-work society in mind. How could one avoid the dystopian nightmare? I do not think it is possible to simply stop working on robots. I could personally decide not to work on robots, and maybe I would feel morally right and good about myself, bearing no responsibility for what happens next, but that would just be burying my head in the sand. The only way it will not happen is if all scientists in the world, in all countries, stop working on robots or on any sort of automation that increases productivity (the internet?). We do not even seem to be able to stabilize our production of carbon dioxide when we agree on the consequences, so I don’t think this is very realistic.

So if we can’t stop scientific progress from happening, then the only other way is to adapt our economy to it. Imagine a society in which robots do all the work, entirely. Since there is no work at all in such a society, in an unregulated free-market economy wealth can only be distributed according to the amount of capital people have. There is simply no other way it could be distributed. Such an economy is bound to lead to the robotic nightmare.

Therefore, society has to take global measures to regulate the economy and make the distribution of wealth fairer. I don’t have any magical answer, but a few ideas can be thrown out. For example, one could abolish inheritance (certainly not easy in practice) and transfer the capital of the deceased to newborns in equal shares. Certainly some people would get richer than others by the end of their lives, but the inequality would be bounded. As a transition policy, one could allow the replacement of workers by robots, provided the displaced worker owns part of the robot. Alternatively, robots could only be owned by individuals, not by companies: a robot could then replace a worker only when someone buys the robot and rents it to the company. Another alternative is that robot-making companies belong to the State, which can only rent the robots to companies; the proceeds would then be shared among citizens.

Certainly all these ideas come with difficulties, and none of them is ideal, but one has to keep in mind that failing to implement any regulation of this type can only lead to the robotic dystopia.

What is computational neuroscience? (VIII) Technical development and observations

In the previous posts, I strongly insisted on the epistemological notion that theory precedes empirical observation, in the sense that experiments are designed to test theories. I insisted on this point because computational neuroscience seems to be understood by many scientists through the prism of naive inductivism: the view that theory derives more or less directly from observation (you make experimental measurements, and then you “make a model” of them). I do not need to insist again on why this view is flawed in many ways. But of course it would be absurd to advocate the opposite view, namely that observation cannot provide knowledge unless it is designed to test a theory. This post is meant to nuance my previous arguments.

In fact, historically, science has progressed by two very different means: one is the introduction of radically new perspectives (“paradigm shifts”), the other is the development of new tools. A salient example in neuroscience is the development of the patch clamp, which allows recording the currents flowing through single ionic channels. The technique earned Neher and Sakmann the Nobel Prize in 1991. The discoveries they made with it were not revolutionary in Kuhn’s sense: they did not fundamentally contradict the prevailing views, and there was no conceptual paradigm shift. It had been thought since the time of Hodgkin and Huxley that membrane currents came from the flow of ions through channels in the membrane, even though this could not be directly observed at the time. But still, the ability to make observations that were not possible before led to considerable new knowledge, for example the fact that channel opening is binary and stochastic.
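To picture the kind of observation involved, here is a toy simulation (a minimal sketch with hypothetical rates and amplitude, not an analysis of real recordings): a channel modeled as a two-state Markov chain produces a current that jumps between exactly two levels at random times, i.e., binary and stochastic openings:

```python
import numpy as np

# A two-state channel (closed <-> open) as a discrete-time Markov chain.
p_open = 0.02    # probability of opening per step when closed (hypothetical)
p_close = 0.10   # probability of closing per step when open (hypothetical)

rng = np.random.default_rng(0)
state = 0        # 0 = closed, 1 = open
current = []
for _ in range(5000):
    if state == 0 and rng.random() < p_open:
        state = 1
    elif state == 1 and rng.random() < p_close:
        state = 0
    current.append(state * 2.0)   # single-channel current: 0 or 2 pA

# The trace jumps between exactly two levels at random times:
# opening is binary and stochastic.
print(sorted(set(current)))       # [0.0, 2.0]
```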

At the present time, many think that the discoverers of optogenetics are on the shortlist for a Nobel Prize in the coming years. Optogenetics is a recent technique in which channelrhodopsin, a light-gated channel, is expressed in the membrane of target neurons through genetic manipulation. Using lasers, one can then control the firing of neurons in vivo at a millisecond timescale. It allows probing the causal role of different neurons in behavior, whereas most previous techniques, which relied mostly on recordings, could only measure correlates of behavior. Although it is probably too early to see it clearly, I anticipate that the technique will trigger not only new empirical knowledge, but also conceptually new theories. Indeed, the development of theories is strongly biased by what can be experimentally tested and observed. For example, many current theories in neuroscience focus on the “neural code”, that is, on how neurons “represent” different types of information. This is an observer-centric view, which in my opinion stems from the fact that our current empirical view of the brain comes from recordings and imaging: we observe responses to stimuli. The neural coding view is a perspective that one has to adopt to explain such experimental data, rather than a hypothesis about what neurons do. But once we switch to a different type of experimental data, in which we observe the effect of neural firing rather than what it “encodes”, not only does it become unnecessary to adopt the stimulus-response perspective, but in fact one has to adopt the opposite perspective to explain the data: neurons act on postsynaptic neurons with spikes, rather than observing the firing of presynaptic neurons. This is a conceptual change of perspective, but one that is triggered by a new experimental technique. Note that the new theories still have to be developed: by itself, the change in perspective is not a theory. But the new technique is responsible for this development in a sociological and historical sense.

Another way in which I anticipate new theories will arise from empirical observations is in the understanding of dendritic function. Almost all theories in computational neuroscience, at least those that address the functional or network level, rely on a view of synaptic integration based on isopotential neurons. That is, it is assumed that the location of synapses on the dendritic tree shapes postsynaptic potentials and perhaps the total conductance, but is otherwise irrelevant to synaptic integration. This is not exactly a hypothesis, because we know it is not true, but rather a methodological assumption, an approximation. Why do we make this assumption if we know it is not true? Simply because removing it does not give us an alternative theory; it leaves us with nothing: there are so many ways in which dendritic integration might work that we do not know where to start. But this will change (and has certainly started changing in recent years) once we have a better general idea of how synapses are distributed on the dendritic tree, and perhaps of the mechanisms by which this distribution arises. Indeed, one thing at least is clear from recent experimental work: this distribution is not random at all, and it obeys different rules for excitation and inhibition. In other words: even though theory does not derive from observations, it needs a starting point, and for this, observations are critical.
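For concreteness, here is the kind of point-neuron model that embodies this assumption (a minimal sketch; the time constant, time step and synaptic events are arbitrary illustrative values): every synaptic event is simply added to a single membrane potential, with no notion of dendritic location at all:

```python
# The point-neuron (isopotential) approximation: synaptic inputs are summed
# into a single membrane potential, regardless of where on the dendritic
# tree they arrive.
dt = 0.1e-3          # time step (s)
tau = 20e-3          # membrane time constant (s)
v = 0.0              # membrane potential relative to rest (mV)

# Hypothetical synaptic events: {time index: PSP amplitude in mV}.
# Note what is missing: no dendritic location, no interaction between
# synapses -- exactly the methodological assumption discussed above.
events = {100: 2.0, 120: 2.0, 500: 5.0}

trace = []
for i in range(10000):
    v += dt / tau * (-v)          # leaky decay toward rest
    v += events.get(i, 0.0)       # each input adds linearly, location-free
    trace.append(v)

print(f"peak depolarization: {max(trace):.2f} mV")
```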