What is computational neuroscience? (XXXV) Metaphors as morphisms

What is a metaphor? Essentially, a metaphor is an analogy that doesn’t say its name. We use metaphors all the time without even noticing it, as was beautifully demonstrated by Lakoff & Johnson (1980). When I say for example, “let me cast some light on this issue”, I am using a fairly sophisticated metaphor in which I make an analogy between understanding and seeing. In that analogy, an explanation allows you to understand, in the same way as light allows you to see. You might then reply: I see what you mean, it is clearer! Chances are that, in normal conversation, we would not have noticed that we both used a metaphor.

Metaphors are everywhere in neuroscience, and in biology more generally (see these posts). For example: evolution optimizes traits (see the excellent article by Gould & Lewontin (1979) for a counterpoint); the genome is a code for the organism (see Denis Noble (2011a; 2011b)); the brain runs algorithms, or is a computer (see also Paul Cisek (1999) or Francisco Varela); neural activity is a code.

These metaphors are so ingrained in neuroscientific thinking that many object to the very idea that they are metaphorical. The objection is that “evolution is optimization” or “brain runs algorithms” is not a metaphor, it is a theory. Or, for the more dogmatic, these are not metaphors, these are facts.

Indisputable truths belong to theology, not science, so any claim that a general proposition is a fact should be seen as suspect – it is an expression of dogmatism. But a case can be made that we are actually talking about theories. In the case of neural codes or brains as computers, one might insist that the terms “code” or “computer” refer to abstract properties, not to concrete objects like a desktop computer. But this is a misunderstanding of what a metaphor, or more generally an analogy, is. When I am “casting light on this issue”, I am not referring to any particular lamp, but to an abstract concept of light which does not actually involve photons. The question is not whether words are actually some sort of photons, but whether the functional relation between light and seeing is similar to the functional relation between explanation and understanding. There is no doubt that these concepts are abstracted from actual properties of concrete situations (of light and perception), but so are the concepts of code and computer. In the metaphor, it is the abstract properties that are at stake, so the objection “it is not a metaphor, it is a theory” either misunderstands what a metaphor is (a metaphor is a theory), or perhaps really means “the theory is correct” – which is again dogmatism.

For the mathematically minded, a mathematical concept that captures this idea is “morphism”. A morphism is a map that preserves structure. For example, a group homomorphism f from X to Y satisfies f(a ∗ b) = f(a) × f(b): the operation ∗ defined on X is mapped to the operation × defined on Y (of course “metaphors are morphisms” is itself a metaphor!).
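A textbook example makes this concrete (this is standard group theory, not something specific to metaphors): the exponential function is a homomorphism from the additive group of real numbers to the multiplicative group of positive reals, because it transports sums into products:

```latex
% The exponential as a group homomorphism: the structure of the
% source group (addition) is carried over to the structure of the
% target group (multiplication).
\[
\exp : (\mathbb{R}, +) \to (\mathbb{R}_{>0}, \times),
\qquad
\exp(a + b) = \exp(a) \times \exp(b).
\]
```

Nothing in the source group is “similar” to the target group element by element; what is preserved is the relational structure, and that is exactly what a good metaphor preserves.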

For example, in the “let me cast light on this issue” metaphor, I am mapping the domain of visual perception to the domain of linguistic discourse: light -> words; visual object -> issue; seeing -> understanding. What makes the metaphor interesting is that some relations within the first domain are mapped to relations in the other domain: use of light on an object causes seeing; use of words on an issue causes understanding.
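The mapping can be written out explicitly. Here is a minimal sketch (my own toy formalization, not anything from linguistics) of the metaphor as a structure-preserving map: not only are elements mapped to elements, but the relation “causes” in the source domain is carried over to the target domain:

```python
# The "cast light on this issue" metaphor as a structure-preserving map.
# Elements of the source domain (vision) map to the target domain (discourse).
f = {"light": "words", "visual object": "issue", "seeing": "understanding"}

# A relation in the source domain: using light on an object causes seeing.
vision_relations = {("light", "visual object"): "seeing"}

# Relations that hold in the target domain.
discourse_relations = {("words", "issue"): "understanding"}

# The morphism property: applying f to a source relation must yield
# a relation that actually holds in the target domain.
for (a, b), outcome in vision_relations.items():
    assert discourse_relations[(f[a], f[b])] == f[outcome]
```

A mere mapping would stop at the first dictionary; what makes it a morphism, and the metaphor interesting, is that the relation itself survives the translation.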

Another example in science is the analogy between the heart and a pump. Each element of the pump (e.g. valve, liquid) is mapped to an element of the heart, and the analogy is relevant because functional relations between elements of the pump are mapped to corresponding relations between elements of the heart. Thus, the analogy has explanatory power. What makes a metaphor or an analogy interesting is not the fact that the two domains are similar (they are generally not), but the richness of the structure preserved by the implied morphism.

In other words, a metaphor or an analogy is a theory that takes inspiration from another domain (e.g. computer science), by mapping some structure from one domain to the other. There is nothing intrinsically wrong with this, quite the contrary. Why then is the term “metaphor” so vehemently opposed in science? Because the term implies that the theory is questionable (hence, again, dogmatism). There are ways in which understanding is like seeing, but there are also ways in which it is different.

Let us consider the metaphor “the brain implements algorithms”, which I previously discussed. Some are irritated by the very suggestion that this might even be a metaphor. The rhetorical strategy is generally two-fold: 1) by “algorithm”, we mean some abstract property, not programs written in C++; 2) the definition of “algorithm” is made general enough that the statement is trivially true, in which case it is not a metaphor since it is literally true. As argued above, (1) misunderstands how metaphor works, because a metaphor is precisely about abstract properties. And if we follow (2), then nothing can be inferred from the statement. Thus, it is only to the extent that “the brain implements algorithms” is metaphorical that it is insightful (and it is, but in my view only to a limited extent).

The key question, thus, is what we mean by “algorithm”. A natural starting point is the definition from a computer science textbook. The most widely used textbook on the subject is probably Cormen et al., Introduction to algorithms, which proposes the following definition: “a sequence of computational steps that transform the input into the output”. One would need to define what “computational” means in this context, but that is not essential for this discussion. With this definition, to say that the brain implements an algorithm means that there exists a morphism between brain activity and a sequence of computational steps. That is, intermediate values of the algorithm are mapped to properties of brain activity (e.g. firing rates measured over some time window) - this is the “encoding”. The claim is then that this mapping has the following property: a computational step linking two values is mapped to the operation of the brain’s dynamics linking the two corresponding neural measurements. I explain in the third part of my essay on neural coding why this claim cannot be correct in general, but at best approximately (one reason is that a measurement of neural activity must be made over some time window, and thus cannot be considered an initial state of a dynamical system from which the future dynamics could be deduced). But this is not the point of this discussion. The point is that this claim, that there is a morphism between an algorithm and brain activity, is not trivial, and it has explanatory value. In other words, it is interesting. This stems from the rich structure that is being mapped between the two domains.
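To make the structure of this claim explicit, here is a minimal sketch of what such a morphism would amount to. Everything in it is a toy stand-in of my own: the algorithm, the encoding and the “brain dynamics” are arbitrary illustrative functions, not a model of real neural activity:

```python
# Toy illustration of "the brain implements an algorithm" as a morphism.
# All three functions are hypothetical stand-ins, not actual neuroscience.

def algorithm_step(x):
    """One computational step of a toy algorithm (here: halving a counter)."""
    return x // 2

def encode(x):
    """Hypothetical 'neural code': map an algorithmic value to a firing rate."""
    return 10.0 * x  # e.g., a rate in Hz proportional to the value

def brain_dynamics(rate):
    """Hypothetical neural dynamics acting on the encoded quantity."""
    return 10.0 * ((rate / 10.0) // 2)

# The morphism property (the diagram commutes): encoding and then letting
# the brain dynamics run gives the same result as performing the
# computational step and then encoding.
for x in range(10):
    assert brain_dynamics(encode(x)) == encode(algorithm_step(x))
```

The non-trivial empirical content of the metaphor is precisely that such a diagram should commute; whether anything like encode and brain_dynamics exists for real brain activity is exactly what is in dispute.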

Since it is not trivial (and in fact no metaphor is), a discussion will necessarily arise about whether, and to what extent, the implied mapping does in fact preserve structure between the two domains. You could accept this state of affairs and provide empirical or theoretical arguments. Or you could dismiss the metaphorical nature entirely. But by doing so, you are also dismissing what is interesting about the metaphor, namely the possibility that there is a morphism between two domains. We could for example redefine “algorithm” in a more general way, as any computable function, even though that is not what is usually meant by the term (as the Cormen textbook shows). But in that case, the claim loses all explanatory value because no structure at all is transported between the two domains. We are just calling sensory signals “input”, motor commands “output”, and whatever happens in between “algorithm”. In mathematical terms, this is a mapping but not a morphism.

Thus, metaphors are interesting because they are morphisms between domains, which is what gives them scientific value (they are models). The problem, however, is that metaphors are typically covert, and the failure to recognize them as such leads to dogmatism. When one objects to the use of words like “code”, “algorithm”, “representation” or “optimization”, a common reaction is that the issue “is just semantic”. What this means is that it is just about arbitrary labels, and the labels themselves do not really matter. As if scientific discourse were essentially uninteresting and trivial (we just observe things and give them names). This reaction reveals a naïve view of language in which words are mere mappings between objects and arbitrary labels, when what matters is the structured concepts that words refer to: morphisms, not just mappings. This is what metaphor is about.

Is the coding metaphor relevant for the genome?

I have argued that the neural coding metaphor is highly misleading (see also similar arguments by Mark Bickhard in cognitive science). The coding metaphor is very popular in neuroscience, but there is another domain of science where it is also very popular: genetics. Is there a genetic code? Many scientists have criticized the idea of a genetic code (and of a genetic program). A detailed criticism can be found in Denis Noble’s book “The music of life” (see also Noble 2011 for a short review).

Many of the arguments I have made in my essay on neural coding readily apply to the “genetic code”. Let us start with the technical use of the metaphor. The genome is a sequence of DNA base triplets called “codons” (ACG, TGA, etc.). Each codon specifies a particular amino acid (or a stop signal), and proteins are made of amino acids. So there is a correspondence between DNA and amino acids. This seems an appropriate use of the term “code”. But even in this limited sense, it should be used with caution. The fact that a base triplet encodes an amino acid is conditional on this triplet being effectively translated into an amino acid (note that there are two stages: transcription into RNA, then translation into a protein). But in fact only a small fraction of a genome is actually translated (on the order of 1-2% in humans, although the proportion varies greatly across species); the rest is called “non-coding DNA”. So the same triplets can result in the production of an amino acid, or they can influence the transcription-translation system in various ways, for example by interacting with various molecules involved in the production of RNA and proteins, thereby regulating transcription and translation (and this is just one example).
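In this narrow technical sense, the correspondence really does behave like a lookup table, which is easy to sketch. The fragment below shows a few entries of the standard genetic code (the full table has 64 codons):

```python
# A small fragment of the standard genetic code (DNA codons, coding strand).
# The full table maps all 64 codons to 20 amino acids plus stop signals.
CODON_TABLE = {
    "ATG": "Met",  # methionine, also the usual start codon
    "ACG": "Thr",  # threonine
    "TGG": "Trp",  # tryptophan
    "GCA": "Ala",  # alanine
    "TAA": None, "TAG": None, "TGA": None,  # stop codons
}

def translate(dna):
    """Translate a DNA coding sequence into a list of amino-acid residues."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        residue = CODON_TABLE[dna[i:i + 3]]
        if residue is None:  # stop codon: translation ends here
            break
        protein.append(residue)
    return protein

print(translate("ATGACGTGGTAA"))  # ['Met', 'Thr', 'Trp']
```

But the point of the discussion above is that this lookup-table picture only holds conditionally: whether a given triplet is translated at all depends on the state of the whole cellular machinery, which is precisely what the coding metaphor hides.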

Even when DNA does encode amino acids, it does not follow that a gene encodes a protein. What might be said is that a gene encodes the primary structure of a protein, that is, its sequence of amino acids; but it does not by itself specify the shape that the protein will take (which determines its chemical properties), the various modifications that occur after translation, or the position that the protein will take in the cellular system. All of those crucial properties depend on the interaction of the gene products with the cellular system. In fact, even the primary structure of proteins is not fully determined by the gene, because of splicing.

Thus, the genome is not just a book, as suggested by the coding metaphor (some have called the genome the “book of life”); it is a chemically active substance that interacts with its chemical environment, a part of a larger cellular system.

At the other end of the genetic code metaphor, genes are said to encode phenotypes, traits of the organism: for example, the gene for blue eyes. A concept that often appears in the media is the idea of genes responsible for diseases. One hope behind the Human Genome Project was that by scrutinizing the human genome, we might be able to identify the genes responsible for every disease (at least for every genetic disease). Some diseases are monogenic, i.e., due to a single gene defect, but the most common diseases are polygenic, i.e., due to a combination of genetic factors (and generally environmental factors as well).

But even the idea of monogenic traits is misleading. There is no single gene that encodes a given trait. What has been demonstrated in some cases is that mutations in a single gene can impact a given trait. But this does not mean that the gene is responsible by itself for that trait (surprisingly, this fallacy is quite common in the scientific literature, as pointed out by Yoshihara & Yoshihara 2018). A gene by itself does nothing. It needs to be embedded into a system, namely a cell, in order to produce any phenotype. Consequently, the expressed phenotype depends on the system in which the gene is embedded, in particular the rest of the genome. There cannot be a gene for blue eyes if there are no eyes. So no gene can encode the color of eyes; this encoding is at best contextual (in the same way as “neural codes” are always contextual, as discussed in my neural coding essay).

So the concept of a “genetic code” can only be correct in a trivial sense: that the genome, as a whole, specifies the organism. This clearly limits the usefulness of the concept. But even this trivial claim is incorrect. An obvious objection is that the genome specifies the organism only in conjunction with the environment. The deeper objection is that the immediate environment of the genome is the cell itself. No entity smaller than the cell can live or reproduce. The genome is not a viable system, and as such it cannot produce an organism, nor can it reproduce. An interesting experiment is the following: the nucleus (and thus the DNA) of an animal cell is transferred into the enucleated egg of an animal of another species (Sun et al., 2005). The “genetic code” theory would predict that the egg develops into an animal of the donor species. What actually happens (this was done in related fish species) is that the egg develops into some kind of hybrid, with a development process closer to that of the recipient species. Thus, even in the most trivial sense, the genome does not encode the organism. Finally, since no entity smaller than the cell can reproduce, it follows that the genome is not the unique basis of heritability – the entire cell is (see Fields & Levin, 2018).

In summary, the genome does not encode much except for amino acids, and only for its small coding fraction. It should be conceptualized as a component that interacts with the cellular system, not as a “book” that would be read by some cellular machinery.

What is computational neuroscience? (XXVI) Is optimization a good metaphor of evolution?

Is the brain the result of optimization, and if so, what is the optimization criterion? The popular argument in favor of the optimization view goes as follows. The brain is the result of Darwinian evolution, and is therefore optimally adapted to its environment, ensuring maximum survival and reproduction rates. In this view, to understand the brain is primarily to understand what “adapted” means for a brain, that is, to identify the criterion being optimized.

Previously, I have pointed out a few difficulties with optimality arguments used in neuroscience, in particular the problem of specification (what exactly is being optimized) and the fact that evolution is a history-dependent process, unlike a global optimization procedure. An example of this history dependence is the fascinating case of mitochondria. Mitochondria are organelles found in all eukaryotic cells, which produce most of the cellular energy in the form of ATP. To date, the main view is that these organelles are a case of symbiosis: mitochondria were once prokaryotic cells that were captured and farmed. This symbiosis has been selected and conserved through evolution, but optimization does not seem to be the most appropriate metaphor in this case.

Nonetheless, the optimization metaphor can be useful when applied to circumscribed problems that a biological organism might face, for example the energy consumption of action potential propagation. We can claim for example that, everything else being equal, an efficient axon is better than an inefficient one (with the caveat that in practice, not everything else can be made equal). But when applied at the scale of an entire organism, the optimization metaphor starts facing more serious difficulties, which I will discuss now.

When considering an entire organism, or an organ like the brain, what criterion can we possibly choose? Recently, I started reading “Guitar Zero” by Gary Marcus. The author points out that learning music is difficult, and argues that the brain has evolved for language, not music. This statement is deeply problematic. What does it mean that the brain has evolved for language? Language does not preexist its speakers, so it cannot be that language was an evolutionary (“optimization”) criterion for the brain, unless we take a more religious view of evolution. Rather, evolutionary change can create opportunities, which might be beneficial for the survival of the species, but there is no predetermined optimization criterion.

Another example is the color visual system of bees (see for example Ways of coloring by Thompson et al.). A case can be made that the visual system of bees is adapted to the color of flowers they are interested in. But conversely, the color of flowers is adapted to the visual system of bees. This is a case of co-evolution, where the “optimization criterion” changes during the evolutionary process.

Thus, the optimization criterion does not preexist the optimization process, and this makes the optimization metaphor weak.

A possible objection is that there is a preexisting optimization criterion, namely survival or reproduction rate. While this might be correct, it makes the optimization metaphor not very useful. In particular, it applies equally to all living species. The point is, there are species, and they are different from each other even though the optimization criterion is the same. Not all of them have a brain. Thus, optimization does not explain why we have a brain. Moreover, species that have a brain have different brains: the nervous system of a nematode is not the same as that of a human, even though both species are equally well adapted, and have evolved for exactly the same amount of time. Therefore, the optimization view does not explain why we speak and nematodes don’t, for example.

The problem is that “fitness” is a completely contextual notion, which depends both on the environment and on the species itself. In a previous post where I discussed an “existentialist” view of evolution, I proposed the following thought experiment. Imagine a very ancient Earth with a bunch of living organisms that do not reproduce but can survive for an indefinite amount of time. By definition, they are adapted, since they exist. Then at some point, an accident occurs such that one organism starts multiplying. It multiplies until it occupies the entire Earth and resources become scarce. At this point of saturation, organisms start dying. Since the probability of dying is the same for non-reproducing and reproducing organisms, at some point there will be only reproducing organisms left. Thus, in this new environment, reproducing organisms are adapted, whereas non-reproducing ones are not. If we look at the history of evolution, we note that the world of species constantly changes. Species do not appear to converge to some optimal state, because as they evolve, the environment changes, and so does the notion of fitness.

In summary, the optimization criterion does not preexist the optimization process, unless we consider a broad existentialist criterion such as survival, but then the optimization metaphor loses its usefulness.

A comment on neural representations

I noticed by chance that my previous blog post on the metaphor of neural representations has been discussed on Reddit. It appeared that my argument, that the representation metaphor is often misused in neuroscience, was not fully understood. I will try to respond to those comments here. Here is one comment:

“The brain is a representational system because it takes stimuli from the world, transduces it to a neural signal, and then acts on it.”

What is meant here is that perception is indirect, in the sense that it is mediated by the activity of neurons. This is obviously true if perception arises from the activity of the nervous system. It is also adequate to call, say, retinal activity a representation, but only in the sense that it is a representation for an external observer. For the brain, that activity is just everything it will ever “see” of the world, so it is not a representation, it is the actual input. The problem is that the case for neural representations is generally made (as in the above quote) from the point of view of the external observer, in which case it is a trivial statement (the outside world and neural firing are different things); but then there is a semantic shift by which neural activity is assumed to form representations for the brain, which is an entirely different claim, and a much more difficult one to back up or even to make sense of.

Another comment actually illustrates this point:

“Suppose I'm looking at dots on a radar screen for things which are underwater. If I can never actually go underwater to compare the dots with the original stimuli, are the dots merely a "presentation" rather than a "representation"? I don't think so...”

Well actually: if all you ever had the chance to see in your life were those dots, then indeed they would not be representations for you, they would just be dots on the screen. They become representations once you know they can stand for submarines or whales.

There is another sense of representations that is a bit less trivial, and which was posted as a comment to my post:

“Abilities like speech perception would be impossible without representation, as each instantiation of a word is unique (noisy).”

What is meant here is that representations are needed for the formation of perceptual categories. But here the term “representation” is inadequate. A sculpture of a man is not a category of man; it is just a piece of stone that looks like a man. What is meant here is rather abstraction, not representation.

Metaphors in neuroscience (IV) - Plasticity

The next metaphor I examine is “the brain is plastic”. In particular, synapses are plastic: they change with activity. But this is not the same thing as saying that synapses are dynamic. “Synapses are plastic” means that synapses are objects that can change shape while keeping the same substance; specifically, they can be manipulated into different shapes. Plasticity is a possibility for change that is 1) limited, in that only the shape and not the substance is changed, 2) persistent, 3) reversible, and 4) mediated by an external actor. For example, cell death is a change, but it is not plasticity; developmental changes are also not considered plasticity, even though they can be activity-dependent. These two examples are irreversible changes and therefore not cases of plasticity. Internal changes entirely mediated by intrinsic events would not normally be called plasticity either. Nor would transient changes: for example, a change in spike threshold after firing is called adaptation or accommodation, not plasticity.

This is quite clearly a metaphor, and it carries a particular view of how neural structures change. For example, part of what we describe as synaptic plasticity actually corresponds to the elimination of synapses or of receptors (synaptic pruning), and therefore might be better described by the sculpting metaphor. The metaphor also hides the fact that the substance that makes up all those structures is continually renewed (protein turnover), which is quite different from a plastic object; in fact, quite different from any object. The persistence of shape (e.g. of a synapse) is mediated by active processes (which involve gene expression), as opposed to the passive persistence of a plastic object. Changes of shape then involve interaction with those processes, rather than direct manipulation.

Metaphors in neuroscience (III) - Neural representations

A commonplace in neuroscience is to say that the brain represents the world. In particular, sensory systems are thought to represent images or sounds. A representation is an object, for example a picture or a sculpture, that is meant to be recognized as another object. Both the original object and its representation are to be presented to the same perceptual system and recognized by that system as perceptually similar. The idea that the perceptual system itself might represent an external object seems quite peculiar: it seems to entail that there is a second perceptual system that sees both the external object and the representation made by the first perceptual system. The metaphor cannot apply in this way. But then what do people mean when they say that the brain represents the world? A clue might be provided by this quote from David Marr's book “Vision” (1982):

“If we are capable of knowing what is where in the world, our brains must somehow be capable of representing this information.”

Here “knowing” cannot simply mean acting as a function of the external world, because it has been well argued that representations are not necessary for that – simple control systems can be sufficient (see e.g. Rodney Brooks and Braitenberg's vehicles). Thus “knowing” must be meant in a stronger sense: the ability to manipulate concepts and to relate them to other concepts. If it is assumed that anything mental is produced by the brain, then somehow those concepts must be grounded in the activity of neurons. In what sense does that activity form a “representation” of external objects? For this metaphor to make sense, there must be a system that sees both the original object and its mental representation, where the representation is an object that can be manipulated. The possibility of mental manipulation entails working memory. So for the brain, representing the world means producing persistent neural activity, structured in such a way that it can be compared with the direct sensory flow coming from the external world.

What is surprising is that Marr and many others do not generally use the representational metaphor in this proper way. Instead, the activity of sensory systems, for example the primary visual cortex, is seen as representing the external world, in the same way as a photograph represents the external world. But unless it is meant that the rest of the brain sees and compares retinal activity and cortical activity, cortical activity is a presentation, not a representation. It might be a representation for an external observer (the one that sees both the world and the photographs), but not for the brain. Thus the metaphorical idea that mental/neural representations mediate perception is somewhat self-contradictory, but unfortunately it is one of the dominant metaphors in neuroscience.

P.S.: see this later comment

Metaphors in neuroscience (II) - Neural firing

Neurons communicate by means of short electrical impulses. We say that neurons fire, spike, or discharge. We speak of spikes, impulses, discharges or action potentials. What concepts do these metaphors convey?

“Spike” seems to refer to the shape of the action potential when looked at on a voltage trace, i.e., an abrupt rise and fall of the potential. The action potential is quite a different notion: it is a particular form of potential that allows some form of action. That is, the term “action potential” conveys the notion that those potentials, unlike other ones (subthreshold potentials), have an effect on other neurons. This is a notion that is strikingly not representational.

Firing, impulse and discharge add another aspect: an action potential releases energy. The metaphor is rather accurate, as energy is stored in electrochemical gradients across the membrane and the opening of ionic channels releases some of that energy. Firing also conveys a notion of movement: the energy is targeted at some particular place, the axonal terminals. The metaphor is only partially accurate, because when firing a gun, energy is only released at firing time, and then the bullet moves to its target. But in neurons, propagation is active and energy is released all along the axon. Thus a better metaphor would be that the neuron ignites, with the axon progressively burning. On the other hand, in myelinated axons, energy is released at discrete locations (the nodes of Ranvier), so the neuron could be seen as firing in sequence towards the next node: between two nodes, there is a movement that does not use additional energy, as in firing a bullet (dominoes could also be an adequate metaphor). So perhaps a myelinated axon fires (repeatedly), but an unmyelinated axon ignites!

“Discharge” is an interesting term because it relates to a former theory of the action potential. The metaphor suggests that the membrane is an electrically charged capacitor, which gets discharged during the action potential. This corresponds to Bernstein's theory (from the beginning of the twentieth century), according to which the negative resting potential is due to a gradient of potassium concentration across the membrane, and the action potential corresponds to a non-selective increase in membrane permeability, resulting in a decrease of the membrane potential (in absolute value). But in 1939, Hodgkin and Huxley made the first intracellular recording of an action potential in an animal, and they found that the membrane potential did not go to 0 mV but actually overshot it quite substantially. So the discharge metaphor entails a particular model, but one that is now outmoded.

Finally, related to the concept of firing is the notion of threshold. When the membrane potential reaches a threshold, the neuron fires. A threshold is a spatial delimitation between two rooms, or between the outside and the inside of a house. It conveys the notion of a qualitative change. Before threshold, you are outside; after the threshold, you are inside. So the threshold metaphor entails the all-or-none law of neural activity: there is a spike or there is no spike.

In the integration metaphor, inputs and outputs are seen as objects (things that can be manipulated). Specifically, neural output (the membrane potential) is a container: inputs are integrated into the membrane potential. In contrast, in the firing metaphor (and related metaphors), neural outputs are seen not as objects but as discrete, timed actions on other neurons (the action potential), which release energy. Thus the integration metaphor and the firing metaphor convey somewhat different views of neural function. Perhaps speculatively, I would suggest that the dissonance between these two metaphors is the deep source of the firing rate vs. spike timing debate. In the integration metaphor, the neuron is a container, and what matters for a container is what it contains, i.e. the number of inputs; when exactly those inputs come into the container is relatively unimportant. The integration metaphor conveys a representational view of the brain and is consistent with the rate-based view. In the firing metaphor, what is emphasized is the fact that neurons spend energy to act on each other. Actions are events, and therefore time is possibly important. This view is not representational, but rather interactional or dynamicist.

An important question is how empirically accurate these metaphors are, especially when some of them are mutually inconsistent. I have discussed this question indirectly in my series on the firing rate vs. spike timing debate. I will simply point out that the firing metaphor is fairly accurate, as briefly discussed above, especially if firing is replaced by ignition: there is a release of energy that propagates and acts on other neurons, and it occurs discretely, when some condition is met. The integration metaphor, on the other hand, is rather loose. It cannot be accurate without substantial qualifications. The main effect of a presynaptic spike is generally short-lived, so an input could be said to be integrated, but only with the qualification that it leaks out quickly. The effect of several input spikes on the membrane potential also depends on their relative times of arrival, and this fact does not fit the container metaphor very well.

Metaphors in neuroscience (I) – Neural integration

I have just read Lakoff & Johnson's “Metaphors we live by” (see this summary), a brilliant classic about the use of metaphor in both language and the way we think. The thesis is that we constantly use metaphors when we speak, especially about abstract concepts, and that this actually structures the way we think about those concepts, even though we normally don't even notice that we speak in metaphors. Typically, we tend to speak of abstract concepts using metaphors that are more concrete. For example, a theory is a building (it has foundations, it requires support, and it can fall apart); understanding is seeing (a clear argument; a point of view; an eye opener). Time is an abstract concept that is very difficult to speak of without using metaphors, for example time as space and duration as distance (an hour long, a point in time, etc.).

Metaphors are also widespread in science. Because they structure our thought and often go unnoticed, it is quite interesting to identify them: what ways of thinking do scientific metaphors convey, and what other ways are possible? For example, neurons code and neurons compute. These are two quite distinct metaphors. In the coding metaphor, the neuron hears about the world and speaks to other neurons, in the language of electrical impulses. In the computing metaphor, the neuron is a sort of factory that processes inputs. But you could also say that neurons push each other, where axonal terminals are hands and spikes are mechanical energy. This is just as true as the metaphor that neurons speak to each other (the coding metaphor). Each of these metaphors conveys a different way of seeing the activity of neurons (bonus: spot the metaphors in this sentence!).

In this series, I want to explore some of the metaphors in neuroscience. I start with two concepts: “integration” (this post) and “firing” (next post). Those two words are found in the integrate-and-fire model, but clearly they appear throughout the neurophysiological literature. Neuron function is generally described as a two-stage process: synaptic integration followed by firing.

The neuron integrates its synaptic inputs: the neuron is a container, and the inputs come into the container. The inputs are objects, they accumulate within the neuron, so after integration they are in the neuron – possibly in a “processed” form (neuron = factory). So in integration there is a concept of summation: inputs add up. Of course there can be qualifications: the integration could be “nonlinear”. But the very fact that “nonlinear” is a qualification means that the basic concept is that of summing up inputs that come in. What is important to realize here is that it is a metaphor, that is, you could imagine other metaphors to think about the propagation of electricity in neurons. For example, you could think of neurotransmitters as pushing dominoes placed all along the neuron; specific things could happen at branch points depending on when the dominoes get pushed on each branch. In both metaphors we are saying that the output depends on the inputs, but each metaphor emphasizes or hides certain aspects of that relation. For example, in the first metaphor, time is deemphasized and is replaced by counting. In the second metaphor, the notion of “summing up” doesn't even make sense because activity is transient.

Importantly, the two metaphors convey very different models of neural function. The integration metaphor entails a model in which the neuron's membrane is gradually charged up and down by synaptic currents. It deemphasizes space by hiding the notion of electrical propagation; it deemphasizes time by seeing inputs as objects (which can be counted) rather than activity (which is transient). In terms of mathematical models, the integration metaphor corresponds to the “perfect integrator” (i.e., a capacitor plus input currents). Of course there are variations around that model, but the prototype is the integrator. The domino model cannot be understood as a variation of an integrator. The domino metaphor views neural activity as intrinsically transient, and there is a clear relation between the timing of inputs and the timing of outputs. A coincidence detection model, in which an output spike is generated when synchronous inputs arrive at the cell, might fit the domino metaphor better than the integration metaphor.
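To make the contrast concrete, here is a minimal simulation sketch (my own toy illustration; all parameter values are arbitrary). A perfect integrator only counts its inputs, whereas a leaky neuron is sensitive to their relative timing:

```python
import numpy as np

# Toy contrast between the two metaphors (arbitrary parameters).
# Perfect integrator: V accumulates inputs; only their number matters.
# Leaky neuron: V decays between inputs; their relative timing matters.

dt = 0.1          # time step (ms)
tau = 5.0         # membrane time constant of the leaky neuron (ms)
w = 0.55          # synaptic weight (arbitrary units)
threshold = 1.0   # spike threshold (arbitrary units)

def count_spikes(input_times, leaky):
    V, spikes = 0.0, 0
    for t in np.arange(0.0, 100.0, dt):
        if leaky:
            V -= V * dt / tau            # leak: past inputs fade away
        if any(abs(t - ti) < dt / 2 for ti in input_times):
            V += w                       # an input adds a fixed "charge"
        if V >= threshold:               # all-or-none output
            spikes += 1
            V = 0.0                      # reset after the spike
    return spikes

synchronous = [50.0, 50.1]   # two inputs arriving almost together
dispersed   = [30.0, 70.0]   # the same two inputs, far apart in time

for times in (synchronous, dispersed):
    print("integrator:", count_spikes(times, leaky=False),
          "leaky:", count_spikes(times, leaky=True))
```

The perfect integrator fires in both cases, because it merely counts two inputs; the leaky neuron fires only when the inputs coincide. As soon as there is a leak, timing matters, and the container metaphor starts to break down.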

Thus, to talk about synaptic integration, or to say that the neuron integrates inputs, is not a neutral statement. It entails a particular way of seeing neural activity, specifically one that deemphasizes the notion of activity and views inputs and outputs as objects. This metaphor is consistent with other dominant metaphors in neuroscience, in particular the notion of representations, which sees neural activity as objects that can be manipulated (specifically, as pictures). Thus, the integration metaphor refers to the more general metaphor neural activity = object. A weakness of this dominant metaphor is that there is a strong dissonance between the notion of activity, which is transient, and the notion of object, which is persistent. This dissonance appears with the next metaphor: neurons fire.