What is a metaphor? Essentially, a metaphor is an analogy that doesn’t say its name. We use metaphors all the time without even noticing it, as was beautifully demonstrated by Lakoff & Johnson (1980). When I say, for example, “let me cast some light on this issue”, I am using a fairly sophisticated metaphor in which I make an analogy between understanding and seeing. In that analogy, an explanation allows you to understand, in the same way that light allows you to see. You might then reply: I see what you mean, it is clearer! Chances are that, in normal conversation, neither of us would have noticed that we both used a metaphor.
Metaphors are everywhere in neuroscience, and in biology more generally (see these posts). For example: evolution optimizes traits (see the excellent article by Gould & Lewontin (1979) for a counterpoint); the genome is a code for the organism (see Denis Noble (2011a; 2011b)); the brain runs algorithms, or is a computer (see also Paul Cisek (1999) or Francisco Varela); neural activity is a code.
These metaphors are so ingrained in neuroscientific thinking that many object to the very idea that they are metaphorical. The objection is that “evolution is optimization” or “the brain runs algorithms” is not a metaphor but a theory. Or, for the more dogmatic: these are not metaphors, these are facts.
Indisputable truths belong to theology, not science, so any claim that a general proposition is a fact should be seen as suspect – it is an expression of dogmatism. But there is a case to be made that we are actually talking about theories. In the case of neural codes or brains as computers, one might insist that the terms “code” or “computer” refer to abstract properties, not to concrete objects like a desktop computer. But this is a misunderstanding of what a metaphor, or more generally an analogy, is. When I am “casting light on this issue”, I am not referring to any particular lamp, but to an abstract concept of light that does not actually involve photons. The question is not whether words are actually some sort of photons, but whether the functional relation between light and seeing is similar to the functional relation between explanation and understanding. There is no doubt that these concepts are abstracted from actual properties of concrete situations (of light and perception), but so are the concepts of code and computer. In the metaphor, it is the abstract properties that are at stake, so the objection “it is not a metaphor, it is a theory” either misunderstands what a metaphor is (a metaphor is a theory), or perhaps really means “the theory is correct” – again dogmatism.
For the mathematically minded, a mathematical concept that captures this idea is “morphism”. A morphism is a map that preserves structure. For example, a group homomorphism f from X to Y is such that f(a * b) = f(a) × f(b): the operation * defined on X is mapped to the operation × defined on Y (of course “metaphors are morphisms” is a metaphor!).
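A standard illustration is the exponential function, which maps the additive group of real numbers to the multiplicative group of positive reals: exp(a + b) = exp(a) × exp(b). Addition in the first domain is carried onto multiplication in the second; what the map preserves is the structure, not the objects themselves.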
For example, in the “let me cast light on this issue” metaphor, I am mapping the domain of visual perception to the domain of linguistic discourse: light -> words; visual object -> issue; seeing -> understanding. What makes the metaphor interesting is that some relations within the first domain are mapped to relations in the other domain: use of light on an object causes seeing; use of words on an issue causes understanding.
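For the programmatically minded, this structure-preserving mapping can be sketched in a few lines of toy code (purely illustrative; the domains and relations are of course caricatures):

```python
# Toy sketch of a metaphor as a structure-preserving map (illustrative only).
# Objects of the source domain (vision) are mapped to objects of the target
# domain (discourse), and relations between objects are carried along.

mapping = {
    "light": "words",
    "visual object": "issue",
    "seeing": "understanding",
}

# A relation in the source domain: applying light to a visual object causes seeing.
source_relation = ("light", "visual object", "seeing")

# The metaphor transports it to the target domain:
# applying words to an issue causes understanding.
target_relation = tuple(mapping[term] for term in source_relation)

print(target_relation)  # ('words', 'issue', 'understanding')
```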
Another example in science is the analogy between the heart and a pump. Each element of the pump (e.g. valve, liquid) is mapped to an element of the heart, and the analogy is relevant because functional relations between elements of the pump are mapped to corresponding relations between elements of the heart. Thus, the analogy has explanatory power. What makes a metaphor or an analogy interesting is not the fact that the two domains are similar (they are generally not), but the richness of the structure preserved by the implied morphism.
In other words, a metaphor or an analogy is a theory that takes inspiration from another domain (e.g. computer science), by mapping some structure from one domain to the other. There is nothing intrinsically wrong with this; on the contrary. Why then is the term “metaphor” so vehemently opposed in science? Because the term implies that the theory is questionable (hence, again, dogmatism). There are ways in which understanding is like seeing, but there are also ways in which it is different.
Let us consider the metaphor “the brain implements algorithms”, which I previously discussed. Some are irritated by the very suggestion that this might even be a metaphor. The rhetorical strategy is generally two-fold: 1) by “algorithm”, we mean some abstract property, not programs written in C++; 2) the definition of “algorithm” is made general enough that the statement is trivially true, in which case it is not a metaphor since it is literally true. As argued above, (1) is a misunderstanding of how metaphor works, because metaphor is precisely about abstract properties. And if we follow (2), then nothing can be inferred from the statement. Thus, it is only to the extent that “the brain implements algorithms” is metaphorical that it is insightful (and it is, but in my view only to a limited extent).
The key question, thus, is what we mean by “algorithm”. A natural starting point is to take the definition from a computer science textbook. The most widely used textbook on the subject is probably Cormen et al., Introduction to Algorithms. It proposes the following definition: “a sequence of computational steps that transform the input into the output”. One would need to define what “computational” means in this context, but that is not key to this discussion. With this definition, to say that the brain implements an algorithm means that there exists a morphism between brain activity and a sequence of computational steps. That is, intermediate values of the algorithm are mapped to properties of brain activity (e.g. firing rates measured over some time window); this is the “encoding”. The claim is then that this mapping has the following property: a computational step linking two values is mapped to the brain dynamics linking the two corresponding neural measurements. I explain in the third part of my essay on neural coding why this claim cannot be correct, at least not in general and at best approximately (one reason is that a measurement of neural activity must be made over some time window, and thus cannot be considered as the initial state of a dynamical system from which the future dynamics could be deduced). But this is not the point of this discussion. The point is that this claim, that there is a morphism between an algorithm and brain activity, is not trivial and has explanatory value. In other words, it is interesting. This stems from the rich structure that is being mapped between the two domains.
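As a minimal sketch of what this morphism claim amounts to (with entirely hypothetical toy functions, not a model of the brain): encoding an intermediate value and then applying the brain’s dynamics should give the same result as applying the computational step and then encoding.

```python
# Minimal sketch of the morphism claim (illustrative only; hypothetical toy functions).
# "step" is one computational step of an algorithm, "encode" maps an intermediate
# value to a neural measurement (e.g. a firing rate), and "dynamics" stands for
# the evolution of brain activity. The claim is that the square commutes:
# encode(step(x)) == dynamics(encode(x)).

def step(x):
    """One computational step of a toy algorithm (here: doubling the value)."""
    return 2 * x

def encode(x):
    """Hypothetical neural encoding: a baseline firing rate plus the value."""
    return 10.0 + x

def dynamics(r):
    """Hypothetical brain dynamics, chosen so that the square commutes in this toy case."""
    return 2 * r - 10.0

# Structure preservation: the two paths around the square agree.
for x in range(5):
    assert encode(step(x)) == dynamics(encode(x))
```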
Since it is not trivial (as with any metaphor, in fact), a discussion will necessarily arise about whether, and to what extent, the implied mapping does in fact preserve structure between the two domains. You could accept this state of affairs and provide empirical or theoretical arguments. Or you could dismiss the metaphorical nature entirely. But by doing so, you are also dismissing what is interesting about the metaphor, namely the fact that there might be a morphism between two domains. We could for example redefine “algorithm” in a more general way, as a computable function, even though that is not what the term usually means (as the Cormen et al. textbook shows). But in that case, the claim loses all explanatory value, because no structure at all is transported between the two domains. We are just calling sensory signals “input”, motor commands “output”, and whatever happens in between “algorithm”. In mathematical terms, this is a mapping but not a morphism.
Thus, metaphors are interesting because they are morphisms between domains, which is what gives them scientific value (they are models). The problem, however, is that metaphors are typically covert, and failure to recognize them as such leads to dogmatism. When one objects to the use of words like “code”, “algorithm”, “representation” or “optimization”, a common reaction is that the issue “is just semantic”. What this means is that it is just about arbitrary labels, and the labels themselves do not really matter. As if scientific discourse were essentially uninteresting and trivial (we just observe things and give them names). This reaction reveals a naïve view of language in which words are mere mappings (between objects and arbitrary labels), when what matters is the structured concepts that words refer to: morphisms, not just mappings. This is what metaphor is about.