Metaphors in neuroscience (III) - Neural representations

A commonplace in neuroscience is to say that the brain represents the world. In particular, sensory systems are thought to represent images or sounds. A representation is an object, for example a picture or a sculpture, that is meant to be recognized as another object. Both the original object and its representation are to be presented to the same perceptual system and recognized by that system as perceptually similar. The idea that the perceptual system itself might represent an external object seems quite peculiar: it seems to entail that there is a second perceptual system that sees both the external object and the representation made by the first perceptual system. The metaphor cannot apply in this way. But then what do people mean when they say that the brain represents the world? A clue might be provided by this quote from David Marr's book “Vision” (1982):

“If we are capable of knowing what is where in the world, our brains must somehow be capable of representing this information.”

Here “knowing” cannot simply mean acting as a function of the external world, because it has been well argued that representations are not necessary for that – simple control systems can be sufficient (see e.g. Rodney Brooks and Braitenberg's vehicles). Thus “knowing” must be meant in a stronger sense: the ability to manipulate concepts and relate them to other concepts. If it is assumed that anything mental is produced by the brain, then somehow those concepts must be grounded in the activity of neurons. In what sense does that activity form a “representation” of external objects? For this metaphor to make sense, there must be a system that sees both the original object and its mental representation, where the representation is an object that can be manipulated. The possibility of mental manipulation entails working memory. So for the brain, representing the world means producing persistent neural activity, structured in such a way that it can be compared with the direct sensory flow coming from the external world.

What is surprising is that Marr and many others do not generally use the representational metaphor in this proper way. Instead, the activity of sensory systems, for example the primary visual cortex, is seen as representing the external world, in the same way as a photograph represents the external world. But unless it is meant that the rest of the brain sees and compares retinal activity and cortical activity, cortical activity is a presentation, not a representation. It might be a representation for an external observer (the one that sees both the world and the photographs), but not for the brain. Thus the metaphorical idea that mental/neural representations mediate perception is somewhat self-contradictory, but unfortunately it is one of the dominant metaphors in neuroscience.

P.S.: see this later comment

Metaphors in neuroscience (II) - Neural firing

Neurons communicate by means of short electrical impulses. We say that neurons fire, spike, or discharge. We speak of spikes, impulses, discharges or action potentials. What concepts do these metaphors convey?

“Spike” seems to refer to the shape of the action potential when looked at on a voltage trace, i.e., an abrupt rise and fall of the potential. The action potential is quite a different notion: it is a particular form of potential that allows some form of action. That is, the term “action potential” conveys the notion that those potentials, unlike other ones (subthreshold potentials), have an effect on other neurons. This is a notion that is strikingly not representational.

Firing, impulse and discharge add another aspect: an action potential releases energy. The metaphor is rather accurate, as energy is stored in electrochemical gradients across the membrane and the opening of ionic channels releases some of that energy. Firing also conveys a notion of movement: the energy is targeted at some particular place, the axonal terminals. The metaphor is only partially accurate, because when firing a gun, energy is only released at firing time, and then the bullet moves to its target. But in neurons, propagation is active and energy is released all along the axon. Thus a better metaphor would be that the neuron ignites, where the axon progressively burns. On the other hand, in myelinated axons, energy is released at discrete locations (Ranvier nodes), so the neuron could be seen as firing in sequence towards the next node: between two nodes, there is a movement that does not use additional energy, as in firing a bullet (dominoes could also be an adequate metaphor). So perhaps a myelinated axon fires (repeatedly), but an unmyelinated axon ignites!

“Discharge” is an interesting term because it relates to an earlier theory of the action potential. The metaphor suggests that the membrane is an electrically charged capacitor, which gets discharged during the action potential. This seems to correspond to Bernstein's theory (beginning of the twentieth century), according to which the negative resting potential is due to a gradient of potassium concentration across the membrane, and the action potential corresponds to a non-selective increase in membrane permeability, resulting in a decrease of the membrane potential (in absolute value). But in 1939, Hodgkin and Huxley made the first intracellular recording of an action potential in an animal and found that the membrane potential did not simply go to 0 mV but actually overshot it quite substantially. So the discharge metaphor entails a particular model, but one that is now outmoded.
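To see why that recording was fatal to the discharge model, one can compare the equilibrium potentials involved. Here is a minimal sketch using the Nernst and Goldman equations, with standard textbook squid-axon concentrations (the numbers are mine, for illustration, not from the original papers):

```python
import numpy as np

R, T, F = 8.314, 293.0, 96485.0   # gas constant, temperature (K), Faraday

def nernst(c_out, c_in, z=1):
    """Equilibrium potential (in volts) for an ion of valence z."""
    return (R * T) / (z * F) * np.log(c_out / c_in)

# Approximate squid-axon concentrations in mM (textbook values)
K_out, K_in = 20.0, 400.0
Na_out, Na_in = 440.0, 50.0

print(1000 * nernst(K_out, K_in))    # ~ -76 mV: close to the resting potential
print(1000 * nernst(Na_out, Na_in))  # ~ +55 mV: enough for the observed overshoot

# Bernstein's non-selective "discharge" would pull the potential toward the
# Goldman value with equal permeabilities for K+ and Na+, i.e. about 0 mV:
V = 1000 * (R * T / F) * np.log((K_out + Na_out) / (K_in + Na_in))
print(V)   # ~ +0.6 mV: far from the measured overshoot of several tens of mV
```

The overshoot thus requires a selective increase in sodium permeability, not a discharge, which is exactly what the later Hodgkin-Huxley work established.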

Finally, related to the concept of firing is the notion of threshold. When the membrane potential reaches a threshold, the neuron fires. A threshold is a spatial delimitation between two rooms, or between the outside and the inside of a house. It conveys the notion of a qualitative change. Before threshold, you are outside; after the threshold, you are inside. So the threshold metaphor entails the all-or-none law of neural activity: there is a spike or there is no spike.

In the integration metaphor, inputs and outputs are seen as objects (things that can be manipulated). Specifically, neural output (membrane potential) is a container (inputs are integrated into the membrane potential). In contrast, in the firing metaphor (and related metaphors), neural outputs are seen not as objects but as discrete, timed actions on other neurons (the action potential), which release energy. Thus the integration metaphor and the firing metaphor convey somewhat different views of neural function. Perhaps speculatively, I would suggest that the dissonance between these two metaphors is the deep source of the firing rate vs. spike timing debate. In the integration metaphor, the neuron is a container, and what matters for a container is what it contains, i.e., the number of inputs. When exactly those inputs come into the container is relatively unimportant. The integration metaphor conveys a representational view of the brain and is consistent with the rate-based view. In the firing metaphor, what is emphasized is the fact that neurons spend energy to act on each other. Actions are events, and therefore time is possibly important. This view is not representational but rather interactional or dynamicist.

An important question is how empirically accurate these metaphors are, especially when some are inconsistent. I have discussed this question indirectly in my series on the firing rate vs. spike timing debate. I will simply point out that the firing metaphor is fairly accurate, as briefly discussed above, at least if firing is replaced by ignition. There is a release of energy that propagates and acts on other neurons, which occurs discretely when some condition is met. The integration metaphor, on the other hand, is rather loose. It cannot be accurate without substantial qualifications. The main effect of a presynaptic spike is generally short-lived, so an input could be said to be integrated, but only with the qualification that it gets out quickly. The effect of several input spikes on the membrane potential also depends on their relative time of arrival, and this fact does not fit the container metaphor very well.
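To make the “gets out quickly” qualification concrete, here is a minimal sketch of a leaky membrane receiving a single input (all values, such as the 10 ms time constant, are assumptions for illustration):

```python
import numpy as np

tau, dt = 10.0, 0.1          # membrane time constant and time step (ms)
t = np.arange(0, 100, dt)
v = np.zeros_like(t)         # membrane potential relative to rest (mV)
w, spike_time = 2.0, 10.0    # EPSP amplitude (mV) and input arrival time (ms)

for i in range(1, len(t)):
    v[i] = v[i-1] - v[i-1] / tau * dt    # leak: the "container" empties
    if abs(t[i] - spike_time) < dt / 2:
        v[i] += w                        # the input "comes in"

print(v[int(20.0 / dt)])   # ~0.73 mV: most of the input is gone after 10 ms
print(v[int(50.0 / dt)])   # ~0.04 mV: after 40 ms, almost nothing remains
```

The “integrated” input has essentially leaked away within a few tens of milliseconds, which is why the container metaphor needs so much qualification.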

Metaphors in neuroscience (I) – Neural integration

I have just read Lakoff & Johnson's “Metaphors we live by” (see this summary), a classic, brilliant book about the use of metaphor in both language and the way we think. The thesis is that we constantly use metaphors when we speak, especially about abstract concepts, and that this actually structures the way we think about those concepts, even though we normally don't even notice we speak in metaphors. Typically, we tend to speak of abstract concepts by using metaphors that are more concrete. For example, a theory is a building (it has foundations, it requires support and it can fall apart); understanding is seeing (a clear argument; a point of view; an eye opener). Time is an abstract concept that is very difficult to speak of without using metaphors, for example time as space and duration as distance (an hour long, a point in time, etc.).

Metaphors are also widespread in science. Because they structure our thought and often go unnoticed, it is quite interesting to identify them: what ways of thinking do scientific metaphors convey, and what other ways are possible? For example, neurons code and neurons compute. These are two quite distinct metaphors. In the coding metaphor, the neuron hears about the world and speaks to other neurons, in the language of electrical impulses. In the computing metaphor, the neuron is a sort of factory that processes inputs. But you could also say that neurons push each other, where axonal terminals are hands and spikes are mechanical energy. This is just as true as the metaphor that neurons speak to each other (coding metaphor). Each of these metaphors conveys a different way of seeing the activity of neurons (bonus: check the metaphors in this sentence!).

In this series, I want to explore some of the metaphors in neuroscience. I start with two concepts: “integration” (this post) and “firing” (next post). Those two words are found in the integrate-and-fire model, but clearly they appear throughout the neurophysiological literature. Neuron function is generally described as a two-stage process: synaptic integration followed by firing.

The neuron integrates its synaptic inputs: the neuron is a container, and the inputs come into the container. The inputs are objects, they accumulate within the neuron, so after integration they are in the neuron – possibly in a “processed” form (neuron = factory). So in integration there is a concept of summation: inputs add up. Of course there can be qualifications: the integration could be “nonlinear”. But the very fact that “nonlinear” is a qualification means that the basic concept is that of summing up inputs that come in. What is important to realize here is that it is a metaphor, that is, you could imagine other metaphors to think about the propagation of electricity in neurons. For example, you could think of neurotransmitters as pushing dominoes placed all along the neuron; specific things could happen at branch points depending on when the dominoes get pushed on each branch. In both metaphors we are saying that the output depends on the inputs, but each metaphor emphasizes or hides certain aspects of that relation. For example, in the first metaphor, time is deemphasized and is replaced by counting. In the second metaphor, the notion of “summing up” doesn't even make sense because activity is transient.

Importantly, the two metaphors convey very different models of neural function. The integration metaphor entails a model in which the neuron's membrane is gradually charged up and down by synaptic currents. It deemphasizes space by hiding the notion of electrical propagation; it deemphasizes time by seeing inputs as objects (which can be counted) rather than activity (which is transient). In terms of mathematical models, the integration metaphor corresponds to the “perfect integrator” (i.e., a capacitor plus input currents). Of course there are variations around that model, but the prototype is the integrator. The domino model cannot be understood as a variation of an integrator. The domino metaphor views neural activity as intrinsically transient, and there is a clear relation between the timing of inputs and the timing of outputs. A coincidence detection model, in which an output spike is generated when synchronous inputs arrive at the cell, might fit the domino metaphor better than the integration metaphor.
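To illustrate the contrast between the two prototypes, here is a minimal sketch (all parameter values are illustrative assumptions) of a leaky integrate-and-fire neuron receiving the same number of inputs either spread out in time or grouped in synchronous volleys; a perfect integrator, which only counts inputs, would respond identically in both cases:

```python
import numpy as np

rng = np.random.default_rng(0)
tau, dt, T = 5.0, 0.1, 200.0        # time constant, step, duration (ms)
w, threshold = 1.0, 15.0            # EPSP size and spike threshold (mV)
n_t = int(T / dt)

def count_spikes(input_times):
    """Leaky integrate-and-fire neuron driven by presynaptic spike times."""
    drive = np.zeros(n_t)
    for s in input_times:
        drive[int(s / dt)] += w
    v, spikes = 0.0, 0
    for i in range(n_t):
        v += -v / tau * dt + drive[i]
        if v > threshold:
            spikes += 1
            v = 0.0                  # reset after an output spike
    return spikes

asynchronous = rng.uniform(0, T, 380)                # 380 inputs, spread out
synchronous = np.repeat(np.arange(10, T, 10.0), 20)  # 380 inputs, in volleys

print(count_spikes(asynchronous))  # few or no spikes: inputs leak away
print(count_spikes(synchronous))   # one spike per volley: coincidences count
```

With a short time constant, the neuron behaves as a coincidence detector: the count of inputs is the same in both conditions, but the timing makes all the difference.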

Thus, to talk about synaptic integration or to say that the neuron integrates inputs is not a neutral statement. It entails a particular way of seeing neural activity, specifically a way that deemphasizes the notion of activity and views inputs and outputs as objects. This metaphor is consistent with other dominant metaphors in neuroscience, in particular the notion of representations, which sees neural activity as objects that can be manipulated (specifically, as pictures). Thus, the integration metaphor refers to the more general metaphor neural activity = object. A weakness of this dominant metaphor is that there is a strong dissonance between the notion of activity, which is transient, and the notion of object, which is persistent. This dissonance appears with the next metaphor: neurons fire.

Connectomics bashing (III) - Connectomics vs genomics

In my first post, I argued that 1) to understand a complex organized system, one needs to understand the organization rather than have a detailed description of the structure, and 2) an incomplete or imprecise measurement of structure, as is necessarily the case with any physical measurement, does not in general convey the functionality of the system. In his reply, Bernhard Englitz rightly argued that the knowledge of the connectome is nonetheless useful. I do not disagree with that claim, and I do believe that connectomics is certainly a great neuroanatomical tool. So the question is: what can the knowledge of the connectome offer, and what can it not?

First of all, a quick comment. Bernhard complained that connectomics is often portrayed as an extreme claim. I was only referring to published texts by supporters of connectomics. One is Seung's book, Connectome. Arguably, this is a popular science book so I agree that the claims in the book may be overstated for the sake of communication. But there are also articles in peer-reviewed journals, which I think make quite similar claims, for example “Structural neurobiology: missing link to a mechanistic understanding of neural computation”, Nature Rev Neuro (2012), by Denk, Briggman and Helmstaedter. The basic claim in that text and in Seung's book is that there is a strong relationship between structure (specifically, connectome) and function, and implicitly that connectionism provides an adequate account of neural function. For example, both the paper above and Seung's book envisage the reading of memories from the connectome as a distinct possibility (I wouldn't say that they make a strong claim, but they certainly consider it as a reasonable possibility). I wouldn't necessarily qualify this view as extreme, but I simply do not think that the structure-function relationship is so strong in this case.

In this post, I want to address a particular comparison that is often used to emphasize the potential of connectomics: the comparison between connectomics and genomics, in particular the human genome project. This comparison is also used by other large initiatives, for example the Human Brain Project. There is no doubt that the human genome project was useful, and that sequencing entire genomes is a very useful tool in genomics. But what about the relationship between structure and function? How do you know, for example, the function of a particular gene that you have sequenced? You would know by manipulating the gene (e.g. knock-out) and looking at functional changes in the organism; or you would examine a large database of genomes and look for correlates between that gene and function (for example pathologies). In both cases, function is observed in the organism, it is not derived from the knowledge of the genome. As far as I know, we don't know how to predict the function of a gene from its DNA sequence. So even if there were a one-to-one relationship between structure (DNA) and function, the knowledge of the structure would not tell us what that relationship is and so it would not tell us the function. In addition to this, we know that the relationship between structure and function is not one-to-one in the case of the genome: this is what epigenetics is all about (and so the example of genomics is completely in line with the arguments of my first post).

So, if the structure-function relationship is as strong in connectomics as in genomics, then 1) the connectome itself will provide little direct insight into brain function, 2) insight might come from correlating connectomes with the function of brains (in a yet to be specified way), 3) the connectome will not determine brain function. In particular, point (3) makes it quite unlikely that memories can be inferred from the connectome. I would also like to point out that a complete comparison with genomics regarding point (2) requires the possibility not only to measure the connectome but also to finely manipulate it and observe functional changes in living organisms. I do not see how current connectomics technologies (electron microscopy) could make that possible. There is a critical limitation, at least for the foreseeable future: in contrast with DNA sequencing, the connectome can only be measured in dead organisms, which greatly limits the possibilities of connectome manipulation or of diagnosis based on (detailed) connectomes.

Finally, I want to point out that the structure-function relationship is likely to be weaker in connectomics than in genomics. First, there is a fundamental difference: DNA is a discrete structure (4 bases), while the connectome is not, if you consider synaptic strength. So it should be possible to measure the graph of connectivity exactly, in the same way as you can sequence DNA, but the extended connectome (with synaptic strengths or delays) can only be measured with limited precision. A second, probably more serious difference is that while there is a concept of gene that has some functional unity and corresponds to a well-identified portion of the genome, there is no equivalent concept in the connectome. In genomics, one can knock out a gene or look for structure-function correlates for different versions of the same gene. In connectomics, there is in general no similarly well-defined portion of the connectome that can be identified across individuals. This might be partially possible when considering connections made onto well-identified sensors and effectors (say, in the retina), but comparing cortical connectomes across individuals is another story.

So connectomics suffers from the same problems with the structure-function relationship as genomics, but quite a bit worse. Again, I am not saying that it is not a useful set of tools. But it is just that: additional measurement tools in the hands of neuroscientists, not fundamental data that would specify, or in general even suggest, brain function. One example where it might be quite useful is in finding new markers of neurological disorders. It has been hypothesized that some neurological diseases are “connectopathies”, i.e., are due to abnormal connections. Here, as in genomics, one could compare the connectomes of subjects that have suffered from those diseases with those of control subjects and perhaps identify systematic differences. Whether those differences are correlates of the disease or have a more causal role in it, such an identification could certainly help understand the origins and mechanisms of the pathologies. This is a completely different claim from saying that brain function can be understood from the connectome: in this case, brain function is observed at the functional level and simply correlated with some particular element of structure; it is not the structure itself that informs us of function.

In summary: 1) the structure-function relationship is not that strong in genomics and is certainly weaker in connectomics, and more importantly the structure-function relationship is unknown, 2) connectomics is more limited as a set of tools than genomics, 3) what we should expect from connectomics is not so much the possibility to infer brain function from connectome as to correlate some aspects of brain function or dysfunction with connectomes.

Connectomics bashing (II) - Invited post by Bernhard Englitz

My friend and colleague Bernhard Englitz reacted to my previous post on connectomics. What follows is his defense of connectomics:

The connectomics viewpoint is often portrayed as an extreme claim, namely that knowing the connectome (with or without weights) is the sole key to understanding the brain. Certainly, some of its proponents are guilty of generating this viewpoint; however, on a plain funding level, it can be understood why claims are exaggerated (the Human Brain Project being a good example where extreme claims can lead to extreme payoffs). I contend, however, that even the strongest proponents of connectomics are far from believing their claims. Rather, I claim that the value of the connectome would lie not in a final explanation, but in a solid basis for further exploration.

Across all domains of life, the availability of data/resources enables new studies, the genome probably being the best example in this respect. The availability of the genome did not solve many fundamental questions that existed before, but it opened the space for new questions, such as systematic interorganism comparison, genome-wide searches for homology, etc. The value of the availability of one or multiple genomes in these particular questions was that one could suddenly make certain statements with confidence, rather than with a large amount of speculation.

Also, proponents of connectomics do not emphasize the value of the structural connectome because they think that functional data/neural activity are unimportant, but because such data are simply not as available with current technology. Science has always been opportunistic and has profited from new techniques to address or open certain questions from a new perspective (which does not exclude profiting from ideas equally). Block-face imaging is available and can be applied today, whereas a lot of functional methods are on the verge of scaling to large systems, but do not achieve a similar resolution or range as methods in connectomics.

However, new methods always come at a cost, which was again demonstrated well by the human genome project, whose cost must seem like a gargantuan waste of money in retrospect, given the steep decline in sequencing costs. From my perspective, drastic shifts in the funding landscape are maybe the biggest danger to fear, in light of the potential of a much lower price tag just a few years from now. On the other hand, methods development and scientific discovery often go hand in hand, and some of the savings are only realized through public investment, thus creating a market for new technologies. Hence, hardly anyone doubts that the availability of connectomic data would be useful for a range of questions; the main question is ‘now’ or ‘tomorrow’ (although this decision is clearly complex).

But what are the domains in which the availability of a connectome could really make a difference in neuroscience? Before addressing this question, I would like to caution against the use of analogies, since the differences are subtle and may not transfer. Taking Romain’s example of the membrane, it is clear that the question posed about the membrane’s basic nature does not profit from the knowledge of all the lipids and proteins that form the membrane, but many essential questions relating to the compartmentalization of a cell, trafficking, or the expression of proteins would profit. Undoubtedly, we would have a much harder time thinking about the space of possibilities in a cell without precise knowledge of the cell’s structure. I interpret Romain’s example as illustrating that ‘certain questions’ will not profit from overly detailed knowledge, and in some cases too much data can even inhibit conceptual development.

However, I contend that many questions in neuroscience are of a different kind and will profit strongly from the availability of a connectome. These questions reach from subcellular structure to network structure. For example, current modeling approaches of cortical neurons are usually based on detailed reconstructed cells, or on simplified estimates of individual cells. Both approaches - the first trying to mimic reality, the second to make it minimal and thus tractable - suffer from the lack of the RIGHT reductions and the RIGHT level of detail. Connectomics can provide a tool to arrive at this middle ground, by offering the possibility to quantify typical structures, their relative occurrence and variability. Such a family of model neurons will be free from the particularities of the single precise neuron, and will be less dependent on the creative reduction previously performed on the basis of small datasets (clearly this has been realized before, and pre-connectomics attempts have tried to close this gap, but a publicly available connectome would open the possibility to put a wide range of modelling efforts on safer ground, rather than single labs making insular, small, guarded advances). Similar arguments hold for the statistics of connectivity in neural networks, where we are currently witnessing a wide range of ‘randomly-connected’ network studies, which may ultimately be of little use, depending on how well their assumptions are matched. Again, to emphasize: the goal has to be finding the right statistics, not replicating the exact structure.

In a nutshell, I think accountability could be increased and the arbitrariness of assumptions widely reduced in this case, thus opening a new era of midlevel modelling, i.e., quite the opposite of what the Human Brain Project is trying to achieve and Romain is criticizing.

What is computational neuroscience? (XXIII) On optimality principles in neuroscience

The notion of optimality of biological structures is both quite popular as a type of explanation and highly criticized by many scientists. It is worth understanding why exactly.

In a previous post, I observed that there are different types of explanations, one of which is final cause. Final cause would be for example the minimization of energy in physics or survival and reproduction in biology. Evolutionary theory makes final causes very important in biology. However, I find that such explanations are often rather weak. Such explanations generally take the form: we observe such biological structure because it is optimal in some sense. What exactly is explained or meant here is not always so clear. That a biological structure is optimal means that we consider a set of possible structures, and among that set the observed structure maximizes some criterion. But what is this set of possible structures? Surely not all possible molecular structures. Indeed evolutionary theory does not say that biological organisms are optimal. It says that changes in structure that occur from one generation to the next tend to increase “adaptability” of the organism (there are variations around this theme, such as gene-centered theories). Evolution is a process and biological organisms result from an evolutionary history: they are not absolute optima among all possible molecular structures (otherwise there would not be many species).

To see this, consider the following analogy. From the postulate that people tend to maximize their own interest, I propose the following explanation of social structure: rich people are rich because they want to be rich. Why is this explanation not satisfying? Because both poor and rich people tend to maximize their own interest (by assumption), and yet only the latter are rich. The problem is that we have specified a process that has a particular trend (increasing self-interest), but there is no necessity that this process reaches a global optimum of some sort. It is only optimal within a particular individual history. Maybe the poor people have always acted in their own interest, and maybe they are richer than they would otherwise be, but that doesn’t mean they end up rich. In the same way, evolution is a process, and it only predicts an optimum within a particular evolutionary history.

Thus, the first remark is that optimality must be understood in the context of a process, both phylogenetic (species history) and ontogenetic (development), not as a global property. Optimality can only be local with respect to that process – after all, there are many species, not a single “optimal” one. That is to say, the fitness criterion (which has to be defined more precisely, see below) tends to increase along the process, so that, at equilibrium (assuming there is such a thing – see the theory of punctuated equilibria), the criterion is locally maximized with respect to that process (i.e., it cannot be increased by the process).

This is the first qualification. There are at least two other types of criticisms that have been raised, which I want to address now, one empirical and one theoretical. The empirical criticism is that biological organisms are not always optimal. The theoretical criticism is that biological organisms do not need to be optimal but only “good enough”, and there might be no evolutionary pressure when organisms are good enough.

I will first address the empirical criticism: biological organisms are not always optimal. First, they are not expected to be, because of the above qualification. But this is not the deep point. This criticism raises the problem of specification: optimal with respect to what? The Darwinian argument only specifies (local) optimality with respect to survival and reproduction. But optimality is generally discussed with respect to some particular aspect of structure or behavior. The problem is that it is generally not obvious at all what the evolutionary fitness criterion should translate to in terms of structure. This is the problem of specification.

For example, I have heard the argument that “people are not optimal”. I take it that it is meant that people are not rational. This is indeed a very well established fact of human psychology. If you haven’t read it yet, I invite you to read “Thinking, fast and slow” by Daniel Kahneman. There are all sorts of cognitive biases that make us humans not very rational in general. To give you a random example, take the “planning fallacy”: if you try to plan the duration of a substantial project (say, building a house or writing a book), then you will almost always underestimate it by an order of magnitude. The reason is that when planning, you imagine a series of steps that are necessary to achieve the project but you don’t imagine all the possible accidents that might happen (say the contractor dies). Any specific accident is very unlikely so you don’t or can’t think about it, but it is very likely that one accident of this type happens, and so you seriously underestimate the completion time. Annoyingly, you still do if you know about the fallacy (at least I still do). This is the problem of epistemic uncertainty (events that are not part of your probabilistic model, as opposed to probabilistic uncertainty, as in rolling a die – see e.g. the Black Swan by Taleb). So humans are not optimal with respect to the rationality criterion. Why is that? Perhaps rationality does not give you an evolutionary advantage. Or perhaps it would by itself, but it would also come with a very large cost in terms of maintaining the adequate structure. Or perhaps it would require such a different brain structure from what humans currently have that no evolutionary step could possibly take us there. Or perhaps it is just impossible to be rational, because the problem of epistemic uncertainty is so fundamental. I am not trying to give an answer, but simply pointing out that the evolutionary argument does not imply that structure and behavior should be optimal with respect to all criteria that seem desirable. Evolutionary “fitness” is a complex notion that encompasses a set of contradicting subcriteria and history effects.

With this important qualification in mind, it should be noted that there are many aspects of biological structure and behavior that have been shown quite convincingly to be optimal or near-optimal with respect to appropriately chosen criteria. It would be sad to discard them, because those explanations give parsimonious accounts of large sets of empirical data. For example, while people are generally not rational or consistent in their reasoning and decisions, when it comes to perceptual or motor tasks it is well documented that humans tend to be near optimal, as accounted by the Bayesian framework. There are of course important qualifications, but it is the case that many aspects of perception are well predicted by the Bayesian framework, at a quantitative (not just qualitative) level (note that I don’t mean perception in the phenomenological sense, but simply in the sense of sensory-related tasks). One big difference with the preceding example is that there is no epistemic uncertainty in these tasks; that is, when perceptual systems have a good statistical model of reality, then they seem to use it in a Bayesian-optimal way.
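As an aside, what “Bayesian-optimal” means here can be stated compactly. Below is a minimal sketch with illustrative numbers (mine, not from any specific study): when two independent Gaussian cues estimate the same quantity, the optimal estimate weights each cue by its inverse variance, a prediction that psychophysical experiments on cue combination (e.g., visual-haptic size estimation) have matched quantitatively:

```python
import numpy as np

# Optimal combination of two noisy cues about the same quantity:
# with Gaussian noise, the posterior mean weights each cue by its
# reliability (inverse variance). Numbers are illustrative.
sigma_v, sigma_h = 1.0, 2.0      # visual and haptic noise (std)
x_v, x_h = 10.2, 9.1             # single-trial measurements

w_v = (1 / sigma_v**2) / (1 / sigma_v**2 + 1 / sigma_h**2)
estimate = w_v * x_v + (1 - w_v) * x_h
sigma_comb = np.sqrt(1 / (1 / sigma_v**2 + 1 / sigma_h**2))

print(estimate)     # 9.98: dominated by the more reliable (visual) cue
print(sigma_comb)   # 0.89: combined estimate more precise than either cue
```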

There are also convincing cases of optimality for biological structures. Robert Rosen discusses some of them in his book “Optimality principles in biology”, in particular the structure of the vascular system (which also seems to apply to lungs). Many geometrical aspects of the vascular system, such as angle and diameter at branching points and even the number of blood vessels can be accounted for by optimality principles with respect to appropriately chosen (but importantly, simple) criteria. This latter point is critical, as is pointed out in that book. Two criteria are simultaneously considered in this case: maximizing the surface of contact and minimizing the resistance to flow (and thus the required energy).

Another well documented case, this time in neurophysiology, is in the geometrical and molecular properties of axons. There is a short paper by Hodgkin that in my opinion shows pretty good optimality reasoning, including some of the qualifications I have mentioned: “The Optimum Density of Sodium Channels in an Unmyelinated Nerve” (1975). He starts by noting that the giant squid axon mediates the escape reflex, and it is critical for survival that this reflex is fast. Therefore speed of conduction along the axon is a good candidate for an optimality analysis: it makes sense, from an evolutionary viewpoint, that the structure of the axon is optimized for speed. Then he tries to predict the density of sodium channels that would maximize speed. As it turns out, this simple question is itself quite complex. He argues that each channel also increases the membrane capacitance, in a way that depends on voltage because the geometrical conformation of the channels is voltage-dependent. Nonetheless he manages to estimate that effect and derives an optimal channel density, which turns out to be of the right order of magnitude (compared with measurements). He also notes that the relation between channel density and velocity has a “flat maximum”, so the exact value might also depend on other aspects than conduction speed.
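Hodgkin's calculation involved detailed biophysics, but the logic of a flat maximum can be captured in a toy model. Below is a sketch with an assumed scaling (velocity increasing with sodium conductance but penalized by the capacitance each channel adds); all numbers are made up for illustration, and this is not Hodgkin's actual model:

```python
import numpy as np

# Toy optimality argument: each channel adds conductance (speeds conduction)
# but also capacitance (slows it). Assumed scaling, for illustration only:
# velocity ~ sqrt(channel density) / (intrinsic + channel capacitance)
c_m = 1.0       # intrinsic membrane capacitance (arbitrary units)
c_ch = 2e-3     # extra capacitance per channel (made-up value)

rho = np.linspace(1, 5000, 5000)        # channel density (channels/um^2)
v = np.sqrt(rho) / (c_m + c_ch * rho)   # relative conduction velocity

print(rho[np.argmax(v)])   # optimum where channel capacitance equals c_m: ~500

# The maximum is flat: velocity stays within 10% of its peak over a wide
# range, so other criteria (e.g. energy) can shift the actual density a lot.
near = rho[v > 0.9 * v.max()]
print(near.min(), near.max())           # roughly 200 to 1250
```

The flatness of the maximum is the point: it is precisely what leaves room for the other criteria Hodgkin mentions.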

He then discusses those other aspects at the end of the text. He notes that in other cases (different axons and species), the prediction based on speed does not work so well. His argument then is that speed may simply not be the main relevant criterion in those other cases. It was in the case of the squid axon, because it mediates a time-critical escape reflex, but in other cases speed may not be so important, and instead energy consumption might be more relevant. Because the squid axon mediates an escape reflex, it very rarely spikes, and so energy consumption is presumably not a big issue – compared to being eaten alive because you are too slow. But energy consumption might be a more important criterion for axons that fire more often (say, cortical neurons in mammals). There is indeed a large body of evidence that tends to show that many properties of spike initiation and propagation are adapted for energy efficiency (again, with some qualifications; e.g., fast-spiking cells are thought to be less efficient because inefficiency seems necessary to fire at high rates). There are other structures where axon properties seem to be tuned for isochrony, yet another type of criterion. Isochrony means that spikes produced by different neurons arrive at the same time at a common projection. This seems to be the case in the optic tract (Stanford 1987, “Conduction Velocity Variations Minimize Conduction Time Differences Among Retinal Ganglion Cell Axons”) and in many other structures, for example the binaural system of birds. Thus many aspects of axon structure seem to show a large degree of adaptation, but to a diversity of functional criteria, and this often involves trade-offs. This concludes this discussion of the problem of specification.

The second criticism is not empirical but theoretical: biological organisms do not need to be optimal but only “good enough”, and there might be no evolutionary pressure when organisms are good enough. There is an important sense in which this is true. This is highlighted by Hodgkin in the paper I mentioned: there is a broad range of values for channel density that leads to near-optimal (“good enough”) velocities, and so the exact value might depend on other, less important criteria, such as energy consumption. But note that this reasoning is still about optimality; simply, it is acknowledged that organisms are not expected to be optimal with respect to any single criterion, since survival depends on many aspects. A related point is that of robustness and redundancy. There appears to be a lot of redundancy in biological systems. You could lose a kidney and still be fine, and this is also true at the cellular level. This again can be thought of in terms of epistemic uncertainty: you could build something that is optimal with respect to a particular model of the world, but that might make it very fragile to unexpected perturbations, events that are not predicted by the model. Thus redundancy, or more generally robustness, is desirable, even though it makes organisms suboptimal with respect to any specific model.

But note that we have not left the framework of evolutionary fitness, since we have described redundancy as a desirable feature (as opposed to a random feature). We have simply refined the concept of optimality, which, it should be clear by now, is quite complex, as it must be understood with respect to a constellation of possibly contradictory subcriteria as well as with respect to epistemic uncertainty. But we are still making all those qualifications within the framework of adaptive fitness of organisms. This does not mean that biological organisms can be suboptimal simply because of a lack of strong evolutionary pressure. More precisely, it means that they can be suboptimal with respect to a particular criterion for which there is a lack of strong evolutionary pressure, if the same structure is also subjected to evolutionary pressure on another criterion. These two criteria could be, for example, conduction speed and energy consumption.

Yet it could be (and has been) argued that even if a structure were subjected to a single criterion, it might still not be optimal with respect to that criterion if evolutionary pressure is weak. For example, it is often stated that spikes of the squid giant axon are not efficient, as in the Hodgkin-Huxley model they are about 3-4 times more energetically expensive than strictly necessary. Because those axons fire very rarely, it makes little difference whether spikes are efficient or not. Considering this fact, spike efficiency is “good enough”.

I find this theoretical argument quite weak. First, let me note that the 3-4 factor applies to the Hodgkin-Huxley model, which was calibrated mainly for action potential shape; since then it has been refined, and the factor is actually smaller (see e.g. work by Bezanilla). But that is not the important point. From a theoretical viewpoint, even if evolutionary pressure is weak, it is not nonexistent. By a simple argument I presented before, biological organisms must live in environments where resources are scarce, and so there is strong pressure for efficient use of energy and resources in general. Thus even if the giant axon’s spike is a small proportion of that use, there is still some pressure on its efficiency. Squids are a very old species, and there seems to be no reason why that pressure might not have applied at some point. But again, this is not the most important point. In my view, the most important point is that evolutionary pressure does not apply at the level of individual elements of structure (e.g., on each axon). It applies at the level of genes, which have an impact on the entire organism, or at least a large part of it. So the question is not whether the squid giant axon is energy efficient, but rather whether spike conduction along axons is efficient. It is also quite possible that mechanisms related to metabolism are more generic. Thus, while there might be little evolutionary pressure on that particular axon, there certainly is on the set of all axons.

Why then is the squid giant axon inefficient? (I’m assuming it actually is, although to a lesser degree than usually stated.) Here is a possible explanation. The efficiency of spike propagation and initiation depends on the properties of ionic channels. In particular, to have spikes that are both fast and efficient, you need sodium channels to inactivate very fast. There is likely a limit to how fast that can be, since proteins are discrete structures (which might explain why fast-spiking cortical neurons are relatively inefficient). In mammals, fast inactivation is conveyed not by the main protein of the sodium channel (the alpha subunit) but by so-called beta subunits, which are distinct proteins that modulate channel properties. This comes with a cost, since all those proteins need to be actively maintained (the resting cost). If the neuron spikes often, most of the energetic cost is incurred by spikes. If it spikes very rarely, most of the energetic cost is the resting cost. When that resting cost is taken into account, it might well be that spikes in the squid giant axon are actually quite efficient. The same line of reasoning might explain why such a big axon is not myelinated (or doesn’t show a similar kind of adaptation): myelination decreases the energetic cost of spike propagation for large-diameter axons, but it increases the resting cost (you need glial cells producing myelin).

To conclude: optimality principles are important in biology because these are principles that are specific to living organisms (i.e., they are somehow adapted), and they do explain a large body of empirical data. However, they must be applied with care, keeping in mind the problem of specification (optimality with respect to what, i.e., what is actually important for survival), the problem of history effects (optimality is local relative to phylogenetic and ontogenetic processes) and the problem of epistemic uncertainty (leading to robustness principles).

 

Update: I noticed that in Hodgkin’s “Sherrington Lectures” (1964), the author estimates that the mean firing frequency of the giant axon over the life of a squid does not exceed a few impulses per minute, which should produce an amount of sodium intake due to spikes of about 1/300 of the amount without spikes (leak). Thus the cost of spikes is indeed a negligible proportion of the total cost of axon maintenance.

Connectomics bashing (I) - Can we understand the brain by measuring its structure?

I have been asked to debate on the subject of connectomics next summer, against a supporter of the connectomics enterprise. I am starting this series of posts to develop some arguments.

Connectomics is the systematic, large-scale mapping of the connections in the nervous system (the “connectome”), and their analysis. It involves the development of new technologies and a considerable amount of financial and human resources. Let me clarify briefly my personal position. Recently there was a debate involving Sebastian Seung, an advocate of connectomics, and Tony Movshon, a skeptic of the whole enterprise. I would qualify myself also as a skeptic of connectomics, but it turns out that I ended up leaning towards Seung. Seung considers that we won't be able to understand much about the brain if we don't even know how neurons are wired. Movshon argues that detailed information about the connectome is not necessary, because in his view only statistical information is relevant, as only that information is reproducible across individuals. Simply put, Movshon is a computationalist in the vein of David Marr (“implementation doesn't matter”), while Seung is a classical connectionist. I certainly strongly disagree with Movshon in his view that detailed structure is noise or irrelevant. But I am also skeptical about Seung's hopes. Seung considers that studying the connectome will give us the keys to the computations that the brain is doing. I will explain in this post why I don't think this will happen. But before that, I just want to mention that I do not consider the connectomics enterprise a useless one. It probably will bring a useful resource. It might even bring some conceptual changes, for example by exposing the limits of classical connectionism. What I disagree with is the general claim that knowing the detailed structure of the brain is the main limiting factor in our basic understanding of the brain.

To a large extent, my criticism is the criticism of overly reductionist accounts of biological systems. So I will start this series of posts with this general question: can we understand the brain by measuring its structure? (replace “understand” by “simulate” for a criticism of the Human Brain Project).

In his book, Seung sums up the connectionist view in this sentence: “You are your connectome”. Indeed, in connectionism, memories and computations (input-output operations) are stored in the pattern of connections between neurons, and their strengths. There is indeed quite a bit of evidence for activity-dependent changes in synaptic connections, in relationship with learning. There is also a large body of theoretical work that develops that idea. But of course, we also know of other aspects of structure in neural networks that change and have a subsequent effect on their dynamics. For example, the expression of ionic channels. Or their phosphorylation. Or changes in the morphology of dendrites and axons, or in conduction delays. There are also ephaptic connections, which do not involve synapses at all. There might be long-term changes in glia too. Essentially, every bit of structure in the brain seems to be plastic. Of course this is not news to connectionists. The counter-argument is that connections are what matter most.

In a similar vein, supporters of bottom-up approaches to simulations of the brain would argue: if you simulate the most important aspects of the brain with enough detail, then there is no reason why your simulation should not resemble the activity of the brain. This reductionist logic seems superficially convincing. If one points out that the simulation misses a crucial element, then it could simply be replied that the element can just be incorporated into the simulation. Problem solved.

The general problem is that any description of structure involves ignoring a number of elements. Thus the reductionist logic implicitly assumes that the whole (= brain) can be approximated by an appropriate subset of structural elements, and that as you add more elements, the approximation “converges” to the real thing.

Now I will give a simple example that exposes the fallacy. Consider a membrane, made of a structure of lipids. Or just consider a table tennis ball. The membrane is topologically equivalent to a sphere, in that it defines an inside and an outside. In particular, it traps molecules inside it (assuming the membrane is not permeable to them). Make a hole and the set of lipids loses this property. Now the problem is: if you consider a subset of the membrane, one that comprises almost all of the membrane, then that gives you something that is not a membrane. As long as there is a hole, it is not a membrane. A membrane is not defined by the fact that it is composed of lipids; it is not even defined by the fact that it is composed of elements that are bound to their neighbors: it is defined by the global property that it has no hole. In that sense, you cannot approximate a membrane by a subset of a membrane. To understand the membrane functionally, you need to understand its global logic and how the elements participate in that logic. Therefore, if the relationship between brain structure and brain function is of this kind, then measuring the structure will not be sufficient. It will in fact miss the important stuff, which is the organizational logic (say, the plasticity rules or developmental rules).

Secondly, to understand what a membrane is, you would probably not find it very useful to measure every element of structure with an electron microscope. Rather, you might perhaps try to see what happens if you make a hole in the structure. Or you might try to build a membrane with some other type of element. Or you might study very small membranes. In this example, everything is important, in the sense that you cannot remove something and leave the rest unchanged without making a hole, but nothing is important in particular, in the sense that all elements are identical and you could make a membrane with any subset of the elements by changing the structure a little bit (i.e., binding elements around a hole together). Maybe ephaptic interactions are of this kind: they might have no specific function, but certainly if you remove them without adapting the rest of the structure, the neural networks will have different dynamics, perhaps introducing some pathological imbalance.

Thirdly, to relate to Movshon's argument: the fact that a diversity of structures have the property of being a membrane (e.g., of different sizes or made of a different basic element) does not mean that structure is random. Actually, quite the opposite is true here: structure is highly organized at a detailed level, otherwise there would be holes. It just means that function results from the organization of the structure, from relations between structural elements, and those relations could be instantiated in different ways.

Now, is the problem of the connectome of this kind? A classical counter-argument to connectomics is the case of C. elegans. That worm has 302 neurons, and its connectome was completely mapped in the 1980s. While it is certainly a useful resource in that field, it is fair to say that we still don't understand much about how that nervous system works. One classical reply is that in C. elegans, it is notoriously difficult to do electrophysiology, and so we don't even know the sign of the connections (excitation or inhibition). Fair enough. But let us say we do know the sign. Would that solve the problem? Let us move to the mammalian cortex. A pyramidal cell has on the order of 10,000 synapses. Is it possible to infer the dynamics of a cortical network from the connections? In fact, it has been shown that in pyramidal cells there is a tight dynamic balance between excitatory and inhibitory input, and that without it neurons would fire regularly, instead of irregularly as they do. It has also been found that there are mechanisms that maintain this balance (synaptic and homeostatic plasticity). But by just looking at the connectome, you would not see that. You would see that a pyramidal cell has 4 times more excitatory inputs than inhibitory inputs. If you tried to measure synaptic strengths, this would also not be sufficient, because the balance needs to hold with very high precision in the in vivo situation, taking into account all elements of structure. This is just an example, but the point is that these principles of organization do not emerge from just looking at the structure (and in this case it would not even help), but from directly investigating those principles as such.
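To illustrate why counts and strengths alone miss the balance, here is a minimal sketch (a leaky integrate-and-fire neuron with Poisson inputs; all parameter values are illustrative assumptions): the same 4:1 ratio of excitatory to inhibitory input events produces regular, clock-like firing when inhibition is weak, and irregular, fluctuation-driven firing when the mean drive is balanced to just below threshold:

```python
import numpy as np

rng = np.random.default_rng(1)
tau, dt, T, vt = 10.0, 0.1, 20000.0, 10.0  # time constant, step, duration (ms), threshold (mV)
n = int(T / dt)

def simulate(w_i):
    """Leaky integrate-and-fire neuron with Poisson excitation and inhibition."""
    re, ri = 8.0, 2.0        # E and I input rates (events/ms): a 4:1 ratio
    w_e = 0.2                # EPSP size (mV)
    v, spikes = 0.0, []
    for i in range(n):
        v += -v / tau * dt
        v += rng.poisson(re * dt) * w_e + rng.poisson(ri * dt) * w_i
        if v > vt:
            spikes.append(i * dt)
            v = 0.0
    isi = np.diff(spikes)
    return len(spikes), isi.std() / isi.mean()  # spike count, CV of intervals

# Weak inhibition: mean drive above threshold -> fast, clock-like firing
print(simulate(w_i=-0.1))   # many spikes, CV << 1 (regular)
# Balanced inhibition: mean drive subthreshold, fluctuations cause spikes
print(simulate(w_i=-0.6))   # few spikes, CV near 1 (irregular, Poisson-like)
```

The connectome is identical in both cases; what differs is a precise quantitative relation between input strengths that the wiring diagram alone does not reveal.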

In summary, the limiting factor in our understanding of the brain is not so much a more detailed description of structure, but rather a deeper understanding of the organizational principles of the nervous system, and this won't be provided by electron microscopy.

Notes on consciousness. (VI) The hard problem of consciousness explained with sex

By using the word “sex” in the title, I am obviously trying to increase the number of my readers. But not only that. I noticed that many people, in particular scientists, do not seem to get why there is a problem at all. Philosophers like to refer to the experience of color, and the question is then: why is it that red feels like red, and not like something else, and why does it feel like anything at all? Why is it not just information, as it is for a computer; why does the experience have a particular quality? Somehow this example does not speak to everyone, so I came up with another example: why is it that sex is associated with pleasure; in particular, why is male ejaculation generally associated with orgasm?

You will hear several kinds of explanation. One might tell you: your body secretes some particular molecules at that moment, or some particular neuron gets excited, and that's what produces the sensation of pleasure. But of course, this explanation just pushes the problem a little further, by replacing “ejaculation” by “molecule” or “neuron”. Why is it that a particular molecule produces pleasure? Certainly, when it is not in the brain, the molecule does not provide pleasure to a Petri dish.

Another kind of explaining away is the functionalist or behaviorist view: if everything appears the same from an external observer's point of view, then it is the same; there is nothing more than what can be seen. In other words, male orgasm is the act of ejaculating, end of story. But sex does not fit so well with that view. First, it is well known that female orgasm can be faked; second, male orgasm can be dissociated from ejaculation (in tantric sex).

And finally there is the evolutionary explanation: we feel pleasure because it motivates us to have sex, which is necessary for reproduction. But the logic is flawed: we only need a mechanism that makes us have sex, and there is no intrinsic reason why that mechanism should be accompanied by any feeling. Why are we not reproducing machines without feelings?

Here comes the example. The axolotl is a kind of salamander that retains its larval traits throughout its adult life. It's basically a cute baby salamander. Nevertheless, it is able to reproduce. The way it reproduces is interesting (probably not different from other salamanders, but cuter). The male lays a sort of jelly full of spermatozoa on the floor. And then later the female comes and inserts the jelly into her belly. After a few days, the female produces eggs.

Now the big question: does the male axolotl have an orgasm when it lays its jelly on the floor?

Notes on consciousness. (V) 4 key questions about consciousness and the mind-body problem

It is fair to say that we have little idea about how neural activity gives rise to consciousness, or about the relationship between neural activity and conscious states (i.e., what you are experiencing). This is the mind-body problem. In my opinion, there has been relatively little fundamental progress on this question because it has been addressed mainly within the computationalist framework (i.e., in terms of information processing), which is very inappropriate for this question (this is partly Chalmers' criticism). So below I am listing a number of unanswered questions on this matter, which I believe require a very different kind of approach. First of all, let me remark that because being conscious is always being conscious of something, understanding consciousness is largely about understanding perception at the phenomenal level (perception in the broadest sense, e.g., perceiving your thoughts).

1) How can perception be stable?

Why is it that a pure tone feels like a stable percept when 1) the acoustic wave is time-varying, and 2) the activity of neurons everywhere in the brain is dynamic? The same can be said of all the senses; in vision, the eyes move at high frequency even when fixating an object, and there is no visual percept if they are forced to be still. More generally: if there is a mapping between states of the brain and percepts, why is it that percepts are not changing all the time?

A thought experiment. Imagine the state of the brain is held fixed: someone scratches her nose and time is stopped. Would you still experience something? Any conscious experience seems to require a change, not just a state. This suggests that the relevant mapping is actually not from brain states to percepts, but from brain activity to percepts. This immediately raises a problem, because a conscious state can be defined at any point in time, but it is not immediately clear that brain activity can (as that would reduce activity to state). This is not a fatal problem, though, for there is a precedent in physics: a gas is composed of individual particles, but the pressure of a gas at a given instant cannot be defined as a function of the state of the particles at that moment, because pressure corresponds to the force exerted by the particles impacting a surface over time. It might be that the relation between neural activity and conscious states is of a similar kind as the relation between mechanics and thermodynamics.
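To make the gas analogy concrete, here is a toy simulation sketch (in Python; all parameters are invented for illustration, and this is not meant as a physics-grade model). At almost any single instant, no particle is touching the wall, so the instantaneous state yields no force; a pressure-like quantity only appears as momentum transfer accumulated over a time window:

    import numpy as np

    # Toy 1D "gas": particles bounce elastically inside a box of length 1.
    rng = np.random.default_rng(0)
    n, box, mass, dt = 1000, 1.0, 1.0, 1e-4
    pos = rng.uniform(0, box, n)   # particle positions
    vel = rng.normal(0, 1, n)      # particle velocities

    transfer = 0.0                 # momentum given to the right wall
    steps = 10_000
    for _ in range(steps):
        pos += vel * dt
        hit = pos > box            # particles hitting the right wall
        transfer += 2 * mass * np.abs(vel[hit]).sum()
        vel[hit] = -vel[hit]
        pos[hit] = 2 * box - pos[hit]
        hit = pos < 0              # left wall (no bookkeeping here)
        vel[hit] = -vel[hit]
        pos[hit] = -pos[hit]

    # At a randomly chosen instant, no particle is exactly at the wall, so
    # the "state" force is zero; the quantity below is defined only over a
    # window of activity.
    print("time-averaged force on the wall:", transfer / (steps * dt))

The point of the sketch is that the printed quantity is a property of the particles' activity over an interval, not a function of their state at any single instant.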

Two more thought experiments. 1) Record the firing of all neurons in the brain, then play it back on a set of unconnected light-emitting diodes: does that set feel the same experience? 2) (Adapted from Chalmers.) Replace every other neuron in the brain, chosen at random, by an artificial neuron that interacts with the other neurons in exactly the same way as the neuron it replaces: would there be a conscious experience? My personal answers would be (1) no and (2) yes, and this suggests to me that the right substrate to look at is not neural activity as a state (e.g., the firing rates of all neurons) but neural activity as an interaction between neurons.

 

2) What is time for a conscious brain?

A fundamental property of consciousness is its unity: a single conscious entity sees, hears and thinks. If visual and auditory areas were independent and, say, each controlled speech, then one conscious entity would report visual experience and another conscious entity would report auditory experience. There could not be a single conscious entity, since the two relevant parts are physically disconnected. Thus the unity of consciousness requires an interdependence between all the elements that compose it. This is, as I understand it, the issue addressed by a number of biological theories of consciousness, for example Edelman's “reentrant loops” or Tononi's integrated information theory.

However, there is another crucial aspect to this problem which, as far as I know, these theories do not address: the unity of consciousness, or lack of it, in time. There is no general unity of consciousness across time: two things that happen, say, one minute apart produce distinct percepts, not a single one. Clearly, consciousness is dynamic. But the big question is: how can there be a unique conscious state at any given moment when all the elements of the conscious network interact with some delay (since they are physical elements), typically of a few milliseconds? And what is time for such a network? Imagine there is a (physical) visual event at time t1 and an auditory event at time t2. At what time do they occur for the network, given that they are sensed at different times by its different elements? And why is it that electricity changes on a millisecond timescale in the brain, while conscious states seem to change at a much slower rate?
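Here is a toy sketch of the problem (in Python; the elements, delays and event times are invented for illustration). A single pair of physical events is sensed at different times, and can even be sensed in a different order, by different elements of a network with modality-specific delays:

    # Hypothetical conduction delays (ms) from each event to each element.
    visual_delay = {"A": 40.0, "B": 70.0}
    auditory_delay = {"A": 30.0, "B": 10.0}

    t_visual, t_auditory = 0.0, 15.0   # physical times of the two events

    for element in ("A", "B"):
        tv = t_visual + visual_delay[element]
        ta = t_auditory + auditory_delay[element]
        print(element, "senses visual first" if tv < ta else "senses auditory first")
    # A senses visual first (40.0 < 45.0); B senses auditory first (25.0 < 70.0).
    # "When", and even "in what order", the events occur has no network-wide answer.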

 

3) How can there be an intrinsic relation between neural activity and percepts?

Why is it that a particular pattern of neural activity produces the experience of redness? Most biological explanations are of this kind: I experience redness because, when some red object is presented, neurons fire in that specific way. This is the coding perspective. The problem with the coding perspective is, of course: who decodes the code? Ultimately, this kind of explanation is strongly dualist: it is implicitly assumed that, at some point, neural activity is transformed into the redness experience by some undetermined process that must be of a very different nature.

I would like to point out that proposals in which perception lies in the interaction between the organism and the environment (e.g. the sensorimotor theory) do not solve this problem either. I can close my eyes and imagine something red. It could be that redness corresponds to a particular way in which visual inputs change when I move my eyes or move the surface, which I am anticipating or imagining, but this does not explain what is intrinsically red about the pattern of neural activity occurring now. If we cannot explain it without referring to what happened before, then we are denying that the pattern of neural activity itself determines experience, which again is a strongly dualist view.

A thought experiment. Consider two salamanders, each with only one neuron, which is both a sensory and a motor neuron; say, its firing produces a particular movement. The salamanders are very similar, but their visual receptors are tuned to different wavelengths: in the first salamander, the neuron reacts to red stimuli; in the second, it reacts to blue stimuli. What might happen in terms of visual experience when the neuron fires? Does the first salamander see red and the second see blue? If we think that neural activity alone determines experience, then in fact the two salamanders should experience exactly the same thing – and in this case, this is also independent of the sensorimotor contingencies.

 

4) What is the relationship between the structure of experience and the structure of neural activity?

Subjective experience is highly structured. There might be some dispute about how rich it actually is, but it is at least as rich as what you can describe with words. A striking fact about language is that the meaning of a sentence is carried not only by the words but also by the relations between them, i.e., the syntax. In the same way, a visual scene is composed of objects with spatial relations between them, and with attributes (a red car in front of a small house). In fact, there must be more to it than syntax; there must also be semantics: if neural activity completely determines subjective experience, it must not only specify that there is a car, but also what a car is. A useful notion in the psychology of perception is the concept of “affordance” introduced by James Gibson: the affordance of an object is what it allows you to do (e.g., a car affords driving). Affordances are potentialities of interaction, and they give some meaning (rather than labels) to perceptual objects. This brings an inferential structure to experience (if I did that, this would happen).

This stands in sharp contrast with the central perceptual concept in neuroscience, the notion that “cell assemblies” represent particular percepts. A cell assembly is simply a set of neurons whose co-activation represents a particular percept (say, a particular face). Say one neuron represents “red” and another represents “car”; then the assembly of the two neurons represents the red car. The problem with this concept is that it is very poorly structured: it cannot represent relations between objects, for example. This type of representation is known as the “bag-of-words” model in language processing: a text is represented by its set of words, without any syntactic relationship, and clearly the meaning of the text is quite degraded. The concept of cell assembly is simply too unstructured to represent experience.
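To make this concrete, here is a minimal sketch in Python (the scenes and the triple notation are my own illustration, not a standard formalism). A bag-of-words representation, like a cell assembly, is just a set of active units, so two scenes with different relational structure collapse onto the same representation, while a structured representation keeps them distinct:

    # Bag-of-words: a text (or a scene) reduced to its set of "active units".
    scene_1 = "a red car in front of a small house"
    scene_2 = "a small car in front of a red house"
    print(set(scene_1.split()) == set(scene_2.split()))  # True: indistinguishable

    # A structured representation binds attributes to objects and keeps
    # relations between objects, so the two scenes remain distinct.
    structured_1 = {("car", "color", "red"), ("house", "size", "small"),
                    ("car", "in_front_of", "house")}
    structured_2 = {("car", "size", "small"), ("house", "color", "red"),
                    ("car", "in_front_of", "house")}
    print(structured_1 == structured_2)  # False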

If we are looking for a mapping between neural activity and percepts, then 1) we must find a way to define some structure on neural activity, and 2) the mapping must preserve that structure (in mathematical terms, we are looking for a morphism, not a simple mapping).
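In symbols, a minimal sketch of this requirement (my notation, not a standard formalization): write N for structured neural activity, P for percepts, and R, R' for corresponding relations (say, “is bound to” or “is in front of”). Then a candidate map f qualifies only if it preserves the relations:

    f : N \to P, \qquad R(a, b) \;\Longrightarrow\; R'\!\left(f(a), f(b)\right) \quad \text{for all } a, b \in N.

A mapping that matches percepts to activity patterns one by one, but ignores R, would miss precisely the structure that experience seems to have.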

I can summarize this discussion by pointing out that to make progress on the mind-body problem, there are two crucial steps: 1) to understand the articulation between physical time and the time of consciousness, 2) to understand the articulation between the structure of neural activity and the structure of phenomenal experience.

Notes on consciousness. (IV) The phenomenal content of neural activity

This post is about the mind-body problem. Specifically, what is the relationship between the activity of the brain and the phenomenal content of conscious experience? It is generally thought that experience is somehow produced by the electrical activity of neurons. The caricatural example of this idea is the concept of the “grandmother cell”: a neuron lights up when you think of your grandmother, or conversely, the activation of that neuron triggers the experience of, say, seeing your grandmother's face. The less caricatural version is the concept of cell assemblies, where the single cell is replaced by a set of neurons. There are variations around this theme, but basically, the idea is that subjective experience is produced by the electrical activity of neurons. There actually is some experimental evidence for this idea, coming from the electrical stimulation of the brain in epileptic patients (read any book by Oliver Sacks). Electrical stimulation is used to locate the epileptic focus in those patients, and depending on where the electrode is in the brain, stimulation can trigger various types of subjective experience. Epileptic seizures themselves can produce such experiences, for example the auditory experience of hearing a specific piece of music. Migraines can also trigger perceptual experiences (called “aura”), in particular visual hallucinations. So there is some support for the idea of a causal relationship between neural activity and subjective experience.

The obvious question, of course, is: why? At this moment, I have no idea why neural activity should produce any conscious experience at all. We do not believe that the activity of the stomach causes any subjective experience for the stomach, nor that of any other set of cells, including cardiac cells, which also have an electrical activity (but of course, maybe we are wrong to hold this belief).

I propose to start with a slightly more specific question: why does neural activity cause subjective experience of a particular quality? Any conscious experience is an experience of something (a property called intentionality in philosophy), for example the vision of your grandmother's face. Why is it that a particular spatio-temporal pattern of activity in a neural network produces, for that neural network, the experience of seeing a face? One type of answer is to say that this particular pattern has been associated with the actual visual stimulus of the face, i.e., it “encodes” the face, and so the meaning of those neurons lighting up is the presence of that visual stimulus. This is essentially the “neural coding” perspective. But there is a big logical problem here. What if the visual stimulus is not present, but the neurons that “encode” the face light up either naturally (memory, dream) or by electrode stimulation? Why would that produce a visual experience rather than anything else? If experience is produced by neural activity alone, then it should not matter what external stimulus might cause those neurons to fire, what happened in the past to those neurons, or even what world the neurons live in, but only which neurons fire now. Which neurons fire now should, by itself, entirely determine the content of subjective experience. Again, the problem with the neural coding perspective is that it is essentially dualist: at some stage, there is some other undefined process that “reads the code” and produces subjective experience. The problem we face here is that the firing of neurons itself must intrinsically specify the experience of seeing a face, independently of the existence of an outside world.

I will try to be more specific, with a very simple example. Imagine there is just one neuron, and two stimuli in the world, A and B. Now suppose, by conditioning or simply by anatomical assumption, that stimulus A makes the neuron fire. A neural coder would say: this neuron codes for stimulus A, and therefore this neuron's firing causes the experience of A. But you could also imagine a different situation, perhaps a different organism, or the same organism conditioned in a different way, in which stimulus B, and not A, makes the neuron fire. If neural activity is what causes subjective experience, then the neuron's firing should produce exactly the same experience in both cases, even though different stimuli cause it to fire. This example can be vastly generalized, and the implication is that any two patterns of neural activity that are identical up to a permutation of neurons should produce the same subjective experience for that set of neurons.
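As a minimal sketch of this implication (illustrative Python; the patterns and the experience function are placeholders I made up), if experience is a function of present activity alone, then the two cases are necessarily indistinguishable from the inside:

    # Organism 1: stimulus A drives the neuron. Organism 2: stimulus B
    # drives the same neuron. The resulting activity patterns are identical.
    activity_org1_given_A = (1, 0, 0)
    activity_org2_given_B = (1, 0, 0)

    def experience(pattern):
        # Placeholder for whatever maps present activity to experience;
        # the only assumption is that it depends on the pattern alone,
        # not on its causes, its history, or the surrounding world.
        return hash(pattern)

    print(experience(activity_org1_given_A) == experience(activity_org2_given_B))
    # True: same activity, same "experience", whatever caused the firing.

The same reasoning extends to whole patterns: if neuron labels are arbitrary, a pattern and its relabeled copy describe the same physical situation, hence the permutation claim above.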

As if all this were not puzzling enough, I will now end on a disturbing thought experiment. Imagine we measure the entire pattern of neural activity of someone experiencing the sight of his grandmother. Then we build a set of blinking red lights, one for each neuron, programmed to light up at the same times as the neurons did. The red lights don't even need to be connected to each other. The electrical activity of this set of lights is then the same as the activity of the neural network. Therefore, by the postulate that electrical activity is what causes subjective experience, the set of lights should experience the sight of the grandmother, with the impression of being the grandson. Would it?