Perceiving and knowing

Perceiving space is knowing where things are in the world. Or is it?

I am sitting in my living room, which has big windows facing a courtyard. The windows are sound-proof, so if I open just one of them, acoustic waves mostly enter the room through that window. Now someone enters the courtyard on the right, walks across it and arrives at the door on the left. If I close my eyes, I know that the person is walking from right to left. However, what I hear is the sound of someone walking, always coming from the same direction, that of the open window. If someone asks me where the person is at a given moment in time, I can point in the more or less correct direction, by inference. But this is not what I perceive. I always perceive the sound coming from the same direction. There is a difference between perceiving (phenomenological) and knowing (conceptual). And there is a difference between phenomenology and behavior.

Another striking example is referred pain: pain felt at a location away from the actual site of the injury. For example, in a heart attack, one may feel pain in the arm rather than in the chest. This is a well-known phenomenon, and if you know it, you may correctly identify the heart as the site of injury when you feel pain in the arm. But this does not change the fact that you feel the pain in the arm. You may be entirely convinced that the injury is in the heart, and all your behavior might be consistent with that belief, but you will still feel the pain in the arm.

There are several interesting conclusions we can draw from these remarks. First, perception is not entirely reducible to behavior. Here we are touching the hard problem of consciousness (qualia): you could observe a cat turning its head toward a sound source and conclude that the cat perceives the sound as coming from that source, but in reality you don't know. Maybe the cat perceives it somewhere else but corrects its movement because it knows its perception tends to be biased. With humans, you could perhaps distinguish between these possibilities because humans speak. But without this option, a purely functionalist approach to perception (in terms of relationships between sensory stimuli and behavior) misses part of the phenomenon.

Second, inference is not the same as perception. Spatial perception is not just the process of inferring where something is from sensory inputs. There is also the experience of perception, which is not captured by the objectivist view.

Spatial perception of pain (III) How can we feel pain inside the body?

I will first start with a summary of the different propositions I made in the previous post about where it hurts.

- Proposition A (independent channels): there are two independent channels, one that provides pain information (intensity or quality of pain, through pain receptors) and another one that provides spatial information (through tactile receptors or vision). The two channels are bound by co-occurrence.
- Proposition B (sensorimotor): you feel pain at a particular location because specific movements that you make produce that pain.
- Proposition B2 (sensorimotor): you feel pain at a particular location because whenever this particular activation pattern of pain receptors is present, you can manipulate this pattern or the intensity of pain by specific movements or actions.
- Proposition C (learned association): the localization of pain is inferred from the activation pattern of pain receptors (which must be spatially selective), by association with another channel that carries spatial information (e.g. tactile receptors).

Note that in A and C, I have moved the problem of spatial information to another modality, either touch or vision. We may consider that spatial information in touch and vision is constituted by sensorimotor contingencies, but it is not an important assumption here. The puzzle is the following: we can only touch our skin, the surface of our body, and we cannot see inside our body. If touch is central to the spatial perception of pain, then how is it possible that we can feel pain inside the body (say, in the stomach or in the head)?

I have discussed a similar example in spatial perception: when one hears music or speech through headphones, it usually feels like the sound comes from “inside the head”. First of all, there is a simple argument why sounds should feel as if they come from your body in this case: when you move your head, the sound is unaffected, which means the source is part of your head – either on the surface (skin) or inside the head. The same argument applies to pain felt inside the body: rigid displacements of the body do not change the pain or any information associated with it. Therefore the pain is in you, not in the external world. However, this remark does not explain why pain feels inside the body and not on the skin.

I mentioned another possibility for sounds, inside as a default hypothesis: if you cannot identify the source as coming from somewhere outside, then the sound feels located inside. The default hypothesis raises a question: why does it feel located inside rather than not located at all? There is also another problem here: pain does not simply feel inside, it feels at a particular place inside the body (e.g. the stomach).

A first answer is proposition B2. Perhaps you feel a headache in the head and not in the stomach because the pain is only affected by movements of the head. In the same way, touching your stomach may alter the intensity of pain but not touching other parts. This explanation is a combination of default hypothesis (it's not on the skin so it's inside) and sensorimotor theory (B2). It is appealing but let's see how it applies to the perception of sounds inside the head. Here again, sounds do not simply feel inside the head, but at a particular place inside the head (say on the left or on the right). But no movement that you make has any impact on the sound, and so proposition B2 only explains why the sound is inside the head, but not where in the head it is.

Let us formalize the problem more precisely. Your stomach hurts. There is a pattern of receptor activation that is characteristic of this condition, but no movement that you can make generates this pattern. In the case of auditory perception inside the head, in addition, no movement can alter this pattern. The default hypothesis is a logical inference: since it is a new pattern, it must be located where I cannot produce it, i.e., in my body. But as we saw, this is not sufficiently precise. To make some progress, I will start with a thought experiment.

Imagine that in your life, you have only ever touched two points on your skin, points A and B. When something touches point A, you feel it located at A because you recognize the activation pattern of the tactile receptors. But what if something touches a point between A and B? One possibility would be that you don't feel it as located anywhere: you just feel that something touches you. But this contradicts the fact that you feel sounds inside the head or pain inside the body. Another possibility is the default hypothesis: since you have never encountered the activation pattern, you know it is neither A nor B, so you feel the touch somewhere outside of A and B. But this logical inference does not produce anything more precise. It also seems to contradict the fact that we can hear sounds in our head on the left or on the right. To feel the touch somewhere between A and B requires some form of interpolation: if the new activation pattern resembles the pattern that is characteristic of A, then the touch was probably located somewhere near A; if it resembles both A and B, then it was probably located between A and B.

More generally, we can only have a finite number of experiences, so it is unlikely that the exact same activation pattern of receptors is encountered twice. Even if physical stimuli were identical, the body changes over time. Thus, it appears that we could not have any perceptual experience at all unless there were some form of interpolation. A natural proposition is then that detailed perception inside our body results from perceptual interpolation. This is not the same as logical inference, as in the case of the default hypothesis, because it necessarily involves some arbitrariness: there is no way you can logically know where exactly between A and B your skin was touched if you have never encountered the activation pattern before, so the perceived location is a guess.
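The interpolation idea can be sketched with a toy computation. Everything here is illustrative: the Gaussian activation patterns and the similarity-weighted average are simplifying assumptions of mine, not a model of actual receptors.

```python
import numpy as np

def activation(loc, receptors, width=1.0):
    """Hypothetical activation pattern of a 1-D array of tactile
    receptors for a stimulus at position `loc` (a Gaussian bump)."""
    return np.exp(-((receptors - loc) ** 2) / (2 * width ** 2))

def perceived_location(pattern, known_locs, receptors):
    """Guess a location for a never-encountered pattern by weighting
    the known locations by the pattern's similarity to their patterns.
    This is interpolation (a guess), not logical inference."""
    sims = np.array([pattern @ activation(loc, receptors) for loc in known_locs])
    weights = sims / sims.sum()
    return float(weights @ np.array(known_locs))

receptors = np.linspace(0.0, 10.0, 101)
A, B = 2.0, 8.0                       # the only two points ever touched
novel = activation(5.0, receptors)    # a stimulus exactly between A and B
midpoint = perceived_location(novel, [A, B], receptors)   # close to 5.0
near_A = perceived_location(activation(3.0, receptors), [A, B], receptors)
```

A stimulus whose pattern resembles A's more than B's is placed near A; one that resembles both equally is placed halfway, even though neither pattern has ever been encountered before.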

Now let us go back to our specific problem. How can pain be located inside our body? The idea of interpolation seems to imply that the pattern of receptor activation induced by such pains should resemble that of pains induced on the skin at opposite locations on the body. For example, pain in a joint, say the knee, should produce activation patterns resembling those of pains induced at the skin all around the knee.

There are two interesting points to note about the interpolation idea:
1) Sounds and pains located inside the body tend to be less precisely localized; the location is “vague”. This means that the concept of interpolation as simply picking a particular point between two points is incorrect: somehow the process of perceptual interpolation also affects the uncertainty of the location, or perhaps the perceived size.
2) How specifically are perceptual locations interpolated? In other words, what is the topology of spatial perception?

Spatial perception of pain (II)

Where does it hurt? A common answer to this question is: it hurts at the location of the pain receptors in the body. I will discuss three counter-arguments to this proposition, starting with the simple one. The simple argument is that there is a discrete set of pain receptors on the skin, but the spatial perception of pain is not discrete (the same argument applies to touch): we do not feel pain at discrete locations on our skin. It seems that the phenomenal space of pain is continuous, and this observation does not match anatomy. Then there are the anecdotal arguments: we have headaches even though there are no pain receptors in the brain; when you lose a limb (say a hand), the non-existent limb can still hurt (phantom limb pain); there are systematic mislocalizations of the causes of pain, for example heart pain felt in the arm. Finally there is a more sophisticated argument. Let us assume that we do feel pain at the location of our pain receptors. But then how do we know where our pain receptors are? One answer would be: we know it because it is somehow encoded in our genes. The “somehow” would deserve a precise explanation, but this is not necessary for this argument. This proposition requires that there is a systematic mapping between our genes and our body at a fine level of detail. That is, the precise location of pain receptors should depend only on our genes. But we know that this is not true. For example, the size of our body depends on the diet we had as children, and therefore so does the location of nerves. Therefore, even if we felt pain at the location of the receptors, we would still need to find out where these receptors are.

Another common answer to the question “where does it hurt?” is the objectivist answer: it hurts where the cause of the pain is, or where the injury is. An important difference from the previous answer is that it does not imply discrete spatial perception. From our experience, this proposition seems correct most of the time, but a simple objection is that there are cases of mislocalization of pain (e.g. heart pain). The same argument as above leads us to the same question: how do you know where the injury or cause of pain is?

If genes are not sufficient, then it must be based on experience. Let us imagine that you hit your knee against a wall. You can see the knee hitting the wall; you also have a tactile experience; you feel an intense pain at the moment of contact, which perhaps gradually fades out. I start with proposition A: there are two independent channels, one that provides pain information (intensity of pain, through pain receptors) and another one that provides spatial information (through tactile receptors or vision). The same question now applies to the spatial channel: how do you know where something touches you? This is simpler to answer because you can touch yourself: you can associate your movements with activation patterns of tactile receptors when you touch your skin. You know where a tactile stimulus is in the sense that you know how to make movements to touch it. An objection to proposition A is: what if there is no external stimulus that activates the tactile receptors? For example, your stomach could hurt because of acidity or a tooth could hurt because of bacteria. There is nothing you can see, and all the tactile receptors are on the skin, so there is no independent source of spatial information, and yet the pain feels precisely localized. The only way to save proposition A is to assume that there actually is another source of spatial information. For example, in the case of tooth pain, maybe tactile receptors (or the nerves) are actually activated. In the case of stomach ache, it is harder to imagine that these receptors on the skin are activated (but I leave it as an open question), and in this case you would need to hypothesize that there are other types of receptors, perhaps acidity receptors, that carry spatial information. But then we are back to the same problem as before: how do these receptors get to carry any spatial information at all? (how do you know where these neurons are?) 
You would then need to assume that these receptors inside your body can also be activated with your own movements. I leave this as an open possibility. There is still one difficulty, which I will address later because it is shared with other propositions: how can pain be localized inside the body?

I will now discuss two other related propositions. Proposition B is purely sensorimotor: you feel pain at a particular location because specific movements that you make produce that pain. This explanation only requires pain receptors, but these receptors must be activated in a way that is spatially specific (i.e., which varies systematically with the location of the pain stimulus). For example, by pinching yourself, you associate the pattern of pain receptor activation with the location of the skin where you pinched yourself. This proposition implies that you cannot feel any localized pain unless you have previously produced it yourself. But what about when a tooth aches? It seems that you can feel a toothache without ever having hurt your tooth yourself before. To save proposition B, it seems necessary to assume that the pain receptors can be activated at a “subthreshold” level that does not produce pain. In this case, feeling pain at a particular location requires having previously made specific movements that produce a similar (but possibly less intense) pattern of activation of the pain receptors.

There is a variation of B, which I will call proposition B2, which goes as follows. You feel the pain at a particular location because whenever this particular activation pattern of pain receptors is present, you can manipulate this pattern or the intensity of pain by specific movements or actions. For example, you hurt your knee and then you know you will feel a bit better if you put your hand on it, maybe because of the heat. Proposition B2 differs slightly from B in that it is how you can manipulate pain, rather than how you can cause it, that provides spatial information. The example of the tooth would then be: your tooth aches, and you know where it aches because by moving your tongue on your teeth you alter the intensity of pain.

Proposition C is learned association: the localization of pain is inferred from the activation pattern of pain receptors (which must be spatially selective), by association with another channel that carries spatial information (e.g. tactile receptors). For example: you hit your knee against the wall, the tactile receptors carry information about the location of pain, which you associate with the activation pattern of pain receptors. Later, your knee hurts but there is no mechanical contact with anything: you still feel the pain in the knee because it is the same activation pattern as when it was hit by the wall. In proposition C, you could not experience the location of a pain unless you have previously experienced the same pain in conjunction with an independent cue of location. So we have the same problem as in proposition A: what if a tooth aches for the first time? The proposition can be saved in the same way, by assuming that pain receptors can be activated at a subthreshold level that does not induce pain.

There are now two questions I want to address: 1) How can we feel pain inside the body? 2) Why do we make systematic errors in localizing pain?

Spatial perception of pain (I)

Pain is a great example of many important themes in philosophy of perception. Here I want to focus on spatial perception, but I will start with a few general comments.

First of all, why do we feel pain in the first place? Scientists tend to offer two types of explanations. One is expressed in terms of efficient causes: you hit your knee against a wall, it activates receptors in your skin, these receptors make some neurons in a particular region of the brain fire, and then these neurons produce a substance (molecules) that is characteristic of pain experience (material cause). Such an explanation has some value, for example it might suggest pharmacological targets for pain relief. However, it does not explain the experience of pain at all. Why is it that a particular molecule induces the experience of pain? This does not seem to be a much better explanation than to say that the contact of the wall induces pain (it is somewhat better because it is more universally associated with pain – i.e., pain caused by other events). This problem is what philosophers call the problem of “qualia”: to explain what pain feels like, why pain hurts rather than just being some information about the state of your body. It is notoriously difficult to explain qualia in terms of efficient causes (see Thomas Nagel's famous paper, “What is it like to be a bat?”). It is much easier to explain the informative content of pain (what is going on in your body) than its phenomenal content (how it feels).

A second type of explanation is in terms of final cause: it hurts because it is an experience you should avoid. It is useful for you to feel pain because then you will learn to avoid dangerous stimuli: pain has a survival value, which is why it has been selected by evolution. But again this type of explanation fails to address the phenomenal content of pain, because all it requires is a modification of behavior toward dangerous stimuli, not necessarily an emotional experience. You could well imagine that when your knee is hit, you get the information that this is something that you should avoid in the future, without experiencing an unpleasant feeling. You could also imagine that the event triggers a series of cognitive responses (e.g. negative conditioning) without producing any feeling at all. You could imagine that you hit your knee while sleeping, without being conscious of the event, and that your body reacts to it, perhaps even with negative conditioning (e.g. avoiding turning in the same direction again), without you actually experiencing pain. So why does it hurt?

I do not know why it hurts. So in this series I want to address another question: where does it hurt? This is also quite an interesting question, because although it sounds obvious to us what is meant by the location of pain, we are really asking about the perceived location in the world of a feeling. What kind of weird question is this?

What is computational neuroscience? (XXII) The whole is greater than the sum of its parts

In this post, I want to come back to methodological reductionism, the idea that the right way, or the only way, to understand the whole is to understand the elements that compose it. A classical rebuttal of methodological reductionism is that “the whole is greater than the sum of its parts” (Aristotle). I feel that this argument is often misunderstood, so I have thought of a simple example from biology.

Cells are enclosed by membranes, which are made of lipids. A membrane is a closed surface that defines an interior and an exterior. No part of a membrane is a membrane, because it is not a closed surface. You could study every single lipid molecule that forms a membrane in detail, and you would still have no understanding of what a membrane is, despite the fact that these molecules are all there is in the membrane (ontological reductionism), and that you have a deep understanding of every single one of them. This is because a membrane is defined as a particular relationship between the molecules, and therefore is not contained in or explained by any of them individually.

There is another important epistemological point in this example. You might want to take a “bottom-up” approach to understanding what a membrane is. You would start by looking at a single lipid molecule. Then you could take a larger patch of membrane and study it, building on the knowledge you have learned from the single molecule. Then you could look at larger patches of membrane to understand how they differ from smaller patches; and so on. However, at no stage in this incremental process do you approach a better understanding of what a membrane is, because the membrane only exists in the whole, not in a part of it, even a big part. “Almost a membrane” is not a membrane. In terms of models, a simple model of a cell membrane consisting of only a small number of lipid molecules arranged as a closed surface captures what a membrane is much better than a large-scale model consisting of almost all molecules of the original cell membrane.

This criticism applies in particular to purely data-driven strategies to understand the brain. You could think that the best model of the brain is the one that includes as much detailed empirical information about it as possible. The fallacy here is that no part of the brain is a brain. An isolated cortex in a box, for example, does not think or behave. A slice of brain is also not a brain. Something “close to the brain” is still not a brain. A mouse is a better model of a human than half a human, which is bigger and physically more similar but dead. This is the same problem as for understanding a membrane (a much simpler system!): the methodologically reductionist strategy misses that it is not the elements themselves that make the whole, it is the relationship between the elements. So the key to understanding such systems is not to increase the level of detail or similarity, but to capture the relevant higher-order principles.

What is computational neuroscience? (XXI) Lewis Carroll and Norbert Wiener on detailed models

The last published novel of Lewis Carroll, Sylvie and Bruno (1893 for the second volume), contains a passage that explains that a high level of detail is not necessarily what you want from a model. I quote it in full:

“What a useful thing a pocket-map is!” I remarked.

“That’s another thing we’ve learned from your Nation,” said Mein Herr, “map-making. But we’ve carried it much further than you. What do you consider the largest map that would be really useful?”

“About six inches to the mile.”

“Only six inches!” exclaimed Mein Herr. “We very soon got to six yards to the mile. Then we tried a hundred yards to the mile. And then came the grandest idea of all! We actually made a map of the country, on the scale of a mile to the mile!”

“Have you used it much?” I enquired.

“It has never been spread out, yet,” said Mein Herr: “the farmers objected: they said it would cover the whole country, and shut out the sunlight! So we now use the country itself, as its own map, and I assure you it does nearly as well.”

In other words: if the model is nearly as complex as the thing it applies to, then it is no more useful than the thing itself. This theme also appears in a 1945 essay by Arturo Rosenblueth and Norbert Wiener, “The Role of Models in Science”:

“The best material model for a cat is another, or preferably the same cat. In other words, should a material model thoroughly realize its purpose, the original situation could be grasped in its entirety and a model would be unnecessary. […] This ideal theoretical model cannot probably be achieved. Partial models, imperfect as they may be, are the only means developed by science for understanding the universe. This statement does not imply an attitude of defeatism but the recognition that the main tool of science is the human mind and that the human mind is finite.”

The last sentence is the most important: a model is not something that is meant to mimic reality; it is something that is constructed by and for the human mind to help it grasp complex aspects of reality.

Subjective physics

I just finished writing a text about "subjective physics": a term I made up to designate the description of the laws that govern sensory signals and their relationships with actions. It is relevant to systems computational neuroscience, embodiment theories and psychological theories of perception (in particular Gibson's ecological theory and the sensorimotor theory). Here is the abstract:

Imagine a naive organism who does not know anything about the world. It can capture signals through its sensors and it can make actions. What kind of knowledge about the world is accessible to the organism? This situation is analogous to that of a physicist trying to understand the world through observations and experiments. In the same way as physics describes the laws of the world obtained in this way by the scientist, I propose to name subjective physics the description of the laws that govern sensory signals and their relationships with actions, as observed from the perspective of the perceptual system of the organism. In this text, I present the main concepts of subjective physics, illustrated with concrete examples.

What is computational neuroscience? (XX) What is a realistic model?

What is a realistic neuron model? There is a hierarchy among neuron models, which goes like this: the least realistic model is the integrate-and-fire model, which is phenomenological; then comes the single-compartment Hodgkin-Huxley model; then multicompartmental Hodgkin-Huxley models (this hierarchy is questioned by a recently accepted paper that I wrote, but I will discuss it when the paper is out).
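To make the comparison concrete, here is a minimal sketch of the standard leaky integrate-and-fire model. The parameter values are arbitrary illustrative choices (units: ms, mV, MΩ, nA); the point is that spikes are simply pasted on when the voltage crosses a threshold, with no ionic mechanism.

```python
import numpy as np

def lif(I, dt=0.1, tau=10.0, E_L=-70.0, R=10.0, V_th=-50.0, V_reset=-70.0):
    """Leaky integrate-and-fire neuron (forward Euler integration).

    Integrates tau*dV/dt = E_L - V + R*I(t); when V crosses V_th,
    a spike time is recorded and V is reset. Returns spike times (ms).
    """
    V = E_L
    spike_times = []
    for i, current in enumerate(I):
        V += dt / tau * (E_L - V + R * current)
        if V >= V_th:
            spike_times.append(i * dt)
            V = V_reset
    return spike_times

# Constant suprathreshold input produces regular firing;
# zero input produces no spikes at all.
spikes = lif(3.0 * np.ones(1000))
silence = lif(np.zeros(1000))
```

The model makes predictions about membrane potential and spike times only, which matters for the discussion of empirical content below: there is no sodium current in it about which it could be right or wrong.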

But what is meant exactly by “realistic”? Take two models of a plane: a toy plane made of wood, and a simple paper plane. The first model certainly looks more like a plane. It has different recognizable elements of a plane: wings, a propeller, a cockpit. One might say that this model is more realistic. The second model doesn’t have a cockpit, and in fact doesn’t really look like a plane. However, unlike the first model, it flies – definitely an important characteristic of planes. So which one is more realistic?

There are generally two types of answers to justify the claim that the Hodgkin-Huxley model (HH) is more realistic than the integrate-and-fire model (IF). One is: the HH model has ionic channels, the IF model doesn’t. Another one is: the HH model has been proven right by experiments.

Let us start with the first type of answer. Strictly speaking, the HH model does not have ionic channels. Ionic channels are proteins. The HH model is a set of equations. There are parts of these equations that we identify with properties of proteins, but they are not the real things. Saying that the HH model has ionic channels is like saying that the wooden plane has a propeller: there is something we call a “propeller”, yes, but functionally it is not a propeller, it is a nice-looking piece of wood. Specifically, in the HH model, the sodium gating variable (m) has no biophysical counterpart in the actual sodium channel. The sodium current in the HH model corresponds to something that can be physically measured, but it is described as proportional to the third power of the gating variable m only because exponent 3 was the best fit to their data. We call it a “gating” variable only because it is part of a story in which it is a gating variable: the story that there are three independent gates that must all be open for the channel to be open. It is an attractive story, but we now know that this is not what happens with the sodium channel. So the model is consistent with a story in which there is a neuron with sodium channels, but the story is not an accurate description of reality. We might call this “wooden plane realism”.
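For reference, the sodium current in the original Hodgkin-Huxley model reads:

```latex
I_{\mathrm{Na}} = \bar g_{\mathrm{Na}}\, m^3 h\, (V - E_{\mathrm{Na}}),
\qquad
\frac{dm}{dt} = \alpha_m(V)\,(1 - m) - \beta_m(V)\, m
```

where $\bar g_{\mathrm{Na}}$ is the maximal conductance, $h$ is the inactivation variable, and the rate functions $\alpha_m$ and $\beta_m$ were fitted to voltage-clamp data. The exponent 3, and the story of three independent gates it suggests, came from curve fitting, not from any direct observation of the channel protein.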

The second type of answer is more scientific in its expression. However, it is a bit ambiguous. What Hodgkin and Huxley proved is that their model was an accurate description of the electrical behavior of a giant squid axon, which was space-clamped with a metal wire. But when we claim that the HH model is realistic, we mean something more general than that. We mean that the same “kind” of model would successfully account for the electrical behavior of other neurons. It would not be exactly the same model, because parameters and ionic channels would be different, and would have to be properly adjusted. So in fact it is rather the HH theory, or formalism, that is meant to be more realistic. However, for a given neuron, the HH “model” is only more realistic if the structure and parameters of the model are properly adjusted for that given neuron.

These remarks touch on several epistemological concepts that have been described by Karl Popper (The logic of scientific discovery, 1935). The first one is the notion of “empirical content” of a theory, which is defined as the set of possible falsifiers of the theory. In short, for a model, it is the type of (non-tautological) predictions that a model can make. For example, the integrate-and-fire model can make predictions about the membrane potential and the spike times, as a function of the input current. The HH model can additionally make predictions about the sodium and potassium currents. This is just about the logical structure of the models, in their articulation with empirical data, not about whether the models are accurate or not. We can consider greater empirical content as a more satisfying way to rephrase the idea that the HH model is more realistic because it “has” ionic channels. But it is a mistake to identify realism with empirical content: a theory can have a very large empirical content and make predictions that turn out to be all completely wrong.

Related to this is the notion of “levels of universality”. Consider these two statements (taken from Popper): all orbits of planets are ellipses; all orbits of heavenly bodies are ellipses. The second statement is more universal, because planets are heavenly bodies. So in this sense it is a better theory. HH theory has this quality of being quite universal: it is meant to apply to spiking and non-spiking neurons, for example.

Finally, a theory can be characterized by its “degree of precision”. Taking again an example from Popper: all orbits of planets are circles; all orbits of planets are ellipses. Independently of the empirical validity of these two statements, the first one is more precise than the second one, because all circles are ellipses. Applied to models, this is related to the number of parameters that are left unspecified. For example, multicompartmental models have a greater empirical content than single-compartment models, because they can make predictions about membrane potential at different locations on the dendritic tree. However, they are not necessarily more realistic because they are less precise: there are many unspecified parameters, and the additional empirical content is only accurate if these parameters are properly set.

So in fact there are two aspects of realism that can be discussed about models. One has to do with the logical structure of the model: what cases it is meant to apply to (empirical content, universality), and how precise it is in its predictions (precision); in other words, the ambition of the model. On this dimension, one seeks models with greater universality, greater empirical content, and greater precision. Another way to phrase it is to say that a useful model is one that has many opportunities to be wrong. It is harder than one might think to compare the HH and IF models on this dimension: on the one hand the HH model is more universal, but on the other hand it is less precise than the IF model (for example, a HH model does not necessarily spike).

This first aspect has nothing to do with how accurate the model is, with respect to empirical observations. It only has to do with the logical structure of the model. The second aspect has to do with empirical validity: how accurate the model predictions are. For example, we could well imagine that a phenomenological model produces more accurate predictions than a biophysical model, which has a greater empirical content. In this case the biophysical model makes more predictions, but they do not match empirical observations as well as the phenomenological model. Which model is more realistic?

What is sound? (XVI) On the spatial character of tones

An intriguing fact about the pitch of tones is that we tend to describe it using spatial characteristics such as “high” and “low”. In the same way, we speak of a rising intonation when the pitch increases. A sequence of notes of increasing frequency played on a piano is described as going “up” (even though it goes right on a piano, and down on a guitar). Yet there is nothing intrinsically spatial about the frequency of a tone. Why do we use these spatial attributes? An obvious possibility is that it is purely cultural: “high” and “low” are just arbitrary words that we happen to use to describe these characteristics of sounds. However, the following observations should be made:

- We use the terms low and high, which are also used for spatial height, and not specific words such as blurp and zboot. But we don’t use spatial words for colors and odors: instead we use specific words (red, green) or sometimes words borrowed from other senses (a hot color). Why use space and not something else?
- All languages seem to use more or less the same type of words.
- In an experiment done in 1930 by Carroll Pratt (“The spatial character of high and low tones”), subjects were asked to locate tones of various frequencies on a numbered scale running from the floor to the ceiling. The tones were presented through a speaker behind a screen, placed at a random height. It turned out that the judgments of spatial height made by subjects were very consistent, but were entirely determined by tone frequency rather than by the actual source position: high frequency tones were placed near the ceiling, low frequency tones near the floor. The result was later confirmed in congenitally blind persons and in young children (Roffler & Butler, JASA 1968).

Thus, there is some support for the hypothesis that tones are perceived to have a spatial character, which is reflected in language. But why? Here I will just speculate wildly and make a list of possibilities.

1. Sensorimotor hypothesis related to vocal production: when one makes sounds (sings or speaks), sounds of high pitch are felt to be produced higher than sounds of low pitch. This could be related to the spatial location of tactile vibrations on the skin, depending on fundamental frequency or timbre. Professional singers indeed use spatial words to describe where the voice “comes from” (which has no physical basis as such). This could be tested by measuring skin vibrations; in addition, congenitally mute people would be expected to show different patterns of tone localization.
2. Natural statistics: high frequency sounds tend to come from sources that are physically higher than low frequency sounds. For example, steps on the ground tend to produce low frequency sounds. Testing this hypothesis would require an extensive collection of natural recordings tagged with their spatial position. But note that the opposite trend holds for sounds produced by humans: adults have lower voices than children, yet are taller.
3. Elevation-dependent spectral cues: to estimate the elevation of a sound source, we rely on spectral cues introduced by the pinnae. Indeed, the circumvolutions of the pinnae introduce elevation-dependent notches in the spectrum. By association, the frequency of a tone would be associated with the spectral characteristics of a particular elevation. This could be tested by running a tone localization experiment and comparing the results with individual head-related transfer functions.

What is computational neuroscience? (XIX) Does the brain process information?

A general phrase that one reads very often about the brain in the context of perception is that it “processes information”. I have already discussed the term “information”, which is ambiguous and misleading. But here I want to discuss the term “process”. Is it true that the brain is in the business of “information processing”?

“Processing” refers to a procedure that takes something and turns it into something else by a sequence of operations, for example trees into paper. So the sentence implies that what the brain is doing is transforming things into other things. For example, it transforms the image of a face into the identity of the face. The coding paradigm, and more generally the information-processing paradigm, relies on this view.

I will take a concrete example. Animals can localize sounds, based on auditory cues such as the level difference between the two ears. In the information processing view, sound localization is a process that takes a pair of acoustic signals and turns it into a value representing the direction of the sound source. However, this is not literally what an animal does.

Let us take a cat. The cat lives and, most of the time, does nothing. Through its ears, it receives a continuous acoustic flow. This flow is transduced into electrical currents, which trigger some activity in the brain, that is, electrical events. At some moment in time, a mouse scratches the ground for a second, and the cat turns its eyes towards the source, or perhaps crawls to the mouse. During an extended period of time, the mouse is there in the world, and its location exists as a stable property. What the cat “produces”, on the other hand, is a discrete movement with properties that one can relate to the location of the mouse. Thus, sound localization behavior is characterized by discrete events occurring within a continuous sensory flow. Behavior is not adequately described as a transformation of things into things, because behavior is an event, not a thing: it happens.

The same remark applies to neurons. While a neuron is a thing that exists, a spike is an event that happens. It is a transient change in electrical properties that triggers changes in other neurons. As the term “neural activity” clearly suggests, a spike is not a “thing” but an event, an action on other neurons or muscles. But the notion of information processing implies that neural activity is the end result of a process rather than the process itself. There is a confusion between things and events. In a plant that turns trees into paper, trees and paper are the things that are transformed; the action of cutting trees is not one of these things. Yet this is what the information processing metaphor says about neural activity.

There are important practical implications for neural models. Traditionally, these models follow the information-processing paradigm. There is an input to the model, for example a pair of acoustical signals, and there is an output, for example an estimate of sound location (I have worked on this kind of model myself, see e.g. Goodman & Brette, PLoS Comp Biol 2010). The estimate is generally calculated from the activity of the neurons over the course of the simulation, which corresponds to the duration of the sound. For example, one could select the neuron with the maximum firing rate and map its index to a location; or one could compute an estimate based on population averages, etc. In any case, there is a well-defined input corresponding to a single sound event, and a single output value corresponding to the estimated location.
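To make the information-processing framing concrete, here is a minimal sketch of the two decoding schemes just mentioned (winner-take-all and population average). This is an illustrative toy, not the actual model of Goodman & Brette (2010): the tuning curves, preferred directions, and spike counts are all invented for the example.

```python
import numpy as np

# Hypothetical population: 19 neurons, each assumed to be tuned to a
# "preferred" source direction between -90 and +90 degrees.
preferred_directions = np.linspace(-90, 90, 19)  # degrees, left to right

def max_rate_estimate(spike_counts):
    """Winner-take-all: map the index of the most active neuron
    to its preferred direction."""
    return preferred_directions[np.argmax(spike_counts)]

def population_average_estimate(spike_counts):
    """Population average: preferred directions weighted by activity."""
    return np.sum(spike_counts * preferred_directions) / np.sum(spike_counts)

# Fake activity over one sound: a Gaussian bump of spike counts
# centered on a source at +30 degrees.
counts = np.exp(-(preferred_directions - 30) ** 2 / (2 * 20.0 ** 2))

print(max_rate_estimate(counts))          # → 30.0
print(population_average_estimate(counts))
```

Note that both decoders presuppose exactly what the text questions: a single, well-delimited input (total activity over the sound) and a single output value, rather than a continuous flow in which localization events happen.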

Now try to embed this kind of model in a more realistic scenario. There is a continuous acoustic flow. Sounds are presented at various locations in sequence, with silent gaps between them. The model must estimate the locations of these sounds. We have a first problem: the model produces estimates based on total activity over time, and this clearly cannot work here, since there is a sequence of sounds. The model could either produce a continuous estimate of source location (the equivalent of continuously pointing to the source), or produce an estimate of source location at specific times (the equivalent of making a discrete movement to the source), for example when each sound stops. In either case, what is the basis for the estimate, since it can no longer be the total activity? If it is a continuous estimate, how can it be a stable value if neurons have transient activities? More generally, how can the continuous flow of neural activity produce a discrete movement to a target position?

Thus, sound localization behavior is more than a mapping between pairs of signals and direction estimates. Describing perception as “information processing” entails the following steps: a particular time interval of sensory flow is selected and considered as a thing (rather than a flow of events); a particular set of movements is considered and some of its properties are extracted (e.g. direction); what the brain does is described as the transformation of the first thing into the second thing. Thus, it is an abstract construction by an external observer.

Let me summarize this post and the previous one. What is wrong about “information processing”? Two things are wrong. First (previous post), the view that perception is the transformation of information of some kind into information of another kind is self-contradictory, because a signal can only be considered “information” with respect to a perceptual system. This view of perception therefore proposes that there are things to be perceived by something other than the perceptual system. Second (this post), “processing” is the wrong term because the actions produced by the brain are not things but events: this is true at the scale of the organism (behavior) and at the scale of neurons (spikes). Both behavior and the causes of behavior are constituted by events, not things. It is also true of the mind (phenomenal consciousness). A thing can be transformed into another thing; an event happens.