Why does a constant stimulus feel constant? (I)

What is the relationship between neural activity and perception? That is, how does the quality of experience (qualia) relate to neural activity? For any scientist working in sensory neuroscience, this should be a central question – perhaps THE question. Unfortunately, the community's obsession with the question of “neural coding”, which is about relating neural activity to externally defined properties of sensory stimuli, does not help much in this regard. In fact, very naïve philosophical assumptions about this question seem to pervade the field. A popular one is that the perceived intensity of a stimulus corresponds to the firing rate of the neurons that are sensitive to it, in particular sensory receptor neurons. This idea was indeed proposed by Lord Adrian in the 1920s, and the basic argument is that the firing rate of sensory neurons generally increases when the strength of the stimulus is increased. Clearly the argument is very weak (it is only an observed correlation), but I will try to refute it explicitly, because it triggers some interesting remarks. To refute it, I will turn the perspective around: why is it that a constant stimulus feels constant? In fact, what is a constant stimulus? This is a very basic question about qualia, but it turns out to be a surprisingly deep one.

Let us start by listing a few sensory stimuli that feel constant, in terms of perceptual experience. A pure tone (e.g. the sound produced by a tuning fork or a phone) feels constant: in particular, its intensity and pitch seem constant. Another example could be a clear blue sky, or any object that you fixate. In the tactile modality, a constant pressure on a finger. For these stimuli, there is a perceived constancy in their qualities, i.e., the color of the sky does not seem to change, the frequency of the tone does not seem to change. Attention to the stimulus might fade, but one does not perceive that the stimulus changes. In contrast, neural activity is not constant at all. For a start, neurons fire spikes, which means that their membrane potential always changes, but we do not feel this change. Secondly, in sensory receptor neurons, but also in most sensory neurons in general, the frequency of those spikes changes in response to a constant stimulus: it tends to decrease (“adapt”). But again, a blue sky still feels the same blue (and not darker), and the pitch and intensity of a pure tone do not decrease. There appears to be no such simple connection between neural activity and the perceived intensity of a stimulus. Why is it that the intensity of a pure tone feels constant when the firing rate of every auditory nerve fiber decreases?
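To make the contrast concrete, here is a minimal sketch of spike-frequency adaptation – a leaky integrate-and-fire neuron with an adaptation current, driven by a perfectly constant input. All parameter values are invented for illustration; this is not a model of any particular receptor.

```python
# Minimal sketch: adapting integrate-and-fire neuron, constant input.
# All parameters are arbitrary illustrative choices.
dt = 0.1e-3                    # time step (s)
T = 1.0                        # duration (s)
tau_m, tau_a = 10e-3, 200e-3   # membrane and adaptation time constants (s)
v_th, delta_a = 1.0, 0.3       # spike threshold, adaptation increment
I = 2.0                        # constant input (the "constant stimulus")
v, a = 0.0, 0.0                # membrane potential, adaptation variable

spikes = []
for step in range(int(T / dt)):
    v += dt / tau_m * (I - a - v)   # integrate input minus adaptation
    a -= dt / tau_a * a             # adaptation decays between spikes
    if v >= v_th:                   # threshold crossing: spike and reset
        spikes.append(step * dt)
        v = 0.0
        a += delta_a                # each spike builds up adaptation

early = sum(1 for s in spikes if s < 0.1) / 0.1       # rate in first 100 ms
late = sum(1 for s in spikes if s >= T - 0.1) / 0.1   # rate in last 100 ms
print(f"constant input, yet rate falls from {early:.0f} Hz to {late:.0f} Hz")
```

The input never changes, yet every observable of the neuron – membrane potential, adaptation variable, firing rate – does. That is precisely the puzzle.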

In response to this question, one might be tempted to propose a homunculus-type argument: the brain analyzes the information in the responses of the sensory neurons and “reconstructs” the true stimulus, which is constant. In other words, it feels constant because the brain represents the outside world and so it can observe that the stimulus is constant. As I have noted a number of times in this blog, there is a big conceptual problem with this kind of argument (which is widely used in “neural coding” approaches), and that is circular logic: since the output of the reconstruction process is in the external world (the stimulus), how can the brain know what that output might be, when it is precisely the aim of the reconstruction process to discover it? But in this case, the fallacy of the argument is particularly obvious, for two reasons: 1) wherever the representation is supposed to be in the brain, neural activity is still not constant (in particular, it is made of spikes); 2) even more importantly, what I called a “constant stimulus” is physically not constant at all.

Physically, a pure tone is certainly not constant: the air vibrates at a relatively high rate, so the acoustic pressure at the ear fluctuates. The firing of auditory nerve fibers actually follows those fluctuations, at least if the frequency is low enough (a phenomenon called phase locking), but it certainly does not feel this way. Visual stimuli are also never constant, because of eye movements – even when one fixates an object (these are called fixational eye movements). In fact, if the retinal image is stabilized, visual perception quickly fades away. In summary: the sensory stimulus is not constant, neurons adapt, and in general neural activity is dynamic for any stimulus. So the question one should ask is not: how does the brain know that the stimulus is constant? but rather: what is it that makes those dynamic stimuli feel perceptually constant?
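For illustration, here is a toy version of phase locking: an inhomogeneous Poisson fiber whose rate follows the half-wave rectified acoustic pressure (the numbers are invented; this is not a fitted auditory nerve model). The “constant” tone is a rapidly oscillating pressure, and the spikes lock to its phase.

```python
import numpy as np

rng = np.random.default_rng(0)

# A "constant" 200 Hz pure tone: the acoustic pressure is anything but constant.
f, dt = 200.0, 1e-5
t = np.arange(0.0, 1.0, dt)
pressure = np.sin(2 * np.pi * f * t)

# Toy fiber: inhomogeneous Poisson process whose rate follows the
# half-wave rectified pressure (illustrative numbers only).
rate = 400.0 * np.maximum(pressure, 0.0)
spikes = t[rng.random(t.size) < rate * dt]

# Vector strength: 1 = spikes perfectly locked to the phase of the tone.
vs = np.abs(np.mean(np.exp(2j * np.pi * f * spikes)))
print(f"{spikes.size} spikes, vector strength = {vs:.2f}")   # ~0.8
```

A vector strength near 1 means the spikes are tightly locked to the phase of the tone: the fiber's output is as dynamic as the stimulus itself.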

Rate vs. timing (XXII) What Robert Rosen would say about rates and spikes

Robert Rosen was an influential theoretical biologist, who worked in particular on the nature of life. He made the point that living organisms are very special kinds of natural systems, in that they are anticipatory systems, which is the name of one of his most important books. He spends a substantial part of that book on epistemological issues, in particular on what a model is. The following figure is in my opinion a brilliant illustration of what a model is:

[Figure: the modeling relation]

The natural system is what is being modeled – for example, the brain or the solar system. The formal system is the model, for example a set of differential equations describing Newtonian mechanics. The model has state variables that represent observables of the natural system. The mapping from the natural system to those state variables is called “encoding” here – it corresponds to measurement. Decoding is the converse process. Causality describes changes occurring in the natural system, while implication describes changes in the formal system. A good model is one for which this diagram commutes, that is, causality in the natural system corresponds to implication in the formal system.
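Here is how I would express the diagram in code – my own toy example, not Rosen's. The natural system is stood in for by a black-box simulator (in reality it would be the world itself, accessible only through measurement), and the test of a good model is that the two paths around the square give the same result.

```python
# Toy illustration of the modeling relation (my sketch, not Rosen's).

def natural_system(state, dt):
    """Causality: how the (hidden) natural system actually evolves.
    Here, a cooling object; only its temperature is observable."""
    temperature, env = state
    return (temperature + dt * (-0.5) * (temperature - env), env)

def encode(state):
    """Measurement: map the natural system onto the model's state variable."""
    temperature, _ = state
    return temperature

def implication(T, dt, k=0.5, T_env=20.0):
    """The formal system: Newton's law of cooling, dT/dt = -k (T - T_env)."""
    return T + dt * (-k) * (T - T_env)

state = (90.0, 20.0)   # hot object in a 20-degree room
dt = 0.01

# Does the diagram commute? Compare the two paths around the square:
path1 = encode(natural_system(state, dt))   # causality, then encoding
path2 = implication(encode(state), dt)      # encoding, then implication
print(path1, path2, "commutes" if abs(path1 - path2) < 1e-9 else "fails")
```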

Let us apply this framework to the debate at hand: rate-based vs. spike-based theories of the brain. The question then is: can there be a good rate-based model of the brain, i.e., a model in which the observables are rates, or is it necessary to include spikes in the set of observables? This question has little to do with coding (how much information there is in either spike timing or rates about some other observable). It has to do with whether rates, as observables, sufficiently characterize the natural system so that the evolution of a formal system based on them can be mapped to the evolution of the natural system. In other words: do rates have a causal value in the dynamics of neural networks? It is easy to imagine how spikes might, because neurons (mainly) communicate with spikes and there are biophysical descriptions of the effect of a single spike on various biophysical quantities. It is not so easy for rates. The problem is that in our current biophysical understanding of neurons, spikes are observables that have a causal role (e.g. the notion of a postsynaptic potential), whereas rates are only described as observables (averages of some sort) with no causal nature. To support rate-based theories is to demonstrate the causal nature of rates. As far as I know, this has not been done, and I have heard no convincing reason why rates might have a causal nature.

In fact, given that a rate is an observable that is defined on top of another observable, spikes, the question reduces to a more formal one, about the relation between two formal systems: can a spike-based model of the brain be approximated by a rate-based model (in the same sense as depicted in the figure above)? This is an interesting remark, because now the question is not primarily empirical but formal, and therefore it can be addressed theoretically. In fact, this question has already been addressed: it is precisely the goal of all studies trying to derive mean-field descriptions of spiking neural networks. So far, the results of those studies are that 1) it is not possible in the general case; 2) it is possible under specific assumptions about the structure of the spiking model, which are known not to be empirically valid (typically: random sparse connectivity, and independent external noise to all neurons).
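As a small illustration of why the independent-noise assumption matters, here is a sketch (arbitrary parameters, and drastically simplified: the neurons are not even coupled): a population of integrate-and-fire neurons has a smooth, well-defined population rate when the noise is private to each neuron, but not when it is shared.

```python
import numpy as np

rng = np.random.default_rng(1)

# Population of uncoupled leaky integrate-and-fire neurons; illustrative values.
N, dt, T = 1000, 0.1e-3, 0.5
tau, v_th, mu, sigma = 10e-3, 1.0, 1.2, 0.5
steps = int(T / dt)

def population_rate(shared_noise):
    v = rng.random(N)                      # random initial conditions
    rate = np.zeros(steps)
    for i in range(steps):
        if shared_noise:
            xi = np.full(N, rng.standard_normal())   # one noise for everyone
        else:
            xi = rng.standard_normal(N)              # private noise
        v += dt / tau * (mu - v) + sigma * np.sqrt(dt / tau) * xi
        fired = v >= v_th
        v[fired] = 0.0                     # reset after a spike
        rate[i] = fired.mean() / dt        # instantaneous population rate (Hz)
    return rate

for shared in (False, True):
    r = population_rate(shared)
    print(f"shared={shared}: mean rate {r.mean():.1f} Hz, "
          f"std of instantaneous rate {r.std():.1f} Hz")
```

The mean rates are similar in both cases, but with shared noise the instantaneous population rate fluctuates wildly: a single rate variable no longer captures the dynamics of the system.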

Spontaneous firing in auditory nerve fibers might reflect mechanical noise

One thing has always puzzled me in auditory nerve physiology. Most auditory nerve fibers saturate quickly when the sound level is increased. This implies that, at least for tones, firing rate carries no information at high levels. But it is also known that above a few kHz (depending on the species), neural firing in response to tones becomes temporally imprecise (loss of phase locking), so that spike timing does not carry information either. In other words: at high levels, high-frequency auditory nerve fibers spend a lot of energy firing spikes that carry close to zero information. It is odd.

Now, in a recent study in J Neurosci, Temchin and Ruggero show data suggesting (although it remains a bit speculative) that the spontaneous firing of fibers may reflect mechanical vibrations rather than neural noise. In my mind, this implies that there is shared variability in the firing of neighboring fibers, and therefore the loss of spike timing precision at high frequencies may hide precise relative timing between neighboring fibers, which would carry information about the temporal fine structure of sounds.
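The study does not show this directly, but the implication can be sketched as follows (all numbers invented): two fibers driven by the same “mechanical” fluctuation plus private spiking noise are each irregular in isolation, yet their spike counts co-vary, unlike a control fiber without the shared drive.

```python
import numpy as np

rng = np.random.default_rng(2)

# Shared "mechanical" drive, smoothed over ~2 ms; illustrative values only.
dt, T = 1e-4, 20.0
n = int(T / dt)
shared = np.convolve(rng.standard_normal(n), np.ones(20) / 20, "same")
shared /= shared.std()

def fiber(gain, base=100.0):
    """Bernoulli spike train whose rate is modulated by the shared drive."""
    rate = np.maximum(base * (1.0 + gain * shared), 0.0)
    return (rng.random(n) < rate * dt).astype(float)

def rebin(train, factor=10):       # 0.1 ms -> 1 ms bins
    return train.reshape(-1, factor).sum(axis=1)

a, b = rebin(fiber(0.9)), rebin(fiber(0.9))   # neighbors: shared drive
c = rebin(fiber(0.0))                         # control: no shared drive

print("neighbors r =", np.corrcoef(a, b)[0, 1])   # clearly positive
print("control   r =", np.corrcoef(a, c)[0, 1])   # ~ 0
```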

A neurological case supporting the sensorimotor theory of perception

I just read chapter 5, “Hands”, of Oliver Sacks' “The man who mistook his wife for a hat”. He describes the case of a 60-year-old lady with cerebral palsy who could not do anything with her hands and, more interestingly, had a profound impairment of manual perception – an inability to recognize any object by touch. Yet her tactile senses were intact, and she actually could move her hands – although she did not use them. Sacks reasoned that perhaps she had never developed her manual abilities by exploration, as a child would, because she had been looked after so much due to her congenital disease. As a therapeutic experiment, he then tricked her into grasping things with her hands (e.g. by putting food slightly out of reach). In a matter of weeks, she developed complete functional use of her hands (after 60 years!) and even started to make remarkable sculptures. Simultaneously, she also developed normal tactile perception (instant object recognition).

This is quite an interesting case for psychological theories of perception. As the sensorimotor theory of perception predicts, it shows that normal perception requires action. Normal sensory neural structures do not by themselves give rise to perception.

On the expanding volume of experimental data – (I) So... where is the data?

One of the major motivations of the HBP, as I have heard from Henry Markram, is that we already produce tons of experimental data and yet we still don't understand much about the brain. So what we need is not more data, but rather to do something with all that data. I don't think that the specific way in which the HBP proposes to deal with the data necessarily follows from this remark, but I do think the remark is generally correct. There are many more papers than one can read, and much more data than one can analyze or comprehend. I would like to give my view on this problem as a theoretical neuroscientist.

First of all, as a theoretical neuroscientist, I have to ask the question that many of my colleagues probably have in mind: yes, there is an enormous amount of data, but where is it? Most theoreticians crave data. They want data. Where is it? On the hard drives of the scientists who produced it, and there it usually stays. So there is a very simple reason why this enormous amount of data is not exploited: it is not shared in the first place. In my view, this would be the first problem to solve. How is it that the data are not shared? If you carefully read the instructions to authors of journals, you generally find that they explicitly ask the authors to share all the data analyzed in the paper with anyone who asks. And authors have to sign that statement. I can hear some theoreticians laughing in the back! Let's face it: it almost never happens, unless you personally know the authors, and even then it is complicated. Why? I have heard mainly two reasons. There is the “I have to dig out the data” type of reason. That is, the data lie in a disorganized constellation of files and folders, with custom formats, etc.: basically, it's a mess. This is probably the same reason why many modelling papers also don't share the code for their models: the authors are not proud of their code! There are some efforts underway to address this issue by developing standardized formats – for example NeuroML for neuron models, and a similar effort for experimental data led by the Allen Institute. I doubt that this will entirely solve the problem, but it is something.

In any case, I think this first issue is solvable (see below). The second reason why people don't share their data is much more profound: they want to own the data they have produced. They want to own it because it has value. In the academic world of neuroscience, data is wealth. It takes effort and resources to produce data, and you don't want someone taking your data, analyzing it in a different way and publishing it. Many scientists have in mind a few ideas to use their data in different ways, and so they want to protect it. Yet this attitude directly contradicts the statement they have signed when submitting their papers.

There is an additional factor that has more to do with the epistemological biases of the community. Compare these two abstracts. (1) “We wanted to test hypothesis A, and so we did the following experiment: it works”. Here the authors used data that they previously acquired, and confronted it with some hypothesis. (2) “We wanted to test hypothesis A, and so we checked previously published data: it works”. Here the authors used data that other authors previously acquired, and confronted it with some hypothesis. Exactly the same methodology. But you certainly know that abstract (2) will probably get rejected by the highest impact journals. It is not “new data”. It has happened to me a number of times that reviewers complained that we used “old data”. Apparently the timestamp of the data was a problem. This might mean two things. One is that the value of a paper is essentially in the experimental data, so if the raw data is the same, the paper has no additional value. I find this statement philosophically inept, but in any case it is rather hypocritical, as it does not actually distinguish between abstracts (1) and (2). A second interpretation is that data is only meaningful if it was acquired after rather than before a hypothesis was expressed. From a logical standpoint, this is absurd. But I think it stems from the widespread Popperian fantasy that science is about testing theories with critical experiments: that is, at a given moment in time, there are a number of theories consistent with the current facts, and you distinguish between them with a carefully chosen experiment. This is certainly not true from a historical point of view (see Thomas Kuhn), and it is naïve from a philosophical point of view (see Imre Lakatos). I have discussed this elsewhere in detail. More trivially, there is the notion in the mind of many scientists (reviewers) that having the data beforehand is sort of “cheating”, because then it is trivial to come up with a hypothesis or theory that is consistent with it. I would argue that this is trivial only if a small amount of data is considered and the explanation is allowed to be as complex as the data; otherwise it is not trivial at all – rather, it is what science is all about. But in any case, it is again hypocritical, because when you read abstract (1) you don't know whether the data was actually produced after or before the hypothesis was expressed.

So what can we do about it? First, there is a trivial way to make a lot of progress on data sharing. Instructions to authors already specify that the data should be public. So let's just stop being hypocritical and make the submission of data mandatory at the time of paper submission, or acceptance. This could be handled by journals, or by archiving initiatives (such as arXiv for preprints or ModelDB for models). I believe this would partly solve the issue of “I have to dig out the data”, because authors would know in advance that they will have to submit the data. So they will do what is necessary – let's be honest for a minute: this is probably not the most complicated thing scientists are supposed to deal with. Quite possibly, standardized formats might help, but the first significant step is really to make the data accessible.

For the epistemological issue, the problem is more profound. Scientists have to accept that producing data is not the only valuable thing in science, and that making sense of the data is also valuable. I will address this specific point in the next post of this short series.

Update (20.08.2014). I just realized that Neuron actually makes it mandatory for some types of datasets (but not all) to be submitted along with the paper.

The challenge of retrograde amnesia to theories of memory

I am reading Oliver Sacks' “The man who mistook his wife for a hat”. In chapter 2, he describes a case of retrograde amnesia. Around 1970, the patient went through an episode of alcoholism, which resulted in the loss of 25 years of his most recent memories (declarative memory). As a result, the patient thought he was 19 and lived in 1945, as if time had stopped. So not only could he not transfer short-term memory to long-term memory, but a large part of his previously stored memory had been erased. And it was not a random fraction of his memories that was erased: it was exactly the most recent ones. He seemed to remember old events perfectly, and to have absolutely no memory of more recent ones.

This is quite a challenge for current neural theories of memory. The main theoretical concept about memory is the notion of a neural assembly supporting associative memory. Imagine that a memory is made of a number of elements that are associated together; then the substrate of this memory is a connected network of neurons that “code” for those elements, in some structure of the brain, with connections to the relevant parts of the brain (say, the visual cortex for visual features, etc.). This conceptual framework can be extended with the sequential activation of neurons. Now in this framework, how do you erase the most recent memories? Note that by “most recent”, I am talking about 25 years, not short-term memory.

One trivial possibility would be that each memory has a timestamp, encoded as part of the neural assembly supporting that memory. Then some mechanism erases all memories that have a timestamp more recent than a particular date. Why and how this would happen is mysterious. In addition, “reading the timestamp” would entail activating those memories (all of them), which would then need to exist at that time, and then erasing them. It simply sounds absurd. A more plausible explanation is that, for some reason, recent memories are more fragile than old ones. But why is that?

This is a very interesting point, because in current neural theories of memory, it is the old memories that are more fragile than the recent ones. The reason is that memories are imprinted by modifications of synaptic connections according to a Hebbian mechanism (neurons that are co-activated strengthen their connections), and these connections then get degraded over time, because the same neurons are activated in other contexts by ongoing activity. So in current theories of memory, memory traces decay over time. But what retrograde amnesia implies is exactly the opposite: memory traces should strengthen over time. How is it possible that memories strengthen over time?

One possibility is that memories are replayed. If you recall a memory, the neurons supporting that memory activate and so the corresponding connections strengthen. But conscious recollection will probably not do the trick, because then there would not be a strict temporal cut-off: i.e., some recent memories might be recalled more often than some older ones. So what seems to be necessary is a continuous subconscious replay of memories, independent of emotional or attentional states. Clearly, this is quite a departure from current neural theories of memory.
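To see how replay reverses the temporal gradient, here is a deliberately crude sketch – invented numbers, not a published model – comparing trace strength as a function of memory age under pure decay and under continuous replay.

```python
import numpy as np

ages = np.arange(1, 41)     # age of each memory (years) at the time of insult

# 1) Standard Hebbian view: traces degrade with ongoing activity over time.
decay = np.exp(-0.1 * ages)            # older = weaker

# 2) Continuous replay: each replay re-imprints the trace, so strength
#    accumulates with age.
replay = 1.0 + 0.2 * ages              # older = stronger

# Suppose the insult wipes out every trace weaker than a 25-year-old memory:
print("decay model loses ages", ages[decay < decay[ages == 25].item()])
print("replay model loses ages", ages[replay < replay[ages == 25].item()])
```

Only the replay version reproduces the patient's sharp temporal cut-off: every trace more recent than a given date falls below threshold, and every older trace survives.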

Rate vs. timing (XXI) Rate coding in motor control

Motor control is sometimes presented as the prototypical example of rate coding. That is, muscle contraction is determined by the firing rate of motoneurons, so ultimately the “output” of the nervous system follows a rate code. This is a very interesting example, precisely because it is actually not an example of coding, which I previously argued is a problematic concept.

I will briefly recapitulate what “neural coding” means and why it is a problematic concept. “Coding” means presenting some property of things in the world (the orientation of a bar, or an image) in another form (spikes, rates). That a neuron “codes” for something means nothing more than that its activity co-varies with that thing. For example, pupillary diameter encodes the amount of light captured by the retina (because of the pupillary contraction reflex). Or blood flow in the primary visual cortex encodes local visual orientation (this is what is actually measured by intrinsic optical imaging). So coding is really about observations made by an external observer; it does not tell us much about how the system works. It is a common source of confusion, because when one speaks of neural coding, there is generally the implicit assumption that the nervous system “decodes” it somehow. But presumably the brain does not “read out” blood flow to infer local visual orientation. The coding perspective leaves the interesting part (what is the “representation” for?) largely unspecified, which is the essence of the homunculus fallacy.

The control of muscles by motoneurons does not fit this framework, because each spike produced by a motoneuron has a causal impact on muscle contraction: its activity does not simply co-vary with muscle contraction, it causes it. So first of all, motor control is not an example of rate coding because it is not really an example of coding. But still, we might consider that it conforms to rate-based theories of neural computation. I examine this statement now.

I will now summarize a few facts about muscle control by motoneurons, which are found in neuroscience textbooks. First of all, a motoneuron controls a number of muscle fibers and one fiber is contacted by a single motoneuron (I will only discuss α motoneurons here). There is indeed a clear correlation between muscle force and firing rate of the motoneurons. In fact, each single action potential produces a “muscle twitch”, i.e., the force increases for some time. There is also some amount of temporal summation, in the same way as temporal summation of postsynaptic potentials, so there is a direct relationship between the number of spikes produced by the motoneurons and muscle force.

Up to this point, it seems fair to say that firing rate is what determines muscle force. But what do we mean by that exactly? If we look at muscle tension as a function of time, resulting from a spike train produced by a motoneuron, what we see is a time-varying function that is determined by the timing of every spike. The rate-based view would be that the precise timing of spikes does not make a significant difference to that function. But it does make a difference, although perhaps a small one: for example, the variability of muscle tension is not the same if the spike train is regular (small variability) or random, e.g. Poisson (larger variability). Now this gets interesting: during stationary muscle contraction (no movement), motoneurons generate constant muscle tension and they fire regularly, unlike cortical neurons (for example). Two remarks: 1) this does not at all conform to the standard rate-based theory in which rate is the intensity of a Poisson process, since there is little stochasticity; 2) firing regularly is exactly what motoneurons should do to minimize the variability of muscle tension. This latter remark is particularly significant. It means that, beyond the average firing rate, spikes occur at precise times that minimize tension variability, and so spikes do matter. Thus motor control rather seems to support spike-based theories.
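This point is easy to check in a minimal simulation (the twitch shape and all numbers are invented for illustration): convolve a regular and a Poisson spike train of the same mean rate with a twitch kernel, and compare the variability of the resulting tension.

```python
import numpy as np

rng = np.random.default_rng(3)

# Muscle tension as a sum of twitches, one per spike; illustrative values.
dt, T, rate = 1e-3, 10.0, 20.0
n = int(T / dt)
t = np.arange(n) * dt

# Twitch: alpha-function shape peaking ~80 ms after the spike (invented).
tau = 0.08
tk = t[:int(0.5 / dt)]
kernel = (tk / tau) * np.exp(1 - tk / tau)

regular = np.zeros(n)
regular[::int(1 / (rate * dt))] = 1.0              # one spike every 50 ms
poisson = (rng.random(n) < rate * dt).astype(float)  # same mean rate

for name, train in [("regular", regular), ("poisson", poisson)]:
    tension = np.convolve(train, kernel)[:n]
    steady = tension[n // 2:]                      # discard onset transient
    print(f"{name}: mean tension {steady.mean():.2f}, std {steady.std():.3f}")
```

Same mean rate and same mean tension, but the regular train produces a much steadier tension – consistent with the idea that the regularity of motoneuron firing is functional rather than incidental.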

Spatial perception of pain (IV) Empirical evidence

In this post, I confront the propositions I previously described about where it hurts with experimental evidence. There is a recent review on the spatial perception of pain that contains a lot of relevant information (Haggard et al., 2013).

First of all, proposition A (spatial information is independently provided by tactile receptors) is quite clearly ruled out by empirical evidence. There are three types of nerve fibers innervating the skin. Aβ fibers mediate tactile sensations, while Aδ and C fibers mediate pain and temperature sensations. There is a lot of evidence that these are clearly separated, and the type of sensation does not depend on stimulation frequency (i.e., stimulating Aβ fibers never elicits pain). In addition, spatial localization does not seem to rely on Aβ fibers. For example, it is possible to block conduction in Aβ and Aδ fibers, leaving only C fibers. In this case, noxious stimuli can be localized on the skin with about the same resolution as when nerve conduction is normal (Koltzenburg et al., 1993). This implies that the pattern of activations of nociceptors conveys all the necessary information for spatial localization.

Now these patterns could be given a spatial meaning in different ways. One is learned association with tactile or visual stimuli (proposition C). In primary somatosensory cortex (S1), there are nociceptive somatotopic maps of single digits that are highly aligned with maps of responses to Aβ stimuli; there are also neurons that are sensitive to both mechanical stimulation and temperature. But this is only suggestive of a common spatial frame for both modalities. More specifically, if spatial information in pain-specific fibers were acquired from the tactile modality, then the spatial resolution of pain could never be better than that of touch – we would expect that it is similar and perhaps slightly worse. A systematic mapping on the whole body surface shows that this is the case in most locations, but not all (Mancini et al., 2014). Specifically, spatial resolution for pain is more accurate than for touch at the shoulder. In addition, the gradients of spatial resolution from shoulder to hand are opposite for touch and pain, in accordance with the gradients of innervation density of the corresponding fibers. Finally, there was one case of a subject completely lacking Aβ fibers that had normal spatial resolution. These observations rule out the proposition that spatial information in pain is acquired from the tactile modality (at least that it is entirely acquired from it).

It could be that spatial information in pain is acquired from vision. Although I have not seen an experimental study of this, I would be very surprised to learn that blind people cannot localize pain. We could then postulate that spatial localization of pain is acquired from either touch or vision, whichever is present. The best way to test this would be to map the spatial resolution of a blind subject lacking Aβ fibers. Even without this test, the possibility is quite implausible. Indeed, the subject lacking Aβ fibers had a spatial resolution similar to that of other subjects. But the map includes the lower back, which cannot be seen directly (the data are not shown for that specific location – I am assuming it follows the same pattern, since the authors do not comment on it). Therefore, in that region there is neither vision nor touch for that subject. All these observations tend to reject proposition C.

There is an interesting observation about the relation between innervation density and spatial resolution. As I mentioned above, along the arm the gradients of spatial resolution differ between pain and touch, and they agree with the gradients of innervation density of the corresponding fibers. But in the fingertips, the relation does not hold: spatial resolution for pain is high but the innervation density of pain-related fibers is low (Mancini et al., 2013). How is this possible? One possibility is cognitive factors: for example, we use our hands a lot, so perhaps attention is focused on the fingers, or we have more experience grasping things and have thus developed our spatial discrimination abilities. Another possibility (which I haven't seen mentioned in these studies) is that the patterns of activation may be intrinsically more discriminable, because of the shape or tissue composition of the fingers, or the presence of fingerprints.

We are then left with proposition B (and variation B2): you feel pain at a particular location because specific movements that you make produce that pain. I noted earlier that this proposition raises a problem: it implies that you cannot localize a pain that you have not produced yourself in the past. This seems implausible, when we think for example of toothache. I argued then that to solve this problem, one would need to postulate that nociceptors can be activated at a “subthreshold” level that does not produce pain. In this case, to feel pain at a particular location requires having previously produced specific movements generating a similar (but possibly less intense) pattern of activation of the pain receptors. The subthreshold activity of these fibers should reach the central nervous system and induce plastic changes supporting future localization of noxious stimuli, without producing any conscious sensation. Finally, I note a potential problem in the fact that intensity and spatial information are carried through the same channel (pain-related fibers): the intensity of pain changes the pattern of activation from which spatial information is extracted. If spatial localization is learned at subthreshold levels, then there is a potential issue in generalizing to pain-inducing levels, with the possibility of biases in pain localization.

Haggard P, Iannetti GD, Longo MR (2013) Spatial sensory organization and body representation in pain perception. Curr Biol 23:R164–R176.
Koltzenburg M, Handwerker HO, Torebjörk HE (1993) The ability of humans to localise noxious stimuli. Neurosci Lett 150:219–222.
Mancini F, Bauleo A, Cole J, Lui F, Porro CA, Haggard P, Iannetti GD (2014) Whole-body mapping of spatial acuity for pain and touch. Ann Neurol 75:917–924.
Mancini F, Sambo CF, Ramirez JD, Bennett DLH, Haggard P, Iannetti GD (2013) A Fovea for Pain at the Fingertips. Curr Biol 23:496–500.

Notes on consciousness. (II) Perceiving and knowing

Perceiving space is knowing where things are in the world. Or is it?

I am sitting in my living room, which has big windows onto a courtyard. The windows are sound-proof, so if I open just one of them, acoustic waves mostly enter the room through that window. Now someone enters the courtyard on the right, walks across it and arrives at the door on the left. If I close my eyes, I know that the person is walking from right to left. However, what I hear is the sound of someone walking, always coming from the same direction, that of the window. If someone asks me where the person is at a given moment in time, I can point in the more or less correct direction, by inference. But this is not what I perceive: I always perceive the sound as coming from the same direction. There is a difference between perceiving (phenomenological) and knowing (conceptual). And there is a difference between phenomenology and behavior.

Another striking example is referred pain. Referred pain is a pain that one feels at a location away from the cause of the injury. For example, in a heart attack, one may feel pain in the arm rather than in the chest. This is a known phenomenon and if you know it, you may correctly identify the location of injury in the heart when you feel pain in the arm. But it doesn't change the fact that you feel pain in the arm. You may entirely convince yourself that the injury is in the heart, and all your behavior might be consistent with that belief, but still you will feel the pain in the arm.

There are several interesting conclusions we can draw from these remarks. First, perception is not entirely reducible to behavior. Here we are touching the hard problem of consciousness (qualia): you could observe a cat turning its head toward a sound source and you would think that the cat perceives that the sound came from the source, but in reality you don't know. Maybe the cat perceives it somewhere else but corrects its movement because it knows its perception tends to be biased. With humans, you could perhaps distinguish between these possibilities because humans speak. But without this option, a purely functionalist approach to perception (in terms of relationships between sensory stimuli and behavior) misses part of the phenomenon.

Second, inference is not the same as perception. Spatial perception is not just the process of inferring where something is from sensory inputs. There is also the experience of perception, which is not captured by the objectivist view.

Spatial perception of pain (III) How can we feel pain inside the body?

I will start with a summary of the propositions I made in the previous post about where it hurts.

- Proposition A (independent channels): there are two independent channels, one that provides pain information (intensity or quality of pain, through pain receptors) and another one that provides spatial information (through tactile receptors or vision). The two channels are bound by co-occurrence.
- Proposition B (sensorimotor): you feel pain at a particular location because specific movements that you make produce that pain.
- Proposition B2 (sensorimotor): you feel pain at a particular location because whenever this particular activation pattern of pain receptors is present, you can manipulate this pattern or the intensity of pain by specific movements or actions.
- Proposition C (learned association): the localization of pain is inferred from the activation pattern of pain receptors (which must be spatially selective), by association with another channel that carries spatial information (e.g. tactile receptors).

Note that in A and C, I have moved the problem of spatial information to another modality, either touch or vision. We may consider that spatial information in touch and vision is constituted by sensorimotor contingencies, but it is not an important assumption here. The puzzle is the following: we can only touch our skin, the surface of our body, and we cannot see inside our body. If touch is central to the spatial perception of pain, then how is it possible that we can feel pain inside the body (say, in the stomach or in the head)?

I have discussed a similar example in spatial perception: when one hears music or speech through headphones, it usually feels like the sound comes from “inside the head”. First of all, there is a simple argument for why sounds should feel as if they come from your body in this case: when you move your head, the sound is unaffected, which means that the source is part of your head – either on its surface (the skin) or inside it. The same argument applies to pain felt inside the body: rigid displacements of the body do not change the pain or any information associated with it. Therefore the pain is in you, not in the external world. However, this remark does not explain why pain feels inside the body rather than on the skin.

I mentioned another possibility for sounds: “inside” as a default hypothesis. If you cannot identify the source as coming from somewhere outside, then the sound feels located inside. This default hypothesis raises a question: why does the sound feel located inside rather than not located at all? And there is another problem here: pain does not simply feel inside; it feels at a particular place inside the body (e.g. the stomach).

A first answer is proposition B2. Perhaps you feel a headache in the head and not in the stomach because the pain is only affected by movements of the head. In the same way, touching your stomach may alter the intensity of a stomach pain, while touching other parts does not. This explanation is a combination of the default hypothesis (it's not on the skin, so it's inside) and the sensorimotor theory (B2). It is appealing, but let's see how it applies to the perception of sounds inside the head. Here again, sounds do not simply feel inside the head, but at a particular place inside the head (say, on the left or on the right). Yet no movement that you make has any impact on the sound, so proposition B2 explains only why the sound is inside the head, not where in the head it is.

Let us formalize the problem more precisely. Your stomach hurts. There is a pattern of receptor activation that is characteristic of this condition, but no movement that you can make generates this pattern. Similarly, in the case of auditory perception inside the head, no movement can alter the pattern. The default hypothesis is logical inference: since it is a new pattern, it must be located where I cannot produce it, that is, in my body. But as we saw, this is not sufficiently precise. To make some progress, I will start with a thought experiment.

Imagine that in your life, you have touched only two points on your skin, points A and B. When something touches point A, you feel it located at A because you recognize the activation pattern of the tactile receptors. But what if something touches a point between A and B? One possibility would be that you do not feel it located at all; you just feel that something touches you. But this contradicts the fact that you feel sounds inside the head or pain inside the body. Another possibility is the default hypothesis: since you have never encountered this activation pattern, you know it is neither A nor B, so you feel the touch somewhere outside of A and B. But this logical inference does not produce anything more precise, and it seems to contradict the fact that we can hear sounds in our head on the left or on the right. To feel the touch somewhere between A and B requires some form of interpolation: if the new activation pattern resembles the pattern that is characteristic of A, then the touch was probably located somewhere near A; if it resembles both A and B, then it was probably located between A and B.

More generally, we can only have a finite number of experiences, and so it is unlikely that the exact same activation pattern of receptors is encountered twice. Even if physical stimuli were identical, the body changes over time. Thus, it appears that we could not have any perceptual experience at all unless there is some form of interpolation. A natural proposition is then that detailed perception inside our body results from perceptual interpolation. This is not the same as logical inference, as in the case of the default hypothesis, because it necessarily involves some arbitrariness: there is no way you can logically know where exactly between A and B your skin was touched if you have never encountered the activation pattern before, so the perceived location is a guess.
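Here is one way to make perceptual interpolation concrete – my own sketch, with made-up Gaussian receptor profiles: a new activation pattern is assigned a location by weighting the known locations by pattern similarity. The interpolated location is graded but biased, a “guess” in the sense above.

```python
import numpy as np

# Only two locations, A = 0.2 and B = 0.8 (on a skin coordinate from 0 to 1),
# have ever been experienced; new patterns are localized by similarity-
# weighted interpolation between the known patterns.
RECEPTORS = np.linspace(0.0, 1.0, 20)

def pattern(x, width=0.15):
    """Activation of 20 receptors with Gaussian sensitivity profiles."""
    return np.exp(-((RECEPTORS - x) ** 2) / (2 * width ** 2))

known = {0.2: pattern(0.2), 0.8: pattern(0.8)}

def perceived_location(p):
    """Weight each known location by cosine similarity of the patterns."""
    sims = {loc: np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q))
            for loc, q in known.items()}
    total = sum(sims.values())
    return sum(loc * s / total for loc, s in sims.items())

for x in (0.2, 0.35, 0.5, 0.65, 0.8):
    print(f"touch at {x:.2f} -> perceived near "
          f"{perceived_location(pattern(x)):.2f}")
```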

Now let us go back to our specific problem. How can pain be located inside our body? The idea of interpolation seems to imply that the pattern of receptor activation induced by such pains should resemble those of pains induced on the skin at surrounding locations on the body. For example, pain in a joint, say the knee, should produce activation patterns resembling those of pains induced at the skin all around the knee.

There are two interesting points to note about the interpolation idea:
1) Sounds and pains located inside the body tend to be less precisely localized; the location is “vague”. This means that the concept of interpolation as picking a particular point between two points is incorrect: somehow the process of perceptual interpolation also affects the uncertainty of the location, or perhaps the perceived size.
2) How specifically are perceptual locations interpolated? In other words, what is the topology of spatial perception?