Neural correlates of perception (what's wrong with them)

Broadly speaking, neural correlates of a percept, e.g. seeing a face, are what happens with neurons when we see a face. For example, a bunch of neurons would fire when we see Jennifer Aniston. What do neural correlates teach us about perception, or more generally about the mind-body problem?

The interest in neural correlates of perception is largely underpinned by the implicit belief that there must be a mapping between percepts and brain states, the brain being seen as a physical system. That is, the percept of seeing Jennifer Aniston's face corresponds to a particular brain state; the percept of a sound at a particular spatial position corresponds to another one. There are considerable conceptual difficulties with this belief. Consider these two thought experiments.

1) Imagine we can instantaneously freeze the brain so that its components, ions, tissues, etc., are preserved in the same state (it's a thought experiment!). Does the brain still experience seeing Jennifer Aniston's face?

2) Imagine we record the precise spatiotemporal activity of all neurons in response to Jennifer Aniston's face. Then we inactivate all synapses, and we replay the activity pattern optogenetically. Would the brain still experience seeing Jennifer Aniston's face?

Intuitively, our answer to both questions is negative. If you answered the second question positively, then consider this one:

2b) Imagine we record the precise spatiotemporal activity of all neurons in response to Jennifer Aniston's face. Then we replay the activity pattern on a set of light-emitting diodes. Would the diodes experience seeing Jennifer Aniston's face?

If our intuition is correct, then brain states, even understood more broadly as “firing patterns”, are not constitutive of percepts. It appears that whatever a percept is, it must involve not just the state or even the activity of neurons, but the interaction between neurons. Therefore when we describe neural correlates of perception in terms of neural “activity”, we appear to be missing a crucial ingredient, which has to do with interactional properties of neurons. I must admit that “interactional properties of neurons” is a loosely defined notion, but there seems to be a need for a concept that goes beyond that of an activity “pattern”, a concept still to be clarified (see afterthought below).

Underlying the problematic concept of neural correlates of perception is the representational view of perception: the idea that whatever we perceive must somehow be “represented” in the brain, like neural paintings of the world. I have pointed out the deep problems with the representational view on this blog (for example here and there) – and obviously I am not the first one to do so (see e.g. Gibson, Brooks, Merleau-Ponty, O'Regan). Let us simply reflect on the following point. When we look at Jennifer Aniston's face, we experience the percept of seeing her face at different moments. It seems as if at any instant, we are experiencing the same percept (along with others, of course). Possibly this is an illusion and experience is actually discrete in time, but in any case the perceptual “grain of time” is no more than a few tens of ms. Therefore, when looking for neural correlates of the percept, we cannot be content with average activity, over time or over trials. We do not experience percepts “on average”, but at any instant (this is related to my points on the rate vs. spike debate). What we should be looking for is something in the interactional properties of neurons that is invariant through the entire time during which the percept is experienced. This concept is quite different from the more traditional “neural paintings” concept.

So, in the current state of research, what neural correlates of perception tell us about perception, more specifically about the mind-body problem, is disappointingly: not so much.


Afterthought: an interesting analogy is the concept of temperature in physics. Since temperature corresponds to the movement of particles, it is not defined by a static snapshot of a physical object: it corresponds to the activity, not to the position or nature, of the particles. What's more, the concept of temperature makes no sense except when considering the interaction between agitated particles. Temperature is perhaps an example of an “interactional property” of a set of particles.
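To make the analogy concrete, here is a little numerical sketch based on kinetic theory (an idealized monatomic gas; the only physical inputs are the Boltzmann constant and an atomic mass): temperature is recovered from the particles' velocities, and a “frozen” snapshot, which preserves positions but abolishes motion, carries no temperature at all.

```python
import numpy as np

# In kinetic theory, the temperature of an ideal monatomic gas is defined
# by the mean kinetic energy of its particles: T = m<v^2> / (3 k_B).
# It is a property of motion, not of the static configuration.
k_B = 1.380649e-23   # Boltzmann constant (J/K)
m = 6.6465e-27       # mass of a helium atom (kg)

rng = np.random.default_rng(0)
T_true = 300.0       # target temperature (K)
# Maxwell-Boltzmann: each velocity component ~ N(0, k_B T / m)
sigma = np.sqrt(k_B * T_true / m)
v = rng.normal(0.0, sigma, size=(100_000, 3))   # particle velocities (m/s)

T_est = m * np.mean(np.sum(v**2, axis=1)) / (3 * k_B)
print(abs(T_est - T_true) < 5.0)   # True: temperature recovered from motion

# "Freezing" the system: positions preserved, velocities set to zero.
# The same estimator now returns 0: temperature is not in the static state.
v_frozen = np.zeros_like(v)
T_frozen = m * np.mean(np.sum(v_frozen**2, axis=1)) / (3 * k_B)
print(T_frozen)   # 0.0
```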

Some propositions for future spatial hearing research (III) - The coding problem

In the previous posts, I have proposed that we should look at the ecological problem of sound localization, and that in terms of physiology we should go beyond tuning curves. However, even if all of this is addressed, we still have a big problem. We are looking for “neural representations” or “codes”, but neural representations are observer-centric concepts that make little sense from the viewpoint of the organism, as I have discussed a few times before (for example there). Neural responses are not there to be read by some little homunculus; the neurons are just exciting other neurons, which you are not recording. Those other neurons are not “reading the code”, you are. Those neurons are just reacting instantly to the electrical stimulation of the neurons that constitute what we like to call a “neural representation”.

Not everyone is receptive to the philosophical points, so let me just give one example. You could look at the responses of lots of binaural neurons and realize they have lots of different tunings. So you could suggest: maybe sound location is represented by the most active neurons. But someone else realizes that the average response of all those neurons varies gradually with sound location, so maybe sound location is actually encoded in the average response? Wait a minute: why throw all this information away? Maybe sound location is represented by the entire pattern of activity? The problem we are facing here is not that we don't know how to determine which one is true, but rather that all of these are true (the last one being trivially true). Yes, sound location is represented by the identity of the most active neurons, by the average response and by the pattern of activity: there is a mapping between sound location and each of those features. That is, you, the external observer, can look at those features and guess what the sound location was. What is this supposed to prove?
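To make this concrete, here is a toy simulation (the tuning model is entirely invented: Gaussian tuning to azimuth, with peak rates that differ across cells so that the average rate varies with azimuth too). Every readout below “works”, in the weak sense that an external observer can invert it:

```python
import numpy as np

# Toy population of binaural neurons with Gaussian tuning to azimuth.
prefs = np.linspace(-90, 90, 19)           # preferred azimuths (deg)
amps = 10 + 0.05 * (prefs + 90)            # peak rate, correlated with tuning

def population_response(azimuth):
    return amps * np.exp(-0.5 * ((azimuth - prefs) / 30.0) ** 2)

azimuths = np.linspace(-80, 80, 9)
responses = np.array([population_response(a) for a in azimuths])

best = prefs[np.argmax(responses, axis=1)]  # "most active neuron" readout
mean_rate = responses.mean(axis=1)          # "average response" readout

print(np.array_equal(best, azimuths))       # True: identity readout works here
print(np.ptp(mean_rate) > 1.0)              # True: the average varies too
# ...and the full pattern is trivially decodable by nearest neighbor:
probe = population_response(42.0)
idx = np.argmin(np.linalg.norm(responses - probe, axis=1))
print(azimuths[idx])                        # 40.0, the nearest sampled azimuth
```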

By focusing on neural representations, we are not looking at the right problem. What we want to know in the end is not so much how neural activity varies with various parameters of an experiment, but how neural activity constitutes the spatial percept, or perhaps more modestly, how it drives behavioral orientation responses. Now certainly looking at neural responses is a necessary first step, but we can't answer the interesting question if we stop there. So how can we answer the interesting question?

Well, I won't claim that I have a good answer because I think that's one of the major conceptual problems in systems neuroscience today. But one proposition that I think goes in the right direction is to do stimulations instead of, or in conjunction with, recordings. Ideally, those stimulations should be such as to trigger behaviors. Is average activity the important feature? Stimulate neurons at different places and you should see the same orientation response. Is the identity of active neurons important? With the same experiment, you should see different responses, varying systematically with the stimulated neurons.

It's possible: it was actually done 45 years ago (Syka and Straschill, 1970). Electrical stimulation of the inferior colliculus with microelectrodes can trigger specific orienting responses. These days one could probably also do optogenetic stimulation. It's not going to be simple, but I think it's worth it.

Some propositions for future spatial hearing research (II) - Tuning curves

In the previous post, I proposed to look at the ecological problem of sound localization, rather than the artificial and computationally trivial problem that is generally addressed. As regards physiology, this means that a neural representation of sound location is a property of collective neural responses that is unchanged for the class of stimuli that produce the same spatial percept. This is not a property that you will find at the single-neuron level. To give a sense of what kind of property I am talking about, consider the Jeffress model, a classic model of sound localization. It goes as follows: each neuron is tuned to a particular location, and there are a bunch of neurons with different tunings. When a sound is presented, you identify the most active neuron, and that tells you where the sound comes from. If it is the same neuron that is most active for different sounds coming from the same location, then you have the kind of representation I am talking about: the maximally active neuron is a representation of (specifically) sound location.
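Here is the model in its idealized signal-processing form (a toy sketch, not a model of real binaural neurons: no head, no diffraction). Each internal delay channel plays the role of a coincidence-detector neuron, and the most active channel matches the interaural time difference for different broadband sources from the same location:

```python
import numpy as np

# Jeffress model as cross-correlation (idealized: two mics, air in between).
# Each internal delay d is one "neuron"; its activity is the correlation of
# the two ear signals at lag d. The most active "neuron" gives the ITD.
itd = 20                       # true interaural delay (samples)
delays = np.arange(-40, 41)    # internal delay lines (samples)

def estimated_itd(source):
    left = source
    right = np.zeros_like(source)
    right[itd:] = source[:-itd]          # right ear receives the sound later
    n = len(source)
    acts = np.array([np.dot(left[50:n-50], right[50+d:n-50+d])
                     for d in delays])   # one coincidence count per delay line
    return delays[np.argmax(acts)]

rng = np.random.default_rng(1)
noise = rng.normal(size=5000)                              # broadband source
smooth = np.convolve(noise, np.ones(8) / 8, mode="same")   # different waveform

# The same delay channel wins for different sounds from the same location:
print(estimated_itd(noise), estimated_itd(smooth))   # 20 20
```

Note that this invariance holds for broadband sources; a pure tone would make the cross-correlation periodic and the winning channel ambiguous.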

The Jeffress model actually has this kind of nice property (unlike competitors), but only when you see it as a signal processing model (cross-correlation) applied to an idealized acoustical situation where you have no head (ie two mics with just air between them). What we pointed out in a recent paper in eLife is that it loses that property when you consider sound diffraction introduced by the head; quite intriguingly, it seems that binaural neurons actually compensate for that (ie their tunings are frequency-dependent in the same way as interaural time differences are frequency-dependent).

But I want to discuss a more fundamental point that has to do with tuning curves. By “tuning curve”, I am referring to a measurement of how the firing rate of a neuron varies when one stimulus dimension is varied. Suppose that indeed you do have neurons that are tuned to different sound locations. Then you present a stimulus (of the same kind) and you look for the maximally active neuron. The tuning of that neuron should match the location of the presented stimulus. Right? Well, actually no. At least not in principle. That would be true if all tuning curves had exactly the same shape and peak value and only differed by a translation, or at least if shape and magnitude were not correlated with tuning. But otherwise it's just an incorrect inference. If you don't see what I mean, look at this paper on auditory nerve responses. Usually one would show selectivity curves of auditory nerve fibers, ie firing rate vs. sound frequency for a bunch of fibers (note that auditory scientists also use “tuning curve” to mean something else: the minimum sound level that elicits a response vs. frequency). Here the authors show the data differently in Fig. 1: the responses of all fibers along the cochlea, for a bunch of frequencies. I bet it is not what you would expect from reading textbooks on hearing. Individually, fibers are tuned to frequency. Yet you can't really pick the most active fiber and tell what sound frequency was presented. Actually there are different frequencies at which the response peaks at the same place. It's basically a mess. But that is what the auditory system gets when you present a sound: the response of the entire cochlea for one sound, not the response of one neuron to lots of different stimuli.
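Here is a toy example of the incorrect inference (the tuning curves are made up: Gaussian in azimuth, with a peak rate that grows with preferred azimuth). Every cell is nicely tuned, yet “pick the most active neuron” misses the true stimulus by a fixed bias:

```python
import numpy as np

# Gaussian tuning curves whose magnitude covaries with the preferred azimuth.
prefs = np.arange(-90, 91)        # preferred azimuths (deg), 1-deg steps
amps = np.exp(prefs / 50.0)       # peak rate correlated with tuning
sigma = 30.0                      # tuning width (deg)

def rates(stimulus_azimuth):
    return amps * np.exp(-0.5 * ((stimulus_azimuth - prefs) / sigma) ** 2)

# Present a stimulus straight ahead (0 deg), read out the most active cell:
print(prefs[np.argmax(rates(0.0))])   # 18, not 0

# For Gaussian curves the bias is sigma^2 * d(log amplitude)/d(pref):
print(sigma ** 2 / 50.0)              # 18.0
```

If amplitude did not covary with tuning (`amps` constant), the readout would be exact; the bias comes entirely from the correlation between shape/magnitude and tuning.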

So, what about sound localization and binaural neurons, do we have this kind of problem or not? Well, I don't know for sure, because no one actually shows whether the shape of tuning curves varies systematically with tuning or not. Most of the time, one shows a few normalized responses and then extracts a couple of features of the tuning curves for each cell (ie the tuning in frequency and ITD) and shows some trends. The problem is that we can't infer the population response from tunings unless we know quite precisely how the tuning curves depend on tuning. That is particularly problematic when tuning curves are broad, which is the case for the rodents used in many physiological studies.

I see two ways to solve this problem. One is to prove that there is no problem: you look at tuning curves, and you show that there is no correlation between tuning and any other characteristic of the tuning curves (for example, average the tuning curves of cells with the same tuning, and compare across tunings). That would be quite reassuring. My intuition: that will work at high frequencies, maybe, or in the barn owl perhaps (quite narrow curves), but not at low frequencies, and not for most cells in rodents (guinea pigs and gerbils).
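This check is easy to script once you have tuning curves on a common ITD axis. A sketch on fake data (everything below is simulated; with real data, `curves` and `best_itds` would be measurements): align each curve on its preferred ITD, average within groups of similar tuning, and compare the group averages.

```python
import numpy as np

# Fake dataset: 200 cells with Gaussian ITD tuning plus noise; by construction
# the curve shape does NOT depend on the preferred ITD, so the check passes.
rng = np.random.default_rng(2)
itds = np.linspace(-500e-6, 500e-6, 101)              # common ITD axis (s)
best_itds = rng.uniform(-300e-6, 300e-6, 200)         # preferred ITDs
curves = np.array([np.exp(-0.5 * ((itds - b) / 150e-6) ** 2)
                   + 0.05 * rng.normal(size=itds.size) for b in best_itds])

# Align each curve on its preferred ITD, then average within tuning groups.
aligned = np.array([np.interp(itds, itds - b, c)
                    for b, c in zip(best_itds, curves)])
groups = np.digitize(best_itds, [-100e-6, 100e-6])    # left / mid / right tuning
central = np.abs(itds) <= 200e-6                      # lags covered by all cells
means = np.array([aligned[groups == g][:, central].mean(axis=0)
                  for g in (0, 1, 2)])

# If shape were correlated with tuning, the group averages would differ.
spread = np.ptp(means, axis=0).max()
print(spread < 0.06)   # True: no correlation in this fake dataset
```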

If it doesn't work and there are correlations, then the problem will get quite complicated. You could think of looking for a parametric representation of the responses. It's a possibility and one might make some progress this way, but it might become quite difficult to do when you add extra stimulus dimensions (level etc). There is also the issue of gathering data from several animals, which will introduce extra variability.

The only clean way I see of dealing with this problem is to actually record the entire population response (or a large part of the structure). It sounds very challenging, but large-scale recording techniques are really progressing quite fast these days. Very dense electrode arrays, various types of imaging techniques; it's difficult but probably possible at some point.

Some propositions for future spatial hearing research (I) – The ecological situation and the computational problem

In these few posts, I will be describing my personal view of the kind of developments I would like to see in spatial hearing research. You might wonder: if this is any good, then why would I put it on my blog rather than in a grant proposal? Well, I have hesitated for a while, but there are only so many things you can do in your life, and in the end I would just be glad if someone picked up some of these ideas and made some progress in an interesting direction. Some of them are pretty demanding both in terms of effort and expertise, which is also a reason why I am not likely to pursue all of these myself. And finally, I believe in open science, and it would be interesting to read some comments or have some discussions. All this being said, I am open to collaboration on these subjects if someone is motivated enough.

The basic question is: how do we (or animals) localize sounds in space? (this does not cover all of spatial hearing)

My personal feeling is that the field has made some real progress on this question but has now exploited all there is to exploit in the current approaches. In a nutshell, those approaches are: consider a restricted set of lab stimuli, typically a set of sounds that are varied in one spatial dimension, and look at how physiological and behavioral responses change when you vary that spatial parameter (the “coding” approach).

Let us start with what I think is the most fundamental point: the stimuli. For practical reasons, scientists want to use nice clean reproducible sounds in their experiments, for example tones and bursts of white noise. There are very good reasons for that. One is that if you want to make your results reproducible by your peers, then it's simpler to write that you used a 70 dB pure tone of frequency 1000 Hz than the sound of a mouse scratching the ground, even though the latter is clearly a more ecologically relevant sound for a cat. Another reason is that you want a clean, non-noisy signal, both for reproducibility reasons and because you don't want to do lots of experiments. Finally, you typically vary just one stimulus parameter (e.g. azimuthal angle of the source) because that already makes a lot of experiments.

All of this is very sensible, but it means that in terms of the computational task of localizing a sound, we are actually looking at a really trivial task. Think about it as if you were to design a sound localization algorithm. Suppose all sounds are going to be drawn from a set of tones that vary along a single spatial dimension, say azimuth. How would you do it? I will tell you how I would do it: measure the average intensity at the left ear, and use a table to map it to sound direction. Works perfectly. Obviously that's not what actual signal processing techniques do, and probably that's not what the auditory system does. Why not? Because in real life, you have confounding factors. With my algorithm, you would think loud sounds come from the left and soft sounds from the right. Not a good algorithm. The difficulty of the sound localization problem is precisely to locate sounds despite all the possible confounding factors, ie all the non-spatial properties of sounds. There are many of them: level, spectrum, envelope, duration, source size, source directivity, early reflections, reverberation, noise, etc. That's why it's actually hard and algorithms are not that good in ecological conditions. That is the ecological problem, but there is actually very little research on it (in biology). As I argued in two papers (one about the general problem and one applied to binaural neurons), the problem that is generally addressed is not the ecological problem of sound localization, but the problem of sensitivity to sound location, a much simpler problem.
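Here is the “algorithm” in ten lines, with a made-up level model (received left-ear level falls off linearly with azimuth; all numbers are invented for the sake of the toy world). It is perfect on the restricted stimulus set, and absurd the moment the source level changes:

```python
import numpy as np

# Toy "left-ear level meter" localizer for the imaginary lab world.
def left_ear_level(azimuth_deg, source_level_db=70.0):
    return source_level_db - 0.1 * azimuth_deg   # louder on the left

# Build the lookup table from the training set (every sound at 70 dB):
azimuths = np.arange(-90.0, 91.0, 1.0)
table = {round(left_ear_level(a), 2): a for a in azimuths}

# Perfect on the restricted stimulus set:
print(table[round(left_ear_level(30.0), 2)])                        # 30.0

# Absurd once the confound (source level) varies: a slightly louder
# sound from the same place is "localized" on the opposite side.
print(table[round(left_ear_level(30.0, source_level_db=76.0), 2)])  # -30.0
```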

This state of affairs is very problematic in my opinion when it comes to understanding “neural representations” of sound location, or more generally, how the auditory system deals with sound location. For example, many studies have looked at the information content of neural responses and connected it with behavioral measurements. There are claims such as: this neuron's firing contains as much information about sound location as the entire organism. Other studies have claimed to have identified optimal codes for sound location, all based on the non-ecological approach I have just described. Sorry to be blunt, but: this is nonsense. Such claims would have been meaningful if we actually lived in a world of entirely identical sounds coming from different directions. And so in that world my little algorithm based on left ear intensity would probably be optimal. But we don't live in that world, and I would still not use the left-ear algorithm even if I encountered one of those sounds. I would use the algorithm that works in general, and not care so much about algorithms that are optimal for imaginary worlds.

What do we mean when we say that “neurons encode sound location”? Certainly we can't mean that neural responses are sensitive to location, ie that they vary when you vary sound location, because that would be true of basically all neurons that respond to sounds. If this is what we mean, then we are just saying that a sizeable portion of the brain is sensitive to auditory stimuli. Not that interesting. I think we mean, or at least we should mean, that neurons encode sound location specifically, that is, there is something in the collective response of the neurons that varies with sound location and not with other things. This something is the “representation”, and its most basic property is that it does not change if the sound location percept does not change. Unfortunately, that property cannot be assessed if all you ever vary in your stimulus is the spatial dimension, and so in a nutshell: current approaches based on restricted stimulus sets cannot, by construction, address the question of neural representations of sound location. They address the question of sensitivity – a prerequisite, but really quite far from the actual ecological problem.

So I think the first thing to do would be to start actually addressing the ecological problem. This means essentially inverting the current paradigm: instead of looking at how responses (physiological/behavioral) change when a spatial dimension is varied, look at how they change (or at what doesn't change) when non-spatial dimensions are varied. I would proceed in 3 steps:

1) Acoustics. First of all, what are the ecological signals? Perhaps surprisingly, no one has measured that systematically (as far as I know). That is, for an actual physical source at a given location, not in a lab (say in a quiet field, to simplify things), what do the binaural signals look like? What is the structure of the noise? How do the signals vary over repetitions, or if you use a different source? One would need to do lots of recordings with different sound sources and different acoustic configurations (we have started to do that a little bit in the lab). Then we would start to have a reasonable idea of what the sound localization problem really is.

2) Behavior. The ecological problem of sound localization is difficult, but are we actually good at it? So far, I have not seen this question addressed in the literature. Usually, there is a restricted set of sounds, with high signal-to-noise ratio, often noise bursts or clicks. So actually, we don't know how good we (or animals) are at localizing sounds in ecological situations. Animal behavior experiments are difficult, but a lot could be done with humans. There is some psychophysical research that tends to show that humans are generally not much affected by confounding factors (eg level); it's a good starting point.

3) Physiology. As mentioned above, the point is to identify what in neural responses is specifically about sound location (or more precisely, perceived sound location), as opposed to other things. That implies varying not only the spatial dimension but also other dimensions. That's a problem because you need more experiments, but you could start with one non-spatial dimension that is particularly salient. There is another problem, which is that you are looking for stable properties of neural responses, but it's unlikely that you find them in one or a few neurons. So probably, you would need to record from many neurons (next post), and this gets quite challenging.

Next post is a criticism of tuning curves; and I'll end on stimulating vs. recording.

A neurological case supporting the sensorimotor theory of perception

I just read chapter 5 of Oliver Sacks' “The man who mistook his wife for a hat”, “Hands”. He describes the case of a 60-year-old lady with cerebral palsy, who could not do anything with her hands and, more interestingly, had a profound impairment of manual perception – an inability to recognize any object by touch. Yet her tactile senses were intact, and she actually could move her hands – although she did not use them. Sacks reasoned that perhaps she had not developed her manual abilities by exploration, as a child would, because she had been looked after so much due to her congenital disease. Then, as a therapeutic experiment, he tricked her into grasping things with her hands (e.g. putting food slightly out of reach). In a matter of weeks, she developed complete functional use of her hands (after 60 years!), and even started to make remarkable sculptures. Simultaneously, she also developed normal tactile perception (instant object recognition).

This is quite an interesting case for psychological theories of perception. As the sensorimotor theory of perception predicts, it shows that normal perception requires action. Normal sensory neural structures do not by themselves give rise to perception.

Spatial perception of pain (IV) Empirical evidence

In this post I confront the propositions I previously described about where it hurts with experimental evidence. There is a recent review about spatial perception of pain, which contains a lot of relevant information (Haggard et al., 2013).

First of all, proposition A (spatial information is independently provided by tactile receptors) is quite clearly ruled out by empirical evidence. There are three types of nerve fibers innervating the skin. Aβ fibers mediate tactile sensations, while Aδ and C fibers mediate pain and temperature sensations. There is a lot of evidence that these are clearly separated, and the type of sensation does not depend on stimulation frequency (i.e., stimulating Aβ fibers never elicits pain). In addition, spatial localization does not seem to rely on Aβ fibers. For example, it is possible to block conduction in Aβ and Aδ fibers, leaving only C fibers. In this case, noxious stimuli can be localized on the skin with about the same resolution as when nerve conduction is normal (Koltzenburg et al., 1993). This implies that the pattern of activations of nociceptors conveys all the necessary information for spatial localization.

Now these patterns could be given a spatial meaning in different ways. One is learned association with tactile or visual stimuli (proposition C). In primary somatosensory cortex (S1), there are nociceptive somatotopic maps of single digits that are highly aligned with maps of responses to Aβ stimuli; there are also neurons that are sensitive to both mechanical stimulation and temperature. But this is only suggestive of a common spatial frame for both modalities. More specifically, if spatial information in pain-specific fibers were acquired from the tactile modality, then the spatial resolution of pain could never be better than that of touch – we would expect that it is similar and perhaps slightly worse. A systematic mapping on the whole body surface shows that this is the case in most locations, but not all (Mancini et al., 2014). Specifically, spatial resolution for pain is more accurate than for touch at the shoulder. In addition, the gradients of spatial resolution from shoulder to hand are opposite for touch and pain, in accordance with the gradients of innervation density of the corresponding fibers. Finally, there was one case of a subject completely lacking Aβ fibers that had normal spatial resolution. These observations rule out the proposition that spatial information in pain is acquired from the tactile modality (at least that it is entirely acquired from it).

It could be that spatial information in pain is acquired from vision. Although I have not seen such an experimental study, I would be very surprised to learn that blind people cannot localize pain. Finally, we could then postulate that spatial localization of pain is acquired from either touch or vision, whichever one is present. The best way to test this would be to map the spatial resolution of a blind subject lacking Aβ fibers. Without this test, the possibility is still quite implausible. Indeed, the subject lacking Aβ fibers had spatial resolution similar to that of other subjects. But the map includes the lower back, which cannot be seen directly (however, the data are not shown for that specific location – I am assuming it follows the same pattern since the authors don't comment on it). Therefore, in that region there is neither vision nor touch for that subject. All these observations tend to reject proposition C.

There is an interesting observation about the relation between innervation density and spatial resolution. As I mentioned above, along the arm the gradients of spatial resolution for pain and for touch differ, and they agree with the gradients of innervation density of the corresponding fibers. But in the fingertips, the relation does not hold: spatial resolution for pain is high but innervation density of pain-related fibers is low (Mancini et al., 2013). How is this possible? One possibility is cognitive factors: for example, we use our hands a lot, so perhaps attention is focused on the fingers, or we have more experience grasping things and thus developing our spatial discrimination abilities. Another possibility (which I haven't seen mentioned in these studies) is that the patterns of activation may be intrinsically more discriminable, because of the shape, tissue composition or the presence of fingerprints.

We are then left with proposition B (and variation B2): you feel pain at a particular location because specific movements that you make produce that pain. I noted earlier that this proposition raises a problem, which is that you cannot localize pain that you have not produced yourself in the past. It seems a bit implausible, when we think for example of tooth ache. I argued then that to solve this problem, one would need to postulate that nociceptors can be activated at a “subthreshold” level that does not produce pain. In this case, to feel pain at a particular location requires previously producing specific movements that produce a similar (but possibly less intense) pattern of activation of the pain receptors. The subthreshold activity of these fibers should reach the central nervous system and induce plastic changes supporting future localization of noxious stimuli, without producing any conscious sensation. Finally, I note that there is a potential problem in the fact that intensity and spatial information are carried through the same channel (pain-related fibers), and therefore intensity of pain changes the pattern of activation from which spatial information is extracted. If spatial localization is learned at subthreshold levels, then there is a potential issue of generalizing to pain-inducing levels, with possibilities for biases in pain localization.

Haggard P, Iannetti GD, Longo MR (2013) Spatial sensory organization and body representation in pain perception. Curr Biol 23:R164–R176.
Koltzenburg M, Handwerker HO, Torebjörk HE (1993) The ability of humans to localise noxious stimuli. Neurosci Lett 150:219–222.
Mancini F, Bauleo A, Cole J, Lui F, Porro CA, Haggard P, Iannetti GD (2014) Whole-body mapping of spatial acuity for pain and touch. Ann Neurol 75:917–924.
Mancini F, Sambo CF, Ramirez JD, Bennett DLH, Haggard P, Iannetti GD (2013) A Fovea for Pain at the Fingertips. Curr Biol 23:496–500.

Spatial perception of pain (III) How can we feel pain inside the body?

I will first start with a summary of the different propositions I made in the previous post about where it hurts.

- Proposition A (independent channels): there are two independent channels, one that provides pain information (intensity or quality of pain, through pain receptors) and another one that provides spatial information (through tactile receptors or vision). The two channels are bound by co-occurrence.
- Proposition B (sensorimotor): you feel pain at a particular location because specific movements that you make produce that pain.
- Proposition B2 (sensorimotor): you feel pain at a particular location because whenever this particular activation pattern of pain receptors is present, you can manipulate this pattern or the intensity of pain by specific movements or actions.
- Proposition C (learned association): the localization of pain is inferred from the activation pattern of pain receptors (which must be spatially selective), by association with another channel that carries spatial information (e.g. tactile receptors).

Note that in A and C, I have moved the problem of spatial information to another modality, either touch or vision. We may consider that spatial information in touch and vision is constituted by sensorimotor contingencies, but it is not an important assumption here. The puzzle is the following: we can only touch our skin, the surface of our body, and we cannot see inside our body. If touch is central to the spatial perception of pain, then how is it possible that we can feel pain inside the body (say, in the stomach or in the head)?

I have discussed a similar example in spatial perception: when one hears music or speech through headphones, it usually feels like the sound comes from “inside the head”. First of all, there is a simple argument why sounds should feel as coming from your body in this case: when you move the head, the sound is unaffected, which means the source is part of your head – either on the surface (skin) or inside the head. The same argument applies to pain felt inside the body: rigid displacements of the body do not change the pain or any information associated with it. Therefore the pain is in you, not in the external world. However, this remark does not explain why pain feels inside the body and not on the skin.

I mentioned another possibility for sounds: “inside” as a default hypothesis. If you cannot identify the source as coming from somewhere outside, then the sound feels located inside. The default hypothesis raises a question: why does it feel located inside rather than not located at all? There is also another problem here: pain does not simply feel inside, it feels at a particular place inside the body (e.g. the stomach).

A first answer is proposition B2. Perhaps you feel a headache in the head and not in the stomach because the pain is only affected by movements of the head. In the same way, touching your stomach may alter the intensity of the pain, whereas touching other parts does not. This explanation is a combination of the default hypothesis (it's not on the skin, so it's inside) and the sensorimotor theory (B2). It is appealing, but let's see how it applies to the perception of sounds inside the head. Here again, sounds do not simply feel inside the head, but at a particular place inside the head (say, on the left or on the right). But no movement that you make has any impact on the sound, so proposition B2 only explains why the sound is inside the head, not where in the head it is.

Let us formalize the problem more precisely. Your stomach hurts. There is a pattern of activation of receptors that is characteristic of this condition, but no movement that you can make generates this pattern. In addition, in the case of auditory perception inside the head, no movement alters this pattern either. The default hypothesis is logical inference: since it is a new pattern, it must be located where I cannot produce it: in my body. But as we saw, this is not sufficiently precise. To make some progress, I will start with a thought experiment.

Imagine that in your life, you have touched only two points on your skin, points A and B. When something touches point A, you feel it located at A because you recognize the activation pattern of the tactile receptors. But what if something touches a point between A and B? One possibility would be that you don't feel it located anywhere at all; you just feel that something touches you. But this contradicts the fact that you feel sounds inside the head or pain inside the body. Another possibility is the default hypothesis: since you have never encountered this activation pattern, you know it is neither A nor B, so you feel the touch somewhere other than A and B. But this logical inference does not produce anything more precise, and it seems to contradict the fact that we can hear sounds in our head on the left or on the right. To feel the touch somewhere between A and B requires some form of interpolation: if the new activation pattern resembles the pattern characteristic of A, then the touch was probably located somewhere near A; if it resembles both A and B, then it was probably located between A and B.

More generally, we can only have a finite number of experiences, and so it is unlikely that the exact same activation pattern of receptors is encountered twice. Even if physical stimuli were identical, the body changes over time. Thus, it appears that we could not have any perceptual experience at all unless there is some form of interpolation. A natural proposition is then that detailed perception inside our body results from perceptual interpolation. This is not the same as logical inference, as in the case of the default hypothesis, because it necessarily involves some arbitrariness: there is no way you can logically know where exactly between A and B your skin was touched if you have never encountered the activation pattern before, so the perceived location is a guess.
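To make the interpolation idea concrete, here is a toy sketch (the Gaussian activation patterns and the cosine-similarity weighting are invented for illustration, not a claim about actual receptor codes):

```python
import numpy as np

# Toy receptor sheet: 50 receptors along a 1D patch of skin.
positions = np.linspace(0, 1, 50)

def bump(center, width=0.1):
    """Invented activation pattern: a Gaussian bump of receptor activity."""
    return np.exp(-((positions - center) ** 2) / (2 * width ** 2))

def similarity(p, q):
    """Cosine similarity between two activation patterns."""
    return np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q))

def interpolate_location(pattern, known_patterns, known_locations):
    """Guess a location for a novel pattern as a similarity-weighted
    average of the locations of previously experienced patterns."""
    w = np.array([similarity(pattern, p) for p in known_patterns])
    w = w / w.sum()
    return float(w @ np.array(known_locations))

# Only two locations have ever been touched: A (at 0.2) and B (at 0.8).
pattern_A, pattern_B = bump(0.2), bump(0.8)

# A never-experienced pattern halfway between A and B is "felt"
# roughly midway, even though it matches no stored experience.
novel = bump(0.5)
print(interpolate_location(novel, [pattern_A, pattern_B], [0.2, 0.8]))
```

By symmetry, the guessed location here falls halfway between A and B; a pattern closer to A would be placed near A. The point is only that a definite guess is produced at all, which no purely logical inference could provide.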

Now let us go back to our specific problem. How can pain be located inside our body? The idea of interpolation seems to imply that the pattern of receptor activation induced by such pains should resemble those of pains induced on the skin at surrounding locations, on opposite sides of the pain site. For example, pain in a joint, say the knee, should produce activation patterns resembling those of pains induced on the skin all around the knee.

There are two interesting points to note about the interpolation idea:
1) Sounds and pains located inside the body tend to be less precisely localized; the location is “vague”. This means that the concept of interpolation as picking a particular point between two points is incorrect: somehow the process of perceptual interpolation also affects the uncertainty of the location, or perhaps the perceived size.
2) How specifically are perceptual locations interpolated? In other words, what is the topology of spatial perception?
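Point 1 suggests a speculative refinement of the interpolation idea (the weighting-and-spread scheme below is invented for illustration): if the interpolated percept is a similarity-weighted distribution over known locations rather than a single point, then its spread is large precisely when the novel pattern resembles several distant reference patterns equally well, which could correspond to the “vagueness” of sounds and pains felt inside the body.

```python
import numpy as np

def interpolated_percept(weights, locations):
    """From similarity weights over previously experienced locations,
    return both a guessed location (weighted mean) and a measure of
    vagueness (weighted standard deviation of the known locations)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    loc = np.asarray(locations, dtype=float)
    mean = float(w @ loc)
    spread = float(np.sqrt(w @ (loc - mean) ** 2))
    return mean, spread

# A pattern resembling A (at 0.2) and B (at 0.8) equally:
# location midway, but large spread (a vague percept).
print(interpolated_percept([0.5, 0.5], [0.2, 0.8]))

# A pattern resembling mostly A: location near A, smaller spread.
print(interpolated_percept([0.9, 0.1], [0.2, 0.8]))
```

On this sketch, vagueness is not added on top of interpolation; it falls out of the same weighting that produces the location guess.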

Spatial perception of pain (II)

Where does it hurt? A common answer to this question is: it hurts at the location of the pain receptors in the body. I will discuss three counter-arguments to this proposition, starting with the simplest. The simple argument is that there is a discrete set of pain receptors on the skin, but the spatial perception of pain is not discrete (the same argument applies to touch): we do not feel pain at discrete locations on our skin. It seems that the phenomenal space of pain is continuous, and this observation does not match anatomy.

Then there are the anecdotal arguments: we have headaches, but there are no pain receptors in the brain; when you lose a limb (say a hand), the non-existent limb can hurt (phantom limb pain); there are systematic mislocalizations of the causes of pain, for example heart pains are felt in the arm.

Finally, there is a more sophisticated argument. Let us assume that we do feel pain at the location of our pain receptors. But then how do you know where your pain receptors are? One answer would be: we know it because it is somehow encoded in our genes. The “somehow” would deserve a precise explanation, but this is not necessary for this argument. This proposition requires that there is a systematic mapping between our genes and our body at a fine level of detail. That is, the precise location of pain receptors should depend only on our genes. But we know that this is not true. For example, the size of our body depends on the diet we had as children, and therefore so does the location of nerves. Therefore, even if we felt pain at the location of the receptors, we would still need to find out where these receptors are.

Another common answer to the question “where does it hurt?” is the objectivist answer: it hurts where the cause of the pain is, or where the injury is. An important difference from the previous one is that it does not imply discrete spatial perception. From our experience, this proposition seems to be correct most of the time, but a simple objection is that there are cases of mislocalization of pain (e.g. heart pains). The same argument as above leads us to the same question: how do you know where the injury or cause of pain is?

If genes are not sufficient, then it must be based on experience. Let us imagine that you hit your knee against a wall. You can see the knee hitting the wall; you also have a tactile experience; you feel an intense pain at the moment of contact, which perhaps gradually fades out. I start with proposition A: there are two independent channels, one that provides pain information (intensity of pain, through pain receptors) and another one that provides spatial information (through tactile receptors or vision). The same question now applies to the spatial channel: how do you know where something touches you? This is simpler to answer because you can touch yourself: you can associate your movements with activation patterns of tactile receptors when you touch your skin. You know where a tactile stimulus is in the sense that you know how to make movements to touch it. An objection to proposition A is: what if there is no external stimulus that activates the tactile receptors? For example, your stomach could hurt because of acidity, or a tooth could hurt because of bacteria. There is nothing you can see, and all the tactile receptors are on the skin, so there is no independent source of spatial information, and yet the pain feels precisely localized. The only way to save proposition A is to assume that there actually is another source of spatial information. For example, in the case of tooth pain, maybe tactile receptors (or the nerves) are actually activated. In the case of stomach ache, it is harder to imagine that these receptors on the skin are activated (but I leave it as an open question), and in this case you would need to hypothesize that there are other types of receptors, perhaps acidity receptors, that carry spatial information. But then we are back to the same problem as before: how do these receptors get to carry any spatial information at all? (How do you know where these neurons are?) You would then need to assume that these receptors inside your body can also be activated with your own movements. I leave this as an open possibility. There is still one difficulty, which I will address later because it is shared with other propositions: how can pain be localized inside the body?

I will now discuss two other related propositions. Proposition B is purely sensorimotor: you feel pain at a particular location because specific movements that you make produce that pain. This explanation only requires pain receptors, but these receptors must be activated in a way that is spatially specific (i.e., which varies systematically with the location of the pain stimulus). For example, by pinching yourself, you associate the pattern of pain receptor activation with the location of the skin where you pinched yourself. This proposition implies that you cannot feel any localized pain unless you have previously produced it yourself. But what about when a tooth aches? It seems that you could feel tooth ache without having hurt your tooth yourself before. To save proposition B, it seems necessary to assume that the pain receptors can be activated at a “subthreshold” level that does not produce pain. In this case, to feel pain at a particular location requires previously producing specific movements that produce a similar (but possibly less intense) pattern of activation of the pain receptors.

There is a variation of B, which I will call proposition B2, which goes as follows. You feel the pain at a particular location because whenever this particular activation pattern of pain receptors is present, you can manipulate this pattern or the intensity of pain by specific movements or actions. For example, you hurt your knee and then you know you will feel a bit better if you put your hand on it, maybe because of the heat. Proposition B2 differs slightly from B in that it is how you can manipulate pain, rather than how you can cause it, that provides spatial information. The example of the tooth would then be: your tooth aches, and you know where it aches because by moving your tongue on your teeth you alter the intensity of pain.

Proposition C is learned association: the localization of pain is inferred from the activation pattern of pain receptors (which must be spatially selective), by association with another channel that carries spatial information (e.g. tactile receptors). For example: you hit your knee against the wall, the tactile receptors carry information about the location of pain, which you associate with the activation pattern of pain receptors. Later, your knee hurts but there is no mechanical contact with anything: you still feel the pain in the knee because it is the same activation pattern as when it was hit by the wall. In proposition C, you could not experience the location of a pain unless you have previously experienced the same pain in conjunction with an independent cue of location. So we have the same problem as in proposition A: what if a tooth aches for the first time? The proposition can be saved in the same way, by assuming that pain receptors can be activated at a subthreshold level that does not induce pain.

There are now two questions I want to address: 1) How can we feel pain inside the body? 2) Why do we make systematic errors in localizing pain?

Spatial perception of pain (I)

Pain is a great example of many important themes in philosophy of perception. Here I want to focus on spatial perception, but I will start with a few general comments.

First of all, why do we feel pain in the first place? Scientists tend to offer two types of explanations. One is expressed in terms of efficient causes: you hit your knee against a wall, it activates receptors in your skin, these receptors make some neurons in a particular region of the brain fire, and then these neurons produce a substance (molecules) that is characteristic of pain experience (a material cause). Such an explanation has some value; for example, it might suggest pharmacological targets for pain relief. However, it does not explain the experience of pain at all. Why is it that a particular molecule induces the experience of pain? This does not seem to be a much better explanation than saying that the contact with the wall induces pain (it is somewhat better because the molecule is more universally associated with pain – i.e., with pain caused by other events). This problem is what philosophers call the problem of “qualia”: to explain what pain feels like, why pain hurts rather than just being some information about the state of your body. It is notoriously difficult to explain qualia in terms of efficient causes (see Thomas Nagel's famous paper, “What is it like to be a bat?”). It is much easier to explain the informative content of pain (what is going on in your body) than its phenomenal content (what it feels like).

A second type of explanation is in terms of final cause: it hurts because it is an experience you should avoid. It is useful for you to feel pain because then you will learn to avoid dangerous stimuli: pain has a survival value, which is why it has been selected by evolution. But again, this type of explanation fails to address the phenomenal content of pain, because all it requires is a modification of behavior toward dangerous stimuli, not necessarily an emotional experience. You could well imagine that when your knee is hit, you get the information that this is something you should avoid in the future, without it carrying an unpleasant feeling. You could also imagine that the event triggers a series of cognitive responses (e.g. negative conditioning) without producing any feeling at all. You could imagine that you hit your knee while sleeping, without being conscious of the event, and that your body reacts to it, perhaps even with negative conditioning (e.g. avoiding turning in the same direction again), without you actually experiencing pain. So why does it hurt?

I do not know why it hurts. So in this series I want to address another question: where does it hurt? This is also quite an interesting question, because although it sounds obvious to us what is meant by the location of pain, we are really asking about the perceived location in the world of a feeling. What kind of weird question is this?

What is sound? (XVI) On the spatial character of tones

An intriguing fact about the pitch of tones is that we tend to describe it using spatial characteristics such as “high” and “low”. In the same way, we speak of a rising intonation when the pitch increases. A sequence of notes of increasing frequency played on a piano is described as going “up” (even though it goes right on a piano, and down on a guitar). Yet there is nothing intrinsically spatial in the frequency of a tone. Why do we use these spatial attributes? An obvious possibility is that it is purely cultural: “high” and “low” are just arbitrary words that we happen to use to describe these characteristics of sounds. However, the following observations should be made:
- We use the terms low and high, which are also used for spatial height, and not specific words such as blurp and zboot. But we don't use spatial words for colors and odors. Instead we use specific words (red, green) or sometimes words used for other senses (a hot color). Why use space and not something else?
- All languages seem to use more or less the same type of words.
- In an experiment done in 1930 by Carroll Pratt (“The spatial character of high and low tones”), subjects were asked to locate tones of various frequencies on a numbered scale running from the floor to the ceiling. The tones were presented through a speaker hidden behind a screen, placed at a random height. It turned out that the judgments of spatial height made by subjects were very consistent, but were entirely determined by tone frequency rather than by the actual source position. High frequency tones were placed near the ceiling, low frequency tones near the floor. The result was later confirmed in congenitally blind persons and in young children (Roffler & Butler, JASA 1968).

Thus, there is some support for the hypothesis that tones are perceived to have a spatial character, which is reflected in language. But why? Here I will just speculate wildly and make a list of possibilities.
1. Sensorimotor hypothesis related to vocal production: when one makes sounds (sings or speaks), sounds of high pitch are felt to be produced higher than sounds of low pitch. This could be related to the spatial location of tactile vibrations on the skin, depending on fundamental frequency or timbre. Professional singers indeed use spatial words to describe where the voice “comes from” (which has no physical basis as such). This could be tested by measuring skin vibrations. In addition, congenitally mute people would be predicted to show different patterns of tone localization.
2. Natural statistics: high frequency sounds tend to come from sources that are physically higher than low frequency sounds. For example, steps on the ground tend to produce low frequency sounds. Testing this hypothesis would require an extensive collection of natural recordings tagged with their spatial position. Note, however, that the opposite trend holds for sounds produced by humans: adults have lower voices than children, who are lower in physical height.
3. Elevation-dependent spectral cues: to estimate the elevation of a sound source, we rely on spectral cues introduced by the pinnae. Indeed, the circumvolutions of the pinnae introduce elevation-dependent notches in the spectrum. By association, the frequency of a tone would be associated with the spectral characteristics of a particular elevation. This could be tested by running a tone localization experiment and comparing the results with individual head-related transfer functions.
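The logic of Pratt's test can be illustrated with a toy simulation (all numbers below are invented; this is not Pratt's data, only a sketch of the analysis): if judged height is driven by frequency, it should correlate strongly with log frequency and negligibly with the speaker's actual height.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100

# Invented trials: tone frequencies (on a log scale) and actual speaker heights.
log_freq = rng.uniform(np.log(200), np.log(5000), n)
source_height = rng.uniform(0.0, 3.0, n)  # metres above the floor

# Simulated judgments: driven by frequency, not by actual position,
# mimicking Pratt's finding.
judged = 0.8 * (log_freq - log_freq.mean()) + 1.5 + rng.normal(0, 0.2, n)

r_freq = np.corrcoef(log_freq, judged)[0, 1]
r_pos = np.corrcoef(source_height, judged)[0, 1]
print(r_freq, r_pos)  # r_freq close to 1, r_pos close to 0
```

The same two correlations, computed on real judgments, are what separates the cultural-convention hypothesis from the perceptual one: a merely verbal convention gives no reason for consistent height placements at all.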