What is computational neuroscience? (XVI) What is an explanation?

An explanation can often be expressed as the answer to a question starting with “why”. For example: why do neurons generate action potentials? There are different kinds of explanations. More than 2000 years ago, Aristotle categorized them as “four causes”: efficient cause, material cause, formal cause and final cause. They correspond respectively to origin, substrate, structure and function.

Efficient cause: what triggers the phenomenon to be explained. Why do neurons generate action potentials? Because their membrane potential exceeds some threshold value. A large part of science focuses on efficient causes. The standard explanation of action potential generation in biology textbooks describes the phenomenon as a chain of efficient causes: the membrane potential exceeds some threshold value, which causes the opening of sodium channels; the opening of sodium channels causes an influx of positively charged ions; the influx causes an increase in the membrane potential.

Material cause: the physical substrate of the phenomenon. For example, a wooden box burns because it is made of wood. Why do neurons generate action potentials? Because they have sodium channels, a specific kind of protein. This kind of explanation is also very common in neuroscience, for example: Why do we see? Because the visual cortex is activated.

Formal cause: the specific pattern that is responsible for the phenomenon. Why do neurons generate action potentials? Because there is a nonlinear voltage-dependent current that produces a positive feedback loop with a bifurcation. Note how this is different from the material cause: the same property could be recreated in a mathematical or computer model that contains no protein, or possibly by proteins that are not sodium channels but have the required properties. It is also different from efficient causes: the chain of efficient causes described above only produces the phenomenon in combination with the material cause; for example, if sodium channels did not have nonlinear properties, then there would be no bifurcation and therefore no action potential. Efficient causes are only efficient in the specific context of the material causes; that is, the efficient cause describes what happens given sodium channels. The formal cause is what we call a model: an idealized description of the phenomenon that captures its structure.
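As a minimal illustration of a formal cause, here is a sketch based on the quadratic integrate-and-fire model, dv/dt = v² + I. This is a standard mathematical reduction, not a biophysical sodium-channel model; the parameter values are arbitrary. The model undergoes a saddle-node bifurcation at I = 0: below it, a stable resting point and an unstable threshold coexist; above it, they vanish and the voltage runs away, which is the formal skeleton of a spike.

```python
import math

def fixed_points(I):
    """Fixed points of dv/dt = v**2 + I (quadratic integrate-and-fire)."""
    if I < 0:
        r = math.sqrt(-I)
        return (-r, r)  # stable resting potential, unstable threshold
    return ()           # no equilibrium: the voltage escapes (a spike)

def spikes(I, v0=-2.0, dt=0.001, steps=10_000, vmax=100.0):
    """Euler integration: does the voltage escape past vmax (i.e., spike)?"""
    v = v0
    for _ in range(steps):
        v += dt * (v * v + I)
        if v > vmax:
            return True
    return False

# Below the bifurcation (I = -1): rest and threshold coexist, no spike.
print(fixed_points(-1.0), spikes(-1.0))  # (-1.0, 1.0) False
# Above the bifurcation (I = +1): the fixed points have collided and
# disappeared (saddle-node bifurcation), so the voltage escapes.
print(fixed_points(1.0), spikes(1.0))    # () True
```

Note that nothing in this description refers to proteins: the same formal structure could be realized by sodium channels, by other channels with similar nonlinearities, or by a silicon circuit.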

Final cause: the function of the phenomenon. Why do neurons generate action potentials? So as to communicate quickly with distant neurons. Final causes have a special role in biology because of the theory of evolution and of theories of life. According to evolutionary theory, changes in structure that result in increased rates of survival and reproduction are preferentially conserved, and therefore species that we observe today must be somehow “adapted” to their environment. For example, there is some literature about how the ionic channels involved in action potentials have coordinated properties that ensure maximum energetic efficiency. Theories of life emphasize the circularity of life: the organization of a living organism is such that its structure maintains the conditions for its own existence, and so an important element of biological explanation is how mechanisms (the elements) contribute to the existence of the organism (the whole).

A large part of physics concerns formal cause (mathematical models of physical phenomena) and final cause (e.g. the expression of physical phenomena as the minimization of energy). In the same way, theoretical approaches to neuroscience tend to focus on formal cause and final cause. Experimental approaches to neuroscience tend to focus on material cause and efficient cause. Many epistemological misunderstandings between experimental and theoretical neuroscientists seem to come from not realizing that these are distinct and complementary kinds of explanation. I quote from Killeen (2001), “The Four Causes of Behavior”: “Exclusive focus on final causes is derided as teleological, on material causes as reductionistic, on efficient causes as mechanistic, and on formal causes as ‘theorizing’.” A fully satisfying scientific explanation must come from the articulation between the different types of explanation.

In biology, exclusive focus on material and efficient causes is particularly unsatisfying. A good illustration is the case of convergent evolution, in which phylogenetically distant species have evolved similar traits. For example, insects and mammals have a hearing organ. Note that the term “hearing organ” refers to the final cause: the function of that organ is to allow the animal to hear sounds, and it is understood that evolution has favored the appearance of such an organ because hearing is useful for these animals. However, the ears of insects and mammals are physically very different, so the material cause of hearing is entirely different. It follows that the chain of efficient causes (the “triggers”) is also different. Yet it is known that the structure of these organs, i.e., the formal cause, is very similar. For example, at a formal level, there is a part of the ear that performs air-to-liquid impedance conversion, although with different physical substrates. The presence of this air-to-liquid impedance conversion stage in both groups can be explained by the fact that it is necessary to transmit airborne sounds to biological substrates that are much denser (= final cause). Thus, the similarity between hearing organs across species can only be explained by the articulation between formal cause (a model of the organ) and final cause (the function).

In brief, biological understanding is incomplete if it does not include formal and final explanations, which are not primarily empirical. In light of this discussion, computational neuroscience is the subfield of neuroscience whose aim is to relate structure (formal cause = model) and function (final cause). If such a link can be found independently of the material cause (which implicitly assumes ontological reductionism), then it should be possible to simulate the model and observe the function.

What is computational neuroscience? (XV) Feynman and birds

“Philosophy of science is about as useful to scientists as ornithology is to birds”. This quote is attributed to Richard Feynman, one of the most influential physicists of the 20th century. Many other famous scientists, including Einstein, held the opposite view, but nonetheless it is true that many excellent scientists have very little esteem for philosophy of science or philosophy in general. So it is worthwhile reflecting on this quote.

This quote has been commented on by a number of philosophers. Some have argued, for example, that ornithology would actually be quite useful for birds, if only they could understand it – maybe they could use it to cure their avian diseases. This is a funny remark, but presumably quite far from what Feynman meant. So why is ornithology useless to birds? Presumably, what Feynman meant is that birds do not need intellectual knowledge about how to fly. They can fly because they are birds. They also do not need ornithology to know how to sing and communicate. So the comparison implies that scientists know how to do science, since they are scientists, and that this knowledge is not intellectual but rather comes from their practice. It might be interesting to observe after the fact how scientists do science, but it is not useful for scientists, because the practice of science comes before its theory, in the same way as birds knew how to fly before there were ornithologists.

So this criticism of philosophy of science entirely relies on the idea that there is a scientific method that scientists master, without any reflection on this method. On the other hand, this method must be ineffable, or at least very difficult to describe precisely, in the same way as we can walk even though the intellectual knowledge of how to walk is not so easy to convey. Otherwise philosophy of science would not even exist as a discipline. If the scientific method is not something that you learn in an intellectual way, then it must be like a bodily skill, like flying for a bird. It is also implicit that scientists must agree on a single scientific method. Otherwise they would start arguing about the right way to do science, which is precisely doing philosophy of science.

This consensual way of doing science is what Thomas Kuhn called “normal science”. It is the kind of science that is embedded within a widely accepted paradigm, which does not need to be defended because it is consensual. Normal science is what scientists learn in school. It consists of paradigms that are widely accepted at the time, which are presented as “the scientific truth”. But of course such a presentation hides the way these paradigms have come to be accepted, and the fact that different paradigms were widely accepted before. For example, a few hundred years ago, the sun revolved around the Earth. From time to time, science shifts from one paradigm to another, a process that Kuhn called “revolutionary science”. Both normal science and revolutionary science are important aspects of science. But revolutionary science requires a critical look at the established ways of doing science.

Perhaps Feynman worked at a time when physics was dominated by firmly established paradigms. Einstein, on the other hand, developed his most influential theories at a time when the foundations of physics were disputed, and he was fully aware of the relevance of philosophy of science, and philosophy in general. Could he have developed the theory of relativity without questioning the philosophical prejudices about the nature of time? Here are a few quotes from Einstein that I took from a paper by Howard (“Albert Einstein as a philosopher of science”):

“It has often been said, and certainly not without justification, that the man of science is a poor philosopher. Why then should it not be the right thing for the physicist to let the philosopher do the philosophizing? Such might indeed be the right thing to do at a time when the physicist believes he has at his disposal a rigid system of fundamental concepts and fundamental laws which are so well established that waves of doubt can’t reach them; but it cannot be right at a time when the very foundations of physics itself have become problematic as they are now. [...] Concepts that have proven useful in ordering things easily achieve such authority over us that we forget their earthly origins and accept them as unalterable givens. Thus they come to be stamped as “necessities of thought,” “a priori givens,” etc. [...] A knowledge of the historic and philosophical background gives that kind of independence from prejudices of his generation from which most scientists are suffering. This independence created by philosophical insight is - in my opinion - the mark of distinction between a mere artisan or specialist and a real seeker after truth.”

In my opinion, these views fully apply to computational and theoretical neuroscience, for at least two reasons. First, computational neuroscience is a strongly interdisciplinary field, with scientists coming from different backgrounds. Physicists come from a field with strongly established paradigms, but these paradigms are often applied to neuroscience as analogies (for example Hopfield’s spin-glass theory of associative memory). Mathematicians come from a non-empirical field to a field that, in its current state, is not very mathematical. Physics, mathematics and biology have widely different epistemologies. Anyone working in computational neuroscience will notice that there are strong disagreements on the value of theories, the way to make theories, and the articulation between experiments and theory. Second, computational neuroscience, and in fact neuroscience in general, is not a field with undisputed paradigms. There are in fact many different paradigms, which are often only analogies coming from other fields, and there is no accepted consensus about the right level of description, for example.

Computational neuroscience is perhaps the perfect example of a scientific field in which it is important for scientists to develop a critical look at the methods of scientific enquiry and at the nature of scientific concepts.

What is sound? (XV) Footsteps and head scratching

When one thinks of sounds, the image that comes to mind is a speaker playing back a sound wave, which travels through air to the ears of the listener. But not all sounds are like that. I will give two examples: head scratching and footsteps.

When you scratch your head, a sound is produced that travels through the air to your ears. But there is another pathway: the sound is actually produced by the skull and the skin, and it propagates through the skull directly to the inner ear. This is called “bone conduction”. A lot of the early work on this subject was done by von Békésy (see e.g. Hood, JASA 1962). Normally, bone conduction represents a negligible part of the sounds that we hear. When an acoustic wave reaches our head, the skull is set in vibration and can transmit sounds directly to the inner ear by bone conduction. But because of the difference in acoustic impedance between air and skin, the wave is very strongly attenuated, on the order of 60 dB according to these early works. It is actually the function of the middle ear to match these two impedances.
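The order of magnitude of this mismatch can be sketched with the textbook plane-wave transmission coefficient at a boundary between two media. As a crude assumption, water is used here as a stand-in for soft tissue; this idealized boundary alone accounts for roughly 30 dB of loss, while the ~60 dB figure from the early bone-conduction literature includes more than this simple boundary.

```python
import math

# Textbook characteristic acoustic impedances (Pa*s/m); water is an
# assumed stand-in for skin and soft tissue.
Z_air = 415.0
Z_water = 1.48e6

# Power transmission coefficient for a plane wave at normal incidence.
T = 4 * Z_air * Z_water / (Z_air + Z_water) ** 2
loss_dB = 10 * math.log10(T)
print(f"transmitted fraction: {T:.2e}, loss: {loss_dB:.1f} dB")
```

Only about a tenth of a percent of the incident power crosses the boundary, which is why an impedance-matching middle ear is worth having.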

But in the case of head scratching, the sound is actually produced on the skull itself, so it is likely that a large proportion of the sound, if not most of it, is transmitted by bone conduction. This implies that sound localization cues (in particular binaural cues) are completely different from those of airborne sounds. For example, sound propagates faster (as in water) and there are resonances. Cues might also depend on the position of the jaw. There is a complete set of binaural cues that are specific to the location of scratching on the skull, and which are directly associated with tactile cues. To my knowledge, this has not been measured. This also applies to chewing sounds, and to the sound of one’s own voice. In fact, it is thought that the reason why one’s own voice sounds higher when it is played back is that our perception of our own voice relies on bone conduction, which transmits lower frequencies better than higher frequencies.

Let us now turn to footsteps. A footstep is a very interesting sound – not even mentioning the multisensory information in a footstep. When the ground is impacted, an airborne sound is produced, coming from the location of the impact. However, the ground is not a point source. Therefore when it vibrates, the sound comes from a larger piece of material than just the location of the impact. This produces binaural cues that are unlike those of sounds produced by a speaker. In particular, the interaural correlation is lower for larger sources, and you would expect that the frequency-dependence of this correlation depends on the size of the source (the angular width, from the perspective of the listener).
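The decorrelation caused by an extended source can be sketched numerically under strong simplifying assumptions: the source is modeled as independent white-noise point sources, each reaching the right ear with its own integer-sample delay (a crude stand-in for the spread of interaural time differences across the source); real sources, heads and ears are of course far more complicated.

```python
import numpy as np

def peak_interaural_correlation(n_sources, max_delay, n=100_000, seed=0):
    """Peak of the normalized interaural cross-correlation for a source
    made of independent white-noise emitters, each reaching the right ear
    with its own delay (in samples) drawn from [-max_delay, max_delay]."""
    rng = np.random.default_rng(seed)
    delays = rng.integers(-max_delay, max_delay + 1, size=n_sources)
    sources = rng.standard_normal((n_sources, n + 2 * max_delay))
    left = sources[:, max_delay:max_delay + n].sum(axis=0)
    right = np.zeros(n)
    for s, d in zip(sources, delays):
        right += s[max_delay + d:max_delay + d + n]

    def corr(lag):
        a = left[max_delay:n - max_delay]
        b = right[max_delay + lag:n - max_delay + lag]
        return np.corrcoef(a, b)[0, 1]

    return max(corr(lag) for lag in range(-max_delay, max_delay + 1))

narrow = peak_interaural_correlation(20, max_delay=0)   # compact source
wide = peak_interaural_correlation(20, max_delay=40)    # extended source
print(narrow, wide)  # the extended source decorrelates the two ears
```

With a compact source the two ear signals are essentially shifted copies of each other, so the peak correlation is near 1; spreading the delays across the source drives the peak down toward 1/n_sources.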

When you walk in a noisy street, you may notice that you can hear your own footsteps but not those of other people walking next to you, even though the distances to the feet might be similar. Why is that? In addition to producing the airborne sound, your entire skeleton vibrates. This implies that a large component of the sound that you hear should in fact come from bone conduction through your body. Again, these sounds should have quite peculiar binaural cues, in addition to having stronger low frequencies. In particular, there should be a different set of cues for the left foot and for the right foot.

You might also hear someone else’s footsteps. In this case there is of course the airborne sound, but there is also another pathway, through the ground. Through this other pathway, the sound reaches your feet before the airborne sound does, because sound propagates much faster in a solid substance than in air. Depending on the texture of the ground, higher frequencies would also be more attenuated. In principle, this vibration in your feet (especially if you are barefoot) will then propagate through your body to your inner ear. But it is not so clear how strong this bone-conducted sound might be. Clearly it should be much softer than for your own footsteps, since in that case there is a direct impact on your skeleton. But perhaps it is still significant. In this case, there are again different binaural cues, which should depend on the nature of the ground (since this affects the speed of propagation).
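The size of the timing difference between the two pathways is easy to estimate. Here is a rough sketch for a footstep heard 10 m away, using a typical speed of sound in air and an assumed, material-dependent value for the ground (concrete-like solids are on the order of a few thousand m/s):

```python
# Rough arrival-time comparison for a footstep heard 10 m away.
d = 10.0            # distance in meters
c_air = 343.0       # speed of sound in air, m/s
c_ground = 3000.0   # assumed speed in a concrete-like ground, m/s

t_air = d / c_air        # ~29 ms
t_ground = d / c_ground  # ~3 ms
lead_ms = (t_air - t_ground) * 1000
print(f"ground-borne wave leads by {lead_ms:.1f} ms")
```

A lead of tens of milliseconds is well above the temporal resolution of the auditory system, so if the ground-borne component is audible at all, its timing relative to the airborne sound is a potential cue to distance and to the nature of the ground.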

In the same way, sounds made by touching or hitting an object might also include a bone conducted component. It will be quite challenging to measure these effects, since ideally one should measure the vibration of the basilar membrane. Indirect methods might include: measurements on the skull (to have an idea of the magnitude), psychoacoustic methods using masking sounds, measuring otoacoustic emissions, electrophysiological methods (cochlear microphonics, ABR).

Sensory modalities and the sense of temperature

Perception is traditionally categorized into five senses: hearing, vision, touch, taste and olfaction. These categories seem to reflect the organs of sense, rather than the sensory modalities themselves. For example, the sense of taste is generally (in the neuroscience literature) associated with the taste receptors in the tongue (sweet, salty, etc.). But what we refer to as taste in daily experience actually involves the tongue, including “taste” receptors (sweet, salty) but also “tactile” receptors (the texture of food), the nose (“olfactory” receptors), and in fact probably also the eyes (color) and the ears (chewing sounds). All these are involved in a unitary experience that seems to be perceptually localized in the mouth, or on the tongue – despite the fact that the most informative stimuli, which are chemical, are actually captured in the nose. One may consider that taste is then a “multimodal” experience, but this is not a very good description. If you eat a crisp, you experience the taste of a crisp. But if you isolate any of the components that make up this unitary experience, you will not experience taste. For example, imagine a crisp without any chemically active component and no salt: you experience touch with your tongue, and the crisp has “no taste”. If you only experience the smell, then you have an experience of smell, not of taste. This is another sensory modality, despite the fact that the same chemical elements are involved. If only the “taste” receptors on your tongue were stimulated, you would have an experience of “salty”, not of a crisp. So the modality of taste involves a variety of receptors, but that does not make it multimodal, any more than vision is multimodal because it involves several kinds of photoreceptors.

“Touch” is also very complex. There is touch as in touching something: you make contact with objects and you feel their texture or shape. There is also being touched. There is also the feeling of weight, which involves gravity, and also movement. There is the feeling of pain, which is related to touch, but not classically included in the 5 senses. Finally there is the feeling of temperature, which I will discuss now from an ecological point of view (in the way of Gibson).

The sense of temperature is not usually listed in the 5 senses. It is often associated with touch, because by touch you can feel that an object is hot or cold. But you can also feel that “it” (= the weather) is cold, in a way that is not well localized. Physically, temperature is not a mechanical quantity, and in this sense it is completely different from touch. But like touch, it is a proximal sense that involves the interface between the body and either the medium (air or water) or substances (object surfaces). The sense of temperature is much more interesting than it initially seems. First, there is of course “how hot it is”, the temperature of the medium. The image that comes to mind is that of the thermometer. But temperature can be experienced all over the body, so spatial gradients of temperature can be sensed. When touching an object, parts of the object can be more or less hot, so spatial gradients of temperature can potentially be sensed through an object, in the same way as mechanical texture can be sensed. Are there temperature textures?

The most interesting and, as far as I know, underappreciated aspect of the temperature sense is its sensorimotor structure. The body produces heat. Objects react to heat by warming up. Some materials, like metal, conduct heat well; others, like wood, don’t. So both the temporal changes in temperature when an object is touched and the spatial gradient of temperature that develops depend on the material, and possibly specify it. So it seems that the sense of temperature is rich enough to qualify as a modality in the same way as touch.
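The idea that contact temperature specifies the material can be made concrete with a standard textbook result: when two semi-infinite bodies touch, the interface temperature is set by their thermal effusivities, e = sqrt(k·ρ·c). The material constants below are rough handbook values (an assumption, not measurements), but they show why metal at room temperature feels colder than wood at the same temperature.

```python
import math

def effusivity(k, rho, c):
    """Thermal effusivity sqrt(k*rho*c), in J/(m^2*K*s^0.5)."""
    return math.sqrt(k * rho * c)

def contact_temperature(e_skin, T_skin, e_obj, T_obj):
    """Interface temperature of two semi-infinite bodies brought into contact."""
    return (e_skin * T_skin + e_obj * T_obj) / (e_skin + e_obj)

# Rough handbook values: conductivity W/(m*K), density kg/m^3,
# specific heat J/(kg*K).
skin = effusivity(0.37, 1100, 3400)
steel = effusivity(50.0, 7800, 490)
wood = effusivity(0.15, 600, 1700)

# Skin surface at 34 C touching objects at 20 C.
t_steel = contact_temperature(skin, 34.0, steel, 20.0)
t_wood = contact_temperature(skin, 34.0, wood, 20.0)
print(f"steel: {t_steel:.1f} C, wood: {t_wood:.1f} C")
```

Both objects are at 20 °C, but the skin–steel interface drops close to the object temperature while the skin–wood interface stays close to skin temperature: the temporal course of skin temperature on contact is a signature of the material, which is exactly the sensorimotor structure described above.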

What is computational neuroscience? (XIV) Analysis and synthesis

I would like to propose another way to describe the epistemological relationship between computational and experimental neuroscience. In acoustics, there is a methodology known as “analysis-synthesis” of sounds (Risset & Wessel, 1982) for understanding what makes the quality (or “timbre”) of a sound (see in particular Gaver (1993), “How do we hear in the world?”). A first step is to examine the sound by various methods, for example acoustic analysis (looking at the spectrum, the temporal envelope, etc.), and to try to extract the salient features. A second step consists in synthesizing sounds that display these features. One then listens to these sounds to evaluate whether they successfully reproduce the quality of the original sounds. This evaluation step can be made objective with psychoacoustic experiments. The results of the synthesis step then inform the analysis, which can focus on those aspects that were not correctly captured, and the procedure goes through a new iteration. The analysis can also be guided by physical analysis, i.e., by theory. For example, the perceived size of a sounding object should be related to its resonant frequencies, whose wavelengths correspond to the dimensions of the object. The type of material (wood, metal) should be related to the decay rate of the temporal envelope. By these principles, it is possible to synthesize convincing sounds of impacts on a wooden plate, for example.
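The synthesis step described above can be sketched as a sum of exponentially decaying sinusoids. The mode frequencies and decay constants below are illustrative assumptions, not measured values; the point is only that the same resonant frequencies with a short decay suggest wood, while a long decay suggests metal.

```python
import numpy as np

def impact_sound(modes, decay, fs=44100, dur=1.0):
    """Synthesize an impact as a sum of exponentially decaying sinusoids.
    `modes`: resonant frequencies in Hz; `decay`: decay time constant in s."""
    t = np.arange(int(fs * dur)) / fs
    sig = sum(np.exp(-t / decay) * np.sin(2 * np.pi * f * t) for f in modes)
    return sig / np.max(np.abs(sig))  # normalize to unit peak

# Illustrative parameters (assumed): same modes, different decay rates.
wood = impact_sound(modes=[180, 410, 620], decay=0.05)
metal = impact_sound(modes=[180, 410, 620], decay=0.8)
```

The evaluation step would then consist in writing these signals to audio files, listening to them, and testing with psychoacoustic experiments whether listeners indeed report “wood” and “metal”; the failures of the synthesis feed back into the analysis.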

There is a direct analogy with the relationship between computational and experimental neuroscience. Experimental neuroscience aims at identifying various aspects of the nervous system that seem significant: this is the analysis step. The object of experiments is a fully functional organism, or a piece of it. The empirical findings are considered significant in relationship with the theory of the moment (perhaps in analogy with physical analysis in acoustics), and with the chosen method of analysis (type of measurement and experimental protocol). By themselves, they only indicate what might contribute to the function of the organism, not how it contributes to it. For example, if the attack of a piano sound is removed, it doesn’t sound like a piano anymore, so the attack is important to the quality of the piano sound. In the same way, lesion studies inform us of what parts of the brain are critical for a given function, but they don’t tell us how exactly those parts contribute to the function. Computational neuroscience, then, can be viewed as the synthesis step. Starting from nothing (i.e., not from a fully functional organism), one tries to build a drastically simplified system, informed by the analysis step. But the goal is not to reproduce all the pieces of empirical data that were used to inform the system. The goal is to reproduce the function of the organism. In analogy with sound: the goal is not to reproduce detailed aspects of the spectrum, but rather to ensure that the synthesized signal sounds good. If the function is not correctly reproduced, then maybe the features identified by the analysis step were not the most relevant ones. In this way the synthesis step informs the analysis step.

This analogy highlights a few important epistemological specificities of computational neuroscience. Most importantly, computational neuroscience is primarily about explaining the function, and only secondarily the empirical data. Empirical experiments on the auditory system of the barn owl aim at explaining how the barn owl catches a mouse in the dark. Computational studies also aim at explaining how the barn owl catches a mouse in the dark, not at reproducing the results of the empirical experiments. Another way to put it: the data to be explained by the theory are not only what is explicitly stated in the Results section, but also the piece of empirical evidence implicitly stated in the Methods or the Introduction section, namely that before the experiment, the barn owl was a fully functional living organism able to catch prey in the dark. Secondly, computational neuroscience, as a synthetic approach, aims at a simple, conceptually meaningful description of the system. Realism lies in the function (how the signal sounds), not in the amount of decoration aimed at mimicking pieces of empirical data.

This discussion also brings support to the criticism of epistemic reductionism. Imagine we could measure all the components of the brain and put them together in a realistic simulation of the brain (which already implies some form of methodological reductionism). This would correspond to fully analyzing the spectrum of a sound, recording it in complete detail, and then playing it back. What would be learned about what makes the quality of the sound? A second point is methodological: suppose we collect all the necessary data about the brain, but from different individual brains, and perhaps from a bunch of related species like mice. Would the result sound like a piano, or would it sound like a cacophony of different pianos and a violin?

What is computational neuroscience? (XIII) Making new theories

Almost all work in philosophy of science concerns the question of how a scientific theory is validated, by confronting it with empirical evidence. The converse question, how a theory is formulated in the first place, is considered a mysterious process that belongs to the field of psychology. As a result of this focus, one might be led to think that the essence of scientific activity is the confrontation of theories with empirical facts. This point stands out in the structure of biology articles, which generally consist of a short introduction, where the hypothesis is formulated; the methods, where the experiments are described; the results, where the outcome of the experiments is described; and the discussion, where the hypothesis is evaluated in light of the experimental results. The making of theories generally makes up a negligible part of the articles.

Let us consider the problem from a logical point of view. At a given point in time, there is only a finite set of empirical elements that can be taken into account to formulate a theory. A theory, on the other hand, consists of universal statements that apply to an infinite number of predictions. Because the empirical basis for formulating a theory is finite, there is always an infinite number of possible theories that can be formulated. Therefore, from a purely logical point of view, it appears that the making of a theory is an arbitrary process. Imagine for example the following situation. One is presented with the first three observations of an infinite sequence of numbers: 2, 4 and 6. One theory could be: this is the sequence of even numbers, and the empirical prediction is that the next number is 8. Another theory would be: this is the beginning of a Fibonacci-like sequence, in which each term is the sum of the two preceding ones, and so the next number should be 10. But it might also be that the next number is 7 or any other number. So no theory is a logical consequence of observations.
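The underdetermination argument above can be stated as a two-line sketch: two different rules (hypothetical "theories", named here for illustration) agree on every observation made so far, yet predict different futures.

```python
def even_numbers(k):
    """Theory 1: the k-th term is the k-th even number."""
    return 2 * k

def fibonacci_like(k, a=2, b=4):
    """Theory 2: each term is the sum of the two preceding ones."""
    seq = [a, b]
    while len(seq) < k:
        seq.append(seq[-1] + seq[-2])
    return seq[k - 1]

# Both theories agree on the observed data...
assert [even_numbers(k) for k in (1, 2, 3)] == [2, 4, 6]
assert [fibonacci_like(k) for k in (1, 2, 3)] == [2, 4, 6]
# ...but make different predictions for the next observation.
print(even_numbers(4), fibonacci_like(4))  # 8 10
```

Since infinitely many rules pass through any finite set of observations, the choice between them cannot be a logical consequence of the data.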

If what is meant by “scientific” is a process that is purely based on empirical evidence, then we must recognize that the making of a theory is a process that is not entirely scientific. This process is constrained by the empirical basis, and possibly by Popper’s falsifiability criterion (that the theory could be falsified by future experiments), but it leaves a considerable number of possibilities open. Whether a theory is “good” or “bad” can be partly judged by its consistency with the empirical evidence at the time when it is made, but mostly the empirical evaluation of a theory is posterior to its formulation. Thus, at the time when a theory is formulated, it may be considered interesting, i.e., worth investigating, rather than plausible. Therefore the choice of formulating one theory rather than another is determined by non-empirical criteria such as: the elegance and simplicity of the theory; its generality (whether it only accounts for current empirical evidence or also makes many new predictions); its similarity with fruitful theories in other fields; its consonance with convincing philosophical points of view; the fact that it may generalize over preexisting theories; the fact that it suggests new experiments that were not thought of before; the fact that it suggests connections between previously distinct theories.

Thus, theoretical activity reaches far beyond what is usually implicitly considered as scientific, i.e., the relationship with empirical evidence. Yet there is no science without theories.

What is computational neuroscience? (XII) Why do scientists disagree?

A striking fact about the making of science is that in any field of research, there are considerable disagreements between scientists. This is an interesting observation, because it contradicts the naive view of science as a progressive accumulation of knowledge. Indeed, if science worked in this way, then any disagreement should concern empirical data only (e.g. whether the measurements are correct). On the contrary, disagreements often concern the interpretation of data rather than the data themselves. The interpretative framework is provided by a scientific theory, and there are often several of them in any field of research. Another type of disagreement concerns the judgment of how convincingly some specific piece of data demonstrates a particular claim.

There are two possibilities: either a large proportion of scientists are bad scientists, who do not correctly apply sound scientific methodology, or the adhesion to a theory and the judgment of particular claims are not entirely based on scientific principles. The difficulty with the first claim, of course, is that there is no systematic and objective criterion to judge what “good science” is and what “bad science” is. In fact, the very nature of this question is epistemological: how is knowledge acquired and how do we distinguish between different scientific theories? Thus, part of the disagreement between scientists is not scientific but epistemological. Epistemological questions are in fact at the core of scientific activity, and failure to recognize this point leads to the belief that there is a single way to do science, and therefore to dogmatism.

So why do scientists favor one theory rather than another, given the same body of empirical data? Since the choice is not purely empirical, it must rely on other factors that are not entirely scientific. I would argue that a major determinant of the adhesion to a particular theory, at least in neuroscience, is its consonance with philosophical conceptions that the scientist holds. These conceptions may not be recognized as such, because many scientists have limited knowledge of or interest in philosophy. One such conception would be, for example, that the objects of perception exist independently of the organism and that the function of a perceptual system is to represent them. Such a conception provides a framework in which empirical data are collected and interpreted, and therefore it is not generally part of the theoretical claims that are questioned by data. It is a point of view rather than a scientific statement, but it guides our scientific enquiry. Once we realize that we are in fact guided by philosophical conceptions, we can start questioning these conceptions. For example, why would the organism need to represent the external world if the world is already there to be seen? Shouldn’t a perceptual system rather provide ways to act in the world than represent it? Who reads the “representation” of the world? Given that the world can only be accessed through the senses, how can this representation be interpreted in terms of the external world?

Many scientists deny that philosophy is relevant to their work, because they consider that only science can answer scientific questions. However, given that a scientist’s adherence to a particular scientific theory (and therefore also the making of a scientific theory) is in fact guided by philosophical preconceptions, rejecting philosophy only ensures that the scientist is guided by naive philosophical conceptions.

Finally, another determinant of adherence to a particular scientific theory is psychological, linked to the personal history of the scientist. The theory of cognitive dissonance, perhaps the most influential theory in psychology, claims that human psychology is driven by the need to minimize the dissonance between different cognitive elements. For example, when a piece of evidence is presented that contradicts the beliefs of the scientist, this produces cognitive dissonance and a drive to reduce it. There are different ways to reduce it. One is for the scientist to change her mind and adopt another theory that is consistent with the new piece of data. Another is to reject the piece of data, or to interpret it in a way that is consonant with her beliefs, possibly by adding an ad hoc hypothesis. Another is to add consonant elements, e.g. by providing new pieces of evidence that support her beliefs. Yet another is to seek consonant information and avoid dissonant information (e.g. to read only those papers that are most likely to support her beliefs). The theory of cognitive dissonance predicts that the first way rarely occurs. Indeed, as the scientist builds her career within a given scientific theory, she develops more and more ways to discard dissonant pieces of information, seeks information that is consonant with the theory, and, by taking all these decisions, many of them public, increases the dissonance between her behavior and contradictory elements. An important and counter-intuitive prediction of the theory of cognitive dissonance is that contradictory evidence generally reinforces the beliefs of a scientist who is deeply committed to a particular theory.

In summary, a large part of scientific activity, including the making of, and adherence to, scientific theories, relies on epistemological, philosophical and psychological elements.

What is computational neuroscience? (XI) Reductionism

Computational neuroscience is a field that seeks a mechanistic understanding of cognition. It has the ambition to explain how cognition arises from the interaction of neurons, to the point that if the rules that govern the brain were understood in sufficient detail, it should in principle be possible to simulate them on a computer. The field of computational neuroscience is therefore intrinsically reductionist: it is assumed that the whole (how the brain works) can be reduced to the elementary components that compose it.

To be more precise, this view refers to ontological reductionism. A view that rejects ontological reductionism would be, for example, vitalism: the idea that life is due to the existence of a vital force, without which any given set of molecules would not live. A similar view is that the mind comes from a non-material soul, which is not scientifically accessible, or at least not describable in terms of the interaction of material elements. One could also imagine that the mind arises from matter, but that there is no final intelligible element – e.g. neurons are as complex as the whole mind, and smaller elements are not more intelligible.

In modern science in general, and in neuroscience in particular, ontological reductionism is fairly consensual, and computational neuroscience relies on this assumption. This is why criticisms of reductionism are sometimes perceived as attacks on the entire scientific enterprise. This perception is mistaken, because such criticisms are generally aimed not at ontological reductionism but at other forms of reductionism, which are more questionable and controversial.

Methodological reductionism is the idea that the right way, or the only way, to understand the whole is to understand the elements that compose it. It is then assumed that an understanding of the whole (e.g. function) derives from this atomistic knowledge. For example, one would consider that the problem of memory is best addressed by understanding the mechanics of synaptic plasticity – e.g. how the activity of neurons changes the synapses between them. In genetics, one may consider that memory is best addressed by understanding which genes are responsible for memory, and how they control the production of the proteins involved in the process. This assumption is less consensual, in computational neuroscience and in science in general, including physics. Historically, it is certainly not true that scientific enquiry in physics started from microscopic laws and proceeded to macroscopic laws: classical mechanics came before quantum mechanics. In addition, macroscopic principles (such as thermodynamics and energy in general) and symmetry principles are widely used in physics in place of microscopic laws (for example, to understand why soap makes spherical bubbles). However, this is a relatively weak criticism, as it is conceivable that macroscopic principles derive from microscopic laws, even if this does not reflect the history of physics.

In the life sciences, there are specific reasons to criticize methodological reductionism. The most common criticism in computational neuroscience is that, while function derives from the interaction of neurons, it can also be said that the way neurons interact is indirectly determined by function, since living organisms are adapted to their environment through evolution. Therefore, unlike the objects of physics, living beings are characterized by a circular rather than a one-way causal relationship between microscopic and macroscopic laws. This view underlies “principle-based” or “top-down” approaches in computational neuroscience. Note that this is a criticism of methodological reductionism, but not of ontological reductionism.

There is also a deeper criticism of methodological reductionism, following the theme of circularity. It stems from the view that the organization of life is itself circular. This view has been developed by Humberto Maturana and Francisco Varela under the name “autopoiesis”, and by Robert Rosen under the name “M-R systems” (M for metabolism and R for repair). What defines an entity as living, even before the fact that it may be able to reproduce itself, is the fact that it is able to stay alive. This is such an obvious truth about life that it is easy to forget, but maintaining one’s existence as an energy-consuming organism is not trivial at all. A living entity is therefore viewed as a set of physical processes, in interaction with the environment, that are organized in such a way that they maintain their own existence. It follows that, while a part of a rock is a smaller rock, a part of a living being is generally not a living being. Each component of the living entity exists in relationship with the organization that defines the entity as living. For this reason, the organism cannot be fully understood by examining each element of its structure in isolation. This is so because the relationship between structure and organization is not causal but circular, while methodological reductionism assumes a causal relationship between the elements of structure and higher-order constructs (“function”). This criticism is deep, because it claims not only that the whole cannot be understood by looking at the parts alone, but also that the parts themselves cannot be fully understood without understanding the whole. That is, to understand what a neuron does, one must understand in what way it contributes to the organization of the brain (or, more generally, of the living entity).

Finally, there is another type of criticism of reductionism that has been formulated against attempts to simulate the brain. The criticism is that, even if we did manage to successfully simulate the entire brain, this would not imply that we understand it. In other words, to reproduce is not to understand. Indeed, we can clone an animal, and this fact alone does not give us a deep understanding of the biology of that animal. One could object that the cloned animal is never exactly the same animal, but certainly the same could be said of the simulated brain. Proponents of the view that simulating a brain would necessarily imply understanding it may rather mean that such a simulation requires a detailed knowledge of the entire structure of the brain (ion channels in neurons, connections between neurons, etc.) and that by having this detailed knowledge about everything in the brain, we would necessarily understand it. This form of reductionism is called epistemic reductionism. It is, in a sense, the converse of ontological reductionism. According to ontological reductionism, if you have a full mechanistic understanding of the brain, then you should be able to simulate it (provided adequate resources). Epistemic reductionism claims that this is not only a necessary condition but also a sufficient one: if you are able to simulate the brain, then you fully understand it. This is a much stronger form of reductionism.

Criticisms of reductionism can be summarized by their answers to the question: “Can we (in principle, one day) simulate the brain?”. Critics of ontological reductionism would answer negatively, arguing that something critical (e.g., the soul) cannot be simulated. Critics of epistemic reductionism would answer: yes, but this would not necessarily help us understand the brain. Critics of methodological reductionism would answer: yes, and it would probably require a global understanding of the brain, but one that could only be achieved by examining the organism as a system with an organization, rather than as a set of independent elements in interaction.

What is sound? (XIV) Are there unnatural sounds?

In a previous post, I argued that some artificial sounds might be wrongly presented as if they were not natural, because ecological environments are complex and so natural sounds are diverse. But what if they were actually not natural? Perhaps these particular sounds can be encountered in a natural environment, but there might be other sounds that can be synthesized and heard but that are never encountered in nature.

Why exactly do we care about this question? If we are interested in knowing whether these sounds exist in nature, it is because we hypothesize that they acquire a particular meaning that is related to the context in which they appear (e.g. a binaural sound with a large ITD is produced by a source located on the ipsilateral leading side). This is a form of objectivism: it is argued that if we subjectively lateralize a binaural sound with a 10 ms ITD to the right, it is because in nature, such a sound would actually be produced by a source located on the right. So in fact, what we are interested in is not only whether these sounds exist in nature, but also additionally whether we have encountered them in a meaningful situation.

So have we previously encountered all the sounds that we subjectively localize? Certainly this cannot be literally true, for a new sound (e.g. in a new acoustical environment) could then never be localized. Therefore there must be some level of extrapolation in our perception. It cannot be that what we perceive is a direct reflection of the world. In fact, there is a relationship between this point and the question of inductivism in philosophy of science. Inductivism is the position that a scientific theory can be deduced from the facts. But this cannot be true, for a scientific theory is a universal statement about the world, and no finite set of observations can imply a universal statement. No scientific theory is ever “true”: rather, it agrees with a large body of data collected so far, and it is understood that any theory is bound to be amended or changed for a new theory at some point. The same can be said about perception, for example sound localization: given a number of past observations, a perceptual theory can be formed that relates some acoustical properties and the spatial location of the source. This implies that there should be sounds that have never been heard but that can still be associated with a specific source location.

Now we reach an interesting point, because it means that there may be a relationship between phenomenology and biology. When sounds are presented that deviate from the set of natural sounds, their perceived quality says something about the perceptual theory that the individual has developed. This lends some credit to the idea that the fact that we lateralize binaural sounds with large ITDs might say something about the way the auditory system processes binaural sounds – though this is probably not the best example, since it may well be in agreement with an objectivist viewpoint.

What is sound? (XIII) Loudness constancy

Perhaps the biggest puzzle in loudness perception is why a pure tone, or a stationary sound such as a noise burst, feels like it has constant loudness. Or more generally: why does a pure tone feel like a constant sound, both in loudness and in other qualities such as pitch?

The question is not obvious, because physically the acoustical wave changes all the time. Even though we are sensitive to this change in the temporal fine structure of the wave – for example, it contributes to our perception of pitch – we do not hear it as a change: we do not hear the amplitude rising and falling. Only the envelope remains constant, and the envelope is an abstract property of the acoustical wave; we could have chosen another property. For example, in models of the auditory periphery, it is customary to represent the envelope as a low-pass filtered version of the rectified signal. But this does not produce an exactly constant signal for pure tones.
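
As a minimal numerical sketch (all signal parameters here are illustrative assumptions, not values from the literature), half-wave rectification followed by a first-order low-pass filter does not yield an exactly constant envelope for a pure tone; a ripple at the tone frequency survives the filtering:

```python
import numpy as np

fs = 10_000.0                               # sampling rate (Hz), assumed
f0 = 100.0                                  # pure tone frequency (Hz), assumed
t = np.arange(0, 0.5, 1 / fs)
tone = np.sin(2 * np.pi * f0 * t)           # constant-amplitude pure tone

# Envelope model: half-wave rectification + first-order low-pass filter
rectified = np.maximum(tone, 0.0)
fc = 50.0                                   # low-pass cutoff (Hz), assumed
alpha = 1.0 - np.exp(-2 * np.pi * fc / fs)  # per-sample smoothing factor
env = np.empty_like(rectified)
acc = 0.0
for i, x in enumerate(rectified):
    acc += alpha * (x - acc)                # leaky integration of rectified wave
    env[i] = acc

# Once the filter has settled, the estimated "envelope" still ripples
# at the tone frequency instead of being exactly constant.
steady = env[len(env) // 2:]
ripple = steady.max() - steady.min()
print(ripple)                               # clearly nonzero
```

Raising the cutoff tracks the signal more closely (more ripple); lowering it smooths more but responds sluggishly to genuine envelope changes, so no setting makes a pure tone come out exactly constant.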

Secondly, at the physiological level, nothing is constant for pure tones either. The basilar membrane follows the temporal fine structure of the acoustical wave. The auditory nerve fibers fire at several hundred Hz: at low frequencies they fire at specific phases of the tone, while at higher frequencies their firing appears more random. In both cases we hear a pure tone with a constant loudness. What is more, fibers adapt: they fire more at the onset of a tone, then their firing rate decreases with time. Yet we do not hear the loudness decreasing. On the other hand, when we strike a piano key, the level (envelope) of the acoustical wave decreases and we can hear this very distinctly. In both cases (pure tone and piano key) the firing rate of fibers decreases, but in one case we hear a constant loudness and in the other a decreasing loudness.

Finally, it is not just that some high-level property of sound feels constant, but with a pure tone we are simply unable to hear any variation in the sound at all, whether in loudness or in any other quality.

This discussion raises the question: what does it mean for something to change perceptually? To (tentatively) answer this question, I will start with pitch constancy. A pure tone feels like it has a constant pitch. If its frequency is progressively increased, then we feel that the pitch increases. If the frequency remains constant, then the pure tone feels like a completely constant percept. We do not feel the acoustical pressure going up and down. Why? The pure tone has the characteristic property that, from the observation of a few periods of the wave, it is possible to predict the entire future wave. Pitch is indeed associated with the periodicity of the sound wave. If the basis of what we perceive as pitch is this periodicity relationship, then as the acoustical wave unfolds, this relationship (or law) remains constantly valid, and so the perceived pitch should remain constant. There is some variation in the acoustical pressure, but not in the law that the signal follows. So there is in fact some constancy, but at the level of the relationships or laws that the signal follows. I would propose that the pure tone feels constant because the signal never deviates from the perceptual expectation.
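
This “law-level” constancy can be sketched numerically. The toy code below (the sampling rate and frequencies are assumptions chosen for illustration) extracts the periodicity law x[n] = x[n − P] from a pure tone and checks that it holds at every instant, whereas a tone with drifting frequency violates it:

```python
import numpy as np

fs = 8000.0                      # sampling rate (Hz), assumed
f0 = 200.0                       # tone frequency; period P = fs/f0 = 40 samples
P = int(fs / f0)
t = np.arange(0, 0.2, 1 / fs)

tone = np.sin(2 * np.pi * f0 * t)
# The "law" observed over a few periods: the signal repeats every P samples.
err_tone = np.max(np.abs(tone[P:] - tone[:-P]))

# A tone whose frequency drifts upward violates that law.
chirp = np.sin(2 * np.pi * (f0 + 100.0 * t) * t)
err_chirp = np.max(np.abs(chirp[P:] - chirp[:-P]))

print(err_tone)    # essentially zero: the periodicity law holds throughout
print(err_chirp)   # clearly nonzero: the law is violated
```

The pressure itself oscillates in both signals; what distinguishes them is whether the relationship extracted from the first few periods keeps being satisfied.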

This hypothesis about perceptual constancy implies several non-trivial facts: 1) how sensory signals are presented to the system (in the form of spike trains or acoustical signals) is largely irrelevant, provided these specific aspects of presentation (or “projection”) can be included in the expectation; 2) signal variations are not perceived as variations if they are expected; 3) signal variations are not perceived if there is no expectation. This last point deserves further explanation. To perceive a change, an expectation must be formed prior to this change, and then violated: the variation must be surprising, and surprise is defined by the relation between the expectation (which can be precise or broad) and the variation. So if there is no expectation (i.e., the expectation is maximally broad), then we cannot perceive variation.

From this hypothesis it follows that a completely predictable acoustical wave such as a pure tone should produce a constant percept. Let us come back to the initial problem, loudness constancy, and consider that the firing rate of auditory nerve fibers adapts. For a tone of constant intensity, the firing rate decays at some speed. For tones of increasing intensity, the firing rate might decay at a slower speed, or even increase. For tones of decreasing intensity, the firing rate would decay faster. How is it that constant loudness corresponds to the specific speed of decay obtained for the tone of constant intensity, if the auditory system never has direct access to the acoustical signals?
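
A toy rate model with adaptation (purely illustrative; the gains and time constant are assumed values, not fitted to auditory nerve data) makes the puzzle concrete: the same adapting mechanism yields a decaying rate for a constant-intensity tone and a growing rate for a rising-intensity tone, yet only the former is heard as constant:

```python
import numpy as np

def fiber_rate(intensity, dt=1e-3, tau=0.1, g0=1.0, g_inf=0.3):
    """Toy adapting rate model: rate = gain * intensity, with the gain
    decaying from g0 toward g_inf with time constant tau (seconds)."""
    g = g0
    rates = []
    for I in intensity:
        rates.append(g * I)
        g += (dt / tau) * (g_inf - g)   # gain adapts toward its floor
    return np.array(rates)

t = np.arange(0, 0.5, 1e-3)
r_const = fiber_rate(np.ones_like(t))    # constant-intensity tone
r_rising = fiber_rate(1.0 + 6.0 * t)     # tone with rising intensity

# Constant input: the rate decays, yet loudness is heard as constant.
# Rising input: the rate grows despite adaptation.
print(r_const[0] > r_const[-1])    # True
print(r_rising[-1] > r_rising[0])  # True
```

Nothing in the rate signal alone marks the constant-intensity decay profile as “the constant one”; that mapping must be supplied by the system, which is the puzzle the paragraph poses.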

Loudness constancy seems more difficult to explain than pitch constancy. I will start with the ecological viewpoint. In an ecological environment, many natural sounds are transient (e.g. impacts) and therefore do not have constant intensity. However, even though the intensity of an impact sound decays, its perceived loudness may not decay; rather, it may be perceived as a single sound occurring at a point in time (e.g. a footstep). There are also natural sounds that are stationary and therefore have constant intensity, at least at a large enough timescale: a river, the wind. However, these sounds do not address the problem of neural adaptation, as adaptation only applies to sounds with a sharp onset. Finally, vocalizations have a sharp onset and slowly varying intensity (although this might be questionable). Thus, for a vocalization, the expected intensity profile is constant, and it could therefore be speculated that this explains the relationship between constant loudness and constant intensity, despite variations at the neurophysiological level.

A second line of explanation is related to the view of loudness as a perceptual correlate of intelligibility. A pure tone presented in a stationary background has constant intelligibility (or signal-to-noise ratio), and this fact is independent of any further (non-destructive) processing applied to the acoustical wave. Therefore, the fact that loudness is constant for a pure tone is consistent with the view that loudness primarily reflects the intelligibility of sounds.