Neural correlates of perception (what's wrong with them)

Broadly speaking, neural correlates of a percept, e.g. seeing a face, are what happens in our neurons when we see a face. For example, a bunch of neurons would fire when we see Jennifer Aniston. What do neural correlates teach us about perception, or more generally about the mind-body problem?

The interest in neural correlates of perception is largely underpinned by the implicit belief that there must be a mapping between percepts and brain states, the brain being seen as a physical system. That is, the percept of seeing Jennifer Aniston's face corresponds to one particular brain state; the percept of a sound being at a particular spatial position corresponds to another one. There are considerable conceptual difficulties with this belief. Consider these two thought experiments.

1) Imagine we can instantaneously freeze the brain so that its components, ions, tissues, etc. are preserved in the same state (it's a thought experiment!). Does the brain still experience seeing Jennifer Aniston's face?

2) Imagine we record the precise spatiotemporal activity of all neurons in response to Jennifer Aniston's face. Then we inactivate all synapses, and we replay the activity pattern optogenetically. Would the brain still experience seeing Jennifer Aniston's face?

Intuitively, our answer to both questions is negative. If you answered the second question positively, then consider this one:

2b) Imagine we record the precise spatiotemporal activity of all neurons in response to Jennifer Aniston's face. Then we replay the activity pattern on a set of light diodes. Would the diodes experience seeing Jennifer Aniston's face?

If our intuition is correct, then brain states, even understood more broadly as “firing patterns”, are not constitutive of percepts. It appears that whatever a percept is, it must involve not just the state or even the activity of neurons, but the interaction between neurons. Therefore, when we describe neural correlates of perception in terms of neural “activity”, we appear to be missing a crucial ingredient, which has to do with the interactional properties of neurons. I must admit that “interactional properties of neurons” is a loosely defined notion, but there does seem to be a need for a concept that goes beyond that of an activity “pattern”, a concept that remains to be clarified (see afterthought below).

Underlying the problematic concept of neural correlates of perception is the representational view of perception: the idea that whatever we perceive must somehow be “represented” in the brain, like neural paintings of the world. I have pointed out the deep problems with the representational view on this blog (for example here and there) – and obviously I am not the first one to do so (see e.g. Gibson, Brooks, Merleau-Ponty, O'Regan, etc.). Let us simply reflect on the following point. When we look at Jennifer Aniston's face, we experience the percept of seeing her face at different moments. It seems as if at any instant, we are experiencing the same percept (along with others, of course). Possibly this is an illusion and experience is actually discrete in time, but in any case the perceptual “grain of time” is no more than a few tens of milliseconds. Therefore, when looking for neural correlates of the percept, we cannot be content with relying on average activity, over time or over trials. We do not experience percepts “on average”, but at every instant (this is related to my points on the rate vs. spike debate). What we should be looking for is something in the interactional properties of neurons that is invariant through the entire time during which the percept is experienced. This concept is quite different from the more traditional “neural paintings” concept.

So, in the current state of research, what neural correlates of perception tell us about perception, and more specifically about the mind-body problem, is, disappointingly, not much.

 

Afterthought: an interesting analogy is the concept of temperature in physics. Since temperature corresponds to the movement of particles, you cannot really define the temperature of a physical object at any given time. Temperature corresponds to the activity, not the position or nature of the particles. What's more, the concept of temperature makes no sense except when considering the interaction between agitated particles. Temperature is perhaps an example of “interactional property” of a set of particles.

This week's paper selection (27 Apr - 4 May 2016)

 

This week's paper selection (20-27 Apr 2016)

 

General bibliography on action potential theory

Two general introductory biology textbooks covering the excitability of neurons and muscles are (Matthews, 2002) and (Keynes et al., 2011). The biophysics and modeling of neurons are covered in (Johnston and Wu, 1994) and (Sterratt et al., 2011). Both are quite accessible and include all the essential material, including compartmental modeling of dendrites.

There is a great book that covers many topics in cell biology from a physicist's perspective, including excitability: Physical Biology of the Cell (Phillips et al., 2008).

There are two excellent reviews by Hodgkin that are particularly useful for understanding the experimental basis of Hodgkin-Huxley theory, including myelinated axons: (Hodgkin, 1951, 1964).

The reference textbook on the biophysics of ion channels is (Hille, 2001). (Johnston and Wu, 1994) also includes some material on the stochastic analysis of channels.

Linear cable theory is covered in great detail in (Tuckwell, 1988) and (Koch, 1999). An excellent review by one of the historical figures of cable theory is (Rall, 2011). (Jack et al., 1975) also covers cable theory, including active axonal conduction, as well as muscle APs, their propagation, and the classic theory of excitability (threshold).

The theory of electro-osmosis (the interaction between osmosis and electric fields) is treated in (Hoppensteadt and Peskin, 2004).

 

Hille B (2001) Ion Channels of Excitable Membranes. Sinauer Associates.

Hodgkin AL (1951) The Ionic Basis of Electrical Activity in Nerve and Muscle. Biological Reviews 26:339–409.

Hodgkin AL (1964) The Conduction of the Nervous Impulse. C. C. Thomas.

Hoppensteadt FC, Peskin C (2004) Modeling and Simulation in Medicine and the Life Sciences, 2nd edition. New York: Springer.

Jack JB, Noble D, Tsien R (1975) Electric Current Flow in Excitable Cells. Oxford: OUP Australia and New Zealand.

Johnston D, Wu SM-S (1994) Foundations of Cellular Neurophysiology, 1st edition. Cambridge, Mass: A Bradford Book.

Keynes RD, Aidley DJ, Huang CL-H (2011) Nerve and Muscle, 4th edition. Cambridge; New York: Cambridge University Press.

Koch C (1999) Biophysics of Computation: Information Processing in Single Neurons. Oxford University Press, USA.

Matthews GG (2002) Cellular Physiology of Nerve and Muscle, 4th edition. Oxford; Malden, MA: Wiley-Blackwell.

Phillips R, Kondev J, Theriot J (2008) Physical Biology of the Cell, 1st edition. New York: Garland Science.

Rall W (2011) Core Conductor Theory and Cable Properties of Neurons. In: Comprehensive Physiology. John Wiley & Sons, Inc.

Sterratt D, Graham B, Gillies DA, Willshaw D (2011) Principles of Computational Modelling in Neuroscience, 1st edition. Cambridge; New York: Cambridge University Press.

Tuckwell H (1988) Introduction to Theoretical Neurobiology, Vol. 1: Linear Cable Theory and Dendritic Structure. Cambridge: Cambridge University Press.

 

On the misuses of statistics in biology, medicine, neuroscience and psychology

These days, the scientific community seems concerned about the lack of reproducibility of experimental results in biology and related fields (e.g. medicine, neuroscience, psychology), in relation to statistics. Say, some study claims that there is a significant correlation between X (eating potatoes) and Y (developing lung cancer), with p<0.05. The next study tries to replicate it and finds no significant correlation. This happens all the time. Every week we read a media report about a peer-reviewed study finding a significant correlation between something we eat and some disease. But in general those reports are not taken too seriously, at least by scientists and doctors. But why is that? Isn't “statistically significant” precisely supposed to mean that the observed outcome was not a matter of chance and that we should trust it? If yes, then why shouldn't we take those reports seriously? And if not, then why do we keep backing up scientific claims with statistics?

I have to say I am a bit surprised that biology and related fields generally don't seem to fully draw the implications of this rather obvious observation. Paper after paper, X and Y are “significantly different” or “not significantly different”, and scientific conclusions seem to be drawn from this statistical statement.

Let us briefly recall what “significantly different with p<0.05” means. You have two sets of quantitative observations, obtained in two different conditions. You calculate the difference of their means, and you want to know whether that difference is just due to irrelevant variability or whether it really reflects a difference due to the conditions. To say that the difference is “significant with p<0.05” essentially means that if the two sets of observations were drawn from the same distribution (with the same mean), then there would be at most a 5% chance of observing a difference at least as large. For example, if you do the same experiment 20 times, you should expect to find one “statistically significant” difference, even if the experimental condition has actually no effect at all on what you are measuring. For the biologists: on average, one in every 20 experiments will yield something statistically significant even when there is actually nothing interesting. I bet only that experiment will be published. And that is if only one particular condition is monitored. For the epidemiologists: if you monitor 20 correlations, chances are that one of them will be statistically significant by pure chance.
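To make this concrete, here is a minimal simulation of that “1 in 20” problem. The numbers (20 experiments, 30 observations per group, a standard t-test) are made up for illustration; this is a sketch, not an analysis of any real data.

```python
# Sketch: repeat the same null experiment 20 times and count "significant" results.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 20      # independent experiments (or monitored correlations)
n_per_group = 30        # observations per group in each experiment

false_positives = 0
for _ in range(n_experiments):
    # Both groups are drawn from the *same* distribution: there is no real effect.
    a = rng.normal(0, 1, n_per_group)
    b = rng.normal(0, 1, n_per_group)
    if stats.ttest_ind(a, b).pvalue < 0.05:
        false_positives += 1

print(f"'Significant' results out of {n_experiments}: {false_positives}")
# On average about 1 of the 20 experiments comes out "significant" by chance alone.
```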

It is ironic that this replicability crisis comes at the same time as the hype around “big data”. Michael Jordan, one of the leading experts in machine learning, pointed out very clearly that we don't have the statistical tools to deal with big data. Looking for statistically significant correlations among piles of data is nonsense. Of course you will find them, tons of them. And how will you know which ones are meaningful?

As I pointed out above, in biology there is a huge selection bias, i.e. only statistically significant results get published. Some say that we should also publish negative results, and not just positive results. But the problem is that “not statistically significant” is not a negative result. It's no result at all. It says that we haven't seen anything. It doesn't mean there is nothing. Maybe we would see something with more observations. We just can't know. I often read in biology papers that “X and Y are not significantly correlated” as if it were a result, i.e. as if X and Y had nothing to do with each other. But that is not true at all! It is a lack of result, neither a positive nor a negative one.
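Here is a small sketch of why “not significant” is not evidence of absence. The effect (0.4 standard deviations) and the sample sizes are arbitrary: the very same real effect is usually missed with few observations and almost always detected with many.

```python
# Sketch: a real effect of 0.4 SD, tested with different numbers of observations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_effect = 0.4       # a real difference, 0.4 standard deviations

for n in (10, 50, 500):
    detected = 0
    for _ in range(1000):
        a = rng.normal(0, 1, n)
        b = rng.normal(true_effect, 1, n)
        if stats.ttest_ind(a, b).pvalue < 0.05:
            detected += 1
    print(f"n = {n:4d}: effect detected in {detected / 10:.0f}% of simulated experiments")

# With n = 10, the very same effect is "not significant" most of the time:
# the absence of significance says more about the sample size than about X and Y.
```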

So some have argued for increasing the number of observations (i.e. cells/animals/subjects), so that we can say something like: X and Y are significantly different with p<0.005. I think this misses the point and reveals a deeper problem, which is of an epistemological nature, not just a statistical one. Here is a quote I like by Ernest Rutherford, a Nobel Prize-winning physicist and chemist: "If your experiment needs statistics, you ought to have done a better experiment". That's a bit exaggerated, probably, but quite true in my opinion. If you need statistics, it's because the result is not obvious; in other words, the difference between X and Y might be “statistically significant”, but it's tiny. Statistical significance is not at all about significance in the usual sense of the word. A difference of 0.1% can be statistically significant, if you have enough observations. But think about it: in a complex system, like a living organism, would you expect that one part of the system is absolutely uncorrelated with another part of the system? That would basically mean that the two parts are unconnected and therefore not part of the same system. For example, I'm pretty sure that eating potatoes is positively or negatively correlated with developing lung cancer. There might be a correlation between potato eating and income, and clearly a correlation between income and any kind of disease. I'm not even mentioning the effect of potatoes on metabolism, which certainly has some slight correlation with cancer development or the immune system. However tiny these correlations might be, you will still find a significant correlation between potato eating and lung cancer, if you look at enough cases.
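As a concrete (made-up) example of this point: here is a difference of 0.1% of the mean, a hundred times smaller than the within-group variability, that becomes “statistically significant” once the number of observations is large enough.

```python
# Sketch: the two groups differ by 0.1 (0.1% of the mean) while the within-group
# standard deviation is 10, i.e. the effect is 100x smaller than the variability.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
for n in (100, 10_000, 1_000_000):
    a = rng.normal(100.0, 10.0, n)
    b = rng.normal(100.1, 10.0, n)
    print(f"n = {n:>9}: p = {stats.ttest_ind(a, b).pvalue:.3g}")

# With enough observations, p drops below any threshold you like, yet the
# difference remains a hundred times smaller than the natural variability.
```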

So in reality, in a complex system, one should expect every single pair of variables to be correlated, and any condition that affects the system should affect all its components in various, possibly tiny, amounts. Therefore “significantly different” and “significantly correlated” are really not very useful statistical concepts for biology. The first thing to do should be to start reporting how big those differences or correlations actually are, and not just whether they exist. A useful statistical concept, for example, is effect size: the difference between the means of the two groups of observations divided by their standard deviation. An effect size of 1 tells you that the mean difference you observe is of the same order as the intrinsic variability within each group, which would be considered a rather strong effect. Effect size is much closer to what we would naturally mean by “significant” than statistical significance. If we reported effect sizes for all the observed cognitive differences between men and women that have been publicized in the media and in books, we would find in most cases that they are statistically significant but that their effect size is very small. As Dindia put it concisely, “Men are from North Dakota, women are from South Dakota”.
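For concreteness, here is a minimal sketch of that effect size (Cohen's d, using the pooled standard deviation of the two groups); the group values are made up.

```python
# Sketch: effect size (Cohen's d) = mean difference / pooled standard deviation.
import numpy as np

def cohens_d(x, y):
    """Difference of the means divided by the pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

rng = np.random.default_rng(3)
x = rng.normal(0.0, 1.0, 200)
y = rng.normal(0.05, 1.0, 200)   # tiny shift relative to the within-group variability
print(f"effect size d = {cohens_d(x, y):.2f}")

# |d| near 0 means the groups overlap almost completely, whatever the p-value says;
# d around 1 means the difference is comparable to the within-group variability.
```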

In this context, I don't find the call for increasing the number of observations in biology papers (i.e. of animals) very ethical. Reporting effect sizes would be the minimum that everyone should do in biology papers, and if the effect is tiny, why bother increasing the number of observations just to show that it is indeed tiny, yes, but statistically significant?

This obviously doesn't solve all the issues. I'll mention just one more. Very often, groups are compared that differ not by one condition but by many. This is typical in epidemiology, for example. You are looking for the effect of obesity on heart disease, say, and you find a strong correlation. But you want to make sure that it isn't just due to the fact that obese people tend to do less exercise, for example. This is crucial, because in that case diet would probably not be effective. The standard way to deal with this issue is to do multilinear regression or analysis of variance, that is, to fit a statistical model that includes all the variables that you think might be important. These are almost always linear models, i.e., you assume that the observation you are interested in scales linearly with every variable. Then you will read in the paper that the authors have taken the other possible factors into account and so can be sure that those are not involved.
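As a sketch of what such an adjustment looks like in practice: an ordinary least-squares fit with the variable of interest and the suspected confounder as regressors. All variables and coefficients here are made up, and this is only one simple way of doing the adjustment, not the method of any particular study.

```python
# Sketch: "adjusting for a confounder" with a multilinear (OLS) regression.
import numpy as np

rng = np.random.default_rng(4)
n = 1000
exercise = rng.normal(0, 1, n)                          # suspected confounder
obesity = -0.5 * exercise + rng.normal(0, 1, n)         # correlated with exercise
risk = 0.3 * obesity - 0.4 * exercise + rng.normal(0, 1, n)   # made-up "heart disease risk"

# Linear model with an intercept, obesity and exercise as regressors.
X = np.column_stack([np.ones(n), obesity, exercise])
coef, *_ = np.linalg.lstsq(X, risk, rcond=None)
print(f"adjusted effect of obesity:  {coef[1]:+.2f}")
print(f"adjusted effect of exercise: {coef[2]:+.2f}")

# The "adjusted" coefficients are only meaningful if the linear model is a
# reasonable description of the system, which is exactly what is questioned below.
```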

I find this sort of statement hilarious. Who would think that a living organism is a linear system? A linear system takes no decisions. In a linear system, there is no state that is qualitatively different from any other state. There is no life and death. There is no cancer. If you inject a dose of anesthetic into a human and monitor, say, heart rate, I bet that you won't find a linear relation between dose and heart rate.
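Here is a toy version of that objection: a made-up sigmoidal dose-response (nearly no effect at low doses, collapse above some dose) summarized by a single linear slope.

```python
# Sketch: a strongly nonlinear dose-response fitted with a straight line.
import numpy as np

dose = np.linspace(0, 10, 200)
# Hypothetical response: heart rate is unaffected at low doses and collapses around dose = 5.
heart_rate = 70.0 / (1.0 + np.exp(2.0 * (dose - 5.0)))

slope, intercept = np.polyfit(dose, heart_rate, 1)
print(f"linear fit: heart_rate ~ {slope:.1f} * dose + {intercept:.1f}")

# The fitted slope suggests a gentle, uniform decline, whereas the actual system
# barely changes at low doses and then collapses abruptly: the linear summary
# hides the qualitative change.
```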

I am not saying that multilinear regressions and similar tools are completely useless (although possibly close to it in many cases); after all, linear approximations often work over some limited range (but that would need to be demonstrated specifically!). I simply mean that you can't trust their results. They should be taken as suggestive at best. If you look at the epidemiology literature, or simply at reviews and reports from the WHO, you'll find that there are not so many cases where experts hold strong convictions about causal relations between diet or habits and diseases. When they do, it is never based only on statistical correlations. It is a combination of epidemiological studies, which are difficult to interpret because of the issues I have mentioned, together with intervention studies (i.e. experiments with a control group) and detailed biological knowledge about the pathophysiology of the disease, i.e. biological experiments. Such is the case for the relation between smoking and lung cancer, for example. Hence the relevance of Rutherford's quote: if you need statistics, then you ought to have done a better experiment.

I would like to conclude that the problem with the use of statistics in biology and related fields is not, in my opinion, due to a lack of mathematical (specifically, statistical) education. Rather, it is due to a lack of epistemological education or reflection, which is a much broader problem, not specific to biologists at all. The question is not to know every statistical test and how to use it; the question is to understand the epistemological value of a result, whether statistical or not, i.e.: what does it tell me exactly about the system I am interested in, and what exact question, beyond the suggestive words (“significant”), does it provide an answer to?

This week's paper selection (6-13 Apr 2016)

 

This week's paper selection (2-31 March 2016)

After a few weeks away from the lab, here is a selection of papers:

This week's paper selection (24 Feb - 2 Mar 2016)

This week I got interested in how a cell adapts to changes in size (for example, the membrane area of a neuron grows by several orders of magnitude during development).