What is computational neuroscience? (XXI) Lewis Carroll and Norbert Wiener on detailed models

The last published novel of Lewis Carroll, Sylvie and Bruno (1893 for the second volume), contains a passage that explains that a high level of detail is not necessarily what you want from a model. I quote it in full:

“What a useful thing a pocket-map is!” I remarked.

“That’s another thing we’ve learned from your Nation,” said Mein Herr, “map-making. But we’ve carried it much further than you. What do you consider the largest map that would be really useful?”

“About six inches to the mile.”

“Only six inches!” exclaimed Mein Herr. “We very soon got to six yards to the mile. Then we tried a hundred yards to the mile. And then came the grandest idea of all! We actually made a map of the country, on the scale of a mile to the mile!”

“Have you used it much?” I enquired.

“It has never been spread out, yet,” said Mein Herr: “the farmers objected: they said it would cover the whole country, and shut out the sunlight! So we now use the country itself, as its own map, and I assure you it does nearly as well.”

In other words: if the model is nearly as complex as the thing it applies to, then it is no more useful than the thing itself. This theme also appears in a 1945 essay by Arturo Rosenblueth and Norbert Wiener, “The Role of Models in Science”:

“The best material model for a cat is another, or preferably the same cat. In other words, should a material model thoroughly realize its purpose, the original situation could be grasped in its entirety and a model would be unnecessary. […] This ideal theoretical model cannot probably be achieved. Partial models, imperfect as they may be, are the only means developed by science for understanding the universe. This statement does not imply an attitude of defeatism but the recognition that the main tool of science is the human mind and that the human mind is finite.”

The last sentence is the most important: a model is not something that is meant to mimic reality; it is something that is constructed by and for the human mind to help it grasp complex aspects of reality.

Subjective physics

I just finished writing a text about "subjective physics": a term I made up to designate the description of the laws that govern sensory signals and their relationships with actions. It is relevant to systems computational neuroscience, embodiment theories and psychological theories of perception (in particular Gibson's ecological theory and the sensorimotor theory). Here is the abstract:

Imagine a naive organism that does not know anything about the world. It can capture signals through its sensors and it can act. What kind of knowledge about the world is accessible to the organism? This situation is analogous to that of a physicist trying to understand the world through observations and experiments. Just as physics describes the laws of the world obtained in this way by the scientist, I propose to name subjective physics the description of the laws that govern sensory signals and their relationships with actions, as observed from the perspective of the perceptual system of the organism. In this text, I present the main concepts of subjective physics, illustrated with concrete examples.

What is computational neuroscience? (XX) What is a realistic model?

What is a realistic neuron model? There is a hierarchy among neuron models, which goes like this: the least realistic is the integrate-and-fire model, which is phenomenological; then comes the single-compartment Hodgkin-Huxley model; then multicompartmental Hodgkin-Huxley models (this hierarchy is questioned by a recently accepted paper of mine, but I will discuss that when the paper is out).
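To fix ideas, here is a minimal sketch of the phenomenological end of this hierarchy: a leaky integrate-and-fire neuron driven by a constant current. All parameter values are illustrative, not taken from any particular study.

```python
# Minimal leaky integrate-and-fire neuron (illustrative parameters).
# dV/dt = (E_L - V + R*I) / tau ; spike when V >= V_t, then reset to V_r.

def simulate_lif(I, dt=1e-4, T=0.5, tau=0.02, E_L=-0.07,
                 R=1e8, V_t=-0.05, V_r=-0.07):
    """Simulate with constant input current I (amperes); return spike times."""
    V = E_L
    spikes = []
    t = 0.0
    for _ in range(int(T / dt)):
        V += dt * (E_L - V + R * I) / tau   # Euler step of the membrane equation
        if V >= V_t:                         # threshold crossing: a spike event
            spikes.append(t)
            V = V_r                          # reset
        t += dt
    return spikes

# A strong enough current makes the model fire; a weak one does not.
print(len(simulate_lif(5e-10)), len(simulate_lif(1e-10)))
```

The model's entire empirical content is visible here: it predicts membrane potential and spike times from the input current, and nothing else.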

But what is meant exactly by “realistic”? Take two models of a plane: a toy plane made of wood, and a simple paper plane. The first model certainly looks more like a plane. It has different recognizable elements of a plane: wings, propellers, a cockpit. One might say that this model is more realistic. The second model doesn’t have a cockpit, and in fact doesn’t really look like a plane. However, unlike the first model, it flies, which is definitely an important characteristic of planes. So which one is more realistic?

There are generally two types of answer given to justify the claim that the Hodgkin-Huxley model (HH) is more realistic than the integrate-and-fire model (IF). One is: the HH model has ionic channels and the IF model doesn’t. The other is: the HH model has been proven right by experiments.

Let us start with the first type of answer. Strictly speaking, the HH model does not have ionic channels. Ionic channels are proteins; the HH model is a set of equations. There are parts of these equations that we identify with properties of proteins, but they are not the real things. Saying that the HH model has ionic channels is like saying that the wooden plane has a propeller: there is something we call a “propeller”, yes, but functionally it is not a propeller, it is a nice-looking piece of wood. Specifically, in the HH model, the sodium gating variable (m) has no biophysical counterpart in the actual sodium channel. The sodium current in the HH model corresponds to something that can be physically measured, but it is described as proportional to the third power of the gating variable m only because an exponent of 3 was the best fit to Hodgkin and Huxley’s data. We call m a “gating” variable only because it is part of a story in which it is a gating variable: the story that there are three independent gates that must all be open for the channel to be open. It is an attractive story, but we now know that this is not what happens in the sodium channel. So the model is consistent with a story in which there is a neuron with sodium channels, but the story is not an accurate description of reality. We might call this “wooden plane realism”.
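The “three gates” story can be made concrete: if each of three hypothetical independent gates is open with probability m, and an inactivation gate is open with probability h, the channel conducts with probability m³h, which is exactly the factor in the HH sodium current. A sketch (standard HH form; the conductance and reversal values are the classic squid-axon figures, used here only for illustration):

```python
import random

def na_open_probability(m, h):
    """Channel open probability in the HH story: three independent
    activation gates (each open with probability m) and one
    inactivation gate (open with probability h)."""
    return m**3 * h

def na_current(V, m, h, g_na=120.0, E_na=50.0):
    """Sodium current (uA/cm^2), standard HH form: g * m^3 * h * (V - E_Na)."""
    return g_na * na_open_probability(m, h) * (V - E_na)

# Check the story by simulation: three independent gates, each open
# with probability m, are all open with probability ~ m^3.
m = 0.6
trials = 100_000
open_count = sum(all(random.random() < m for _ in range(3))
                 for _ in range(trials))
print(open_count / trials)  # close to 0.6**3 = 0.216
```

The point of the post stands: the code reproduces the story, not the channel; the exponent 3 is a fit, and the “gates” have no biophysical counterpart.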

The second type of answer is more scientific in its expression. However, it is a bit ambiguous. What Hodgkin and Huxley proved is that their model was an accurate description of the electrical behavior of a giant squid axon, which was space-clamped with a metal wire. But when we claim that the HH model is realistic, we mean something more general than that: that the same “kind” of model would successfully account for the electrical behavior of other neurons. It would not be exactly the same model, because the parameters and ionic channels would be different and would have to be properly adjusted. So in fact it is rather the HH theory, or formalism, that is meant to be more realistic. However, for a given neuron, the HH “model” is only more realistic if the structure and parameters of the model are properly adjusted for that neuron.

These remarks touch on several epistemological concepts described by Karl Popper (The Logic of Scientific Discovery, 1935). The first is the notion of the “empirical content” of a theory, defined as the set of possible falsifiers of the theory. In short, for a model, it is the set of (non-tautological) predictions that the model can make. For example, the integrate-and-fire model can make predictions about the membrane potential and the spike times, as a function of the input current. The HH model can additionally make predictions about the sodium and potassium currents. This is just about the logical structure of the models, in their articulation with empirical data, not about whether the models are accurate. Greater empirical content is a more satisfying way to rephrase the idea that the HH model is more realistic because it “has” ionic channels. But it is a mistake to identify realism with empirical content: a theory can have a very large empirical content and make predictions that all turn out to be completely wrong.

Related to this notion is the idea of “levels of universality”. Consider these two statements (taken from Popper): all orbits of planets are ellipses; all orbits of heavenly bodies are ellipses. The second statement is more universal, because planets are heavenly bodies. So in this sense it is a better theory. HH theory has this quality of being quite universal: it is meant to apply to spiking and non-spiking neurons, for example.

Finally, a theory can be characterized by its “degree of precision”. Taking again an example from Popper: all orbits of planets are circles; all orbits of planets are ellipses. Independently of the empirical validity of these two statements, the first one is more precise than the second one, because all circles are ellipses. Applied to models, this is related to the number of parameters that are left unspecified. For example, multicompartmental models have a greater empirical content than single-compartment models, because they can make predictions about membrane potential at different locations on the dendritic tree. However, they are not necessarily more realistic because they are less precise: there are many unspecified parameters, and the additional empirical content is only accurate if these parameters are properly set.

So in fact there are two aspects of realism that can be discussed about models. One has to do with the logical structure of the model: what cases it is meant to apply to (empirical content, universality), and how precise it is in its predictions (precision); in other words, the ambition of the model. On this dimension, one seeks models with greater universality, greater empirical content, and greater precision. Another way to phrase it is to say that a useful model is one that has many opportunities to be wrong. It is not as easy as one might think to compare the HH and IF models on this dimension: on one hand the HH model is more universal, but on the other hand it is less precise than the IF model (for example, an HH model does not necessarily spike).

This first aspect has nothing to do with how accurate the model is, with respect to empirical observations. It only has to do with the logical structure of the model. The second aspect has to do with empirical validity: how accurate the model predictions are. For example, we could well imagine that a phenomenological model produces more accurate predictions than a biophysical model, which has a greater empirical content. In this case the biophysical model makes more predictions, but they do not match empirical observations as well as the phenomenological model. Which model is more realistic?

What is sound? (XVI) On the spatial character of tones

An intriguing fact about the pitch of tones is that we tend to describe it using spatial characteristics such as “high” and “low”. In the same way, we speak of a rising intonation when the pitch increases. A sequence of notes of increasing frequency played on a piano is described as going “up” (even though it goes right on a piano, and down on a guitar). Yet there is nothing intrinsically spatial in the frequency of a tone. Why do we use these spatial attributes? An obvious possibility is that it is purely cultural: “high” and “low” are just arbitrary words that we happen to use to describe these characteristics of sounds. However, the following observations should be made:

- We use the terms low and high, which are also used for spatial height, and not specific words such as blurp and zboot. But we don’t use spatial words for colors and odors. Instead we use specific words (red, green) or sometimes words borrowed from other senses (a hot color). Why use space and not something else?

- All languages seem to use more or less the same type of words.

- In an experiment done in 1930 by Carroll Pratt (“The spatial character of high and low tones”), subjects were asked to locate tones of various frequencies on a numbered scale running from the floor to the ceiling. The tones were presented through a speaker behind a screen, placed at random heights. It turned out that the spatial judgments made by subjects were very consistent, but were entirely determined by tone frequency rather than by actual source position: high frequency tones were placed near the ceiling, low frequency tones near the floor. The result was later confirmed in congenitally blind persons and in young children (Roffler & Butler, JASA 1968).

Thus, there is some support for the hypothesis that tones are perceived to have a spatial character, which is reflected in language. But why? Here I will just speculate wildly and make a list of possibilities.

1. Sensorimotor hypothesis related to vocal production: when one makes sounds (sings or speaks), sounds of high pitch are felt to be produced higher than sounds of low pitch. This could be related to the spatial location of tactile vibrations on the skin, depending on fundamental frequency or timbre. Professional singers indeed use spatial words to describe where the voice “comes from” (which has no physical basis as such). This could be tested by measuring skin vibrations. In addition, congenitally mute people would be predicted to show different patterns of tone localization.

2. Natural statistics: high frequency sounds tend to come from sources that are physically higher than low frequency sounds. For example, steps on the ground tend to produce low frequency sounds. Testing this hypothesis would require an extensive collection of natural recordings tagged with their spatial position. But note that the opposite trend holds for sounds produced by humans: adults have lower voices than children, who are smaller.

3. Elevation-dependent spectral cues: to estimate the elevation of a sound source, we rely on spectral cues introduced by the pinnae. Indeed, the circumvolutions of the pinnae introduce elevation-dependent notches in the spectrum. By association, the frequency of a tone would be associated with the spectral characteristics of a particular elevation. This could be tested by doing a tone localization experiment and comparing the results with individual head-related transfer functions.

What is computational neuroscience? (XIX) Does the brain process information?

A general phrase that one reads very often about the brain in the context of perception is that it “processes information”. I have already discussed the term “information”, which is ambiguous and misleading. But here I want to discuss the term “process”. Is it true that the brain is in the business of “information processing”?

“Processing” refers to a procedure that takes something and turns it into something else by a sequence of operations, for example trees into paper. So the sentence implies that what the brain is doing is transforming things into other things. For example, it transforms the image of a face into the identity of the face. The coding paradigm, and more generally the information-processing paradigm, relies on this view.

I will take a concrete example. Animals can localize sounds, based on auditory cues such as the level difference between the two ears. In the information-processing view, sound localization is a process that takes a pair of acoustic signals and turns it into a value representing the direction of the sound source. However, this is not literally what an animal does.

Let us take a cat. The cat lives and, most of the time, does nothing. Through its ears, it receives a continuous acoustic flow. This flow is transduced into electrical currents, which trigger some activity in the brain, that is, electrical events. At some moment in time, a mouse scratches the ground for a second, and the cat turns its eyes towards the source, or perhaps crawls to the mouse. During an extended period of time, the mouse is there in the world, and its location exists as a stable property. What the cat “produces”, on the other hand, is a discrete movement with properties that one can relate to the location of the mouse. Thus, sound localization behavior is characterized by discrete events that occur within a continuous sensory flow. Behavior is not adequately described as a transformation of things into things, because behavior is an event, not a thing: it happens.

The same remark applies to neurons. While a neuron is a thing that exists, a spike is an event that happens. It is a transient change in electrical properties that triggers changes in other neurons. As the term “neural activity” clearly suggests, a spike is not a “thing” but an event, an action on other neurons or muscles. But the notion of information processing implies that neural activity is the end result of a process rather than the process itself. There is a confusion between things and events. In a plant that turns trees into paper, trees and paper are the things that are transformed; the action of cutting trees is not one of those things. Yet this is what the information-processing metaphor says about neural activity.

There are important practical implications for neural models. Traditionally, these models follow the information-processing paradigm. There is an input to the model, for example a pair of acoustical signals, and there is an output, for example an estimate of sound location (I have worked on this kind of model myself, see e.g. Goodman & Brette, PLoS Comp Biol 2010). The estimate is generally calculated from the activity of the neurons over the course of the simulation, which corresponds to the duration of the sound. For example, one could select the neuron with the maximum firing rate and map its index to a location; or one could compute an estimate based on population averages, etc. In any case, there is a well-defined input corresponding to a single sound event, and a single output value corresponding to the estimated location.
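The two read-outs just mentioned can be sketched as follows (the spike counts and the map from neuron index to preferred location are made up for illustration):

```python
import numpy as np

# Hypothetical array of neurons with preferred locations from -90 to +90
# degrees, and their spike counts over the whole stimulus (made-up numbers).
preferred = np.linspace(-90, 90, 7)
counts = np.array([1, 2, 8, 20, 9, 3, 1], dtype=float)

# Read-out 1: winner-take-all -- map the index of the most active neuron.
wta_estimate = preferred[np.argmax(counts)]

# Read-out 2: population average -- activity-weighted mean of preferred
# locations.
pop_estimate = np.sum(counts * preferred) / np.sum(counts)

print(wta_estimate, pop_estimate)
```

Both read-outs presuppose a single, well-delimited sound: the counts are accumulated over the whole simulation, which is exactly what becomes problematic in a continuous scenario.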

Now try to embed this kind of model into a more realistic scenario. There is a continuous acoustic flow. Sounds are presented at various locations in sequence, with silent gaps between them. The model must estimate the locations of these sounds. We have a first problem, which is that the model produces estimates based on total activity over time, and this is clearly not going to work here since there is a sequence of sounds. The model could either produce a continuous estimate of source location (the equivalent of continuously pointing to the source), or it could produce an estimate of source location at specific times (the equivalent of making a discrete movement to the source), for example when the sounds stop. In either case, what is the basis for the estimate, since it cannot be the total activity any more? If it is a continuous estimate, how can it be a stable value if neurons have transient activities? More generally, how can the continuous flow of neural activity produce a discrete movement to a target position?

Thus, sound localization behavior is more than a mapping between pairs of signals and direction estimates. Describing perception as “information processing” entails the following steps: a particular time interval of sensory flow is selected and considered as a thing (rather than a flow of events); a particular set of movements is considered and some of its properties are extracted (e.g. direction); what the brain does is described as the transformation of the first thing into the second thing. Thus, it is an abstract construction by an external observer.

Let me summarize this post and the previous one. What is wrong about “information processing”? Two things are wrong. First (previous post), the view that perception is the transformation of information of some kind into information of another kind is self-contradictory, because a signal can only be considered “information” with respect to a perceptual system. This view of perception therefore proposes that there are things to be perceived by something else than the perceptual system. Second (this post), “processing” is the wrong term because actions produced by the brain are not things but events: it is true at the scale of the organism (behavior) and it is true at the scale of neurons (spikes). Both behavior and causes of behavior are constituted by events, not things. It is also true of the mind (phenomenal consciousness). A thing can be transformed into another thing; an event happens.

Psychology of academic hiring committees (VII): more halo effect and conclusion

Perhaps the most striking example of the halo effect that I have noticed is the weight given to very high-impact journals (say, Nature). Most scientists I know would love to publish in such journals, because of the visibility and prestige, and because they know it helps their careers (for example, to get grants). But the same scientists generally also agree that papers in those journals are not necessarily better than those published in good journals in their field, and everyone has in mind examples of obviously flawed papers in their field published in such journals. There is in fact little objective reason to think that those journals publish better science. The reviewing process is essentially the same as in other journals (2-3 reviewers writing a report), except faster, and most of the selection is done by professional editors (i.e., people whose full-time job is to evaluate papers), not by people in the field, on the basis of “significance”. Significance is obviously a highly subjective notion, especially when assessed by non-specialists. In any case, publishing in Nature should not make you a better scientist than publishing in, say, the Journal of Neuroscience.

Some might have a different opinion, but it is not my point here. The point is that committee members also often share the same opinion, and yet it is almost impossible for them to resist the appeal of a Nature paper on a CV. Publishing in such journals is so desirable that one cannot help but think, consciously or not, that a scientist who got a paper accepted in such journals must be some sort of genius, forgetting their opinion that the competition in those journals is not so much about scientific quality. There clearly seems to be a halo effect with prestigious journals.

Ending words

Taken together, all the cognitive biases I have described paint a rather gloomy picture of academic hiring committees. To be clear, my intention was certainly not to imply that committee members are stupid or crazy. They are not at all. But they are humans, and humans suffer from well-documented cognitive biases in difficult decision processes. The solution to these issues is therefore not to appoint different humans; it is to change the processes so as to reduce these issues. Even though selecting permanent faculty is a very difficult task, it is not the worst hiring scenario. Compared to other professions, the work of scientists is public, and there might therefore be more useful information available to the hiring committee than in other professions, where only a CV and an interview can be used.

For example, one simple recommendation would be that the documents produced by the hiring process (reports, etc.) be made public after the decision. This would: 1) increase feedback, and 2) eliminate some irrelevant criteria (quotas, etc.).

I would be interested in hearing about how academic hiring processes work in other countries.

Psychology of academic hiring committees (VI): The oral presentation and interview

This part will illustrate two psychological biases, known as the illusion of validity and the halo effect. For research and university positions in France, the candidate must provide a written application (CV, project, description of previous work) and must also give an oral presentation and answer questions. The oral presentation generally lasts 5-15 minutes, followed by questions, for a total of up to 30 minutes. For a decision relevant to the entire career of a scientist, this is rather short. Yet it does seem to bear quite a strong weight on the committee's decision (although not in all committees).

Why some committees decide to put such a strong weight on this stage is probably related to another cognitive bias, named the illusion of validity. In his book, Daniel Kahneman describes his experience during his military service, where he had to evaluate officer candidates with a test. There had been an empirical analysis of that procedure, which found that the grades given after the test were essentially uncorrelated with future performance in officer school. However, whoever grades the test is very confident in the grades and thinks they are highly predictive. This is a case of substitution: the test is not directly related to what an officer does, but the confidence is about the substituted attribute, whether it is highly correlated or uncorrelated with the target attribute. The illusion of validity is very strong, and it is difficult to avoid, because one's beliefs are built only from the cognitive elements that are currently available, not from the uncertain elements. It is impossible to consider unavailable elements, and so any consistent set of available elements seems like compelling evidence.

Let us examine the case of the oral presentation. Prior to the presentation, the candidate submitted a complete application including a written project, a detailed CV, and a report on previous results. Therefore, there is zero new information in the oral presentation, which is very short. However, the presentation is graded anyway, and you cannot ignore a grade that you decided on, whether or not it is relevant to the decision. I have heard from committee members that the presentation is useful because most committee members have not read the project (only the referee has). This is obviously not a great defense for a lifetime decision!

Whether the presentation is explicitly graded or not, it is next to impossible to avoid the priming effects induced by a good or bad presentation. It is well documented in psychological research that many impressions that are completely irrelevant to a decision greatly influence it when the decision is difficult. This is called the halo effect. These impressions include attractiveness, voice, self-confidence, or more generally “looking the part”. Often, committee members will tell you unashamedly that you made a good, or bad, impression on the committee, and that it might have played a role in the ranking. Of course they would never admit that they thought you “looked like a good scientist”. However, let us ask ourselves: what is the correlation between an impression conveyed in a one-time 5-minute presentation and the quality of the future career of a scientist over 30-40 years? I do not know what this correlation is, but I would guess it is not very high. The point is: neither does the committee. Yet, with zero evidence that there is any relationship between the two attributes, some committees seem to happily and confidently use it in the ranking decision.

Answers to questions are a slightly different problem. The committee will evaluate (substitution again) whether you answered their questions satisfactorily or not. From my experience both as candidate and as committee member, the interview is unstructured. Any member can spontaneously ask a question. Some members ask many, others don't. Questions can be on the scientific project or on other issues, and they differ between candidates. The candidate will be graded on how well he/she answered the questions. Again, this is a case of substitution. Some questions may be highly relevant to the decision, such as whether the candidate seems to master the subject (although this is obviously problematic in the absence of any expert on the subject in the committee) or can discuss alternative hypotheses. Other questions might bear less correlation with the target attribute. But the way the candidate answers will have an impact, whether the question was relevant or irrelevant. Finally, the answers to many scientific questions cannot be evaluated by most committee members, because they do not know the subject. Therefore, there will be a case of substitution, in which some other element, most likely the candidate's confidence, will be used instead of the intrinsic quality of the answer.

Many committees consider that this second stage (oral presentation/interview) is very important for making their decision. Yet they have close to zero empirical evidence that it increases the correlation between their decision and the desired outcome of the decision. What it does increase, for sure, is their level of confidence in the decision. But, as I explained before, especially in the absence of any feedback, there is no general relationship between self-confidence and accuracy.

Psychology of academic hiring committees (V): Cognitive dissonance

It turns out that such criteria, based on the current employment status of the candidates, are also not legal under French law. Public hiring must be based on criteria that are directly relevant to the position. Factors such as sex or race are obviously illegal, but any other criterion irrelevant to the advertised position is also illegal (which is not the case for private hiring, which is more flexible). This is why you will never find the criteria mentioned in this section in any official document. This is also why the CNRS and other research institutions insist that all reports and any documents produced by committee members remain confidential: an organized opacity over the entire hiring process.

It is by asking committee members about the reason for this organized opacity, which is clearly not a great help to candidates, that I discovered the legal-psychological concept of “not really illegal”, an amazing illustration of cognitive dissonance theory, certainly one of the most brilliant theories in psychology, developed by Festinger in the 1950s. It is worth giving some background on this theory. It proposes that people have a drive to reduce dissonance between cognitive elements. This can happen by different means, such as avoiding sources of information that are likely to be dissonant (e.g. speaking only with people of similar opinions), adding consonant elements, or changing one of the dissonant factors. Festinger first described the theory in an entertaining study of a small UFO cult that believed in an imminent apocalypse, “When Prophecy Fails”. The group had predicted the apocalypse for December 20. They waited together on that day, but nothing happened. Yet the group did not collapse. On the contrary, consonant elements were added to the current set of beliefs so as to deal with the new element. Two things happened: 1) the members were now convinced that the apocalypse had not come because they had prayed so much, and 2) they became more active in spreading their beliefs, thereby seeking social support from others. The theory was then presented in a more academic book, with a number of experiments to support it. The bottom line is that a consistent story is constructed a posteriori to justify judgments and beliefs so as to accommodate them to reality, and that the beliefs are strengthened by social support from a group of people sharing them. The phenomenon is also well documented in the philosophy of science (see for example Lakatos, Feyerabend and Kuhn).

So, committees know perfectly well why the hiring process must remain confidential, and they will tell you without any trace of embarrassment (in private!): it is to avoid lawsuits. They all have some cases in mind in which candidates complained and could prove that they had suffered from what they, and the courts, considered an unfair hiring process. To me and to anyone else not on a committee, this means that confidentiality is imposed so as to hide illegal procedures, which is certainly not a good reason. But for most committee members I have talked with, this is not what it means. It is just to protect the hiring process from an endless series of lawsuits, which certainly would be a problem. They did not seem to see that a successful lawsuit means that the attacked procedure was illegal, and that perhaps the problem is the illegality of the procedure rather than the fact that there was a lawsuit.

I teased one committee member with a particular case that concerns me personally, which is that assistant professors are almost entirely denied access to senior research positions in CNRS. Those positions (“directeur de recherches”, or DR) are officially open to all, but statistics show that more than 95% of successful applicants actually held a junior position in CNRS, whereas the pool of applicants has very different statistics.

This member, like other members I have talked with, recognized that this was indeed a problem for me and many other people. However, also like other members I have talked with, he found that I was being really extreme in complaining that it was illegal to use my current employment status as a criterion. Yet the legal texts on this subject are very simple and explicit; it is impossible to interpret them differently. In public hiring, there are so-called “external” positions and “internal” positions. Internal ones are reserved for promotion (there are a few in CNRS); external ones are open to anyone. Senior research positions are external, and advertised as such, with an explicit mention that no distinction will be made between internal and external candidates. I pushed the member a little with these arguments, but he could not admit that he was participating in an illegal procedure. This is when he came up with an amazing example of cognitive dissonance: the concept of “real illegality”. All right, so the argument goes, perhaps using such a criterion contradicts the law, but it is not really illegal. Here is what I understood of the concept of not being really illegal:

- something not really illegal contradicts the law;

- it is a problematic issue that “in an ideal world” (another interesting concept) should be avoided;

- it should remain confidential, for otherwise a legal suit would be successful;

- it is actually not considered illegal if it remains confidential;

- taking the not really illegal decision is not morally bad, but rather something that has to be done.

Thanks to the psychological concept of “real illegality”, committees seem to accommodate illegal substituted criteria quite well. In defense of these committees, I should add that, if what I have heard is correct (likely, given the number of independent sources), this belief is actively supported by the direction of these institutions. However, committees are independent: they have no legal obligation to follow the orders or recommendations of the direction. That they nevertheless comply is a case of obedience to authority, another well-known psychological bias, illustrated by the Milgram experiment.

The psychology of academic hiring committees (IV) Attribute substitution: how much does the candidate/lab/field need the position?

Even though experience is only a substitute for the attribute that is supposed to be evaluated in hiring decisions, there is at least some correlation between the two, at least in some cases (e.g. number of publications). More troubling are the following criteria, which have little to do with the target attribute:

- The number of times the candidate has previously applied.

- Whether another candidate is applying for the same lab (which would be bad).

- Whether the lab had a successful candidate the previous year.

- Whether the sub-discipline has not had a successful candidate for some time (which would be good).

- In more senior positions, whether the candidate already has a (junior) position in France (bad).

These criteria, which are used in actual rankings, answer an entirely different question: how much does the candidate or discipline or lab need the position? This has nothing to do with any of the official criteria.

The “queue”

In some disciplines, committee members will easily tell you that there is in effect a “queue”, because there are so many good candidates. You should not expect to get the position the first time, even if you are very good (although it certainly helps!).

First remark, in defence of the committees: the second time you apply, you should certainly have a better chance of getting the position, since a number of people who were ranked above you got positions and will therefore no longer be competing. This is obvious, but it is not exactly what is meant by “there is a queue”. As I have understood it, what is meant is the following. Again, as I wrote before, selecting young scientists is a very difficult task, especially for a committee with heterogeneous expertise. Committee members are therefore happy to use an easier substituted attribute. When two good applications are discussed, and one candidate is applying for the first time while the other is applying for the third time, it is tempting for the committee to reason as follows: let us give the position to the older applicant, and the younger applicant will get it next time; implied: this way we actually pick both of these good applicants. This is of course a fallacy, since no position is created in the process. Whichever applicant is chosen, another applicant will not get the position. So the reasoning is illogical. The applicant who should be selected is the best one, not the one who has applied the greatest number of times. Since the number of positions granted each year is not affected by the decision process, the only result of this fallacious reasoning is to artificially create a “queue” and increase the average hiring age.


Criteria based on discipline or lab quotas are not necessarily irrational, although they have nothing to do with the individual's scientific quality. But there is a similarly irrational criterion, this time generally for more senior positions, 1st-class junior scientists (CR1 in administrative slang). In France, there are two types of junior positions: full-time researcher (e.g. in CNRS; 2nd class or 1st class) and assistant professor in a university, which involves both research and teaching. In principle, you can apply to a full-time research position if you are a postdoc, or if you are an assistant professor who wants to do more research. Officially, no distinction is made. In practice, however, any committee member will tell you that it is next to impossible for an assistant professor to get such a position. Why is that? Again, this is a case of substitution: failing to clearly distinguish between good scientists on the basis of their expected scientific careers, committees answer a different question: who needs the position most? So the reasoning (which I have heard explicitly many times) is as follows: 1) if the candidate already has a permanent position, then he/she needs the advertised position less than a candidate who is currently a postdoc; 2) if the candidate has a permanent position abroad, then he/she should be favoured over a candidate who has a permanent position in France, because hiring him/her increases the number of faculty positions in France by one.

Again, this is a fallacy, because no committee decision whatsoever can create or destroy a position, or has any effect on the public budget. The only impact is on who gets the position. If an assistant professor in France gets the research position, then the budget corresponding to the former position is freed and another assistant professor is hired instead. Whoever is selected by the committee, the amount of public budget allocated to permanent academic hiring will not change; that is an independent political decision.

The consequence of basing decisions on substituted attributes, or simply taking these irrelevant attributes into account, is logically obvious: it reduces the weight given to the target attribute in the final ranking, i.e., worse candidates are selected.

The psychology of academic hiring committees (III) Attribute substitution: what is the experience of the candidate?

First, I will comment on the criteria that correspond to the question “what is the experience of the candidate?”, rather than the original question that the committee is supposed to answer: “how likely is the candidate to have a brilliant scientific career over the next 30-40 years?”. There is no doubt that the candidate's experience is one factor that should be taken into account in the hiring decision. Attribute substitution occurs when that factor is not simply taken into account, but mistaken for the target attribute that constitutes the object of the judgment.

Here are a few criteria that are unofficially used to assess the experience of the candidate:

- The number of publications; in particular, there is often an unofficial threshold for that number.

- The age of the candidate (older is better, but not too old).

- In teaching positions (not CNRS), whether the candidate has already taught quite a bit.

- Whether the candidate did a postdoc abroad (rather than in the same country, which would be bad).

The key here is to note that these criteria are about the substituted attribute, the experience of the candidate at the time of the application. They are not directly about the target attribute, the future career of the scientist. But they are assessed as if they were actually directly about that target attribute.

The number of publications

Consider the number of publications. Quite obviously, a candidate who has not published (at least as a main author) should not be hired. Even if the candidate is brilliant, there is simply no information to know it. But using the number of publications as a proxy for “scientific excellence” (one of the official criteria) is another story. “Scientific excellence” is about productivity. All other things being equal (e.g. quality of the publications), more is better: it is better to hire a scientist who will publish 100 high-quality papers over his/her entire life than one who will publish 10 papers of the same quality. The problem is that the number of publications at the time of application is a rather poor substitute for future productivity. Imagine you have all the information you need, that is, the number of future publications of the candidate if he/she is hired. For a given productivity, the number of publications at the time of application obviously correlates with the time for which the candidate has been publishing. Every year, the substituted attribute (number of publications) increases with no change in the target attribute (productivity). This leads to paradoxical decisions: a candidate who has published 10 papers in 15 years will be ranked higher than a candidate who has published 8 papers in 4 years (again, all other things being equal; I am only considering the number of publications). The substituted attribute has no direct relationship with the target attribute. Yet it seems to be used as such (at least in biology sections).
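The count-versus-rate arithmetic above can be made concrete with a minimal sketch (the candidate names and numbers are invented for illustration, matching the 10-papers-in-15-years versus 8-papers-in-4-years example):

```python
# Two hypothetical candidates, following the example in the text.
candidates = {
    "A": {"papers": 10, "years_publishing": 15},
    "B": {"papers": 8, "years_publishing": 4},
}

# Substituted attribute: total publications at the time of application.
by_count = max(candidates, key=lambda c: candidates[c]["papers"])

# Closer to the target attribute: productivity, i.e. papers per year.
by_rate = max(
    candidates,
    key=lambda c: candidates[c]["papers"] / candidates[c]["years_publishing"],
)

print(by_count)  # A: 10 papers beats 8 on raw count
print(by_rate)   # B: 2.0 papers/year beats 0.67 on productivity
```

The two rankings disagree precisely because the raw count rewards time spent publishing, not productivity, which is the point of the paradox described above.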

The age of the candidate

Over the last few decades, the average hiring age for CNRS junior positions has increased very substantially. There was a time when scientists were hired right after obtaining their PhD. This would seem almost crazy now, yet a number of the older committee members got their positions at that time, and would now argue (cognitive dissonance! I will come back to that) that it would not be reasonable to hire scientists so young. Why did hiring age increase so much? A major fact to consider is that the number of applications has increased considerably without a corresponding increase in the number of positions, but let us consider a few hypotheses.

Hypothesis 1: age is a reasonable criterion, but committee members failed to recognize it before. Given that the way committees are composed has not changed and that, as I noted above, committees have no feedback to learn from errors, this seems unlikely.

Hypothesis 2: young scientists were much better in the old days, so informed decisions could be taken at an earlier age. This would require that young scientists published more in the past (otherwise there could be no informed decision), but the trend is in fact the opposite.

Hypothesis 3: the way science is done has completely changed, so people now need much more experience. I have heard this hypothesis, in particular to explain why young biologists must now do several postdocs before finding a position. Note that the argument is about experience, a substituted attribute, and not about the future scientific career. I observe that: 1) one can learn things before or after being permanently hired; 2) during the time when they learn the things now deemed necessary for being a good permanent scientist, candidates are non-permanent scientists, i.e. scientists on a different type of contract; so it would seem that the experience is required for holding a permanent contract, not for doing science. In my field, which is interdisciplinary, I know of many older, renowned scientists who made a career in one field (e.g. physics) before changing fields (e.g. to biology/neuroscience) while holding permanent positions. So empirically, having a permanent position does not seem to prevent one from learning new things, or even from changing fields. Take Gerald Edelman: a Nobel prize for his immunology work, then a change of field to neuroscience; he was not a postdoc when he got the Nobel prize. This hypothesis therefore does not seem to have a clear rational or empirical basis.

Hypothesis 4: finally, the explanation I have heard most often from committee members (that is, people who have a role in this increase in age) is the following: the number of candidates has increased, so hiring age “mechanically” increases. More recently, there was another increase in mean hiring age after the legal age limit for applying was raised, which was explained to me as follows: now there is competition with older candidates, who necessarily have better applications, and since we take the best application, we have to take the oldest candidates. “Better application” here means more publications. So, quite explicitly, the committee substitutes experience for excellence. I note again that the older scientist was once a younger scientist with fewer publications (the substituted attribute), and yet both have the same scientific career (the target attribute).

Teaching experience

Another typical case of substituting experience occurs in the hiring of assistant professors in universities. Having sat on a few such committees, I can tell that one strong criterion (a threshold, i.e. a pass/fail criterion) is the number of hours that the candidate has taught, which has to be sufficient. In general, lectures are taught only by permanent faculty (assistant professors and professors), so candidates have taught tutorials, which is considered fine. At first sight, it seems to make sense to consider teaching experience for a teaching position, and this is why this attribute is used as a substitute, without the substitution even necessarily being noticed. However, the target attribute is not how much one has taught, but how good the candidate will be as a teacher. The number of hours someone has taught is essentially irrelevant, since it gives no indication as to whether the candidate is a good or a bad teacher. In general, students teach during their PhD either because it is mandatory or because they get a substantial amount of money for it. Students who do not teach during their PhD have a contract that does not require them to teach, and these students want to spend more time on their research. That does not necessarily make them bad teachers.

Now one could (and would) oppose the following argument: candidates who have more teaching experience will be better teachers. First of all, since we are talking about a permanent position, the few hours one has taught at the time of application will have little impact on the timescale of a 30-40 year career. But more importantly, this argument is illogical. Candidates who have taught before being hired, and who are supposedly better teachers now, were worse teachers when they started (which apparently was not an issue then). The decision to hire them permanently before or after they start teaching has no impact whatsoever on their teaching experience over the course of their career. Teaching experience at the time of application should not be relevant to the hiring decision, especially when there is no feedback on the quality of that teaching. This is a clear case of attribute substitution.

Unfortunately, this substituted criterion is in broad use and prevents a number of good young scientists from applying. Indeed, in France, to be allowed to apply to a faculty position in a university, one must first pass a national screening stage (“qualification”) in which a national committee decides, according to its own criteria, whether the candidate has the required credentials to apply to faculty positions. In general, committees require candidates to have taught a certain number of hours. The committees have no information on the quality of the teaching, only on what was taught and for how many hours. So let me be clear: an explicit requirement for being allowed to teach on a permanent contract is to have taught on a fixed-term contract (e.g. during the PhD, or as an ATER, a temporary assistant professor), a type of contract for which there is no such requirement.

It gets better: there is one committee for each discipline, but once this screening stage is passed, the candidate can apply to any faculty position in any other discipline. I have a personal example. I did a PhD in computational neuroscience, an interdisciplinary field, during which I gave math tutorials. After I got my diploma, I applied to this screening stage in different committees. At the time, I already had a few published papers and had taught about 200 hours. But the computer science committee decided not to grant me the authorization, because (I assume) I had not taught computer science, or perhaps they considered the research not relevant to their discipline, even though it was quite obvious that I had the required credentials. The same thing happened again this year to a former PhD student of mine: he had taught the right number of hours, but in both maths and computer science rather than only in computer science, so they rejected his application. Fortunately for me, the neuroscience committee did grant me the authorization. The next year, I got a position in the computer science department of Ecole Normale Supérieure, thanks to the authorization from the neuroscience committee. It is clear here that the question the computer science committee answered was not, as it should have been, “should this person be authorized to apply for a faculty position in some discipline?” but a substituted question: “did this person teach 200 hours of computer science?”. This is particularly problematic for interdisciplinary science.

Postdoc experience

In many committees, in particular in biology but also in many others, it is considered that one must have done a postdoc abroad to be seriously considered. This acts as a first screen: no postdoc abroad = fail. In some committees, the candidate must also have published during that postdoc, so as to show that the PhD publications were not just due to the supervisor, who is generally a co-author – although the postdoc is generally not an independent position either.

I also have a personal example. When I applied right after my PhD, I failed. I asked one committee member for feedback. He told me: your application was considered excellent, so you should do a postdoc abroad, and then next year you will get the position. For personal reasons, I did not want to move far, and I could not understand the logic behind the requirement. I already had two publications as a single author; in fact my supervisor did not sign any of my publications, so the argument I mentioned previously did not apply. But in any case, as it was phrased, what the committee wanted was not that I publish in another lab; they just wanted me to check the box “postdoc abroad” (there actually is such a box in the CNRS application forms). Also, the requirement was not simply a postdoc in another lab, but in a different country. I did not understand that geographic requirement either: I had spent an extended period in England during my studies, which the committee knew, so the linguistic argument did not apply. It turned out that what I wanted to do was best done in another lab in France, but that did not help me check the required box. So apparently, the single fact that a postdoc was done abroad, independently of how the postdoc actually went, was a decisive criterion in the ranking. Imagine I had actually planned to do a postdoc abroad, and had already made arrangements to do so at the time of application. Then the committee would have known with certainty that the next year I would have done a postdoc abroad, and therefore that I would definitely pass that criterion. So actually doing the postdoc was apparently irrelevant to their decision. There was no rational basis to the requirement.

The committee member had given me this piece of information without blushing; he did not seem embarrassed by the fact that the committee insisted on such an irrational criterion. It surprised me at first that people involved in irrational decision-making seem at the same time to be very confident about the correctness of their decisions. This occurs even though it is clear that the decision to be taken is difficult and there is considerable uncertainty in the choice. But in fact this is a well-established psychological phenomenon, explained for example in Daniel Kahneman's book. The degree of confidence one has in a decision or judgment is essentially determined not by the rationality of that decision or by objective facts, but by the consistency of the causal story one constructs to explain the decision. So one would say: it is clear that Peter should be ranked before Paul, since he has two more publications; we could not do otherwise. But this story neglects the fact that the number of publications is actually a substituted attribute, not the target attribute of the judgment. The same goes for other substituted attributes. People involved in the decision can hear the objective elements that contradict it, but if these cannot be fitted into the story, they are essentially ignored. This is related to cognitive dissonance theory, certainly one of the most brilliant theories in psychology, developed by Festinger in the 1950s. I will talk about it later.