Psychology of academic hiring committees (VII): more halo effect and conclusion

Perhaps the most striking example of the halo effect that I have noticed is the weight given to very high-impact journals (say, Nature). Most scientists I know would love to publish in such journals, because of the visibility and prestige, and because they know it helps their career (for example, to get grants). But the same scientists generally also agree that papers in those journals are not necessarily better than those published in good journals in their field, and everyone has examples in mind of obviously flawed papers in their field that appeared in such journals. There is in fact little objective reason to think that those journals publish better science. The reviewing process is essentially the same as in other journals (2-3 reviewers writing a report), except faster, and most of the selection is done by professional editors (i.e., people whose full-time job is to evaluate papers), not by people in the field, on the basis of “significance”. Significance is obviously a highly subjective notion, especially when assessed by non-specialists. In any case, publishing in Nature should not make you a better scientist than publishing in, say, the Journal of Neuroscience.

Some might have a different opinion, but that is not my point here. The point is that committee members often share this same opinion, and yet it is almost impossible for them to resist the appeal of a Nature paper on a CV. Publishing in such journals is so desirable that one cannot help but think, consciously or not, that a scientist who got a paper accepted there must be some sort of genius, forgetting one's own view that the competition in those journals is not so much about scientific quality. There clearly seems to be a halo effect with prestigious journals.

Ending words

Taken together, all the cognitive biases I have described paint a rather gloomy picture of academic hiring committees. To be clear, my intention was certainly not to imply that committee members are stupid or crazy. They are not at all. But they are human, and humans suffer from well-documented cognitive biases in difficult decision processes. The solution to these issues is therefore not to appoint different humans. It is to change the processes so as to reduce these issues. Even though selecting a permanent faculty member is a very difficult task, it is not the worst hiring scenario. Compared to other professions, the work of scientists is public, and there may therefore be more useful information available to the hiring committee than in professions where only a CV and an interview can be used.

For example, one simple recommendation would be that the documents produced by the hiring process (reports etc.) be made public after the decision. This would: 1) increase feedback, 2) discourage the use of some irrelevant criteria (quotas etc.).

I would be interested in hearing about how academic hiring processes work in other countries.

Psychology of academic hiring committees (VI): The oral presentation and interview

This part will illustrate two psychological biases, known as the illusion of validity and the halo effect. For research and university positions in France, the candidate must provide a written application (CV, project, description of previous work) and must also give an oral presentation and answer questions. The oral presentation generally lasts 5-15 minutes, followed by questions, for a total of up to 30 minutes. For a decision relevant to the entire career of a scientist, this is rather short. Yet it does seem to carry quite a strong weight in the committee's decision (although not in all committees).

Why some committees decide to put such a strong weight on this stage is probably related to another cognitive bias, named the illusion of validity. In his book, Daniel Kahneman describes his experience during his military service, where he had to evaluate officer candidates with a test. An empirical analysis of that procedure found that the grades given after the test were essentially uncorrelated with the candidates' future performance in officer school. However, whoever grades the test is very confident in the grades and thinks they are highly predictive. This is a case of substitution: the test is not directly related to what an officer does, but the confidence attaches to the substituted attribute, regardless of whether that attribute is highly correlated or uncorrelated with the target attribute. The illusion of validity is very strong, though, and it is difficult to avoid because one's beliefs are built only from the cognitive elements that are currently available, not from the uncertain, missing ones. It is impossible to consider unavailable elements, and so any consistent set of available elements seems like compelling evidence.

Let us examine the case of the oral presentation. Prior to the presentation, the candidate has submitted a complete application including a written project, a detailed CV, and a report on previous results. Therefore, there is essentially no new information in the oral presentation, which is very short. However, the presentation is graded anyway, and you cannot ignore a grade that you decided on, whether or not it is relevant to the decision. I have heard from committee members that the presentation is useful because most committee members have not read the project (only the referees have). This is obviously not a great defense for a lifetime decision!

Whether or not the presentation is explicitly graded, it is next to impossible to avoid the priming effects induced by a good or bad presentation. It is well documented in psychological research that, when a decision is difficult, many impressions that are completely irrelevant to it actually influence it greatly. This is called the halo effect. These impressions include attractiveness, voice, self-confidence, or more generally “looking the part”. Often, committee members will tell you unashamedly that you made a good, or bad, impression on the committee, and that it might have played a role in the ranking. Of course they would never admit that they thought you “looked like a good scientist”. However, let us ask ourselves: what is the correlation between an impression conveyed in a one-time 5-minute presentation and the quality of the future career of a scientist over 30-40 years? I do not know what this correlation is, but I would guess it is not very high. The point is: neither does the committee. Yet, with zero evidence that there is any relationship between the two attributes, some committees seem to happily and confidently use it in the ranking decision.

Answers to questions are a slightly different problem. The committee will evaluate (substitution again) whether you answered their questions satisfactorily or not. From my experience, both as a candidate and as a committee member, the interview is unstructured. Any member can spontaneously ask a question. Some members ask many, others don't. Questions can be on the scientific project, or on other issues. Questions differ between candidates. The candidate is graded on how well he/she answered the questions. Again, this is a case of substitution. Some questions may be highly relevant to the decision, such as whether the candidate masters the subject (although this is obviously problematic in the absence of any expert on the subject in the committee) or can discuss alternative hypotheses. Other questions might correlate less with the target attribute. But the way the candidate answers questions will have an impact, whether the question answered is relevant or irrelevant. Finally, the answers to many scientific questions cannot be evaluated by most committee members, because they do not know the subject. Therefore, there is again a case of substitution, in which some other element, most likely the candidate's confidence, is used instead of the intrinsic quality of the answer.

Many committees consider this second stage (oral presentation/interview) very important for making their decision. Yet they have close to zero empirical evidence that it increases the correlation between their decision and the desired outcome of that decision. What it certainly does increase is their level of confidence in the decision. But, as I explained before, especially in the absence of any feedback, there is no general relationship between self-confidence and accuracy.

Psychology of academic hiring committees (V): Cognitive dissonance

It turns out that such criteria, based on the current employment status of the candidates, are also illegal under French law. Public hiring must be based on criteria that are directly relevant to the position. Factors such as sex or race are obviously illegal, but any other criterion irrelevant to the advertised position is also illegal (which is not the case in private hiring, which is more flexible). This is why you will never find the criteria mentioned in this section in any official document. This is also why the CNRS and other research institutions insist that all reports and any other document produced by committee members remain confidential: an organized opacity around the entire hiring process.

It is by asking committee members about the reason for this organized opacity, which is clearly not a great help to candidates, that I discovered the legal-psychological concept of “not really illegal”, an amazing illustration of cognitive dissonance theory, certainly one of the most brilliant theories in psychology, developed by Festinger in the 1950s. It is worth giving some background on this theory. It proposes that people have a drive to reduce dissonance between cognitive elements. This can happen by different means, such as avoiding sources of information that are likely to be dissonant (e.g., speaking mostly with people who hold similar opinions), adding consonant elements, or changing one of the dissonant factors. Festinger first described the theory in an entertaining study of a small UFO cult that believed in an imminent apocalypse, “When Prophecy Fails”. The group had predicted the apocalypse for December 20. They waited together on that day, but nothing happened. Subsequently, the group did not collapse. On the contrary, consonant elements were added to the current set of beliefs so as to deal with the new element. Two things happened: 1) they became convinced that the apocalypse had not come because they had prayed so much; 2) they became more active in spreading their beliefs, thereby seeking social support from others. The theory was then presented in a more academic book, with a number of experiments to support it. The bottom line is that a consistent story is constructed a posteriori to justify judgments and beliefs so as to reconcile them with reality, and that the beliefs are strengthened by social support from a group of people sharing the same beliefs. The phenomenon is also well documented in the philosophy of science (see for example Lakatos, Feyerabend and Kuhn).

So, committees know perfectly well why the hiring process must remain confidential, and they will tell you without any trace of embarrassment (in private!): it is to avoid lawsuits. They all have cases in mind in which candidates complained and could prove that they had suffered from what they, and the courts, considered an unfair hiring process. To me, and to anyone else who is not on a committee, this means that confidentiality is imposed so as to hide illegal procedures, which is certainly not a good reason. But for most committee members I have talked with, this is not what it means. It is just to protect the hiring process from an endless series of lawsuits, which admittedly would be a problem. They did not seem to see that a successful lawsuit means that the challenged procedure was illegal, and that perhaps the problem is the illegality of the procedure rather than the fact that there were legal proceedings.

I teased one committee member with a particular case that I am personally concerned about: assistant professors are almost entirely denied access to senior research positions in the CNRS. Those positions (“directeur de recherche”, or DR) are officially open to all, but statistics show that more than 95% of successful applicants actually held a junior position in the CNRS, a proportion very different from that among all applicants.

This member, like others I have talked with, recognized that it was indeed a problem for me and many other people. However, also like other members I have talked with, he found that I was being really extreme in complaining that it was illegal to use my current employment status as a criterion. Yet the legal texts on this subject are very simple and explicit; it is impossible to interpret them differently. In public hiring, there are so-called “external” positions and “internal” positions. Internal ones are reserved for promotion (there are a few in the CNRS); external ones are open to anyone. Senior research positions are external, and advertised as such, with an explicit mention that no distinction will be made between internal and external candidates. I pushed the member a little further with these arguments, but he could not admit that he was participating in an illegal procedure. This is where he came up with an amazing example of cognitive dissonance: the concept of “real illegality”. Alright, so the argument goes, perhaps using such a criterion contradicts the law, but it is not really illegal. So here is what I understood of the concept of not being really illegal:

- something not really illegal contradicts the law;

- it is a problematic issue that “in an ideal world” (another interesting concept) should be avoided;

- it should remain confidential, for otherwise a lawsuit would be successful;

- it is actually not considered illegal if it remains confidential;

- taking the not really illegal decision is not morally bad, but rather something that has to be done.

Thanks to the psychological concept of “real illegality”, committees seem to accommodate illegal substituted criteria quite well. In defense of these committees, I should add that, if what I heard is correct (likely, given the number of independent sources), this belief is actively supported by the leadership of these institutions. However, committees are independent. They have no legal obligation to follow the orders or recommendations of the leadership. That they nevertheless comply is a case of obedience to authority, another well-known psychological bias, illustrated by the Milgram experiment.

The psychology of academic hiring committees (IV) Attribute substitution: how much does the candidate/lab/field need the position?

Even though experience is only a substitute for the attribute that is supposed to be evaluated in hiring decisions, there is at least some correlation between the two attributes, at least in some cases (like the number of publications). More troubling are the following criteria, which have little to do with the target attribute:

- The number of times the candidate has previously applied.

- Whether another candidate is applying for the same lab (which would be bad).

- Whether the lab had a successful candidate the previous year.

- Whether the sub-discipline has not had a successful candidate for some time (which would be good).

- In more senior positions, whether the candidate already has a (junior) position in France (bad).

These criteria, which are used in actual rankings, answer an entirely different question: how much does the candidate or discipline or lab need the position? This has nothing to do with any of the official criteria.

The “queue”

In some disciplines, committee members will easily tell you that there is in effect a “queue”, because there are so many good candidates. You should not expect to get the position the first time, even if you are very good (although it certainly helps!).

A first remark, in defense of the committees: the second time you apply, you should certainly have a better chance of getting the position, since a number of people who were ranked above you got a position and therefore will no longer be competing. This is obvious, but it is not exactly what is meant by “there is a queue”. As I have understood it, what is meant is the following. As I wrote before, selecting young scientists is a very difficult task, especially for a committee with heterogeneous expertise. Committee members are therefore happy to use an easier substituted attribute. When two good applications are discussed, and one candidate applies for the first time while the other applies for the third time, it is tempting for the committee to reason as follows: let us give the position to the older applicant, and the younger applicant will get it next time; implied: this way we actually pick both applicants, who are both good. This is of course a fallacy, since no position is created in the process. Whichever applicant is chosen, another applicant will not get the position. So the reasoning is illogical. The applicant that should be selected is the best one, not the one who has applied the greatest number of times. Since the number of positions granted each year is not changed by the decision process, the only result of such fallacious reasoning is to artificially create a “queue” and increase the average hiring age.
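The conservation argument above can be made concrete with a toy model. The candidates, scores, and application counts below are entirely hypothetical; the point is only that whichever ranking policy the committee uses, the same number of people are hired each year, so favouring repeat applicants changes who waits, not how many wait.

```python
# Toy model of the "queue" fallacy (illustrative, hypothetical numbers).
# Each year, `positions` posts are awarded from the same candidate pool.
# Policy A ranks by scientific quality; policy B favours repeat applicants.
# Neither policy creates or destroys a position.

candidates = [
    # (name, quality_score, times_previously_applied)
    ("A", 9.0, 0),
    ("B", 8.5, 2),
    ("C", 8.0, 1),
    ("D", 7.0, 3),
]
positions = 2

# Policy A: hire the best candidates.
by_quality = sorted(candidates, key=lambda c: -c[1])[:positions]
# Policy B: hire those who have waited in the "queue" the longest.
by_queue = sorted(candidates, key=lambda c: -c[2])[:positions]

print([c[0] for c in by_quality])  # ['A', 'B']
print([c[0] for c in by_queue])    # ['D', 'B']
# Same head count either way: the policy only swaps who is left out.
print(len(by_quality) == len(by_queue) == positions)  # True
```

Under policy B the strongest candidate (A) is deferred in favour of the longest-waiting one (D), which is exactly how an artificial queue, and a higher average hiring age, emerges without a single extra position being created.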


Criteria based on discipline or lab quotas are not necessarily irrational, although they have nothing to do with the individual's scientific quality. But there is a similarly irrational criterion, this time generally for more advanced positions, 1st-class junior scientists (CR1 in administrative slang). In France, there are two types of junior positions: full-time researcher (e.g., in the CNRS; 2nd class or 1st class) and assistant professor in a university, which involves both research and teaching. In principle, you could apply to a full-time research position if you are a postdoc, or if you are an assistant professor and want to do more research. Officially, no distinction is made. However, in practice, any committee member will tell you that it is next to impossible for an assistant professor to get such a position. Why is that? Again, this is a case of substitution: failing to clearly distinguish between good scientists on the basis of their expected scientific careers, committees answer a different question: who needs the position most? So the reasoning (which I have heard explicitly many times) is as follows: 1) if the candidate already has a permanent position, then he/she needs the advertised position less than a candidate who is currently a postdoc; 2) if the candidate has a permanent position abroad, then he should be favoured over the candidate who has a permanent position in France, because it increases the number of faculty positions in France by one.

Again, this is a fallacy, because no committee decision whatsoever can create or destroy a position, or have any effect on the public budget. The only impact is on who gets the position. If an assistant professor in France gets the research position, then the budget corresponding to the former position is freed and another assistant professor is hired instead. Whoever is selected by the committee, the selection does not increase or decrease the amount of public budget allocated to permanent academic hiring, which is an independent political decision.

The consequence of basing decisions on substituted attributes, or simply taking these irrelevant attributes into account, is logically obvious: it reduces the weight given to the target attribute in the final ranking, i.e., worse candidates are selected.

The psychology of academic hiring committees (III) Attribute substitution: what is the experience of the candidate?

First, I will comment on the criteria that correspond to the question “what is the experience of the candidate?”, rather than the original question that the committee is supposed to answer: “how likely is the candidate to have a brilliant scientific career over the next 30-40 years?”. There is no doubt that the candidate's experience is one factor that should be taken into account in making the hiring decision. Attribute substitution occurs when that factor is not simply taken into account, but mistaken for the target attribute that constitutes the object of the judgment.

Here are a few criteria that are unofficially used to assess the experience of the candidate:

- The number of publications; in particular, there is often an unofficial threshold for that number.

- The age of the candidate (older is better, but not too old).

- In teaching positions (not CNRS), whether the candidate has already taught quite a bit.

- Whether the candidate did a postdoc abroad (rather than in the same country, which would be bad).

The key here is to note that these criteria are about the substituted attribute, the experience of the candidate at the time of the application. They are not directly about the target attribute, the future career of the scientist. But they are assessed as if they were actually directly about that target attribute.

The number of publications

Consider the number of publications. Quite obviously, a candidate who has not published at all (at least not as a main author) should not be hired: even if the candidate is brilliant, there is simply no information from which to know it. But using the number of publications as a proxy for “scientific excellence” (one of the official criteria) is another story. “Scientific excellence” is about productivity. All other things being equal (e.g., the quality of the publications), more is better: it is better to hire a scientist who will publish 100 high-quality papers over his/her entire career than 10 papers of the same quality. The problem is that “number of publications at the time of application” is a rather poor substitute for future productivity. Imagine you have all the information you need, that is, the number of future publications of the candidate if he/she is hired. For a given productivity, the number of publications at the time of application obviously correlates with the time for which the candidate has been publishing. Every year, the substituted attribute (number of publications) increases with no change in the target attribute (productivity). This leads to paradoxical decisions: a candidate who has published 10 papers in 15 years will be ranked higher than a candidate who has published 8 papers in 4 years (again, all other things being equal; I am only considering the number of publications). The substituted attribute has no direct relationship with the target attribute. Yet it seems to be used as such (at least in biology sections).
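The arithmetic behind this paradox can be spelled out with the two hypothetical candidates from the text: ranking by the raw publication count and ranking by publications per year (a crude stand-in for the productivity the committee actually cares about) give opposite answers.

```python
# Illustrative sketch, using the hypothetical candidates from the text:
# the raw count (substituted attribute) vs. the per-year rate
# (closer to the target attribute, productivity).

candidates = {
    "veteran":  {"papers": 10, "years": 15},  # 10 papers in 15 years
    "newcomer": {"papers": 8,  "years": 4},   # 8 papers in 4 years
}

def rate(c):
    """Publications per year of publishing activity."""
    return c["papers"] / c["years"]

# Ranking by raw count favours the veteran...
by_count = max(candidates, key=lambda k: candidates[k]["papers"])
# ...while the per-year rate favours the newcomer.
by_rate = max(candidates, key=lambda k: rate(candidates[k]))

print(by_count)  # veteran
print(by_rate)   # newcomer (2.0 vs ~0.67 papers/year)
```

The count grows every year even if productivity stays flat, which is exactly why it is a poor proxy for the target attribute.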

The age of the candidate

Over the last few decades, the average hiring age in CNRS junior positions has increased very substantially. There was a time when scientists were hired right after they obtained their PhD. This would seem almost crazy now, yet a number of the older committee members got their position at that time, and would now argue (cognitive dissonance!, I will come back to that) that it would not be reasonable to hire scientists so young. Why did hiring age increase so much? A major fact to consider is that the number of applications has considerably increased without a corresponding increase in the number of positions, but let us consider a few hypotheses.

Hypothesis 1: age is a reasonable criterion, and committee members simply failed to recognize this before. Given that the way committees are composed has not changed and that, as I noted above, committees have no feedback to learn from errors, this seems unlikely.

Hypothesis 2: young scientists were much better in the old times, and so informed decisions could be taken at an earlier age. This would require that young scientists published more in the past (otherwise there could be no informed decision), but the trend is in fact the opposite.

Hypothesis 3: the way science is done has completely changed, and so people now need much more experience. I have heard this hypothesis, in particular to explain why young biologists must now do several postdocs before finding a position. Note that the argument is about experience, a substituted attribute, and not about the future scientific career. I observe that: 1) one can learn things before or after being permanently hired; 2) while learning the things that are now required to be a good permanent scientist, the candidate is a non-permanent scientist, i.e., a scientist on a different type of contract; so it would seem that the requirement is for obtaining a permanent contract, not for doing science. In my field, which is interdisciplinary, I know of many renowned senior scientists who made a career in one field (e.g., physics) before changing fields (e.g., to biology/neuroscience) while holding permanent positions. So, empirically, having a permanent position does not seem to prevent one from learning new things and even changing fields. Take Gerald Edelman: a Nobel prize for his immunology work, then a change of field to neuroscience. He was not a postdoc when he got the Nobel prize. Therefore, this hypothesis does not seem to have a clear rational or empirical basis.

Hypothesis 4: finally, the explanation I have heard most often from committee members (that is, people who have a role in this age increase) is the following: the number of candidates has increased, and so hiring age “mechanically” increases. More recently, there was another increase in mean hiring age after the legal limit on application age was raised, which was explained to me as follows: now there is competition with older candidates, who necessarily have a better application, and since we take the best application, we have to take the oldest candidates. “Better application” here means more publications. So, quite explicitly, it appears that the committee substitutes experience for excellence. I note again that the older scientist once was a younger scientist with fewer publications (= the substituted attribute), and yet both have the same scientific career (= the target attribute).

Teaching experience

Another typical case of substituting experience occurs in hiring assistant professors in universities. Having sat on a few such committees, I can tell that one strong criterion (which is a threshold, i.e., a pass/fail criterion) is the number of hours the candidate has taught, which has to be sufficient. In general, lectures are given only by permanent faculty (assistant professors and professors), and so candidates have taught tutorials, which is considered fine. At first sight, it seems to make sense to consider teaching experience for a teaching position, and this is why this attribute is used as a substitute, without the substitution even necessarily being noticed. However, the target attribute is not how much one has taught, but how good the candidate will be as a teacher. The number of hours that someone has taught is essentially irrelevant, since it gives no indication as to whether the candidate is a good or a bad teacher. In general, students teach during their PhD either because it is mandatory or because they get a substantial amount of money for it. Students who do not teach during their PhD simply have a contract that does not require them to teach, and want to spend more time on their research. That does not necessarily make them bad teachers.

Now, one could (and would) oppose the following argument: candidates who have more teaching experience will be better teachers. First of all, since we are talking about a permanent position, the few hours one has taught at the time of application will have little impact on the timescale of a 30-40 year career. But more importantly, this argument is illogical. Candidates who have taught before being hired, and who are supposedly better teachers now, were worse teachers when they started (which apparently was not an issue then). The decision to hire someone permanently before or after they start teaching has no impact whatsoever on their teaching experience over the course of their teaching career. And teaching experience at the time of application, with no feedback on the quality of that teaching, should not be relevant to the hiring decision. This is a clear case of attribute substitution.

Unfortunately, this substituted criterion is in broad use and prevents a number of good young scientists from applying. Indeed, in France, to be allowed to apply to a faculty position in a university, one must first pass a national screening stage (“qualification”), in which a national committee decides, with its own criteria, whether the candidate has the required credentials to apply to faculty positions. In general, committees require candidates to have taught a certain number of hours. The committees have no information on the quality of the teaching, only on what was taught and for how many hours. So let me be clear: an explicit requirement for being allowed to teach on a permanent contract is to have taught on a fixed-term contract (e.g., during a PhD, or as an ATER, a temporary assistant professor), for which there is no such requirement.

It gets better: there is one committee for each discipline, but once this screening stage is passed, the candidate can apply to any faculty position in any other discipline. I have a personal example. I did a PhD in computational neuroscience, an interdisciplinary field, during which I gave math tutorials. After I got my degree, I applied to that screening stage in different committees. At the time, I already had a few published papers and had taught about 200 hours. But the computer science committee decided not to grant me the authorization, because (I assume) I had not taught computer science, or perhaps because they considered my research not relevant to their discipline, even though it was quite obvious that I had the required credentials. The same thing happened again this year to a former PhD student of mine: he had taught the right number of hours, but in both maths and computer science instead of only computer science, so they rejected his application. Fortunately for me, the neuroscience committee did grant me the authorization. The next year, I got a position in the computer science department of the Ecole Normale Supérieure, thanks to the authorization from the neuroscience committee. It is clear here that the question the computer science committee answered was not, as it should have been, “should this guy be authorized to apply for a faculty position in some discipline?” but a substituted question, “did that guy teach 200 hours of computer science?”. This is particularly problematic for interdisciplinary science.

Postdoc experience

In many committees, in particular in biology but also in many other fields, it is considered that one must have done a postdoc abroad to be taken seriously. This acts as a first screen: no postdoc abroad = fail. In some committees, the candidate must also have published during that postdoc, so as to show that the PhD publications were not just due to the supervisor, who is generally a co-author; although the postdoc is generally not an independent position either.

I also have a personal example. When I applied after my PhD, I failed. I asked one committee member for feedback. He told me: your application was considered excellent, so you should do a postdoc abroad and then next year you will get the position. For personal reasons, I did not want to move far, and I could not understand the logic behind that requirement. I already had two publications as a single author; in fact, my supervisor did not sign any of my publications, so the argument I previously mentioned did not apply. But in any case, as it was phrased, what the committee wanted was not that I publish in another lab; they just wanted me to check the box “postdoc abroad” (there actually is such a box in the CNRS application forms). Also, the requirement was not simply a postdoc in another lab, but one in a different country. I did not understand that geographic requirement either: I had spent an extended period in England during my studies, which the committee knew, so the linguistic argument did not apply. It turned out that what I wanted to do was best done in another lab in France, but that did not help me check the required box. So apparently, the single fact that a postdoc was done abroad, without knowing how the postdoc actually went, was a decisive criterion in the ranking, independently of any other consideration. Imagine I had actually planned to do a postdoc abroad, and had already made arrangements for it at the time of application. Then, given that information, the committee would have known with certainty that the next year I would have done a postdoc abroad, and therefore that I would definitely pass that criterion. So actually doing the postdoc abroad was apparently irrelevant to their decision. There was no rational basis to that requirement.

The committee member had given me that piece of information without blushing; he did not seem at all embarrassed that the committee insisted on such an irrational criterion. It surprised me at first that people involved in irrational decision making seem at the same time very confident about the correctness of their decisions, even though the decision to be taken is clearly difficult and there is considerable uncertainty in the choice. But in fact this is a well-established psychological phenomenon, explained for example in Daniel Kahneman's book. The degree of confidence one has in a decision or judgment is essentially determined not by the rationality of that decision or by objective facts, but by the consistency of the causal story one constructs to explain the decision. So one would say: it is clear that Peter should be ranked before Paul, since he has two more publications; we could not do otherwise. But this story neglects the fact that the number of publications is a substituted attribute, not the target attribute of the judgment. The same goes for other substituted attributes. People involved in the decision can hear objective elements that contradict it, but if those elements do not fit in the story, they are essentially ignored. This is related to cognitive dissonance theory, certainly one of the most brilliant theories in psychology, developed by Festinger in the 1950s. I will talk about it later.

The psychology of academic hiring committees (II): Attribute substitution

Substitution occurs in difficult judgments for which there is no direct access to the target attribute to be evaluated. This is typically the case when permanently hiring a young scientist: one wants to know whether the scientist will have a successful career, but the outcome is uncertain and especially difficult to assess if you do not know the scientist's field. In such situations, the target attribute is replaced by some other, more readily available attribute. An obvious one would be the number of publications of the candidate.

To some extent, committees know that the decision is difficult and that they have to use indirect criteria. So they agree in advance on a list of criteria, which is made public. An example from the neurophysiology section: scientific excellence, quality and originality of the scientific production, mobility (i.e., whether the candidate did a postdoc abroad), a good scientific project, a good oral presentation, and a good interview.

Establishing a list of indirect criteria is not irrational in such a difficult situation, since the target attribute (the future career) is not directly accessible. But notice that, with the exception of one criterion (mobility), all the criteria in this list are still fairly vague and difficult to evaluate. What is “scientific excellence”, and how can someone outside the field evaluate it? How can someone who is not in the candidate's field know if the research is original? This is where attribute substitution occurs. From discussions with many committee members from different disciplines, here are some of the criteria that turn out to actually be used in decisions, categorized by the question that is substituted for the actual one:

Substituted question: what is the experience of the candidate?

- The number of publications; in particular, there is often an unofficial threshold for that number.

- The age of the candidate (older is better, but not too old).

- Whether the candidate did a postdoc abroad (rather than in the same country, which would be bad).

- In teaching positions (not CNRS), whether the candidate has already taught quite a bit.

Substituted question: how much does the candidate/lab/field need the position?

- The number of times the candidate has previously applied.

- Whether another candidate is applying for the same lab (which would be bad).

- Whether the lab had a successful candidate the previous year.

- Whether the sub-discipline has not had a successful candidate for some time (which would be good).

- For more senior positions (a separate competition), whether the candidate already has a (junior) position in France (bad).

And a few other criteria I will talk about later, because they are not so much about attribute substitution:

- Whether one committee member knows the candidate (good).

- Whether one committee member knows a person who recommended the candidate (good).

- Whether the candidate “made a good impression” during the oral presentation.

- Publications in sexy journals such as Nature and Science.

There are also other criteria that have more to do with politics than with psychology, such as committee members pushing their former students, or candidates applying to their own labs. I will not comment on them. I will comment on a few of the unofficial criteria listed above. To fully understand, bear in mind that the application of a candidate is generally read in full detail by a single member of the committee, the referee.

The psychology of academic hiring committees (I)

When a university or research institution wants to fill a permanent academic position (e.g., professor, or full-time research scientist at the CNRS in France), a committee is appointed (or elected) to decide whom to hire. From my experience as a candidate and as a member of such committees, the way committees work is very interesting from the point of view of psychology. It seems like the perfect illustration of many known cognitive biases in decision making (see the excellent book by Nobel laureate psychologist Daniel Kahneman, “Thinking, Fast and Slow”). I believe the same applies to grant selection committees.

Scientists tend to see themselves as rational beings. But it is now well established that 1) humans (including “experts”) are not rational at all in many situations, and 2) we tend to think of ourselves as more rational than we actually are (i.e., we are confident in our biased judgments). Selecting the right candidate for a position is a difficult decision-making problem, and most scientists on hiring committees do not know much about either decision making or psychology. It is therefore likely that they are subject to cognitive biases. In addition, these are collective decisions, which come with additional biases described in social psychology.

I will focus on one particular situation, the selection of junior scientists by national research organizations in France, the largest one being the CNRS. I will also briefly mention a couple of other cases. After reading this text, you will probably feel that I have exposed some serious problems in the hiring process. However, the aim of this text is not to point fingers at individuals. On the contrary, my aim is to provide a psychological perspective on the process, that is, to show that these problems reflect general, well-documented human cognitive biases. Of course, I also believe there are ways to reduce these problems, but this means changing the processes, not the individuals (who will still be humans). This text focuses on the psychological side of the problem (explaining what happens), not on its political side (changing the processes).

1. The situation

1.1. The decision to be taken

In France, each year, the CNRS offers a number of permanent positions to junior scientists (“junior” generally meaning in their 30s) in all academic disciplines. The call is national, not tied to a particular university (universities have their own system). There is a single call for each entire discipline (say, computer science), and there are many, many candidates.

Given that the positions are permanent (i.e., about 30-40 years of employment), the goal is to select the most promising young scientists, those who will have the most brilliant careers. It is therefore a judgment on the expected future scientific output of the candidates, which I call the “target attribute”. Note that this is quite different from the decision to hire a postdoc for a specific 2-3 year project, where the target attribute is the successful completion of the project.

The judgment is based on the information available at the time of the decision: the candidate's past scientific career, education, and any other element available at that time.

1.2. The committee

For each discipline there is a committee of about 20 scientists (2/3 elected, 1/3 nominated). A large proportion of these scientists are junior scientists themselves, so they have neither long scientific experience nor much experience in hiring people (e.g., postdocs). There is no external review: all applications are reviewed internally by some of these 20 scientists. Each candidate has a referee on the committee who assesses the application in detail.

1.3. Information and (lack of) feedback

The committee has to select a small number of candidates from a very large and diverse set of applications. For most of these candidates, there is no expert on the committee. The application consists of a CV (in particular, a list of publications), a research project, and a report on previous work. There may also be reference letters, although there is no consistent rule across committees. The candidate also gives a short oral presentation and is interviewed briefly by a subset of the committee (for practical reasons).

Part of the committee has no experience in hiring. How much can the committee learn from experience? First of all, the composition of committees changes every 4 years, even though some members can remain. Learning from experience requires feedback on decisions. A decision is: select a candidate, or reject a candidate. The target attribute to be judged is the future scientific output of the candidate over his or her 30-40 year contract.

First option: the candidate is selected and goes to a lab. That lab is in general not the lab of one of the committee members. As far as I know, there is no follow-up on the decisions, e.g., to see how well the selected scientist does. If there were, it would in any case be very limited in time, since the lifetime of a committee is 1/10th the duration of the scientist's career.

Second option: the candidate is rejected. Rejected candidates may 1) quit science, 2) find a position elsewhere, or 3) apply again the next year. In the first two cases, the outcome is generally not known to the committee. But in any case, it is not possible to know how well the candidate would have done if he or she had been selected, so there is no feedback on the decision in these two cases. In the third case, there is still little useful feedback on the previous year's decision not to hire the person, given that the target attribute is the lifelong career of an individual.

Finally, the discussions and reports of the committee are not public; in fact, they are strictly confidential. Therefore there can be no external feedback on the committee, and committees of different disciplines cannot exchange information.

1.4. Summary

In summary, the situation is that of a group of people who must make a very difficult decision, whose good or bad outcome can only be assessed in the long run, with limited information, and with no opportunity to learn from experience. Decisions are therefore based not on experience but on the committee's self-confidence in its own judgments. Additionally, information is unevenly distributed across the committee, because one member (the referee) examines the application in detail, and only a subset of the committee is present for the oral presentation and interview.

The setting is perfect for all sorts of interesting cognitive phenomena. In the next posts, I will discuss in particular attribute substitution, cognitive dissonance, the halo effect, the illusion of validity, and obedience to authority.