A criticism of homo economicus (Or: people are neither rational nor irrational)

The mainstream theory of economics, neoclassical economics, is based on a very peculiar model of human behavior and social interactions. The core assumption is that people’s behavior consists in maximizing “utility”, which is a measure of personal preferences. That is, each situation is assigned some utility and people choose the situation with maximal utility, by making the best possible use of available information. This is called “rational behavior” (this is somewhat related to the view in psychology that perceptual behavior is optimal, which I have criticized on similar grounds).
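This optimization scheme is easy to state algorithmically. A minimal sketch (the options and utility values are of course made up for illustration):

```python
# Toy sketch of the neoclassical "rational agent": a fixed set of options,
# a fixed utility function, and choice by maximization. The options and
# utility values below are invented for illustration only.

def rational_choice(options, utility):
    """Choose the option with maximal utility."""
    return max(options, key=utility)

# Preferences are given once and for all -- the assumption discussed below.
utilities = {"work": 3.0, "leisure": 5.0, "save": 2.0}

choice = rational_choice(utilities, utilities.get)
print(choice)  # the agent always picks the highest-utility option
```

Note that everything interesting, namely where the options and the utility function come from, is outside the model.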

This model has been criticized repeatedly on empirical grounds, in particular on the grounds that humans are actually not that rational, as psychology has documented through numerous cognitive biases. This line of criticism forms an entire field, behavioral economics. Epistemologically, economics is quite a peculiar field, because the lack of empirical evidence for its core models, or even direct empirical contradiction, does not seem to be a problem at all. One reason is that the ambition of economic theory is not just empirical but also normative, i.e., it also has a political dimension. In other words, if reality does not fit the model, then reality should be changed so as to fit it (hence the prescription of free markets). It is of course questionable whether theories can be called scientific if they constitutively offer no possibility of empirical grounding.

Thus, although the assumptions of neoclassical economics have been pretty much demolished on empirical grounds by psychology (the actual behavior of people) and anthropology (actual social interactions; see for example David Graeber’s “Debt: The First 5,000 Years”), it remains the dominant mode of economic thinking because it is intellectually appealing. Of course, political interference certainly plays a role in this state of affairs, but here I want to focus on the intellectual aspects.

When the field of behavioral economics points out that humans actually do not behave “rationally”, those deviations are depicted as flaws or as “bounded rationality”. If you are not rational, then you are irrational. This is really not a radical criticism. We are bound to conclude that the rational agent is an approximation of real behavior, and everybody knows that a model cannot be exact. Perhaps the model could be amended and made more sophisticated, or perhaps we should educate people so that they become more rational (this seems to be Daniel Kahneman’s view). But fundamentally, on this account, “rational behavior” remains a sensible, if imperfect, model of human behavior and social interactions.

What these criticisms miss is the fact that both “rational behavior” and “irrational behavior” have in common several implicit assumptions, which are not only empirically questionable but also absurd foundations for an economic theory – and which therefore cannot ground a normative approach any more than an empirical approach.

1) The first problem is with the idea of “rationality” itself. Rationality belongs to the domain of logic, and can therefore only be exerted on a particular model. Thus, to describe human behavior as “rational”, we must first assume that there exists a fixed model of the world and of personal preferences, and that this model is not itself a subject of inquiry. In particular, personal preferences are given and cannot be changed. If, however, the advertisement business is not totally foolish, then this is wrong. Not only do personal preferences change, but one way of satisfying your own desires is to manipulate the desires of others, and this appears to be a large part of the activity of modern multinational companies. The fact that personal preferences are actually not fixed has two problematic consequences: first, you cannot frame behavior as optimization if the optimization criterion is also a free parameter; second, it becomes totally unclear how satisfying people’s preferences is supposed to be a good thing, if that means making them want what you sell rather than selling them what they need; what preexisting economic problem is being solved in this way? Personal preferences can also be changed by the individual herself: for example, she can decide, after reflection, that buying expensive branded clothes is futile (see e.g. cognitive dissonance theory about how people change their preferences and beliefs). But again, if that possibility is on the table, then how can we even define “rational behavior”? Is it to buy the expensive clothes, or is it to change the “utility function”? Assuming preferences are fixed properties of people is the move that allows economic theory to avoid philosophical and in particular ethical questions (what is “good”? see e.g. Stoicism and Buddhism), as well as the possibility that people influence each other for various reasons (manipulation, but also conversation and education). Unfortunately, those questions do not disappear just by ignoring them.

2) The assumption of “rationality” also presumes that people have a fixed model of the world over which that rationality is exerted. They do not learn, for example, and they do not need to be taught either. They just happen to know everything useful there is to know about the world. Building an adequate model of the world, of the consequences of one’s actions, is considered outside the realm of economic theory. But from a normative perspective, this is really paradoxical. One aim of economic theory is to devise efficient organizations of work, in particular ones which ensure the distribution of accurate information to the relevant people. But by postulating that people are “rational agents”, economic theory considers as already solved the very problem it is supposed to address in the first place. In other words, the problem of designing rational organizations of production is dismissed by postulating that people are rational. No wonder this view leads to the bureaucratization of the economy (see David Graeber’s Bullshit Jobs and Béatrice Hibou’s The Bureaucratization of the World in the Neoliberal Era).

3) Finally, implicit in the idea of “rational behavior” is a caricatural reductionism: the presumption that the optimization of individual preferences is realized at the individual level. This, in fact, amounts to neglecting the possibility that there are social interactions, which is quite problematic for a social science. A well-known example in game theory is the prisoner’s dilemma: two criminals are arrested; if they both remain silent, they each do one year in prison; if one betrays the other, he is set free and the other goes to jail for three years; if both betray each other, they both go to jail for two years. Whatever the other decides to do, it is always in your best interest to betray him: this would be the “rational behavior”. The paradox is that two “rational” criminals would end up in jail for two years each, while two “irrational” criminals who do not betray each other would do just one year. Thus, “rationality” is not necessarily the most advantageous way to organize social interactions. Or to rephrase: individual rationality is not the same as collective rationality. This is of course a well-known problem in economics, in particular in its “tragedy of the commons” version. But again, this tends to be depicted as an amendment to be made to the core assumption of rationality (cf. the concept of “externalities”), when it actually demonstrates the fallacy of the concept of “individual rationality”. Accordingly, neoclassical economists propose to solve the problem with incentives (e.g. a carbon tax). But first of all, this is not the same as building collective infrastructures. And second, what this means is that anything that cannot be modeled as independent individual actions is not addressed by the economic theory, but instead must be tailored into the form of an “incentive structure”.
Each collective problem now requires its own complex “incentive structure”, designed in such a way that the “free” play of individual rationalities ensures the collective good, which is to say that each collective problem must be solved in an ad hoc way, outside of the conceptual framework of the theory. In other words, with its focus on “rational behavior”, neoclassical economics sets out to solve exclusively problems that do not involve social interactions. It is not clear, then, what the theory is meant to solve in the first place (how omniscient independent agents manage to organize themselves?), or to demonstrate (selfishness entails collective good, except when it doesn’t?).
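The dilemma can be checked mechanically. A minimal sketch, using the sentences given above (years in prison, so lower is better):

```python
# Prisoner's dilemma payoffs from the text:
# (my action, other's action) -> my sentence in years of prison.
SENTENCE = {
    ("silent", "silent"): 1,
    ("silent", "betray"): 3,
    ("betray", "silent"): 0,
    ("betray", "betray"): 2,
}

def best_response(other_action):
    """The individually 'rational' choice: minimize my own sentence,
    whatever the other does."""
    return min(("silent", "betray"), key=lambda a: SENTENCE[(a, other_action)])

# Betrayal is a dominant strategy: it is the best response to either action...
assert best_response("silent") == "betray"
assert best_response("betray") == "betray"

# ...yet two 'rational' agents serve 2 years each, while two 'irrational'
# ones serve only 1 year each.
print(SENTENCE[("betray", "betray")], SENTENCE[("silent", "silent")])
```

This makes the gap explicit: the strategy that dominates for each player does not minimize the total sentence.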

This issue is actually an important theme of evolutionary theory. Namely, how can social species exist at all, if individualist behavior is rewarded by increased survival and reproduction rates? The answer reached by evolutionary theory, as well as by anthropology and the ethology of social animals including primates (see e.g. Frans de Waal’s books), is that social animals display a variety of non-individualist behaviors based on altruism, reciprocity and authority, which ensure successful social interactions and are therefore beneficial for the species. In other words, studies in all those non-economic fields concur in demonstrating that efficient collective organizations are not based on individual rationality. This conclusion is not immensely surprising, yet it is essentially the opposite of mainstream economic theory.

In summary, the problem with the “rational behavior” model of human behavior that underlies neoclassical economics is not that people are “irrational”. The problem is that framing human behavior in terms of individual rationality already assumes from the outset that 1) people have an accurate model of the world, so no social organization is required to ensure that people’s actions have their intended consequences; this is already solved by people’s “rationality”; 2) people have preexisting fixed “preferences”, so we do not need to ask what a “good society” might mean; this is already taken care of by the “preferences”; 3) there is no collective rationality beyond individual rationality, so there is in fact no society at all, just a group of independent people. Thus, the epistemological implications of the “rational behavior” model are in fact tremendous: essentially, the model amounts to putting aside all the problems that economic theory is supposed to solve in the first place. In other words, the “rational behavior” model of neoclassical economics is not just empirically wrong, it is also theoretically absurd.

p.s.: This is partially related to a recent discussion in perceptual psychology on the presumed optimality of human behavior. Rahnev & Denison (2018) review an extensive literature to show that in perceptual tasks, people are actually not optimal. These findings are referred to in the title as “suboptimality”, but in my view this is an unfortunate terminology. My objection to this terminology is that it implicitly accepts the framework of optimization, in which there already is a fixed model of the world for which we only need to tune the parameters. But this means ignoring what perception is largely about, namely modeling the world (object formation, scene analysis, etc).

How belief systems handle contradiction - (I) Empirical contradiction

In this essay, I will discuss the different ways in which a theory can be contradicted, and how theories react. The scope of this discussion is broader than science, so I will be discussing belief systems, of which scientific theories are a particular kind (although, according to Feyerabend, not that particular). Political theories, for example, are another kind of belief system. What is a belief system? Roughly speaking (it will get more precise in the discussion), it is a set of propositions about the world that have a universal character. In science, this could be the law of gravitation, for example. Those propositions have relations with each other, and thus they form a system. For example, some propositions might logically imply others. In a belief system, there are generally core concepts upon which other propositions build (examples: the atom; the rational agent of economic theory).

How do we evaluate belief systems? In philosophy of science, it is generally considered that scientific theories are evaluated empirically, by testing the empirical validity of propositions. That is, we evaluate the extent to which propositions are contradicted by facts. This has been the core target of much of modern philosophy of science, and thus I will start by recapitulating arguments about empirical contradiction, adding a few remarks. What have been less discussed are two other types of contradiction, social and theoretical. By social contradiction, I refer to the fact that at any given time, different people hold contradictory beliefs, even when they are aware of the same empirical body of observations. How is this possible, and how do such contradictions get resolved? By theoretical contradiction, I refer to the possibility that a system is in fact not logically coherent. It seems that in philosophy of knowledge, belief systems are generally seen as sets of logically consistent propositions, but I will argue that this view is not tenable, or rather that it is a normative view, and that belief systems actually are in some sense “archipelagos of knowledge”.

Empirical contradiction

Science is largely dominated by empiricism. One version of it is the logical empiricism of the Vienna Circle (or logical positivism), dating from the early 20th century. In a nutshell, it claims that scientific statements are of two types: elementary propositions whose truth can be verified empirically, in other words observations, and propositions that can be logically deduced from those elements. This leads to a bottom-up view of science, where experimental scientists establish facts, and theoreticians then build consistent theories from these facts. As far as I can see in my own field, this view is still held by a large portion of scientists today, even though it was pretty much demolished by philosophy of science in the course of the 20th century. To give an example, logical empiricism is the philosophical doctrine that underlies the logic of the Human Brain Project, whose core idea is to collect measurements of the brain and then build a model from those.

Karl Popper objected that, on logical grounds, propositions can in fact never be verified if they have a universal nature. For example, to verify the law of gravitation, you would have to make all the apples in the world fall, and you would still be unsure whether some future apple might not fall in the expected way. Universal propositions can only be contradicted by observations. This leads to falsificationism, the idea that scientific theories can only be falsified, never verified. On this view, at any given time there are different theories that are consistent with the current body of experimental observations, and science progresses by elimination, by coming up with critical tests of theories. For example, one of the motivations advanced for collecting “big data” in neuroscience, such as the connectome, is that theories are presumed to be insufficiently constrained by current data (Denk et al., 2012). This view is extremely popular in biology today, even though later work in philosophy of science has pretty much demolished it as well.

Paraphrasing Quine (see “Two Dogmas of Empiricism” and the Duhem-Quine thesis), we can object that a theory never gets tested directly; only models specific to a particular situation do. For example, if you wanted to test Newton’s laws, you could let an apple fall and measure its trajectory. But to do this, you would first need to come up with a model of the situation, where for example you would consider the apple as a point of a given mass subject only to the force of gravity. In this case, since the measured trajectory would deviate from the predicted one, you would conclude that Newton’s laws are false. But you would have concluded differently if you had added an auxiliary assumption, namely that air also exerts a friction force on the apple, for which you would have to come up with a particular model.

Kuhn and Lakatos have pointed out that, in fact, empirical contradictions are almost never resolved by abandoning a theory. The process is rather one of interpretation, that is, of coming up with ways of making the observation congruent with the theory. This could be seen as a rhetorical maneuver, or as a fruitful scientific process. In our example, if you think that Newton’s laws are valid, then you would actually deduce the laws of friction from the empirical contradiction. Laws of friction are in fact very complicated in general and still an active field of research, one which draws on various domains of physics; to make progress, one has to accept the underlying theories.

The key point is to recognize that interpretation is not a flaw of the scientific process, but a logical necessity for confronting a theory with reality. A theory is framed in the discrete structure of language, that is, in formal terms. For it to apply to anything in the world, things in the world must be mapped to the formal structure of the theory. This, in essence, is the process of modeling. In contrast with a theory, a model does not have a universal character; it applies to a specific situation. In the example above, we would have to introduce the assumption that the apple is a rigid body, and that friction follows a particular law, for example that the friction force is proportional to speed. This implies that it is actually not possible to either verify or falsify a theory on the basis of an empirical observation alone.

This argument does not lead to a relativistic view (that all theories have the same epistemic value and choosing between them is a matter of taste); in this, I would temper some of the conclusions of Feyerabend. Interpretation is in fact not only a logical necessity, but also a key element in scientific progress. Lakatos proposed that it is not theories that compete, but research programs. Some research programs are “degenerating”: they evolve by adding a motley collection of ad hoc hypotheses to account for each new observation. Others are “progressive”: they evolve by extending their theoretical core, which then applies to new situations. In scientific practice, this is achieved by dissolving the specific character of interpretations into the universal character of theories. To come back to the apple example, initially we would interpret the empirical contradiction by coming up with an empirical model of friction, which essentially amounts to calling the empirical error “friction”. More precisely, it is an auxiliary hypothesis that makes the observation compatible with the theory. But this could then be turned into theoretical progress: from an analysis of a number of cases of falling objects, we could postulate that there is a friction force proportional to the speed of the apple and to its size (Stokes’ law). By doing so, we make parts of the previous interpretations instances of a new theoretical proposition. Note that this proposition only makes sense in the context of Newton’s laws, and thus we are indeed describing a system and not just a set of independent laws. The evaluative situation of the apple’s fall has now changed: we are evaluating a broader theoretical body (Newton’s laws + Stokes’ law) by using a narrower interpretative model (the apple is a rigid sphere).
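The move from “error term” to theoretical law can be illustrated numerically. A toy simulation (the mass and drag coefficient are made up for illustration): the frictionless model predicts a fall speed of g·t, while adding the auxiliary hypothesis of a drag force proportional to speed yields a lower speed that saturates at a terminal value.

```python
# Toy simulation of a falling apple (parameters invented for illustration).
# Frictionless Newtonian model vs. the same model plus a linear drag force:
# m*dv/dt = m*g - k*v.

G = 9.81   # gravitational acceleration (m/s^2)
M = 0.1    # apple mass (kg)
K = 0.2    # drag coefficient (kg/s)

def fall_speed(t, k=0.0, dt=1e-4):
    """Speed after t seconds of fall, by Euler integration."""
    v = 0.0
    for _ in range(int(t / dt)):
        v += (G - (k / M) * v) * dt
    return v

v_free = fall_speed(2.0)       # frictionless prediction: about g*t = 19.62
v_drag = fall_speed(2.0, k=K)  # approaches the terminal speed M*G/K = 4.905

print(round(v_free, 2), round(v_drag, 2))
```

The discrepancy between the two predictions is at first an “empirical error”; postulating the k·v term turns it into an instance of a law.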

Thus, interpretation is a key feature of belief systems, both logically necessary and progressive, and it appears to be neglected by the two flavors of empiricism that are broadly observed in science (verificationism and falsificationism). Yet without it, it is impossible to understand why people can disagree, other than by postulating that some of them must be either idiots or liars. I will address this issue in the next part, on social contradiction.

So far, I have argued that scientists face empirical contradiction by interpretation. Theoretical progress is then made by dissolving interpretations into new theory. What this really means is that the very notion of “empirical contradiction” is in fact quite misleading, because for the person doing the interpretative work, there is no real contradiction, only a more complex situation than expected. I will end this part by drawing on developments in psychology, specifically cognitive dissonance theory, and extending the discussion to non-scientific situations.

Resolving empirical contradiction by interpretation is not at all specific to science; it is a general feature of how people confront their beliefs with facts. In When Prophecy Fails, Leon Festinger and colleagues infiltrated a UFO cult that believed in an imminent apocalypse, and examined what happened when the predicted end of the world did not occur. Believers did not at all abandon their belief. Instead, the leader claimed that they had managed to postpone the end of the world thanks to their prayers, and she proposed a new date. This is a case of interpretation of the observation within the belief system. But importantly, as discussed above, interpretation is not a flaw of human nature; it is a necessary feature of belief systems. In this case, the believers appear to have arbitrarily made up an ad hoc justification, and we are tempted to dismiss it as a hallmark of irrational thinking. But when we observe an anomalous trajectory of Jupiter and, to make up for this anomaly, postulate that there must be an unobserved satellite orbiting the planet, we are making an interpretative move of the same nature, except that in this case it turns out to be correct. Our initial reaction in the former case is that any reasonable person should reject the theory when the prediction is directly contradicted, yet in the latter case we find the same attitude reasonable. In reality, the main difference between the two cases lies not in the way empirical contradictions are handled, but in the perceived plausibility of both the prediction and the interpretation. Specifically, we do not believe that prayers can have any visible effect at all, and thus the interpretative move appears irrational. But of course, the situation is quite different if the powers of prayer have a prominent role in our belief system. Thus, it is an error to describe the ad hoc interpretation as irrational. It is actually totally rational, in that it follows from logical reasoning (A: the end of the world should have occurred if we had not intervened; B: we have prayed; C: prayers have an impact; conclusion: the end of the world has probably been prevented by our prayers). Only, rationality is applied within a highly questionable theoretical framework. In the end, we realize that it is not really the non-occurrence of the end of the world that should lead us to abandon the belief system, but rather the empirical contradictions of the belief system in its globality, for example the fact that prayers actually do not work.

Thus, it should not be surprising that in their field study, the authors found that it takes a number of failed end-of-the-world predictions before the beliefs are finally abandoned. This is what Imre Lakatos called a “degenerating research program”: the theory survives repeated contradictions only by making up an incoherent series of ad hoc assumptions. It ends up being overthrown, but the process can take a very long time (this process of change between scientific theories is also well documented in Thomas Kuhn’s The Structure of Scientific Revolutions).

This phenomenon is particularly visible in political discourse. Any significant political event or observation will be given a different interpretation depending on the political preferences of the person. It never happens that a right-wing person turns left-wing upon noticing that wealthy countries have many homeless people. Rather, an interpretation is made that revolves around the notion of personal responsibility.

To give a contemporary example, recent demonstrations in France have been met with an extraordinarily repressive response, with hundreds of serious injuries caused by police, some to journalists and children, documented by hundreds of videos circulating on social media (see @davduf and a recent article in Le Monde). A recent case is a 47-year-old volunteer fireman, father of three, who was demonstrating with his wife until they were dispersed by tear gas. He was later found lying alone in an empty street with a head injury, and an amateur video shows that police shot him in the head from behind with a flash-ball gun and launched a grenade into the street. The man is currently in a coma. In all such cases, part of the comments on social media invariably suggest that the man or woman must have done something bad (this is indeed the official doctrine, as the Minister of the Interior recently claimed that he is not aware of any case of police violence). This is not in itself irrational: the commenter simply presumes that police do not hurt innocent citizens, deduces that the citizen involved is not innocent, and concludes that the critics are irrational conspiracy seekers.

There are indeed conspiracy theorists, for example those who claim that the moon landing was in fact filmed in Hollywood studios. The fact that it is a conspiracy theory is not in itself a reason to discredit it, since there have been conspiracies in history. The theory itself is also not irrational, in that it has logical coherence as well as empirical elements of support. For example, in the video the American flag appears to float in the wind, whereas there can be no wind on the moon, and in fact the flag should appear folded. Indeed: the flag was made rigid precisely for that reason. But most people do not know this fact, and thus the reason why the ordinary citizen believes that man actually did land on the moon is that she trusts the source and finds the information plausible. Which attitude is irrational?

These examples illustrate several points. First, isolated empirical contradictions almost never shake a belief system. In fact, this is precisely what we mean when we say that “extraordinary claims require extraordinary evidence”. This proposition, however, is quite misleading, since the notion of what is extraordinary is specific to a particular belief system. There is no objective definition of “extraordinary”. Therefore, rather than being a normative feature of scientific method, the saying simply expresses the inherent conservativeness of belief systems. As Festinger’s study shows, it can take a large number of empirical contradictions to impact a belief system. As explained previously, this is not necessarily a flaw, as those contradictions can (sometimes) be turned into theoretical progress within the belief system.

But there are other ways in which empirical contradictions are handled by belief systems, documented in psychology by cognitive dissonance theory. A major one is simply to avoid being confronted with those contradictions, for example by reading only newspapers with the same political views, or by discrediting the source of information without actually examining the information (e.g. social media propagate fake news, therefore the video cannot be trusted). Another is proselytism, that is, trying to convert other people to your belief system.

These mechanisms explain that at any given moment, mutually contradictory belief systems are held by people who live in the same world and are in contact with each other, and who can even discuss the same empirical observations. The conclusion of our discussion is that the main problematic issue in belief systems is not so much irrationality as dogmatism (but we will come back to irrationality in the third part on theoretical contradiction). Dogmatism arises from two different attitudes: blindness and deafness. Dogmatism is blind in that it actively refuses to see empirical evidence as potentially contradictory: it is not just that it is contradicted by empirical observations (this very notion is questionable), but rather that it dismisses empirical contradiction without seriously trying to accommodate for it in a progressive way (i.e., by strengthening its theoretical core). Dogmatism is deaf in that it refuses to acknowledge for the possibility that other rational belief systems can exist, and may have diverging interpretations of the empirical body of observations. Dogmatism denies the theoretical possibility of disagreement: the opponent is always either an idiot (irrational) or a liar (has ulterior motives). In the next part, I will turn to social contradiction: how different belief systems can co-exist and influence each other.