In this essay, I will discuss the different ways in which a theory can be contradicted, and how theories react. The scope of this discussion is broader than science, so I will be discussing belief systems, of which scientific theories are a particular kind (although, according to Feyerabend, not that particular). Political theories, for example, are another kind of belief system. What is a belief system? Roughly speaking (it will get more precise in the discussion), it is a set of propositions about the world that have a universal character. In science, this could be the law of gravitation, for example. These propositions have relations with each other, and thus they form a system. For example, some propositions might logically imply others. In a belief system, there are generally core concepts on which other propositions are built (examples: the atom; the rational agent of economic theory).
How do we evaluate belief systems? In philosophy of science, it is generally considered that scientific theories are evaluated empirically, by testing the empirical validity of propositions. That is, we evaluate the extent to which propositions are contradicted by facts. This has been the core target of much of modern philosophy of science, so I will start by recapitulating arguments about empirical contradiction, and add a few remarks. What has been less discussed are two other types of contradiction, social and theoretical. By social contradiction, I refer to the fact that at any given time, different people hold contradictory beliefs, even when they are aware of the same empirical body of observations. How is this possible, and how do such contradictions get resolved? By theoretical contradiction, I refer to the possibility that a system is in fact not logically coherent. It seems that in philosophy of knowledge, belief systems are generally seen as sets of logically consistent propositions, but I will argue that this view is not tenable, or rather that it is a normative view, and that belief systems actually are in some sense “archipelagos of knowledge”.
Empirical contradiction
Science is largely dominated by empiricism. One version of it is the logical empiricism of the Vienna Circle (also called logical positivism), dating from the early 20th century. In a nutshell, it claims that scientific statements are of two types: elementary propositions whose truth can be verified empirically, in other words observations, and propositions that can be logically deduced from those elements. This leads to a bottom-up view of science, where experimental scientists establish facts, and theoreticians then build consistent theories from these facts. As far as I can see in my own field, this view is still held by a large portion of scientists today, even though it was pretty much demolished by philosophy of science over the course of the 20th century. To give an example, logical empiricism is the philosophical doctrine underlying the logic of the Human Brain Project, whose core idea is to collect measurements of the brain and then build a model from them.
Karl Popper objected that, on logical grounds, propositions can in fact never be verified if they have a universal character. For example, to verify the law of gravitation, you would have to make all apples in the world fall, and you would still be unsure whether some future apple might not fall in the way you expect. Universal propositions can only be contradicted by observations. This leads to falsificationism, the idea that scientific theories can only be falsified, never verified. On this view, at any given time there are different theories consistent with the current body of experimental observations, and science progresses by elimination, by coming up with critical tests of theories. For example, one of the motivations advanced for collecting “big data” in neuroscience, such as the connectome, is that theories are presumed to be insufficiently constrained by current data (Denk et al., 2012). This view is extremely popular in biology today, even though later work in philosophy of science has pretty much demolished it as well.
Paraphrasing Quine (see Two Dogmas of Empiricism and the Duhem-Quine thesis), we can object that a theory never gets tested directly; only models specific to a particular situation do. For example, if you wanted to test Newton’s laws, you could let an apple fall and measure its trajectory. But to do this, you would first need to come up with a model of the situation, in which, for example, you would consider the apple as a point of a given mass subject only to the force of gravity. In this case, because the measured trajectory would deviate from the model’s prediction, you would conclude that Newton’s laws are false. But you would have concluded differently if you had added an auxiliary assumption, namely that air also exerts a friction force on the apple, for which you would have to come up with a particular model.
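The Duhem-Quine point can be made concrete with a sketch of the two competing models of the falling apple (the notation is standard, not from the original text; z denotes height):

```latex
% Model 1: the apple as a point mass subject to gravity alone.
m\,\ddot{z} = -mg
\quad\Rightarrow\quad
z(t) = z_0 - \tfrac{1}{2} g t^2

% Model 2: the same theory (Newton's second law), plus the auxiliary
% assumption of a friction force proportional to speed.
m\,\ddot{z} = -mg - k\,\dot{z}
```

A measured trajectory that deviates from the first prediction falsifies only the conjunction of Newton’s laws with the auxiliary assumptions; the second model can accommodate the same data while leaving the theory itself intact.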
Kuhn and Lakatos have pointed out that empirical contradictions are in fact almost never resolved by abandoning a theory. The process is rather one of interpretation, that is, of coming up with ways of making the observation congruent with the theory. This could be seen as a rhetorical maneuver, or as a fruitful scientific process. In the apple example, if you think that Newton’s laws are valid, then you would actually deduce the laws of friction from the empirical contradiction. Laws of friction are in fact very complicated in general and still an active field of research in physics, one that draws on various domains of physics, and to make progress one has to accept the underlying theories.
The key point is to recognize that interpretation is not a flaw of the scientific process, but a logical necessity for confronting a theory with reality. A theory is framed in the discrete structure of language, that is, in formal terms. For it to apply to anything in the world, things in the world must be mapped to the formal structure of the theory. This, in essence, is the process of modeling. In contrast with a theory, a model does not have a universal character; it applies to a specific situation. In the example above, we would have to introduce the assumptions that the apple is a rigid body and that friction follows a particular law, for example that the friction force is proportional to speed. This implies that it is actually not possible to either verify or falsify a theory on the basis of an empirical observation alone.
This argument does not lead to a relativistic view (that all theories have the same epistemic value and choosing between them is a question of taste); in this, I would temper some of the conclusions of Feyerabend. Interpretation is in fact not only a logical necessity, but also a key element in scientific progress. Lakatos proposed that it is not theories that compete, but research programs. Some research programs are “degenerate”: they evolve by adding a motley collection of ad hoc hypotheses to account for each new observation. Others are “progressive”: they evolve by extending their theoretical core, which then applies to new situations. In scientific practice, this is achieved by dissolving the specific character of interpretations into the universal character of theories. To come back to the apple example, initially we would interpret the empirical contradiction by coming up with an empirical model of friction, which essentially amounts to calling the empirical error “friction”. More precisely, it is an auxiliary hypothesis that makes the observation compatible with the theory. But this can then be turned into theoretical progress: from an analysis of a number of cases of falling objects, we could postulate that there is a friction force proportional to the speed of the apple and to its size (Stokes’ law). By doing so, we make parts of the previous interpretations instances of a new theoretical proposition. Note that this proposition only makes sense in the context of Newton’s laws, and thus we are indeed describing a system and not just a set of independent laws. The evaluative situation of the apple’s fall has now changed: we are evaluating a broader theoretical body (Newton’s laws + Stokes’ law) by using a narrower interpretative model (the apple is a rigid sphere).
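The move from ad hoc interpretation to theoretical progress described above can be written out schematically (a sketch; the form of Stokes’ law is the standard one, not given in the original text):

```latex
% Ad hoc step: whatever reconciles the model with the data is labeled "friction"
% (z is height; the residual acceleration is attributed to a new force).
F_{\mathrm{friction}} = m\,\ddot{z}_{\mathrm{observed}} + m g

% Theoretical step (Stokes' law): for a sphere of radius r moving at speed v
% through a fluid of viscosity \eta,
F = 6 \pi \eta r v
```

The first line merely names the empirical residual; the second subsumes many such residuals under a universal proposition, one that only makes sense within the framework of Newton’s laws.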
Thus, interpretation is a key feature of belief systems, both logically necessary and progressive, and it appears to be neglected by the two flavors of empiricism broadly observed in science (verificationism and falsificationism). Yet without it, it is impossible to understand why people can disagree, other than by postulating that some of them must be either idiots or liars. I will address this issue in the next part, on social contradiction.
So far, I have argued that scientists face empirical contradiction by interpretation. Theoretical progress is then made by dissolving interpretations into new theory. What this really means is that the very notion of “empirical contradiction” is in fact quite misleading, because for the person doing the interpretative work, there is no real contradiction, only a more complex situation than expected. I will end this part by drawing on developments of psychology, specifically cognitive dissonance theory, and extending to non-scientific situations.
Resolving empirical contradiction by interpretation is not at all specific to science; it is a general feature of how people confront their beliefs with facts. In When Prophecy Fails, Leon Festinger and colleagues infiltrated a UFO sect that believed in an imminent apocalypse, and examined what happened when the predicted end of the world did not occur. Believers did not at all abandon their belief. Instead, the leader claimed that they had managed to postpone the end of the world thanks to their prayers, and she proposed a new date. This is a case of interpretation of the observation within the belief system. But importantly, as discussed above, interpretation is not a flaw of human nature, but a necessary feature of belief systems. In this case, the believers appear to have arbitrarily made up an ad hoc justification, and we are tempted to dismiss it as a hallmark of irrational thinking. But when we observe an anomalous trajectory of Jupiter and, to make up for this anomaly, postulate that there must be an unobserved satellite orbiting the planet, we are making an interpretative move of the same nature, except that in this case it turns out to be correct. Our initial reaction in the former case is that any reasonable person should reject the theory if the prediction is directly contradicted, yet in the latter case we find the same attitude reasonable. In reality, the main difference between the two cases lies not in the way empirical contradictions are handled, but in the perceived plausibility of both the prediction and the interpretation. Specifically, we do not believe that prayers can have any visible effect at all, and thus the interpretative move appears irrational. But of course, the situation is quite different if the powers of prayer have a prominent role in our belief system. Thus, it is an error to describe the ad hoc interpretation as irrational.
It is actually entirely rational, in that it follows from logical reasoning (A: the end of the world would have occurred if we had not intervened; B: we have prayed; C: prayers have an impact; conclusion: the end of the world has probably been prevented by our prayers). It is just that rationality is applied within a highly questionable theoretical framework. In the end, we realize that it is not really the non-occurrence of the end of the world that should lead us to abandon the belief system, but rather the empirical contradictions of the belief system in its globality, for example the fact that prayers actually do not work.
Thus, it should not be surprising that in their field study, the authors find that it takes a number of failed end-of-the-world predictions before the beliefs are finally abandoned. This is what Imre Lakatos called a “degenerating research program”: the theory survives repeated contradictions only by making up an incoherent series of ad hoc assumptions. It ends up being overthrown, but the process can take a very long time (this process of change between scientific theories is also well documented in Thomas Kuhn’s The Structure of Scientific Revolutions).
This phenomenon is particularly visible in political discourse. Any significant political event or observation will be given a different interpretation depending on the political preferences of the observer. It never happens that a right-wing person turns left-wing upon noticing that wealthy countries have many homeless people. Rather, an interpretation is made that revolves around the notion of personal responsibility.
To give a contemporary example, recent demonstrations in France have been met with an extraordinarily repressive response, with hundreds of serious injuries caused by police, some to journalists and children, documented by hundreds of videos circulating on social media (see @davduf and a recent article in Le Monde). A recent case is that of a 47-year-old volunteer firefighter, a father of three, who was demonstrating with his wife until they were dispersed by tear gas. He was later found lying alone in an empty street with a head injury, and an amateur video shows that police shot him in the head from behind with a flash-ball gun and launched a grenade into the street. The man is currently in a coma. In all of those cases, some of the comments on social media invariably suggest that the man or woman must have done something bad (this is indeed the official doctrine, as the Minister of the Interior recently claimed that he is not aware of any case of police violence). This is not in itself irrational: the commenter simply presumes that police do not hurt innocent citizens, deduces that the citizen involved is not innocent, and concludes that the critics are irrational conspiracy seekers.
There are indeed conspiracy theorists, for example those who claim that the moon landing was in fact filmed in Hollywood studios. The fact that it is a conspiracy theory is not in itself a reason to discredit it, since there have been conspiracies in history. The theory itself is also not irrational, in that it has logical coherence as well as empirical elements of support. For example, on the video the American flag appears to float in the wind, whereas there can be no wind on the moon, and in fact the flag should appear folded. Indeed: the flag was made rigid precisely for that reason. But most people do not know this fact, and thus the reason why the ordinary citizen believes that man actually did land on the moon is that she trusts the source and finds the information plausible. Which attitude is irrational?
These examples illustrate several points. First, isolated empirical contradictions almost never shake a belief system. In fact, this is precisely what we mean when we say that “extraordinary claims require extraordinary evidence”. This proposition, however, is quite misleading, since the notion of what is extraordinary is specific to a particular belief system. There is no objective definition of “extraordinary”. Therefore, rather than being a normative feature of scientific method, the saying simply expresses the inherent conservatism of belief systems. As Festinger’s study shows, it can take a large number of empirical contradictions to impact a belief system. As explained previously, this is not necessarily a flaw, as those contradictions can (sometimes) be turned into theoretical progress within the belief system.
But there are other ways in which belief systems handle empirical contradictions, which are documented in psychology by cognitive dissonance theory. A major one is simply to avoid being confronted with those contradictions, for example by reading only newspapers with the same political views, or by discrediting the source of information without actually examining the information (e.g., social media propagate fake news, therefore the video cannot be trusted). Another is proselytism, that is, trying to convert other people to your belief system.
These mechanisms explain why, at any given moment, mutually contradictory belief systems are held by people who live in the same world, are in contact with each other, and can even discuss the same empirical observations. The conclusion of our discussion is that the main problematic issue in belief systems is not so much irrationality as dogmatism (we will come back to irrationality in the third part, on theoretical contradiction). Dogmatism arises from two different attitudes: blindness and deafness. Dogmatism is blind in that it actively refuses to see empirical evidence as potentially contradictory: it is not just that it is contradicted by empirical observations (this very notion is questionable), but rather that it dismisses empirical contradiction without seriously trying to accommodate it in a progressive way (i.e., by strengthening its theoretical core). Dogmatism is deaf in that it refuses to acknowledge the possibility that other rational belief systems can exist, which may have diverging interpretations of the empirical body of observations. Dogmatism denies the theoretical possibility of disagreement: the opponent is always either an idiot (irrational) or a liar (has ulterior motives). In the next part, I will turn to social contradiction: how different belief systems can co-exist and influence each other.