By Paul Newall (2005)
This short essay discusses the various forms of falsificationism, particularly insofar as it functions as a proposed answer to the demarcation problem; that is, the search for a means to distinguish between science and non-science.
The dogmatic (sometimes called naturalistic) version of falsificationism is at once the easiest to understand and (apparently) the most straightforward. The way to demarcate between theories is to call scientific those for which we can specify (beforehand) one or more potential falsifiers; that is, an experiment with a particular result that would cause us to give up our theory. The most common example of this approach is the proposition "all swans are white": this can never be proven, since that would require checking each and every swan anywhere; but it can be disproven by finding a single instance of a non-white swan.
A theory is scientific, then, if we can say what would possibly cause us to reject it. This seems a reasonable approach to take because if there were no circumstances that could ever lead us to reject the theory, it would be uninteresting; after all, why bother investigating a theory that cannot be wrong and is therefore already true? We could just get on with more important things, like rugby.
For the dogmatic falsificationist, this understanding helps to make sense of what goes on in science. Although a theory is never proven, if we can falsify it then we force ourselves to look again and come up with a better one. This is also unproven but an improvement on the last; and so it goes. Lakatos referred to the illustrative progression from Descartes' theory of gravity, through Newton's, to Einstein's. As the first was refuted, the second came along and was able to explain the observed phenomena without falling victim to the same difficulties. Eventually it too was falsified, but Einstein was able to do likewise again, explaining what went before but without the flaws. Falsification thus demarcates between scientific and non-scientific theories and helps account for the development of scientific theories.
Sadly, it does no such thing and it was not long before the flaws were demonstrated. There were three main concerns. Firstly, there was a reliance on a separation between observational and theoretical propositions. The latter would be a particular theory of gravity, say, while the former would be the observations that are supposed to potentially falsify it. Unfortunately this distinction is untenable. To take an example, consider the famous Tower Argument used by geokineticists and geostaticists alike (that is, those who, in Galileo's time, held that the Earth was or was not in motion, respectively). By dropping a stone from a tower, it was supposed that it could be shown whether or not the Earth was revolving as some claimed: if the Earth was in motion, the stone should fall some distance away from the tower; if not, it should land at the base. The theory was thus to be tested by observation, but the problem came when interpreting what had occurred. When the stone did fall at or near the base of the tower (allowing for experimenter error), the geostaticists remarked that this was just what their theory predicted. In like fashion, the geokineticists also expected the stone to fall at the base, because they held that everything on the Earth was moving with it, including the stone and the air through which it fell. Hence we see that there were no observational statements without theories to interpret them. This is an instance of the more general theory-ladenness of observational terms; subsequent study has shown that there can be no (theory-)neutral observational terms because we do not just passively experience the world but actively encounter it and can choose different ways to do so.
Secondly, there was a logical concern: no proposition can ever be proven by experiment. This basic result has apparently caused much confusion, but it is the very difficulty that falsification was proposed to address; namely, that no proposition could ever be proven, hence the effort to disprove them instead. More generally, this is an instance of the problem of induction, or the recognition that logical relations like proof hold between propositions, not between facts and propositions. Although falsification was supposed to avoid this difficulty by proceeding deductively instead of inductively, in order to call a theory disproven we have to rely on an experiment proving another theory – the negation of the theory under consideration – which is precisely what we agreed could not be done.
The third and last difficulty was even more severe. When we test a theory by experiment, we do not do so in isolation. Instead, what is actually tested is the conjunction of the theory with a ceteris paribus clause (a Latin term meaning "all other things being equal"). Even if we allow that the first two problems are surmountable, then, we can always dodge a falsification by saying that the ceteris paribus clause was refuted and change it for another, thereby leaving the theory intact. This is exactly what was done with the Tower Argument, for example: the experiment designed to disprove the motion of the Earth was actually testing the conjunction "a stone dropped from a tower on a moving Earth will fall away from the base, assuming everything else on the Earth does not move with it". The geokineticists thus immediately said that it was this ceteris paribus clause that had been falsified, not the motion of the Earth. Lakatos gave another example of an astronomical theory that predicts certain behaviour in the heavens, which is actually not observed. Rather than consider his theory falsified, the theorist says that there must be another body, invisible to the naked eye, causing the anomalous effects seen. Even when a new telescope is invented and this explanation is no longer tenable, the theorist appeals to the influence of a nearby magnetic field; and so it goes, each new ceteris paribus clause saving the theory from falsification. These auxiliary hypotheses can always prevent the conclusion that the general theory has been falsified, so dogmatic falsificationism collapses.
If all theories are thus equally disproven, then all scientific theories are fallible and we are no closer to solving the demarcation problem or characterising what makes a proposition scientific. This unpalatable conclusion brings us to the second form of falsificationism: methodological. The falsificationist now makes the same basic assumptions as his or her dogmatic colleague but calls them tentative – "piles driven into a swamp", as Popper put it. Relying on a set of supposedly unproblematic propositions, which he or she accepts tentatively, the methodological falsificationist proceeds as before to try to falsify theories. He or she is thus a conventionalist in that certain propositions are taken as basic and used as a foundation upon which to build scientific theories. Methodological falsificationism suggests taking some things as given and seeing what happens when we test other theories thereafter; in short, it advocates risky decisions.
We can see this at once when we ask what we are to do when a theory is ostensibly falsified. It could be that the theory is false, or that the ceteris paribus clause is, or even that one or more of the "basic" propositions assumed by convention are. Although the choice we make could be wrong, the methodological falsificationist sees this as a matter of the lesser of two evils. Dogmatic falsificationism was a dead-end and hence some bold choices need to be made. The chance of rejecting a true theory as falsified is one to be taken in order to allow the possibility of progress; that is, a choice is made between a brand of falsificationism that may not work and giving up completely in favour of irrationalism and an inability to give any justification for theories. As Lakatos put it, it is "a game in which one has little hopes of winning" but he or she believes "it is still better to play than give up."
It is difficult to critique methodological falsificationism for the simple reason that it is unfalsifiable. What should concern us most is that the history of science gives little indication of having followed anything like a methodological falsificationist approach. Indeed, and as many studies have shown, scientists of the past (and still today) tended to be reluctant to give up theories that we would have to call falsified in the methodological sense; and very often it turned out that they were correct to do so (when seen from our later perspective). This tenacity in the face of apparent adversity – such as when Einstein dismissed "verification through little effect" when his special theory of relativity was apparently falsified by Kaufmann's results – is reinforced by the commitment to the themata that Holton has shown characterise scientists' unwillingness to give up their fundamental conceptions of how the universe is.
The study of the history of science leaves us with a stark choice: either we have to give up the attempt to provide a rational account of how science worked and works (looking for alternatives as Kuhn did), or we must try to reduce in some way the reliance on conventionalist "basic" propositions in methodological falsification and try again.
Popper attempted to do this by conceiving a sophisticated version of falsificationism that held a theory T1 to be falsified only if the following three conditions were satisfied:
- There exists a theory T2 that has excess empirical content; that is, it predicts novel facts – new ones not predicted by T1;
- T2 explains everything that was previously explained by T1; and
- Some of these new predictions have been confirmed by experiment.
It is thus not enough to find a falsifier to reject T1. Sophisticated falsificationism takes us away from making decisions about theories in isolation and towards considering them in company with others. A theory is not to be rejected as falsified until a better one comes along. Although we might find that a number of experiments conflict with a particular theory, we know from our previous considerations that this is never enough to dismiss it. Instead, we wait until a new theory is found which tells us the same things as the old one but without the difficulties (some or all). This gives us a notion of growth or development of theories in place of the dogmatic falsificationism that either accepts or rejects them in single instances. It also means that the so-called "crucial experiment" of dogmatic falsificationism – one that decides the issue at a stroke – is superseded by the realisation that no experiment can be crucial, unless interpreted as such after the event in light of a new theory for which it offers corroboration. Finally, it shows that the idea of proliferating theories (trying lots of alternatives) is important to sophisticated falsificationism in a way it was not for the dogmatic version.
To go back to an earlier example, then, what made Einstein's theory of gravity "better" than Newton's was not that one was falsified while the other was not, but instead that Einstein's explained everything that the earlier theory did while at the same time offering new predictions, some of which were confirmed (such as by Eddington's expedition to observe the solar eclipse of 1919).
The conflict in science is thus not between theories and experiments but always between rival theories. The problem with sophisticated falsificationism, however, arises from the fact that it is always a series of theories that is consequently referred to as scientific or non-scientific and never a single theory on its own. Where we have two incompatible theories, we may try to replace one with the other, and vice versa, in order to see which (if either) provides the greater increase in empirical content; but we must fall back on the conventionalist aspects of methodological falsificationism or the untenable assumptions of dogmatic falsificationism in order to ultimately make a choice. After all, calling novel facts corroborated presupposes a clear demarcation between observational and theoretical terms and also that we have a straightforward situation in which no anomalies are involved – both decisions of convention as to what constitutes "basic" or "background" knowledge when undertaking the process. We have the additional difficulty of not knowing whether a potential falsifier refers to the theory being tested – the explanatory theory – or the underlying one(s) used to make sense of it – the interpretive theory. Even if we can satisfy the requirements of sophisticated falsificationism, which should we reject? Propositions are also no more proven by experiment for sophisticated falsificationism than they were for the dogmatic version, while we can make the same mistakes in rejecting true theories when we assume that excess empirical content has been demonstrated – not least because a different ceteris paribus clause may have new consequences that can be tested. Finally, we are still no closer to explicating the tenacity of theories, even when the conditions of sophisticated falsificationism would have us conclude them falsified, which we again find in the history of science.
In summary, then, falsificationism in its various forms is an interesting idea but insufficient either to characterise science or to solve the demarcation problem. It suffers from a series of logical and philosophical difficulties that should perhaps give us pause if we hope to find a single answer to what makes good science and what does not.
Feyerabend, P.K., Against Method (London: Verso, 1975).
Kuhn, T.S., The Structure of Scientific Revolutions (Chicago: University of Chicago Press, 1962).
Lakatos, I., The methodology of scientific research programmes (Cambridge: Cambridge University Press, 1978).
Popper, K.R., The Logic of Scientific Discovery (New York: Basic Books, 1959).