
    Ockham's Razor


    By Paul Newall (2005)

    Ockham’s Razor, otherwise called the principle of the economy of thought, is invoked often in debate, usually to discount one or more theories on the basis that another exists which is simpler or more parsimonious. In this essay we shall consider this principle, its domain of application and some associated philosophical concerns, using examples from the history of science to illustrate some of the points at issue.

    The Simplest Explanation

    The principle of parsimony is typically stated as Entia non sunt multiplicanda praeter necessitatem ("Entities are not to be multiplied beyond necessity"). Although referred to as Ockham’s Razor after William of Ockham, a Franciscan living at the turn of the fourteenth century, this version has not been found in any of his extant works. The closest match (Frustra fit per plura quod potest fieri per pauciora, or "It is pointless to do with more what can be done with fewer") may occur only where he quotes others, and indeed the general principle was common among Aristotelians. In brief, the advice is that, in explaining a phenomenon or developing a theory, we should not invoke entities that are not necessary to do so.

    For example, some people suspect that crop circles are due in some way to extraterrestrial influence, whether directly or otherwise. Others suggest that the patterns are the work of dedicated artists or hoaxers and very much an earthly occurrence. On the face of it, then, especially given that the latter group have been able to demonstrate the construction of a crop circle, there is no need to posit aliens to account for why farmers’ fields are routinely invaded in this fashion. If we wish to hold to economy of thought, we should pick the simpler explanation.

    Ockham’s Razor is a principle; that is, it does not tell us that the simplest explanation is true (or what there is), but instead that we ought to prefer it on methodological grounds. We are counselled to adopt theories that are economical, insofar as they can do the same with less. Note that there is apparently no reason why we should do so: a direct route to a destination is neither better nor worse than a diversion unless we include the criterion that we wish to get there by the most direct route (and even then it may not be, so we will return to this analogy later). Nevertheless, it seems plain enough that we are inclined to favour the simpler explanation, other things being equal. It is this assumption that we shall now examine.

    Applying Ockham’s Razor

    Perhaps the best-known example of two competing theories between which a decision had to be made was the seventeenth-century controversy over astronomical systems. The long-standing Ptolemaic/Aristotelian model of the heavens was challenged by the Copernicans, who insisted that heliocentrism was simpler than geocentrism. (Note that the question of geostaticism, that is, whether or not the Earth itself is fixed, was a separate issue.) Since that time much effort has gone into demonstrating (or refuting) the claim that either system was more parsimonious than the other.

    Although Copernicus had believed that a sun-centred universe consisting of circular orbits was the most beautiful that could be created, he did so on the basis of thematic assumptions derived from his Neoplatonic influences and not as a result of any new observations, of which there were none until some years later. (Max Jammer has shown that Copernicus’ reasoning left him having to reject either geocentrism or the Aristotelian conception of space. Having no metaphysical substitute for the latter, he was forced to dispense with the former. Ptolemy had actually considered the possibility of the Earth’s motion but dismissed it precisely because it did not agree with what was seen in the night sky.) On making the change to heliocentrism, Copernicus found that he still required the assistance of devices like epicycles to save the phenomena; that is, to make the predictions of his theory agree with what was actually discerned by astronomers. The issue of comparative simplicity has subsequently been reduced by some commentators to counting epicycles, but for our purposes this is beside the point: neither the Ptolemaic nor the Copernican system was empirically adequate, leading Kepler to produce another.

    The basic error inherent in the counting approach is to consider theories in isolation. A theory includes a host of ancillary presuppositions and exists within a metaphysical system. A comparison with an alternative assumes, implicitly or otherwise, that all other things are equal (the so-called ceteris paribus clause) when they are not (or, at the very least, no attempt is made to show that this requirement is satisfied). Copernicus himself was wary of asserting the truth of his system and only received a copy of his De revolutionibus orbium coelestium on his deathbed. When the issue was forced during the so-called "Galileo Affair", a judgement was sought between two systems whose empirical base was the same and whose practical utility was identical at that time. Galileo sought to delay any choice by invoking the Augustinian principle that it would be folly to ground theological certainties on physical propositions that might subsequently be shown to be false, but his pleas were not heard.

    There are several lessons to take from this historical episode. In the first place, we have two competing theories with the same empirical content, and thus a prime candidate for the application of Ockham’s Razor. Upon consideration, however, we immediately note that the ceteris paribus clause was not satisfied, and for many reasons. The theological consequences were (ostensibly) very different; the political outcome more so, particularly against the backdrop of the Reformation; the implications for morality were easy to predict but harder to judge; and the metaphysical fallout was just beginning to be investigated. The decision made on this basis did not count the number of postulated entities (which were the same to all intents and purposes) and did not include a conclusion on the relative economies of each theory, since they were also equivalent. In any event, Copernicanism was rejected with scarcely a mention of William of Ockham.

    We know now, of course, that a variant of heliocentrism eventually won the day. Galileo’s warning to the Church was not heeded, and its choice to assert the reality of geocentrism had catastrophic results for its authority and, later, its credibility. Nevertheless, the history of this change is also illustrative: at no time was there an invocation of the "decisive experiment" of myth, dreamt of by many a philosopher of science. By the time Foucault’s experiments with his pendulum showed the movement of the Earth, confidence in geocentrism had already been slowly eroded over the years. At the only stage in this entire episode at which a comparison between rival theories was insisted upon, the question was decided by "non-scientific" means (notwithstanding the anachronism implied by the inverted commas), with Ockham’s Razor playing no part.

    The general point raised by this brief study is that Copernicanism required time to develop. Attempting to make a straightforward comparison was disastrous for the Church and for astronomy (and subsequently science) in Italy. Kepler was able to refine the basic Copernican insight because the theory was not limited to the narrow domain in which it was judged.

    Theories of gases

    Consider now a theory, which we call T1, say, applying within a domain D. T1 predicts P while the actual state of affairs is in fact P', close to P but with the difference between them beyond experimental possibilities. That is, there is a difference, but it is so slight that we could never notice it by investigation. In such circumstances it would be of little use to hope (or even expect) that an increase in experimental capabilities will lead to the discovery that P' actually obtains, because there is no apparent need to refine T1. Suppose instead that we propose additional theories T2, T3, etc., each of which differs from T1 within D and predicts P'. Ockham's Razor cannot help us decide whether or not to pursue these new theories, but when we investigate them further we may find that T2, say, is confirmed where T1 was but also makes novel predictions not given by T1, or else suggests answers to extant problems for T1. In that case, we may choose to reject T1 and adopt T2, even though no refuting case has been made against T1.
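    To make the situation concrete, here is a minimal sketch in Python (the numbers are invented for illustration and do not come from the essay): two rival predictions separated by less than the resolution of the apparatus, so that no measurement within D can decide between them.

        import random

        # Hypothetical values: T1 predicts P = 1.000 and T2 predicts
        # P' = 1.001 within domain D, but the instrument's resolution
        # is 0.01, ten times the difference between the predictions.
        P_T1, P_T2 = 1.000, 1.001
        RESOLUTION = 0.01

        def measurement(true_value):
            """One noisy reading of whatever actually obtains."""
            return random.gauss(true_value, RESOLUTION)

        reading = measurement(P_T2)  # suppose P' is the actual state of affairs
        for name, prediction in (("T1", P_T1), ("T2", P_T2)):
            consistent = abs(reading - prediction) < 3 * RESOLUTION
            print(name, "consistent with observation:", consistent)

    Both theories almost always come out consistent with the observation, which is just the point: within D the experiment gives Ockham's Razor nothing to grip, and any reason to pursue T2 or T3 must come from elsewhere.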

    Although this hypothetical example may be considered fanciful, it is illustrative of what occurred when the kinetic theory of gases was proposed in opposition to the prevailing phenomenological theory. For the phenomenological theory of gases (i.e. one based on describing the behaviour of gases via the laws of thermodynamics), Brownian motion was an instance of a perpetuum mobile that refuted the second law of thermodynamics, which expressly disallows perpetual motion. (In brief, the apparently random movement of the Brownian particle seems to go on indefinitely, suggesting that somehow the particle does not run out of energy. In kinetic terms, however, we now say that it is being "bumped" by other molecules, explaining both its behaviour and where its energy comes from.) Following his studies of Brownian motion, Einstein (see the 1956 edition of his Investigations on the Theory of the Brownian Motion) was able to entirely recast the phenomenological theory in kinetic terms, in spite of having no experimental motivation for doing so beyond the known difficulties; after all, the differences in temperature expected, were the kinetic theory correct, were below the range of detection of thermometers (see Fürth, 1933). Nevertheless, the new theory prevailed when Einstein used his theory to derive statistical predictions for the behaviour of the Brownian particle by assuming that molecules existed and that a mechanical account of the motion could be given. (Feyerabend (1963; reprinted 1999, pp. 92-94) made this argument for a different reason, which Laymon (1977) disputed.) This decision could later be justified by the eventual successes of the kinetic programme, but this is only to say that parsimony was discussed after the event, if at all. The possibility of applying Ockham’s Razor was again not considered, nor could it be of any use.
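    For concreteness, the best-known of these statistical predictions is the mean-squared-displacement formula Einstein derived, stated here in its standard modern form rather than quoted from this essay: a Brownian particle of radius r suspended in a fluid of viscosity \eta at absolute temperature T should satisfy

        \[ \langle x^2 \rangle = 2Dt, \qquad D = \frac{k_B T}{6\pi \eta r}, \]

    where t is the elapsed time and k_B is Boltzmann's constant. The formula ties the postulated molecules to a displacement observable under the microscope, and it was this kind of consequence, later confirmed experimentally by Perrin, that vindicated the kinetic programme after the event.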

    The Special Theory of Relativity

    When Einstein published his 1905 paper on special relativity, the first response remarked that his ideas had been decisively refuted by Kaufman's papers of that year and the next in the Annalen der Physik (in issues 19 and 20, especially his Über die Konstitution des Elektrons (1906, p. 487)). Kaufman began, in italics, by saying that the "measurement results are not compatible with the Lorentz-Einstein fundamental assumptions". To see how convincing Kaufman's work was considered, we may note that Lorentz wrote to Poincaré in March of 1906, saying that his theory was "in contradiction with Kaufman's results, and I must abandon it". Poincaré agreed and could offer no advice. A glance through the journal, and the absence of any significant response for quite some time, shows how seriously Kaufman's objections were taken. (See Feyerabend, 1999, pp. 146-148 for more detail on this and what follows.)

    Planck, however, was committed to Einstein's ideas because he thought their "simplicity and generality" meant that they should be preferred, even in the face of experimental refutation. He attempted to re-examine Kaufman's data and demonstrate that there were flaws, but instead he found that the data were far closer to Abraham's rival theory. Thereafter he presented his findings at the Deutsche Naturforscherversammlung in Stuttgart in September 1906, a rather amusing affair in which Abraham drew much applause by observing that since the Lorentz-Einstein theory was twice as far from Kaufman's data as his own, it followed that his theory was twice as good (Physikalische Zeitschrift, 7, 1906, pp. 759-761). Planck tried but ultimately failed to convince Sommerfeld, Abraham or Bucherer that Einstein's ideas should be given time to develop (ibid.). Ultimately, of course, they were accepted because of their "inner consistency" (Wien, 1909) or because Kaufman's experiments lacked "the great simple universal principle" of relativity theory (von Laue; see below), so that the matter was decided well before Kaufman's results were finally shown to have been flawed (Guye and Lavanchy, 1916).

    Thus we find that Einstein's ideas succeeded because of a large measure of rhetoric from him, Bohr, Planck and others, and because of a commitment to the presuppositions of relativity theory, long after there had been very little doubt (on the part of very many great and distinguished physicists) that experimental considerations had killed them. Indeed, by 1911 von Laue was writing that "a really experimental decision between the theory of Lorentz and the Relativity Theory is indeed not to be gained; and that the first of these nevertheless had receded into the background is chiefly due to the fact that, close as it comes to the Relativity Theory, yet it lacks the great simple universal principle, the possession of which lends the Relativity Theory from the start an imposing appearance" (see Das Relativitätsprinzip, 1911). Physicists were more interested in how they could use Einstein's ideas to explain the result of the Michelson-Morley experiment, even though they were still confusing Lorentz's and Einstein's theories in 1921, significantly enough for von Laue to address the confusion (see the fourth edition of his text, by then entitled Die Relativitätstheorie, acceptance of the theory having grown and hence changed its status from a mere "principle"). As a result of these theoretical and thematic factors, D.C. Miller's later (apparent) falsification of Einstein was given very little attention, even though it again took a long time (almost thirty years) for Shankland to find the mistake (1955, pp. 167ff). (See Holton, 1988, for more discussion of these episodes in the history of physics.)

    We see, then, that even in this instance in which the notion of simplicity was relied upon throughout, no actual comparison of the number of entities or parsimony took place. The special theory was held to possess greater inherent simplicity both before and after any experiment and in spite of the negative results of Kaufman’s work.

    The general case

    There are two major difficulties with Ockham’s Razor. The first is that other things are rarely (if ever) equal, so the ceteris paribus clause is not satisfied. The second, perhaps still more important, objection is that the unknown (or additional) entities shaved away may have explanatory power outside the domain of consideration, or they may offer further methodological suggestions which subsequently show that the utility (or even truth) granted to the former explanation was too narrow. The extra terms may be methodologically interesting and stimulating even if they turn out to be completely in error. As Niels Bohr was fond of saying, parsimony is judged after the event. It makes little methodological sense, to hammer the point home, to disallow additional entities before their consequences have been investigated; indeed, the application of parsimony in the examples we have considered above and throughout the history of science would likely have proved disastrous, at least with the benefit of hindsight (and quite plainly in the case of heliocentrism).

    The lack of evidence for a posited entity is hardly a problem for scientists who are both willing and able to continue their efforts regardless. Moreover, demanding such evidence in advance risks putting the cart before the horse: a theory may predict the existence of an entity for which there is no evidence but which is, as a result, subsequently discovered. While there may be a limitless supply of alternative hypotheses (as asserted by the strong underdetermination of theories), or at least enough to require a decision between them (even if only on practical or financial grounds), not all of them will have (or may be suspected to have) interesting enough consequences to pursue. The methodological point is, once again: how can we know the utility (or truth) of apparently unevidenced or unwarranted theories or entities before the fact? Given that so many have turned out to be of benefit in the past (or so goes the historical argument), why assume the contrary now?

    Historically (and today) theories are surrounded by anomalies, and additional entities are postulated to explain them, sometimes ad hoc (thus maintaining the theory) and sometimes requiring a replacement. The resulting alternative(s) would be empirically equivalent and adequate within the domain satisfied by the current theory, so disallowing hypotheses that fail the requirement for parsimony presupposes that they will also fail to be successful. This is what the Church imposed upon Galileo, and so to follow Ockham’s Razor leaves us with a dilemma: should we reject theories that appear to violate parsimony and risk stifling (or ending) their development, which may subsequently show otherwise; or should we instead reject the requirement for parsimony and accept that matters are more complex than the shortest route being the one to prefer?

    If we return to our T1 and T2, we can note that T1 may employ different assumptions from T2, such that a straightforward comparison is not possible (Berkeley's idealism being a good example). Moreover, two hypotheses may be successful in different domains but mutually exclusive within their intersection, if there is one (complementarity, for instance). The believer in God or in aliens who declares that an agency other than man was responsible for a phenomenon does not make a straightforward choice but invokes their additional entity as part of an entire worldview (incorporating the existence of God or extraterrestrials respectively) which also explains or makes sense of a whole range of phenomena. The ceteris paribus clause here can also turn out to have failed: perhaps the confirming instances of T1 are apparently refuted (as with special relativity) but the addition of the further assumptions can explain these anomalies, or else neither theory may be satisfactory and the proper response might be to withhold judgement. T2 may in addition have greater predictive or explanatory power (or both) outside the domain of comparison, making the evaluation within D an interesting but not particularly devastating factor. Rather than straightforwardly dismissing T2 because of its auxiliary (and apparently unnecessary) assumptions, it may instead make methodological sense to investigate what consequences these have.

    Moving beyond any actual results or possibility thereof, the additional entities rejected by parsimony may not explain any other data in further domains but still provide a stimulus to work which subsequently uncovers further domains in which they are necessary, or which show the previous theory to have been but an approximation. In short, epistemological considerations are not sufficient to choose between theories and cannot be expected to account for scientific practice. While methodological concerns are also not necessarily of ultimate import, scientists appear to use them more as they press on, blithely unaware of or unconcerned with the philosophical ideas that are intended to provide them with guidance. This is that element of guesswork and certainty of resolve which prompts scientists to continue to work on ideas rejected by many of their contemporaries (plate tectonics, say, or Pauli’s positing of the neutrino), perhaps reminded of the changing fortunes of atomism over millennia.

    Another way to introduce or justify Ockham’s Razor is to assert that parsimonious theories are more likely to be correct. This is a problematic claim. Suppose we take the case of a theory which is regarded by all as highly successful, but which relies upon unobservable entities (such as sub-atomic particles, say). Is the theory true or just a useful instrument? Is it more parsimonious to suppose that these entities do or do not exist? In the absence of an ability to divine the fortunes of a theory in the years to come (or, in the case of atomism, the thousands of years), how are we to decide? To assume, as many apparently do, that parsimony is important because the universe is fundamentally simple, rather than complex (hence the search for grand theories, underlying all others), merely begs the question.

    To get where we are going

    To summarise our discussion, then, the important point which renders parsimony methodologically unhelpful, if not explicitly detrimental, is that the consequences of additional entities or assumptions are impossible to state a priori. Since science is never completed, we are always in the position of before and never get to the after, which Bohr claimed was the only place parsimony could be introduced, much less judged.

    Selected References:

    Einstein, A., Investigations on the Theory of the Brownian Motion (New York: Dover, 1956).

    Einstein, A., Über das Relativitätsprinzip und die aus demselben gezogenen Folgerungen in Jahrbuch der Radioaktivität und Elektronik, vol. 4, 1907.

    Feyerabend, P.K., Knowledge, Science and Relativism (Cambridge: Cambridge University Press, 1999).

    Fürth, R., Über einige Beziehungen zwischen klassischer Statistik und Quantenmechanik in Zeitschrift für Physik, vol. 81, 1933.

    Guye, C.-E. and Lavanchy, C., Vérification expérimentale de la formule de Lorentz-Einstein par les rayons cathodiques de grande vitesse in Archives des sciences physiques et naturelles, 42: 286–299, 353–373, 441–448, 1916.

    Holton, G., Thematic Origins of Scientific Thought (Cambridge: Harvard University Press, 1988).

    Kaufman, W., Über die Konstitution des Elektrons in Annalen der Physik, vol. 19, 1906.

    Kuhn, T.S., The Copernican Revolution: Planetary Astronomy in the Development of Western Thought (Cambridge: Harvard University Press, 1957).

    Laue, M. von, Das Relativitätsprinzip (Braunschweig: Friedrich Vieweg & Sohn, 1911).

    Laymon, R., Feyerabend, Brownian Motion and the Hiddenness of Refuting Facts in Philosophy of Science, 44, 225-247, 1977.

    Physikalische Zeitschrift, 7, pp.759-761, 1906.

    Shankland, R.S., A New Analysis of the Interferometer Observations of Dayton C. Miller in Reviews of Modern Physics, vol. 27, 1955.

    Wien, W., Über Elektronen (Leipzig: B.G. Teubner, 1909).

