    By Paul Newall (2005)

    Ockham’s Razor, otherwise called the principle of the economy of thought, is invoked often in debate, usually to discount one or more theories on the basis that another exists which is simpler or more parsimonious. In this essay we shall consider this principle, its domain of application and some associated philosophical concerns, using examples from the history of science to illustrate some of the points at issue.

    The Simplest Explanation

    The principle of parsimony is typically stated as Entia non sunt multiplicanda praeter necessitatem ("Entities are not to be multiplied beyond necessity"). Although referred to as Ockham’s Razor after William of Ockham, a Franciscan living at the turn of the fourteenth century, this version has not been found in any of his extant works. The closest match (Frustra fit per plura quod potest fieri per pauciora or "It is pointless to do with more what can be done with fewer") may have been written in quoting others, and indeed the general principle was common among Aristotelians. In brief, the advice is that we should not invoke entities in explaining a phenomenon or developing a theory that are not necessary to do so.

For example, some people suspect that crop circles are due in some way to extraterrestrial influence, whether directly or otherwise. Others suggest that the patterns are the work of dedicated artists or hoaxers and very much an earthly occurrence. On the face of it, then, especially given that the latter group have been able to demonstrate the construction of a crop circle, there is no need to posit aliens to account for why farmers’ fields are routinely invaded in this fashion. If we wish to hold to economy of thought, we should pick the simpler explanation.

Ockham’s Razor is a principle; that is, it does not tell us that the simplest explanation is true (or what there is), but instead that we ought to prefer it on methodological grounds. We are counselled to adopt theories which are economical, insofar as they can do the same with less. Note that there is apparently no reason why we should do so: a direct route to a destination is neither better nor worse than a diversion unless we include the criterion that we wish to get there by the most direct route (and even then it may not be, so we will return to this analogy later). Nevertheless, it seems plain enough that we are inclined to favour the simpler explanation, other things being equal. It is this assumption that we shall now examine.

    Applying Ockham’s Razor

Perhaps the best-known example of two competing theories between which a decision had to be made was the seventeenth-century controversy over astronomical systems. The long-standing Ptolemaic/Aristotelian model of the heavens was challenged by the Copernicans, who insisted that heliocentrism was simpler than geocentrism. (Note that the question of geostaticism – or the fixed (or otherwise) nature of the Earth itself – was a separate issue.) Since that time much effort has gone into demonstrating (or refuting) the claim that either system was more parsimonious than the other.

Although Copernicus had believed that a sun-centred universe consisting of circular orbits was the most beautiful that could be created, he did so on the basis of thematic assumptions derived from his neo-platonic influences and not as a result of any new observations, of which there were none until some years later. (Max Jammer has shown that Copernicus’ reasoning resulted in his being faced with having to reject either geocentrism or the Aristotelian conception of space. Having no metaphysical substitute for the latter, he was forced to dispense with the former. Ptolemy had actually considered the possibility of circular motion but dismissed it precisely because it did not agree with what was seen in the night sky.) On making the change to heliocentrism, Copernicus found that he still required the assistance of devices like epicycles to save the phenomena; that is, to make the predictions of his theory agree with what was actually discerned by astronomers. The issue of comparative simplicity has subsequently been reduced by some commentators to counting epicycles but for our purposes this is beside the point: neither the Ptolemaic nor the Copernican system was empirically adequate, leading Kepler to produce another.

The basic error inherent in the counting approach is to consider theories in isolation. A theory includes a host of ancillary presuppositions and exists within a metaphysical system. A comparison with an alternative implicitly or otherwise assumes that all other things are equal (the so-called ceteris paribus clause) when they are not (or, at the very least, no attempt is made to show that this requirement is satisfied). Copernicus himself was wary of asserting the truth of his system and only received a copy of his De revolutionibus orbium coelestium on his deathbed. When the issue was forced during the so-called "Galileo Affair", a judgement was sought between two systems whose empirical base was the same and whose practical utility was identical at that time. Galileo sought to delay any choice by invoking the Augustinian principle that it would be folly to ground theological certainties on physical propositions that may subsequently be shown to be false, but his pleas were not heard.

There are several lessons to take from this historical episode. In the first place, we have two competing theories with the same empirical content, and thus a prime candidate for the application of Ockham’s Razor. Upon consideration, however, we immediately note that the ceteris paribus clause was not satisfied, and for many reasons. The theological consequences were (ostensibly) very different; the political outcome more so, particularly against the backdrop of the Reformation; the implications for morality were easy to predict but harder to judge; and the metaphysical fallout was just beginning to be investigated. The decision made on this basis did not count the number of postulated entities (which were the same to all intents and purposes) and did not include a conclusion on the relative economies of each theory, since they were also equivalent. In any event, Copernicanism was rejected with scarcely a mention of William of Ockham.

We know now, of course, that a variant of heliocentrism eventually won the day. Galileo’s warning to the Church was not heeded and its choice to assert the reality of geocentrism had catastrophic results for its authority and – later – its credibility. Nevertheless, the history of this change is also illustrative: at no time was there an invocation of the "decisive experiment" of myth, dreamt of by many a philosopher of science. By the time Foucault’s experiments with his pendulum showed the movement of the Earth, confidence in geocentrism had already been slowly eroded over the years. At the only stage in this entire episode at which a comparison between rival theories had been insisted upon, the question was decided by "non-scientific" means (notwithstanding the anachronism implied by the inverted commas), with Ockham’s Razor playing no part.

    The general point raised by this brief study is that Copernicanism required time to develop. Attempting to make a straightforward comparison was disastrous for the Church and for astronomy (and subsequently science) in Italy. Kepler was able to refine the basic Copernican insight because the theory was not limited to the narrow domain in which it was judged.

Theories of Gases

Consider now a theory, which we call T1, say, applying within a domain D. T1 predicts P while the actual state of affairs is in fact P', close to P but differing from it by less than experiment can detect. That is, there is a difference but it is so slight that we could never notice it by investigation. In such circumstances it would be of little use to hope (or even expect) that an increase in experimental capabilities will lead to the discovery that P' actually obtains, because there is no apparent need to refine T1. Suppose instead that we propose additional theories T2, T3, etc., each of which differs from T1 within D and predicts P'. Ockham's Razor cannot help us decide whether or not to pursue these new theories, but when we investigate them further we may find that T2, say, is confirmed where T1 was but also makes novel predictions not given by T1, or else suggests answers to extant problems for T1. In that case, then, we may choose to reject T1 and adopt T2, even though no refuting case has been made against T1.
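The logic here can be made concrete with a small simulation (a minimal sketch, not from the essay; the predictions and noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical predictions of T1 and T2 for the same quantity within D;
# the gap between them is far smaller than the measurement noise.
P, P_prime = 1.000, 1.001
sigma = 0.05

# Suppose nature in fact follows T2: generate noisy observations around P'.
data = rng.normal(P_prime, sigma, size=100)

# Score each theory by its mean squared error against the data.
for name, prediction in [("T1", P), ("T2", P_prime)]:
    mse = np.mean((data - prediction) ** 2)
    print(f"{name}: MSE = {mse:.6f}")
```

On any such run the two scores differ by far less than their sampling variation, so no experiment of this kind favours T2 over T1 within D; any reason to pursue T2 must come from its consequences elsewhere, as argued below.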

Although this hypothetical example may be considered fanciful, it is illustrative of what occurred when the kinetic theory of gases was proposed in opposition to the prevailing phenomenological theory. For the phenomenological theory of gases (i.e. one based on describing the behaviour of gases via the laws of thermodynamics), Brownian motion was an instance of a perpetuum mobile that refuted the second law of thermodynamics, which expressly disallows perpetual motion. (In brief, the apparently random movement of the Brownian particle seems to go on indefinitely, suggesting that somehow the particle does not run out of energy. In kinetic terms, however, we now say that it is being "bumped" by other molecules, explaining both its behaviour and where its energy comes from.) Following his studies of Brownian motion, Einstein (see the 1956 edition of his Investigations on the Theory of the Brownian Movement) was able to entirely recast the phenomenological theory in kinetic terms, in spite of having no experimental motivation to do so beyond the known difficulties; after all, the differences in temperature expected, were the kinetic theory correct, were below the range of detection of thermometers (see Fürth, 1933). Nevertheless, the new theory prevailed when Einstein used it to derive statistical predictions for the behaviour of the Brownian particle by assuming that molecules existed and that a mechanical account of the motion could be given. (Feyerabend (1963; reprinted 1999, pp.92-94) made this argument for a different reason, which Laymon (1977) disputed.) This decision could later be justified by the eventual successes of the kinetic programme, but this is only to say that parsimony was discussed after the event, if at all. The possibility of applying Ockham’s Razor was again not considered, nor could it be of any use.
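To give the flavour of those statistical predictions (the formulas below are the standard textbook results, not quoted in the essay itself), Einstein showed that the mean squared displacement of a Brownian particle grows linearly with time:

```latex
\[
  \langle x^{2} \rangle = 2Dt ,
  \qquad
  D = \frac{RT}{N_{A}} \cdot \frac{1}{6 \pi \eta r} ,
\]
```

where η is the viscosity of the fluid, r the radius of the particle and N_A Avogadro's number. Watching a suspended particle wander under a microscope thus yields a value for N_A, a novel and testable consequence unavailable to the phenomenological theory; Perrin's subsequent measurements confirmed it.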

The Special Theory of Relativity

When Einstein published his 1905 paper on special relativity, the first response remarked on how his ideas had been decisively refuted by Kaufmann's papers of that year and the next in the Annalen der Physik (in volumes 19 and 20, especially his Über die Konstitution des Elektrons (1906, p.487)). Kaufmann began, in italics, by saying that the "measurement results are not compatible with the Lorentz-Einstein fundamental assumptions". To see how convincing Kaufmann's work was considered, we may note that Lorentz wrote to Poincaré in March of 1906, saying that his theory was "in contradiction with Kaufmann's results, and I must abandon it." The latter agreed and could offer no advice. A glance through the journal and the absence of significant (indeed, for quite some time, any) response shows how seriously Kaufmann's objections were taken. (See Feyerabend, 1999, pp.146-148 for more detail on this and what follows.)

Planck, however, was committed to Einstein's ideas because he thought their "simplicity and generality" meant that they should be preferred, even in the face of experimental refutation. He attempted to re-examine Kaufmann's data and demonstrate that there were flaws, but instead he found that the data fitted Abraham's rival theory far more closely. Thereafter he presented his findings at the Deutsche Naturforscherversammlung in Stuttgart in September 1906, a rather amusing affair in which Abraham drew much applause by observing that since the Lorentz-Einstein theory was twice as far from Kaufmann's data as his own, it followed that his theory was twice as good (Physikalische Zeitschrift 7, 1906, pp.759-761). Planck tried but ultimately failed to convince Sommerfeld, Abraham or Bucherer that Einstein's ideas should be given time to develop (ibid). Ultimately, of course, they were accepted because of their "inner consistency" (Wien, 1909) or because Kaufmann's experiments lacked "the great simple universal principle" of relativity theory (von Laue – see below), so that the matter was decided well before Kaufmann's results were finally shown to have been flawed (Guye and Lavanchy, 1916).

Thus we find that Einstein's ideas succeeded because of a large measure of rhetoric from him, Bohr, Planck and others, and because of a commitment to the presuppositions of relativity theory, long after very many great and distinguished physicists had concluded with little doubt that experimental considerations had killed it. Indeed, by 1911 von Laue was writing that "a really experimental decision between the theory of Lorentz and the Relativity Theory is indeed not to be gained; and that the first of these nevertheless had receded into the background is chiefly due to the fact that, close as it comes to the Relativity Theory, yet it lacks the great simple universal principle, the possession of which lends the Relativity Theory from the start an imposing appearance" (see Das Relativitätsprinzip, 1911). Physicists were more interested in how they could use Einstein's ideas to explain the result of the Michelson-Morley experiment, even though in 1921 they were still confusing Lorentz's and Einstein's theories significantly enough for von Laue to address the matter (see the fourth edition of his text, by then entitled Die Relativitätstheorie, acceptance of the theory having grown enough to change its status from a mere "principle"). As a result of these theoretical and thematic factors, D.C. Miller's later (apparent) falsification of Einstein was given very little attention at all, even though it again took a long time (almost thirty years) for Shankland to find the mistake (1955, pp.167ff). (See Holton, 1988, for more discussion of these episodes in the history of physics.)

We see, then, that even in this instance, in which the notion of simplicity was relied upon throughout, no actual comparison of the number of entities or parsimony took place. The special theory was held to possess greater inherent simplicity both before and after any experiment and in spite of the negative results of Kaufmann’s work.

The General Case

There are two major difficulties with Ockham’s Razor. The first is that other things are rarely (if ever) equal, so the ceteris paribus clause is not satisfied. The second, perhaps still more important, objection is that the unknown (or additional) entities pared away may have explanatory power outside the domain of consideration, or they may offer further methodological suggestions which subsequently show that the utility (or even truth) granted to the former explanation was too narrow. The extra terms may be methodologically interesting and stimulating even if they turn out to be completely in error. As Niels Bohr was fond of saying, parsimony is judged after the event. It makes little methodological sense – to hammer the point home – to disallow additional entities before their consequences have been investigated; indeed, the application of parsimony in the examples we have considered above and throughout the history of science would likely have proved disastrous, at least with the benefit of hindsight (and quite plainly in the case of heliocentrism).

The lack of evidence for a posited entity is hardly a problem for scientists who are both willing and able to continue their efforts regardless. Moreover, this complaint risks putting the cart before the horse: a theory may predict the existence of an entity for which there is no evidence but which is as a result subsequently discovered. While there may be a limitless supply of alternative hypotheses (as asserted by the strong underdetermination thesis), or at least enough to require a decision between them (even if only on practical or financial grounds), not all of them will have (or may be suspected to have) interesting enough consequences to pursue. The methodological point is, once again: how can we know the utility (or truth) of apparently un-evidenced or unwarranted theories or entities before the fact? Given that so many have turned out to be of benefit in the past – or so goes the historical argument – why assume to the contrary now?

Historically (and today) theories are surrounded by anomalies, and additional entities are postulated to explain them, sometimes ad hoc (thus maintaining the theory) and sometimes requiring a replacement. The resulting alternative(s) would be empirically equivalent and adequate within the domain satisfied by the current theory, so disallowing hypotheses that fail the requirement for parsimony presupposes that they will also fail to be successful. This is what the Church imposed upon Galileo, and so to follow Ockham’s Razor leaves us with a dilemma: should we reject theories that appear to violate parsimony and risk stifling (or ending) their development, which may subsequently show otherwise; or should we instead reject the requirement for parsimony and accept that matters are more complex than the shortest route being the one to prefer?

If we return to our T1 and T2, we can note that T1 may employ different assumptions from T2 such that a straightforward comparison is not possible (Berkeley's idealism being a good example). Moreover, two hypotheses may be successful in different domains but mutually exclusive within their intersection, if there is one (complementarity, for instance). The believer in God or in aliens who declares that an agency other than man was responsible for a phenomenon does not make a straightforward choice but invokes their additional entity as part of an entire worldview (incorporating the existence of God or extraterrestrials respectively) which also explains or makes sense of a whole range of phenomena. The ceteris paribus clause here can also turn out to have failed: perhaps the confirming instances of T1 are apparently refuted (as with special relativity) but the addition of the further assumptions can explain these anomalies, or else neither theory may be satisfactory and the proper response might be to withhold judgment. T2 may in addition have greater predictive or explanatory (or both) power outside the domain of comparison, making the evaluation within D an interesting but not particularly devastating factor. Rather than straightforwardly dismissing T2 because of its auxiliary (and apparently unnecessary) assumptions, it may instead make methodological sense to investigate what consequences these have.

    Moving beyond any actual results or possibility thereof, the additional entities rejected by parsimony may not explain any other data in further domains but still provide a stimulus to work which subsequently uncovers further domains in which they are necessary, or which show the previous theory to have been but an approximation. In short, epistemological considerations are not sufficient to choose between theories and cannot be expected to account for scientific practice. While methodological concerns are also not necessarily of ultimate import, scientists appear to use them more as they press on, blithely unaware of or unconcerned with the philosophical ideas that are intended to provide them with guidance. This is that element of guesswork and certainty of resolve which prompts scientists to continue to work on ideas rejected by many of their contemporaries (plate tectonics, say, or Pauli’s positing of the neutrino), perhaps reminded of the changing fortunes of atomism over millennia.

    Another way to introduce or justify Ockham’s Razor is to assert that parsimonious theories are more likely to be correct. This is a problematic claim. Suppose we take the case of a theory which is regarded by all as highly successful, but which relies upon unobservable entities (such as sub-atomic particles, say). Is the theory true or just a useful instrument? Is it more parsimonious to suppose that these entities do or do not exist? In the absence of an ability to divine the fortunes of a theory in the years to come (or, in the case of atomism, the thousands of years), how are we to decide? To assume, as many apparently do, that parsimony is important because the universe is fundamentally simple, rather than complex (hence the search for grand theories, underlying all others), merely begs the question.

To Get Where We Are Going

    To summarise our discussion, then, the important point which renders parsimony methodologically unhelpful, if not explicitly detrimental, is that the consequences of additional entities or assumptions are impossible to state a priori. Since science is never completed, we are always in the position of before and never get to the after, which Bohr claimed was the only place parsimony could be introduced, much less judged.


    Selected References:

Einstein, A., Investigations on the Theory of the Brownian Movement (New York: Dover, 1956).
Einstein, A., Über das Relativitätsprinzip und die aus demselben gezogenen Folgerungen in Jahrbuch der Radioaktivität, vol. 4, 1907.
    Feyerabend, P.K., Knowledge, Science and Relativism (Cambridge: Cambridge University Press, 1999).
Fürth, R., Über einige Beziehungen zwischen klassischer Statistik und Quantenmechanik in Zeitschrift für Physik, vol. 81, 1933.
Guye, C.-E. and Lavanchy, C., Vérification expérimentale de la formule de Lorentz-Einstein par les rayons cathodiques de grande vitesse in Archives des sciences physiques et naturelles, vol. 42, pp. 286-299, 353-373, 441-448, 1916.
    Holton, G., Thematic Origins of Scientific Thought (Cambridge: Harvard University Press, 1988).
Kaufmann, W., Über die Konstitution des Elektrons in Annalen der Physik, vol. 19, 1906.
    Kuhn, T.S., The Copernican Revolution: Planetary Astronomy in the Development of Western Thought (Cambridge: Harvard University Press, 1957).
    Laue, M. von, Das Relativitätsprinzip (Braunschweig: Friedrich Vieweg & Son, 1911).
Laymon, R., Feyerabend, Brownian Motion and the Hiddenness of Refuting Facts in Philosophy of Science, vol. 44, pp. 225-247, 1977.
Physikalische Zeitschrift, vol. 7, pp. 759-761, 1906.
Shankland, R.S., A New Analysis of the Interferometer Observations of Dayton C. Miller in Reviews of Modern Physics, vol. 27, 1955.
    Wien, W., Über Elektronen (Leipzig: B.G. Teubner, 1909).
    By Paul Newall (2005)

Arguments for proliferation as a methodological principle are often associated with the philosopher of science Paul Feyerabend (1999) but they date back at least to J.S. Mill (1859 [1991]) and take the same form.

In the latter’s On Liberty of 1859, four reasons were given to advocate proliferation of theories and "forms of life".


The history of science is (often unfortunately) littered with examples of theories that were held to be true beyond doubt and yet crumbled all the same in spite of this certainty. Although case studies such as the so-called Galileo Affair have shown that the relationship between early science and religious strictures was considerably more nuanced than had previously been believed, such that the claim that science was "held back" by religion is problematic, nevertheless the assumption of infallibility has consequences for the speed with which we can discover an error. After all, why question a surety? It has tended to take people of extreme tenacity like Galileo to adduce doubt when there is little reason to do so before the erroneous nature of the certainty can eventually become clear, of which more below.


It is now straightforwardly accepted that science is a fallible venture, such that our theories are never certain and are always assumed to contain some errors (although obtaining this admission can be a painful process when it pays in rhetorical terms not to make it). Given that this is so, we can take two points from Mill's remarks: firstly, that although other theories may be flawed they may still be partly true (or possess some degree of verisimilitude, or truthlikeness); and, secondly, that by bringing together theories that conflict in some or all areas we can use one to identify the flaws in the other, and vice versa.

    Indeed, this "collision of adverse opinions" is for Mill an important means by which to come by knowledge. Even where an opinion strikes us as deluded or wholly ignorant, the very process of setting out why can be beneficial because it forces us to rehearse the reasons and hence to understand how a theory comes to be considered false rather than relying on an insistence that only a fool would think otherwise. This leads us to the next reason:


    Here Mill insists that this business of contesting ideas – no matter how sure we are of them – is valuable insofar as it prevents us holding them without appreciating why they were thought worthwhile originally. There is more, though:


Not only can an idea unrehearsed become held as a dogma, then, but this state of affairs can also prove a hindrance to further development, whether of the idea in question or others simultaneously or subsequently. It is here that we arrive at the full meaning of Mill's advocacy of pluralism and proliferation: even the best ideas can be improved by their clashing with others, even poor ones, because they are either enriched by their own flaws being highlighted or revealed, or else because the challenge leaves them untouched but better understood, forcing us to articulate them more clearly and to not insist upon them due to the power, prestige or authority of their supporters. Conversely, there is no value in even a true idea that is not continually subjected to challenge by even apparently false ones. Moreover, it has often been the case that the more sure of a theory people have been, the less inclined they have been to question its anomalies or continue to work on its development.

    One consequence of the principle of proliferation that may not be immediately apparent from Mill's discussion was elaborated upon by Feyerabend and is that these so-called "poor ideas" cannot be dismissed for the very same reasons that the "good ideas" cannot be accepted uncritically. Not only is the question of demarcating between good and bad ideas a thorny one itself, but the good ones typically started life as bad and those they replaced provide them with much of their content through the process of improvement. Any student of the history of science is also familiar with numerous examples of ostensibly hopeless theories that were regarded with scorn by all right-thinking people only to make a comeback (on several occasions in some instances, like atomism), such that it would eventually be thought preposterous that anyone would have imagined otherwise. At the time of Copernicus, say, the arguments marshalled by the Aristotelians against heliocentrism and geokineticism were so strong that Galileo had to appeal to reason over and above the clear evidence of the senses in order to explain why anyone should doubt geocentrism and geostaticism. This is not to say that a theory will make a triumphant return, of course, but only that it might and that, in the meantime, by keeping it in mind we remain aware why we (tentatively) hold to an improvement on it.

    Another reason to be interested in proliferation is that theory choice is no longer accepted to be a simple matter of agreement with the evidence. The importance of rhetoric, as well as social, political, economic and thematic factors, amongst others, means that the current superior status of one theory is insufficient grounds for supposing that this circumstance is due solely to the merits of the victor (and here Lysenkoism in the former Soviet Union is perhaps the most chilling example of a theory that succeeded thanks to ideology and at the cost of many lives). Notice also that this situating of theories within a wider context is unavoidable: all the models and ideas we develop have some beneficial aspects or we would not come up with them at all, but the questions are to whom and to what end? Where only one option exists we have no opportunity for comparison and hence no way of knowing whether we have the best of the matter.

Indeed, it can happen that a theory is incorrect in an important way but there is no experimental way of knowing this. An example discussed elsewhere concerns the phenomenological theory of gases, which was replaced by the kinetic theory thanks to Einstein's Investigations on the Theory of the Brownian Movement. In this case the consideration of an alternative theory, in spite of there being no experimental falsification, allowed Einstein to explain the same situation with a new theory that led to novel predictions, which turned the tide against the phenomenological theory and replaced it with the kinetic.

Arguments against proliferation have taken several forms. One important and wider issue is that pluralism runs contrary to one of the prevalent thematic ideals: the search for unity that runs through much of physics. From this perspective, it makes little sense to proliferate theories when the aim of science is (or should be) a small number of laws that can account for all phenomena. At base this approach relies on the same notion that both Galileo and the Church insisted upon; namely, that the truth is singular and hence even if our theories may be fallible they are still getting closer and closer to one reality. Methodologically speaking, then, the suggestion is that we should not be harking back to old, defeated theories but concentrating on the best we have and striving to improve them. In particular, our best theories (such as evolution or quantum mechanics) may be incomplete but it is unthinkable that they could be discarded at some point in the future, so we should work on the few remaining details and not concern ourselves with alternatives just to satisfy otherwise sound advice on understanding ideas.

    Notwithstanding that many theories in the past were in precisely this situation (consider the certainty with which Copernicus' writing was rejected, for instance), it is here that Feyerabend's argument applies. This singular approach presupposes that our theories can be straightforwardly developed by further application but the example of Brownian motion shows that sometimes this is not possible. More generally, if a theory T1 predicts circumstances C1 but what actually occurs is C2, even though C2 is (currently) experimentally indistinguishable from C1, then we have no reason to look at alternatives to T1 in spite of it being incorrect. If we instead proliferate theories and find that some T2 predicts C2 then we have a justification for trying to experimentally differentiate the two or, where this remains impossible, for studying the merits of the two otherwise. Another possibility, of course, is that the investigation of T2 allows us to tweak T1 slightly such that it does predict C2 while maintaining its other advantages. In this way proliferation leads to strengthening or deepening the content of theories.

    For Mill's part, he was very clear on why it can never suffice to rest content with one theory:


On this view, it is never enough to know a theory inside out; we also need to understand why alternatives exist, why they are believed to be true and why they fail in order to appreciate the value of the theory as an improvement or more deserving of our attention. The clash between advocates and deniers of the phenomenon of global warming, for example, has pushed both sides to reconsider their arguments and strengthen them, allowing flaws to be exposed (although political and other pressures are such that a good argument is rarely enough to change a policy), while the challenge of creationism and the elaboration of intelligent design have forced biologists to enter the public arena and explain why evolutionary theory is so highly confirmed and the foundation of biology.


Here Mill goes further, insisting that "the arguments of adversaries" deserve to be heard not merely as part of learning about the superiority of the current theory but in their very strongest form. This is no recommendation of a superficial treatment, then, but a conviction that by supporting and developing alternatives we contribute to the improvement of our knowledge, even where these alternatives achieve nothing when considered in isolation. This is to say that a theory may be preposterous on its own but becomes of benefit to us when taken as providing an ever-present challenge to others. It goes without saying that the stronger it is, the greater our confidence can be that our current ideas have survived critique.


    It was this last principle that Feyerabend embodied, even though there are always enough people who will paint those who apparently depart from orthodoxy as heretics by implication or opponents of clear thinking. On the contrary, it is the effort to expound, support and defend arguments we do not agree with that allows us to truly understand them enough to dissent and prefer an alternative. Proliferation enjoins upon us both a methodological pluralism and a belief that a theory is of no value unless subject to a continuous process of challenge, of which even the most dismissed of ideas is an indispensable part.

    Another objection to proliferation – often the most common – is that it may be a good idea in the abstract but not in practice. Unfortunately for scientists and vacuum cleaner salesmen alike, there is only so much money to go around and hence we cannot afford to allocate resources to any and all ideas that come along. A theory can be challenged by a well-developed and plausible alternative, in keeping with the argument so far, but little or nothing is to be gained (indeed, it would even be detrimental) by taking funding from our best theories to support hopeless substitutes.

The first point to note about this rejoinder is that it effectively begs the question against the alternatives: if we deny support to an idea we can hardly criticise it later for being undeveloped and not worthy of consideration. Theories start their lives riddled with internal contradictions and partial (or even complete) disagreement with the evidence but over time may – or may not – prove their worth and begin to be taken seriously. Perhaps it is because this process of acceptance is usually slow (even where so-called "revolutions" in science are taken to have occurred, a claim that is increasingly untenable in historiographic terms) that we fail to notice it and forget that there were times when our best theories were themselves rejected as absurd or unlikely. Rejecting an idea because it is prima facie false would thus have been catastrophic methodological advice for science in the past and there is no reason to think otherwise today unless we presuppose that the current state of science and knowledge is approximately the final one, a conceit that seems to affect all ages.

What is also ignored in this response is that an idea is not credible only in proportion to how much work has been done on it and how much money is behind it. The early quantum theory was rejected with a considerable measure of displeasure by some physicists (Heisenberg told Pauli in a letter of 1926 that "[t]he more I ponder the physical part of Schroedinger's theory, the more disgusting it appears to me" (p.15 in Holton, 1988)) because it disagreed strongly with their thematic preferences, while the notion that there is or is not a higher intelligence involved in the creation or sustaining of the universe is no less unpalatable to some. Moreover, there are power structures involved in science just as anywhere else and those who have an investment in the prestige and financial rewards associated with a successful theory may not be as open to new ideas and the redistribution of funding when appropriate as their claims to the contrary might suggest. Money is thrown at projects with little or no chance of success (for example, missile defence systems) or no proven achievements (such as string theory) not because of some inherent value but because the rhetoric or personalities behind them do a better job of convincing others, while a skilful synthesis can create unity where there was none (Dobzhansky's in evolutionary biology, say) or a consensus can be constructed (like the usage of the DSM in psychiatry). Insisting that time and funding are limited, then, ignores their already unequal distribution and that the relative status of a theory is determined by far more than its empirical support. That the decision between theories is complex and based on many factors, some of them extra-scientific, is no reason to restrict our efforts to few or ignore the arguments for proliferation.

    Tempering this realisation is the frequent (and necessary, for Feyerabend) association of proliferation with tenacity, the tendency of scientists (and people in all walks of life) to persist with their ideas even in the face of the most adverse of difficulties. Once again, the history of science is replete with examples of theories or conjectures that by all methodological standards in use should have been discarded but were maintained in spite of experimental results to the contrary or the most grievous of conceptual problems (see the discussion of falsificationism for more instances). Perhaps the finest illustration of tenacity – and its link with thematic commitment – is provided by Einstein's response to the question of how he would have felt if Eddington's expedition of 1919 had failed to confirm his theory of general relativity: "... then I would have been sorry for the dear Lord - the theory is correct."

The coupling of proliferation and tenacity thus goes some way to assuaging the concerns of those who would prefer not to waste time on what they consider to be bad ideas. As Mill noted, proliferation requires that we work with the strongest possible version of defeated theories in order to better our own, and vice versa, and the way to achieve this is to ensure we persist with both even when all seems lost or when it would seem absurd to withhold assent from the dominant theory. This is not to say that no division of labour can be employed and that we each have to consider every idea in the marketplace, but only that it is in our interest to see that those who wish to study them are able to. This means that scientists working on a theory do not have to reassign part of their time to develop alternatives in the name of proliferation but that we should not condemn these alternatives out of hand. They may compete for funding, of course, just as we would expect given the parallel principle of tenacity, but we should view their rhetoric and behaviour in these terms rather than as indicative of a neutral claim to superiority and financial support over and above the requirements of proliferation.

    When Mill set out his arguments for allowing many "forms of life" he did not have in mind only the laboratory, although proliferation outside science tends to be resisted robustly – especially when it comes to alternative medicines and the spectre of frauds and charlatans putting the health of their victims at risk. However, if we allow the benefits of supporting different ideas covered above then the same applies to methodologies, with the current dominance or pre-eminence of one approach (science) no guarantee of its continuing success – or the demerits of alternatives – any more than this could be said of theories. Extending democracy to all traditions is some way off, though, even where it is agreed that self-determination should have wider application.

    Proliferation is thus a principle that makes our attitude to life and learning inclusive, as well as reflexive and genuinely fallible. It is not a rule any more than parsimony is but functions to keep knowledge an open and unfinished process by never letting us stop and be satisfied with what we have.


    ---

    Selected References:


    Feyerabend, P.K., Knowledge, Science and Relativism (Cambridge: Cambridge University Press, 1999).
    Holton, G., Thematic Origins of Scientific Thought (Cambridge: Harvard University Press, 1988).
Mill, J.S., On Liberty [1859] (Oxford: Oxford University Press, 1991).

By Paul Newall (2005)

    In this essay we look at rhetoric, introducing the subject and some of its traditional divisions before providing a guide to common rhetorical figures and their uses. As we progress, we will see why rhetoric is of crucial importance in understanding philosophy and indeed any area of inquiry.

    What is Rhetoric?

    There have been many different definitions of rhetoric over its long history, which stretches back to the Ancient Greeks and Romans in particular. However, it is generally understood as the study of writing and speaking effectively; that is, to appreciate how language is at work when we write or speak it and employ any lessons learned in making our own writing and speaking better. What we mean by "better" is itself up for debate, of course, and it is here that the negative conception of rhetoric comes into play - that of rhetoric as the art of persuasion, where convincing others is seen as the hand-waving and sophistry that is used in place of reasoned argument.

    This distinction between content and form - what is said and how we say it - was emphasised by Aristotle as logos and lexis, or what is communicated and how respectively. Ultimately, though, this distinction proved untenable, based on a view of language as little more than the means by which we share our thoughts and failing to take into account the inseparability of ideas and the language used to express them. Indeed, how we say things is precisely the way in which we ensure that our desired meaning has been transmitted to others, so there can be no passing on of ideas without also taking into account lexis.

    The Divisions of Rhetoric

    Rhetoric has been studied for very many years as a result of its crucial importance, and a number of divisions have been made. The first was a tripartite distinction between the appeals that are possible when speaking or writing, namely:


    Logos, or the appeal to reason;
    Pathos, or the appeal to emotion; and
Ethos, or the appeal to character.

Notice that here we can immediately see why the complaint that an argument is "mere rhetoric" is misguided: an argument can contain more or less reasoning and a lot or little emotive language, but both are rhetoric intended to convince. For ethos, Aristotle considered an appeal to character to be any attempt on the part of the speaker or writer to establish his or her knowledge of the subject under discussion and their benevolence towards the audience, both providing credibility for what would follow.

    This brings us to the next division, also three-way, of rhetoric in the larger sense. We distinguish between:


    Kairos, or the occasions for speech;
    Audience, or who will hear or read it; and
Decorum, or fitting words and subject together.

Kairos includes considerations like the contexts for a speech or piece of writing, while audience looks at who will hear or read it and in what setting. Traditionally oratory was split again into three: judicial (or forensic), deliberative (or legislative) and epideictic (or ceremonial). Different requirements like these would and do occasion different rhetorical approaches. Decorum, lastly, deals with making appropriate use of rhetoric, depending on both kairos and audience.

    There were also five canons of rhetoric:


    Invention, or coming up with something to say in the first place;
    Arrangement, or the order of a discourse;
    Style, or how it is said;
    Memory, or how the orator recalls information; and
Delivery, or the way in which the discourse is performed.

Some of these are straightforward but others are quite subtle. Arrangement, for example, involved the study of how to put together a speech or piece of writing. Should we start with the conclusion or only give it at the close? Do we provide counterarguments separately or include them in the main body of our own argument? Do we need to set the scene, as it were, or should the discussion be formal and move straight to the meat of it? And so on. Likewise memory included not just the powers of recollection of the speaker (after all, do we use notes or try to remember all the content, which often looks much more impressive?), but also estimates of how much the audience would be able to keep in mind. Is it necessary to point listeners to remarks made earlier, for instance, or can we assume they would recall them unaided? When in the discourse should reminders be placed? And so on again. Lastly, the delivery of a speech has a great deal to do with its reception, as anyone familiar with comedians will know. Does a situation call for a serious approach, or would some jokes be welcome? Will a deadpan voice work or should stress and emphasis be placed on words, phrases and particular ideas? If so, which and when?

All these things have their role to play in speaking and writing, hence the importance of the study of rhetoric. For our purposes, rhetoric is involved in philosophical arguments and discussion just as it is inevitable in all other areas, as we said. With that in mind, we can now analyse specific rhetorical devices that have occurred often enough that their use and effect are well understood.

    A Guide to Rhetorical Devices

In no particular order, the following guide gives copious instances of rhetorical devices at work and attempts to explain both how they work and why we should be interested. By the end we should have increased our ability to spot them in the speech or writing of others and hence determine how well they have been employed, as well as learning how to use them ourselves.

    Expletives

    We tend to think of expletives as synonymous with swear words but the latter are just one example of this rhetorical device. An expletive is a word or short phrase that we use to lend emphasis to words on either side of it. Compare these two sentences, for example:


    What we find is that the new tax law is fundamentally unjust.
What we find, then, is that the new tax law is fundamentally unjust.

Both impart the same information but notice that the expletive in the second (the word "then") signals to the reader that a summation of prior discussion is coming, or that a conclusion is to be given. The contrast is even more apparent if we speak them aloud: in the second, again, the expletive provides the emphasis and actually forces us to slow down as we say the words, providing a cue for any listener to note that the important point has been reached.

    Sometimes an expletive can occur at the start of a sentence:


In brief, you should be more careful.

On other occasions, although less often, it can be placed at the end:


The result was to be expected, of course.

In both, the expletive lends the statements a force they would otherwise lack (to test this, try repeating them aloud as before). The apparently superfluous "of course" in the second makes the statement emphatic and suggests to the listener or reader that it was so straightforward as to hardly be worth investigating, while the first lets us know that a précis is to follow.

Expletives are used so often in printed dialogue that we barely notice them:


    "What I meant", he said, "was that you should do something about it." If we experiment with the placement of the expletive we can see how easy it is to ruin the effect, or even make the line difficult to read at all:


    "What I meant was that", he said, "you should do something about it."
    "What I", he said, "meant was that you should do something about it."
    "What I meant was that you should", he said, "do something about it."
    And so on. Once we understand how expletives work and how to use them, we can begin to spot them everywhere - in your humble narrator's musings, for example. We should expect to find them whenever a writer is trying to lead us through a chain of reasoning, say, but perhaps be more wary when we observe them in a political speech.

    Similes, Analogies and Metaphors

    One of the most familiar devices in rhetoric, a simile involves comparing two things that share a resemblance in at least one way - usually in vividly descriptive terms:


    Their passing cut through the defence like a rapier.
    Her smile was like sunshine, warming me to the core.
    He was as silent as a church mouse.
As the rock stands fast, so was his will resolved.

There are so many examples of similes that it would be impossible to list them all here, but they often involve words such as "like", "as" or "does", and their negations. The danger in using them is that sometimes the comparison may not be close enough or accurate at all:


    We need academic consensus like the very air we breathe.
As the crusaders were shackled and bound, so are the guardians of the freedom of speech today.

Emotive similes can have a considerable effect but overdoing them can result in incredulity, especially when images of warfare or a struggle against oppression are invoked. They are closely related to analogies (and indeed these may be employed together), which also invite a comparison but use it to explain a difficult concept or idea by reference to a simpler one:


The man who keeps silent in the face of tyranny is as guilty as he who notices a fire and fails to raise the alarm.

The difficulty to be avoided, however, is offering false analogies, which are fallacious. Nevertheless, a speaker or writer hoping to sway opinion may resort to these and hence the question we must ask ourselves is: are the two (or more) things compared actually analogous?

    When instead we believe that two situations are so close as to be identical, we can appeal to metaphors:


The lack of subtlety in this discussion is killing all possibility of compromise.
    This debate is a war and we must use all weapons at our disposal.
Nature is beautiful to behold but seldom gives up her secrets easily. She must be wooed and approached with caution and reverence.

Like similes and analogies, metaphors are very common but the choice of terms has important consequences for how they will be understood by listeners or readers. For example, consider the effect of these alternatives:


    This silly idea is becoming more popular.
    This silly idea is starting to infect public opinion.
    This silly idea has become an infestation.
This silly idea is a cancer on our society.

The implications for action here differ in strength and the emotive appeal of describing ideas as diseases is one that many writers have relied upon. However, the language used often has far more to do with the opinions of the author than the reality of the situation. Thus while selecting an appropriate metaphor (or analogy or simile) requires careful consideration, we also need to ask whether those chosen by others are accurate to their purpose or not.

    Other forms of metaphor include metonymy and personification. The first involves a metaphor where the comparison is with something associated with but not identical to the target of discussion:


    The crown brought the prosecution against her.
The state cares little for my concerns.

Plainly no prosecutions are brought against people by the kind of crown a monarch would wear; instead "the crown" is understood in its wider role as synonymous with the workings of government. Similarly, we appreciate that "the state" is not really a person who does or does not care but a metaphor for what we mean.

    Personification, on the other hand, is where we ascribe human characteristics to objects or situations (or even animals, which is typically called anthropomorphism):


    The legislation is fighting me on this issue.
    This steak is still kicking.
    That tackle was unforgiving.
    I've known more trustworthy cats than people, alas.
    Truth is no respecter of hopes.
Even the very air around me cried out in protest.

Overdoing this can result in strained descriptions, of course, but personification allows us to recast a potentially difficult idea in human terms and hence grasp it more easily. Even though it may make no sense in actuality to refer to the sea as a fickle master, say, those with minimal experience of maritime conditions will easily understand what is meant.

    Hyperbole

    Sometimes we overstate things for rhetorical effect:


    There were millions of people at the bus stop today.
    It took me forever to finish the essay.
This political measure will mean the end of civilisation as we know it.

This intentional exaggeration is obviously not to be taken literally and is usually restricted in scope to one aspect of the sentence. Hyperbole is easily the most common rhetorical figure but can lose its impact if overdone. In particular, too much hyperbole can lead to readers or listeners not taking a piece seriously at all.

    Understatement

    The use of understatement is something that satirists have a mastery of, but as a rhetorical device we can use it to try to persuade someone by rewording a sentence in less offensive terms. For example, suppose we believe a person's idea to be in error and wish to point this out:


    I think there may be some additional factors that you may not have accounted for.
    Your analysis is far too simplistic.
No one will take such an idiotic theory seriously.

There are many other alternatives we could use, but consider that if we want to convince the person that they are mistaken then we need to pitch our objections accordingly. Perhaps the idea really is idiotic in our opinion and we wonder if the proponent is actually bipedal or has grazed his or her knuckles, but is saying as much likely to incline them to change their opinion? For the second suggestion, it may depend on who we are talking to: a friend, say, may welcome the criticism but a stranger may not appreciate his or her thought being called simplistic, even if it is. Some people might still take offence at the first version, but the determining influences include what we want to achieve and whom we are talking to or writing for. How likely is a person to listen to our critique if they suspect we are talking down to them or dismissing them?

    Sadly there are others who like to indulge in invective, particularly since the advent of the Internet and the risk-free nature of much commentary (that is, we can say just about anything without fear of actual retaliation), and write for a specific audience of those who apparently enjoy the feeling of superiority that comes from joining a group that insults another for whatever reasons. Although the term rhetoric is often applied to such behaviour, in the negative sense we discussed above, this is more a psychological issue than a philosophical one.

    Litotes

Litotes is an understatement formed by the denial of an opposite. This sounds confusing but is actually quite straightforward and a common rhetorical device. For example:


Performances like that from the All Blacks are not uncommon.

Here "not uncommon" denies "uncommon" and therefore implies the opposite - "common". However, compare this with the plainer version:


Performances like that from the All Blacks are common.

Although this imparts the same information, there is no understatement - it just reports the situation, and no more. On the other hand, the litotes in the first suggests that more could be said and that by describing the performances as "common" we were actually understating the matter somewhat.

    Questions

    The use of questions can take several forms, with different effects depending on what the writer or speaker wishes to achieve. Consider this example:


What of the possibility that social factors are to blame for the collapse? This criticism is misguided because...
Here a question is asked and then answered, which can have several benefits: on the one hand, it allows us to raise issues that the reader or listener may have in mind - anticipating objections, for instance; on the other, we can maintain interest in the discussion and keep the attention of readers and/or listeners with well-placed queries. This latter is a technique teachers often use, since by posing a question and pausing before nominating someone to answer, all the students have to think about it in case they are the one eventually asked. This device is called hypophora.

    Some other possible uses include the following:


    How can we address the economic difficulties in which we find ourselves? Firstly, we can look to...
    What are the consequences of such an approach to history? There are several, of which the most important is...
In the first, hypophora is used to change the scope or direction of the discussion, while in the second it allows the setting out of implications that the reader or listener may not have considered or understood.

    A question that is asked but deliberately does not require an answer is called rhetorical (or erotesis). It can be used to state the obvious, as it were:


What kind of person would bet against the sun rising tomorrow, though?
Alternatively, it can be employed to create a favourable or unfavourable impression of an idea or argument:


    This kind of thinking requires that we give up our sovereignty. Is that what we want?
    In examples like this the rhetorical question can help to gloss over an implication (giving up sovereignty, in this case) that may not follow; moving from a questionable claim to demanding a response ("is that what we want?") can put the reader or listener on the back foot, requiring that they deny a conclusion rather than argue that the reasoning to get to it was unsound.

    Another possibility is that a rhetorical question needs no answer because the preceding discussion has already covered it:


You know that a vote for my opponent would cost you your job and that you cannot afford to be out of work. Will you support him, then?
A potential problem with instances such as this, which are common in political debates, is that they can (deliberately or otherwise) simplify matters in an attempt to force the reader or listener to act in a specific way. For example:


    Do you really want pseudoscience taught in science classes? Do you not care about our children's education at all? Do you want religion in the schools?
    This kind of rhetorical device thus allows us to link a series of complex issues via loaded questions. Instead of inviting debate on what constitutes pseudoscience, the ends to which education should aim or state intervention in schooling, potential discussion is reduced to yes/no and either/or false dilemmas that strip away any subtlety. These tactics are increasingly common, unfortunately, but can be noted easily enough with practice.

    Lastly, procatalepsis is when questions are asked and answered by the writer or speaker, usually by anticipating objections:


    It is typically suggested that this team will lack the strength in midfield to cope with the opposition, but this neglects the experience gained in the recent tour against...
    It is often thought that the only way to address poverty is via governmental initiatives. However, I would advocate a greater role for...
Possible counterarguments can be presented in their strongest form or as straw men; usually this depends on how charitable the writer or speaker is being to the ideas he or she is trying to improve on, and plainly a meaningful critique relies upon charity far more than upon a swift and flawed description offered merely to be knocked down.

    Asyndetons and Polysyndetons

    Consider the following sentence:


The All Blacks have power, grace, speed, strength.
Reading it, we might expect it to have the ending "speed and strength", but this conjunction ("and") is missing. It has the effect of making it seem that the list could have gone on. Another might be as follows:


    I wasted my afternoon reading, writing, thinking, dreaming.
    In like fashion, it almost forces us to skip through the sentence in expectation of more to come. These are examples of asyndetons, when conjunctions are left out to achieve this sense of diversity, or even add emphasis by what seems like an afterthought:


    Spencer was a wizard, a master.
    Spencer was a wizard and a master.
    Of these two, the first conjures up (excuse the pun) an image of the writer's thought process, as though he or she is struggling to describe Spencer and settles on "wizard" before rethinking at the last moment and amplifying with "master".

The opposite of the asyndeton is the polysyndeton. This time, instead of leaving out conjunctions, they are all put in:


The All Blacks have power and grace and speed and strength.
Now the rhetorical effect is one of trying to put into a few words something that is far bigger and too complex to capture in a single sentence. The best place to look for examples of polysyndetons is the Bible, especially the King James version, but we can use it whenever trying to create the impression of describing or explaining something while barely scratching the surface.

    Parallelism and Chiasmus

    Consider the following sentence:


    When the day is over and the deal is done, let me know.
    Here we have a balanced structure, where the first part ("day is over") is paralleled by the second ("deal is done"). This is called parallelism and helps to show a reader or listener that the parts of a sentence have equal import. It can be used particularly to aid with longer, more complicated statements. For example:


Due to the speed of their passing; because of the lines of their running; owing to the pace of their attacks; and thanks to the structure of their defence, the All Blacks played beautifully again.
Here the similarity between the way each of the reasons is given allows us to recognise that they are parts of a list and keep a grip on where the sentence is going, even though it is long (and could be still longer).

    The converse is chiasmus, sometimes called reverse parallelism. Instead of the parallel structure ("day over" and "deal done"), the latter is reversed:


    It was a long day but the night was short.
    The expected parallel ("long day" and "short night") is altered, with the effect that the emphasis is different. Compare:


It was a long day but a short night.
In the former it seems as though a specific contrast is being made, pointing to some significance about the night, while the latter reads rather flat. Chiasmus can be made more complex, involving many layers, and the question of when to use it in place of parallelism is often one of judging how a sentence feels or sounds.
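We can make the contrast explicit by marking the structure: parallelism repeats the order of its elements (A-B, A-B: "long day", "short night"), while chiasmus reverses it (A-B, B-A: "long day", "night was short"). This criss-cross pattern is what gives the device its name, after the shape of the Greek letter chi (χ).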

    Apophasis

    Sometimes a writer or speaker will deliberately mention something while claiming not to:


    Luckily we need not discuss my opponent's marital infidelities when evaluating his claim to hold the moral high ground.
    I would call you a liar and a cheat if you weren't my best friend.
    The allusions or references here are called apophasis (or sometimes occupatio) and involve bringing up an issue (usually a damaging one) while maintaining a pretence of ignoring it, with considerable rhetorical effect. We can notice immediately that the first instance is ad hominem tu quoque while both are intentionally disingenuous. Compare these examples:


    I do not mean to imply that a policy of aggressive intervention should be pursued; rather, I advocate...
    I'm sure I don't need to remind you, madam, that there is no smoking allowed on this aircraft.
    Here the purpose is not to cast aspersions but to clarify: in the former, to explain exactly what is being argued; and in the latter, to gently call attention to a transgression without causing too much embarrassment. Both sets of examples are quite easy to spot but instances like the earlier pair are typically found in satires and are often fallacious.

    Enthymemes

    Consider these sentences:


    Great teams need loyal players, which is why ours is always struggling.
    Since she lost the case, she must have been guilty.
    There are only two options available to us and we have seen that the first failed.
An enthymeme is an informal syllogism in which one of the premises or the conclusion is missing. In the above examples, the minor premise, the major premise and the conclusion respectively are left out. Enthymemes are used when the omissions are assumed to be both understood and accepted by the reader or listener, in which case they read or sound gently understated when compared to a formal syllogism. When a missing premise is not agreed, however, they become unsound; or when the absent conclusion does not follow they turn into non sequiturs. When used skilfully in the wrong way (or right, depending on your perspective), they enable sleight of hand in argument because faulty premises can be concealed behind enthymemes - without detection, too, if delivered quickly.
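To see how a hidden premise does its work, consider one plausible reconstruction of the second example as a full syllogism:


    Major premise (omitted): anyone who loses a case is guilty.
    Minor premise (stated): she lost the case.
    Conclusion (stated): she must have been guilty.
    Set out in full, the dubious major premise is exposed at once; the enthymeme persuades precisely because that premise is never stated and so never examined.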

    Metanoia

    If we want to clarify or expand upon a statement, particularly to widen its scope, we can use metanoia (also called correctio):


    Carter is already the best five-eighth of the modern era - no, of all time.
Your proposal will affect everyone in this area, or even the entire region.
    You fail to realise the impact of these measures - or at least you have not considered the consequences in enough depth.
The additional information can read or sound like an afterthought or as part of the discussion, depending on how this device is used. Metanoia may seem quite close to the slippery slope fallacy, but of the examples above only the second risks it. When the speaker or writer seems to urge us into concluding more than is actually implied, or else moves from a moderate to a bold claim using metanoia, then we should be wary of this error in reasoning.

    Aporia

    A rhetorical device used to express uncertainty or irrelevance is aporia:


    I am not convinced that the argument for gun ownership has yet been made in a credible form, but what is clear is that...
    I have not been able to come to a decision about the new policy, since there seem to be good arguments both for and against it.
    While I accept that my opponent has offered excellent criticisms of this proposal, this has no bearing on my own suggestions for...
    As we can see from these examples, typically the doubt indicated is of a reserved form and can be employed to move a dialogue forward by admitting indecision or steering clear of areas with no bearing on it.

    Hyperbatons

    When writing or speech involves moving away from the expected word ordering, we say that hyperbatons are used. For example, delayed epithets involve placing an adjective after the noun it is describing:


His were motives indefinable.
Not all possibilities sound "right", though, and hence delayed epithets are a tricky device to use. Compare:


    That was a movie good.
This is a matter of judgement: there is no principled difference between the two other than that one "works" while the other does not.

    Another form of epithet is the divided, in which two adjectives are separated by the noun they describe:


    It was a bloody war and brutal.
    It needs a warmer month, less chilly.
    Once again, this hyperbatonic style is a matter of feeling that we have the correct usage, since if overdone it can seem false, affected or needlessly poetic.

    The last instance is parenthesis, in which another phrase or term is inserted parenthetically (hence the name) into a sentence:


    My main concern - and this, at last, is the crux of the matter - is that this proposal does away with the final vestiges of personal responsibility.
    There are times (this may be one of them) when excuses are just not enough.
    These devices are, again, ones used extensively by your humble narrator. Notice, however, that there is a slight difference between the examples: the parentheses (or brackets) are slightly less pronounced as an interruption than the dashes. The latter do violence, as it were, to the flow of the discussion, halting it abruptly to make another, perhaps more important point, while the brackets suffice for short asides. The effect of either is even more dynamic and arresting in speech, since they give the impression of spontaneity - suddenly coming up with a new idea or objection that cannot wait. Often the speaker has actually been working towards such a statement but uses parenthesis to introduce it more dramatically.

    Concluding Remarks

    In summary, then, there is no such thing as "too much rhetoric". We can try to criticise a speaker or writer for including too much pathos at the expense of logos, or vice versa, but the effectiveness of a discourse depends on many other things such as location, audience and style. If a person fails to be convinced by our arguments, it is altogether too quick to assume them to be a textbook example of idiocy on rollerblades; instead, we may have misjudged any number of rhetorical aspects, including focusing too much on reasoning and not taking sufficient account of the many other facets of rhetoric. By familiarising ourselves with the many rhetorical devices we can come to understand why some speeches or pieces of writing persuade while others do not, as well as to notice when others try to use these same devices to influence us.
    Teaser Paragraph: Publish Date: 06/21/2005 Article Image:

By Paul Newall (2005)

In our earlier discussion of epistemology we looked at what the term means, some basics of the historical development of the subject, the idea that knowledge could be defined as justified true belief and some problems with this, the problem of induction and some possible ways to come by knowledge. In this second instalment, we will expand on some of these areas and consider the problem of skepticism in particular as a means to appreciate why epistemology is important, both in philosophy and everyday life.

    The Problem of Skepticism

    As we noted before, there are several problems in epistemology. We could identify the main ones as follows:


    What can we know, if anything; and
    How can we know it?
    We can further divide the first into two questions: can we know anything at all and, if so, what can be known? Put this starkly, the answers seem obvious: we know plenty of things, and presume many of them before we can wonder about these issues in the first place. Indeed, this seems so commonsensical that doubting it can strike us as academic and/or pointless. Nevertheless, there were apparently plenty of straightforward notions we had in the past that turned out to be mistaken, so we can at least take a look at the matter.

    Before we do so, of course, we need to at least have an understanding of what we mean by knowledge. The best known meaning, as we said, is justified true belief, and we considered some of its potential weaknesses. Notwithstanding these, the justification of beliefs has typically been the most important aspect of any claim to have knowledge. Suppose, for example, we take an ordinary belief:


I am reading this essay on my computer.
This is apparently quite straightforward, so how could we doubt it? There are times, however, when obvious beliefs turn out to be in error. In the past, for example, it was as plain that the Earth does not move as we now consider it to be that it does. What about optical illusions, too, or mirages and hallucinations? A common experience for most people, for instance, is seeing someone we know in the distance, only to find when we get closer that we were mistaken. Likewise, sometimes a bush or a tree can look like an animal or person at first glance. Another problem, often referred to by philosophers, is dreaming. We seem to have vivid dreams in which events that seem real turn out not to be when we wake up (although this basic story can become even more complicated when we ask how we know when we are awake and when we are dreaming). And so on.

If we wish to be skeptical, then, we can doubt the ostensibly ordinary belief in lots of ways. Perhaps we are dreaming the experience, or else hallucinating it? Notice that the response "if you're not sure, just reach out and touch the thing" is defeated by these possibilities. We could say that there are ways to test for both, such as by the traditional pinch, but why should this work when we can usually "feel" things in our dreams? We might claim that a good pinch has always sufficed before (the kind usually dispensed on the first of the month by overzealous people with good memories for dates), but why should what happened before continue to happen in the same way in future? (This is the problem of induction in one of its forms, of course.)

    Can we know anything if we keep on in this fashion, always questioning what we claim to know when it appears to rest on other pieces of knowledge that can themselves be doubted, and so on forever? We seem to be trapped in an infinite regress, so how can we escape it? Historically there have been two main answers: we break out either via experience (the road taken by empiricism) or by our reason (the path of rationalism). Skeptics, in turn, have been critical of both. We will look at these after we consider some initial objections to skepticism itself.

    Arguments against Skepticism

    Why should we pay any attention to skepticism at all when it seems to run counter to what we assume on a day-to-day basis? There are several basic arguments against skepticism that are usually the first levelled against it.

    Is skepticism self-refuting?

    Having all these doubts about knowledge, we could just say "well, we can't know anything". Suppose we consider the proposition "nothing can be known", though: isn't it self-refuting? This is perhaps the oldest of charges against skepticism; namely, that it defeats itself. After all, if we know that nothing can be known then surely this is one thing that can be known, and hence skepticism is false?

There are two responses to this criticism. On the one hand, the skeptic can say that nothing can be known except that nothing can be known, which is remembered in Socrates' famous dictum that he knew only that he knew nothing. This was sometimes called academic skepticism and probably seems like an evasive rejoinder, but it still works. If there is one piece of knowledge, though, then why not more? Although academic skepticism defeats the self-refuting problem, then, it raises the question of how we come to know that there is one and only one thing that can be known, and can feel unsatisfactory.

The second possibility for the skeptic is simply to admit that even the claim "nothing can be known" can itself not be known, which is consistent with his or her skepticism and again answers the difficulty. This is Pyrrhonian skepticism, named after its principal exponent. We now say that nothing can be known, including that nothing can be known. If we cannot show that this is the case, though, why should we be worried about it? The skeptic can answer that just as opponents of skepticism use arguments to show that knowledge exists, we can equally well use similar arguments to show that it does not – turning their own tools against them. This need not commit us to actually believing that arguments can establish knowledge, even though we use them, particularly if we use reductio ad absurdum tactics. Moreover, even if we reject total (or global) skepticism, it does not mean we are any closer to answering the problems associated with knowledge and our ordinary beliefs, as we considered above.

    Lastly, the skeptic can use the theory of descriptions to rewrite the claim, as we discussed in our initial look at epistemology and, in more detail, in our investigation of analytic philosophy.

    The impracticality of skepticism

    Even if we accept that skepticism cannot be dismissed outright, is it not a highly inconvenient – if not downright impractical – position to hold? Suppose we have to make our way to the top floor of a building and are thoroughgoing skeptics. We could take the lift, but how do we know it will work? Shouldn't we climb up instead? Then again, how do we know climbing will work, or even that the building is there at all? What about when we want to get down again? Isn't jumping just as sensible an option as taking the lift or stairs, given that we don’t really know anything?

    These are the kinds of questions that were and still are raised to skeptics, and they are usually intended to be reductios just like the ones skeptics use themselves. It seems like a ridiculous idea to jump rather than use the stairs, but the suggestion is that this kind of impractical (if not absurd) idea is what skepticism leads to. How can we answer it?

    This is a difficult objection and few people have led consistently skeptical lives. It is said that Pyrrho did, and stories are told about him getting into all kinds of scrapes because of his refusal to "know" anything (usually he was rescued by his followers and – amazingly, perhaps – respected for his dependable behaviour). We would probably be skeptical ourselves, though, that he was truly consistent, since a decision to fall into a ditch is somewhat different from walking off a high cliff.

Those skeptics who were or are not quite like Pyrrho tend to say that they merely act in accordance with tradition and familiar patterns of conduct. In that case we avoid walking out in front of cars, and keep eating, because we were taught these things as children before we began to think about skepticism. When we eat, then, it is not so much because we know that food of some form is required to sustain life but because we have fallen into the habit, or else because it tastes good and is enjoyable. After all, do we really involve knowledge when we go to a restaurant or make a sandwich? Opponents of skepticism would probably say that we do, since how do we know the sensations of eating will be the same as they were, that the food even exists at all or even that we should do as others (and we) have always done? A skeptic might say, in response, that we eat because our stomachs start to rumble and that we do not jump off buildings because we become afraid, not because of any claim to know anything about either. Why be afraid, though, unless we know what might result?

In more recent times these criticisms have been used against the idea that all notions are equally true. Nevertheless, notice again that even though these concerns may seem to count against skepticism about everything, they once again do not answer the problems we identified above with straightforwardly claiming to have knowledge. Not worrying ourselves about global skepticism does not mean that these problems go away, so we have to be careful not to throw the baby out with the bathwater.

    Is skepticism irrelevant?

If skepticism seems impractical and self-refuting, though, and only of concern to people analysing their knowledge claims in depth, why should we care about it at all? What relevance does it have to everyday life? The usual answer to this query is to say that skeptics do not form inquisitions (although this relies on a somewhat inaccurate conception of what the famous inquisitors actually did); that is, the effect of skepticism is to undermine confidence in certainties. People who are sure of themselves are sometimes keen to impose their ideas on others, but those who doubt are usually rather less determined – after all, what certainties would they advocate if they are not convinced that they really know anything? The same goes for events like wars, typically fought on behalf of ideals or political goals that were dogmatically held by their advocates. Although we could object that this is a simplistic understanding of why battles happen, especially since there are more factors involved, the principle is clear enough. In general, as Russell said, skepticism can help us avoid extreme positions.

Another way that skepticism is relevant, however, is when we consider the possibility of error. Suppose we cannot come by certain knowledge, as skeptics claim; how, then, can we explain the occurrence of mistakes? Theories of error seem to rely implicitly on dogmatism, since only those who believe we can know have to explain why we often miss the target. Skeptics, on the other hand, can just remark that of course we would expect errors, since we don't really know in the first place.

    The philosopher of science Karl Popper mocked what he called "conspiracy theories of error", in which the blame for mistakes is laid at the feet of people making them. If we know something, that is, and someone makes a blunder all the same, it must be due to their carelessness, refusal to face the facts or outright stupidity. We hear hints of this notion wherever someone declares that "ignorance is sin" or that we have no business dissenting from majority opinion. However, why should the thought of lots of people conspiring to make similar mistakes be any more plausible than skepticism?

    Appearance and Reality

    If we grant that skepticism is something worth discussing, we can return to our earlier example. How do we know that there is a computer in front of us (or paper if you printed this) on which we are reading this essay? The obvious answer is that we can prove it: we can see the computer and reach out and touch it – in short, we can rely on our senses. There are three major objections to their reliability, though – we might be deceived by:


    Dreams;
    Illusions; or
Hallucinations.
There is an easy response to these: if our senses are functioning normally, then what we see, hear, touch, taste or smell is real; if not, we can be deceived. What we see in the latter event is just an appearance, not the reality behind it. This distinction was crucial in epistemology, but it gave rise to other problems. What is "normal" function, for example? How do we know that what the "normal" person sees is reality while the other possibilities are mere appearances? If we hallucinate, say, and see a goblin in the corner of the room muttering about whether or not he can trust his senses, we might bang our heads against the nearest wall and find that it disappears. However, why should the reality of the situation be determined by cranial trauma? After all, that there is no goblin in reality is precisely what we are supposed to be showing; to presume it in order to do so would beg the question. Why should the experience that occurs least often be assumed not to be the reality? That we can interact with "reality" is no guide, since we do fine in our dreams. We can appeal to scientific explanations but these have epistemological issues of their own, as we have discussed elsewhere.

    What all this means is that our commonsense or naïve form of empiricism is untenable. We cannot distinguish between true and false experiences solely on the basis of our senses, but need to use other knowledge to help us. The question is: where did this knowledge come from and how certain can we be of it? Francis Bacon’s solution to this problem was to try to look upon the world free of preconceptions, claiming that "the understanding must be completely cleared and freed" of them. Can this be done? Unfortunately for empiricism, this tabula rasa (or blank slate) approach cannot be achieved due to theory-ladenness.

    The sense-datum theory of knowledge

As a result of the difficulties posed by the skeptics, philosophers interested in epistemology made a subtle move: instead of arguing that the senses could tell us about reality, they claimed that the senses provide us with knowledge of appearances. Notice what this seems to achieve: we couldn't rely on our senses for accurate knowledge of reality, but surely what appears to them is – obviously – what appears to them, and hence we have certain knowledge of these appearances, even if we can say nothing about the reality we suspect to be underlying them? After all, if we are only talking about how things appear to be, how can we err?

    This means that our earlier example has to change to something like:


It appears that a computer exists, on which an essay may be found.
What this also does is resolve any contradiction between conflicting appearances. If we have an apparent hallucination that includes the goblin and another experience that suggests it wasn't real, the two are consistent with having certain knowledge of appearances:


    It appears that now there is a goblin in the corner;
    Now it appears that there is not.
    These describe successive appearances and hence cannot be contradictory, so dreams, hallucinations and illusions are no longer the problems they were before. We can still make errors, of course, but now these are mistakes in interpreting the appearances rather than in the knowledge itself.

    We can take a further step by making concrete these appearances so that they are experiences we are aware of:


I am having an experience of the appearance of a computer.
Although this may strike us as a clumsy way of expressing what is going on, it reifies the situation and makes no reference to the real existence of the computer or even to being aware of it. This is helpful because it clears up many of the difficulties associated with skeptical arguments. If we say that we are aware of the appearance of the computer, for instance, this is not certain if there actually is no computer; but if we just say that we are having the experience of its appearance, this no longer depends on the existence of the computer at all. This is the sense-datum theory of knowledge, developed by Locke, Berkeley and Hume and persisting until its eventual defeat in the twentieth century. It holds that what we are aware of is not real objects but sense-data in our minds that we experience, so we have:


    I am experiencing a visual sense-datum of a computer.
    Notice that this analysis is consistent with more recent scientific accounts of perception, wherein we "see" things because of light entering the retina and resulting in brain activity. As a result, the sense-datum theory seemed to be confirmed and to answer the objections of the skeptic.

There are some objections to this new account, however. Suppose, firstly, that we experience a sense-datum of a computer and then moments later experience another of a television – in brief, that we were mistaken about the former and later realised the error. This suggests that even though the knowledge of the computer was certain, it only lasted a short time. What is the use of a theory on which knowledge has only an unspecified duration?

    Secondly, when we first notice the computer there is a delay between the experience and the sense-datum report, even if it only takes as long as the lapse from sensing something to having the input processed by the brain. There is then another delay before we can say something about the experience. How can we be sure, then, that we have remembered the experience correctly between having it and commenting on it?

    These may seem like splitting hairs, but a more important criticism is that this sense-datum theory avoids skepticism only at the price of accepting the distinction between appearance and reality, and conceding that we can only ever know the former. Can we get past this demarcation? Can we discover anything about reality via sense-data? We can if we adopt idealism, the view that only sense-data and the minds experiencing them exist. This deals immediately with the problem of skepticism but at the cost of the external world, which is why many philosophers and laymen alike have rejected it.

    The Problem of Induction

    We discussed reasoning from particular instances to general ones in our earlier look at epistemology so there is no need to repeat the issue here. Nevertheless, we can note the huge problem that induction poses for empiricism, wherein we are supposed to be deriving knowledge from our experiences. If we cannot reason in this way, are we not being irrational in claiming to know anything inductively?

One suggestion for avoiding these difficulties is to adopt induction as a basic principle if we want to reason at all. We cannot justify it, but we also cannot do without it. After all, if we try to imagine a situation in which we refused to make any inductive inferences, it quickly becomes ridiculous – some might say the limit of skepticism. This leaves us caught between a rock and a hard place: induction can apparently not be justified, so we would be irrational to use it; but if we do not we are crippled and cannot reason anyway, which is no less irrational a position to be in.

    Rationalism Instead

    Given these problems associated with empiricism, it should be little wonder that some thinkers either rejected it to begin with or looked elsewhere for a basis for knowledge. The alternative, of course, was rationalism. What knowledge of our world can we gain from reason alone?

Rationalists drew their inspiration from mathematics, looking to the way Euclid was apparently able to build an entire structure of proofs on the foundation of a few seemingly self-evident propositions, and in general the way mathematicians seemed able to arrive at certain knowledge in this way. Famous theorems like the one bearing Pythagoras' name could be deduced in a straightforward fashion, and when we look back to the groundwork that is needed to get started we find assumptions that look, on the face of it, to be self-evident.

    Skeptics, however, were not convinced: what does self-evident mean? Who decides whether something is self-evident or not? After all, there were plenty of things about the empiricist approach that seemed obvious but later turned out to be far more complex. The rationalist can respond that there are plenty of propositions that people consider to be self-evident, but the skeptic wants to know why these provide us with knowledge – why, for instance, must something self-evident be true? What if it were false, in spite of how many consider it plain? Some rationalists replied that it is impossible for something self-evidently true to be false, but notice the subtle sleight-of-hand involved here: the rationalist has made truth a criterion of something being self-evident, but it was precisely the question of whether being self-evident implies truth that the skeptic was criticising, and hence this is unsatisfactory. Other rationalists appealed to the notion of an ideally rational being (much as some economists were to do later in developing their theories), but this is fraught with the same difficulties as the "normal" experiences of the empiricist we looked at above.

Descartes tried to find a way around these issues by using a method of systematic doubt, according to which he would try to doubt everything until he came upon certainties that could not possibly be questioned and hence would provide the bedrock for knowledge – arriving at his famous cogito ergo sum as a result. His general principle was given in part four of his Discourse on Method as follows:



I judged that I could take as a general rule that the things we conceive very clearly and very distinctly are all true.

Descartes considered the possibility that he might be mistaken about this, though – that instead of a just God there might be an "evil genius" who constantly deceives him and causes his clear and distinct conceptions to be false. He tried to counter this by proving the existence of a true God who was not a deceiver and who would ensure that his faculties functioned correctly, but his argument was criticised as circular by Antoine Arnauld. Although there were other objections, Arnauld's was perhaps the most damaging: he noted that Descartes relied upon his criterion of truth (quoted above) to demonstrate that God existed, and then used the existence of this God to show that he could not be deceived by an evil genius and hence could rely on his conception of truth to show further truths, which seems to be circular reasoning. Arnauld could thus accept everything that Descartes argued subsequently but undercut the very basis for it to begin with.

There are other problems with Descartes' methodology, including a variant of the more general problem with rationalism that self-evidence is no guarantor of truth, but it is generally agreed that his epistemology ultimately failed. A subsequent attempt, far too deep and detailed to enter into here, was due to Kant. It is difficult to say exactly what Kant argued and held, since the interpretation of his writing is varied and still disputed, but he considered the possibility of synthetic a priori knowledge (terms we discussed in our look at analytic philosophy), concluding in his Prolegomena (after much detailed investigation) that "nature and possible experience are the same", so that "the understanding does not draw its laws from nature, but prescribes them to nature".

Kant was particularly concerned to counter the work of Hume and thought that we would always fall prey to his arguments if we continued to conceive of the mind and its experiences as distinct. Instead, we should give up knowing anything about things-in-themselves (even though he accepted that this reality existed) and note that our minds are involved in organising our experience via categories, and hence that our knowledge is limited to things-as-they-are-experienced – or the realm of appearances again. Kant's philosophy, which he conceded was a form of idealism (though he argued strongly that this transcendental idealism was different from Berkeley's), falls victim to the same problems we identified before – except that Kant did not believe these were problems at all. He argued that the role of reason was and is only to give structure to our experience of reality, not to try to go beyond it. This appears to answer the skeptical objections we noted but to limit us in a way that Kant accepted but we might not. Kant was also not a complete rationalist and recognised that we gain many of our beliefs from experience, and hence his philosophy is often held to be a compromise between empiricism and rationalism.

Some of the synthetic a priori truths that Kant found, such as Newton's laws in physics, were (as we now realise) not the complete picture. Non-Euclidean geometries were elaborated by mathematicians like Gauss and Lobachevsky, which showed that the certainty of reasoning from self-evident propositions did not have the domain the rationalists had thought. In particular, they knocked over the rationalist argument that we could have synthetic a priori knowledge of the world on the basis of Euclidean geometry. Although this defeat of rationalism did not imply that empiricism was the victor in the quest for a secure epistemology, it led mathematicians to inquire into the nature of their own discipline and its foundation. Logicists like Frege and Russell tried to prove that mathematics could be derived from logical truths, but ultimately failed in their efforts; Platonists argued that mathematical abstractions like numbers really exist; and intuitionists like Brouwer hoped to start mathematics anew via intuitive proofs (hence the name). These are all the domain of the philosophy of mathematics and hence are beyond the scope of this discussion, unfortunately.

    The Fallibilist alternative

The sense-datum theory of knowledge collapsed in large part due to the theory-ladenness of observation advanced by N.R. Hanson, Thomas Kuhn and Paul Feyerabend, while rationalism struggled to cope with developments in science and mathematics. Is there no answer to skepticism, then? Philosophers, particularly philosophers of science, wondered if there was an alternative – one that differed from rationalism and empiricism while retaining their insights about how we reason.

The key to their efforts is to realise that all the preceding attempts to derive knowledge were based on searching for certainty, or justifications that could not be doubted. Instead of these, though, fallibilists (who sometimes refer to themselves as critical rationalists or critical realists, depending on other slight differences) accepted the skeptical objection that we cannot be sure of our knowledge and hence called it provisional. We can approach potential knowledge from two directions: we can try to justify a belief or we can criticise it. For the fallibilist, a belief that has withstood serious scrutiny (hence critical rationalism) is a reasonable one to hold as provisional knowledge. We might learn in future that further criticism shows it to be mistaken, but we can hold it for now. A belief that has not been criticised, on the other hand, has little value to the fallibilist and is not a reasonable one to hold.

    Notice that this is not to say that a belief is justified if we have criticised it and failed to find any flaws; on the contrary, it just means that we are justified in believing it. Successfully standing up to scrutiny does not imply truth, not least since many beliefs in the past have met this criterion but still been rejected ultimately, but only that we are able to believe them to be true provisionally. This, then, is a fallible epistemology that does not fall victim to skepticism: the justified true belief account of knowledge is modified slightly so that the justification is not of the claim itself but that we are justified in believing it. Fallibilism also dodges the unreliability of the senses by accepting that we cannot use them to attain certainty, but only reasonable beliefs. This means that the fallibilist trusts his or her senses unless there is good reason to doubt them, again not insisting on certainty and conceding that mistakes are possible. Moreover, it is consistent with an evolutionary account of knowledge in that if our senses provided us with false information then we would expect this trait to be a disadvantage, whereas if the information were accurate then it would plainly be advantageous and hence be propagated via natural selection. However, this is potentially a circular argument in that evolutionary theory is itself justified by a fallibilist epistemology and hence cannot then be appealed to in order to justify this epistemology.

If fallibilism seems an improvement on both empiricism and rationalism, does it have any weaknesses of its own? Unfortunately, perhaps, it does. Why should we adopt fallibilism itself? As we have seen, it seems to withstand criticism quite well, so we can reasonably adopt it as a justified belief. However, this means we are using fallibilist standards to justify our usage of fallibilism as an epistemological standard – or arguing in a circle again. There does not appear to be any way around this: if we appeal to non-fallibilistic justifications then fallibilism is incomplete and we are back where we started from. Some philosophers have responded that expecting a non-circular justification was too high a demand and we have to settle for less, but this leaves the fallibilist in a rather uncomfortable and – for some – unconvincing position.

    Can we know anything?

    We have seen, then, that epistemology is a deep and subtle area of philosophy with a long heritage. It starts from very basic certainties but finds that they collapse quickly under inspection. Many epistemological questions are still open but it is easy to see that they have not remained the same over the years as philosophers built upon the work of their predecessors and developed new objections or proposals. The advent of science has also had a considerable impact, which shows us that philosophy is not cut off from other areas of inquiry. Can we be sure of these things, though? Well, that's the point.


    Dialogue the Sixteenth

    The scene: Several months later, Trystyn and Steven are deep in conversation at Anna's place. She and Trystyn are now an item.

    Steven: So why is this a big issue for anyone other than philosophers?

    Trystyn: Well, tell me something you know and we’ll see.

    Steven: (Loudly) Okay: I know that Anna is a better cook than you.

    Anna: (From the kitchen) Thank you! You get extra!

    Trystyn: Funny how you think with your stomach. (He winks) Anyway, how do you know that she is?

    Steven: I've sampled the evidence, and let me tell you that it wasn’t always pretty. Your roast tastes like an offering to the god of charcoals.

    Trystyn: Ah, you didn't say you'd become a believer…

    Steven: Sure, but (raising his voice…) your cooking still doesn't compare to our host's.

    Anna: (From the kitchen again) You can stay…

    Trystyn: Okay, so you base your opinion on past experiences?

    Steven: Unfortunately, yes.

    Trystyn: Well, how do you know you remember the experiences accurately?

    Steven: Oh, it's my own personal tragedy that I remember it all…

    Trystyn: Heh. Still, you base your knowledge of my cooking on your memory of it. How do you know your recollections are accurate?

    Steven: I suppose I could've embellished the details slightly due to the requirements of mocking you constantly.

    Trystyn: And I'm not likely to forget that, eh? Or maybe you're letting one offering colour your thinking?

    Steven: Do you have a particular one in mind? My money's on that roast.

    Trystyn: (Ignoring him) Still, you can see the problem: how can we know that our memories are accurate?

    Steven: I might have dreamt it, you mean?

    Trystyn: That's a possibility.

    Steven: Why would I dream that roast and not that you're a great cook?

    Anna: (From the kitchen) He has a point…

Trystyn: (Ignoring them both) It's a commonplace in trials these days that so-called "eye-witness" testimony is generally unreliable. Let several people watch the same events unfold and they can give inconsistent accounts of what happened, even down to mutually exclusive interpretations.

    Steven: Yeah, I read that somewhere.

    Trystyn: So why is your memory a reliable guide?

    Steven: I guess I could remember things not quite as they were, but still pretty close.

    Trystyn: Ah, well that's the question: what can we say about these memories? Are they pretty close to reality? What relationship do they have to things as they really happened?

    Steven: I heard something about this before. Next you're going to tell me we Kant know reality but have to be content with appearances.

    Trystyn: What a wit. (He groans)

    Anna: (Coming into the room, bearing coffee) I thought it was funny.

    Steven: So we have to be content with the appearances – is that what you were going to say?

    Trystyn: Well, that's what I was asking you. Are your memories a reliable guide to what happened? If not, what can we say about them?

Steven: I guess not. There's always a distortion: events happen but I interpret them at the time, then forget about them. When I dredge them up again later I probably reinterpret whatever I can recall on the basis of what I'm currently thinking, too, so the pure event as it was is lost. (Trystyn nods) Even then I suppose I could've made mistakes.

    Trystyn: How so?

    Steven: Well, I'm assuming my senses are reporting the event to me accurately. Are they? If I’m drunk I might see things differently to a sober guy. I might be dreaming, like you said. I might be hallucinating, I suppose. Quite often I see your culinary exploits and imagine I've finally found the holy grail – something I can eat without holding my nose. (Anna laughs)

    Trystyn: Many have searched for it but it was well hidden and is safe yet… (Everyone laughs; Steven spills coffee on himself)

    Steven: (Cleaning himself off) Thanks for that. Anyway, the trouble is that I only have my senses to go on. If I can't rely on them then what am I supposed to do?

    Trystyn: Well, notice that your senses can only deceive you if you're hoping to get at reality. If you're satisfied with appearances, or reality as it seems to you, then the problem disappears. Things get slightly more technical, of course, but it works. It's a heavy price to pay, though.

    Anna: Because we give up on knowing anything about reality, which is what we mean when we talk about knowing something in the first place.

    Steven: I guess it depends on what you're aiming for. Some physicists have said much the same thing: that we can never know reality, but only how it appears to us. Some even say that this is unavoidable, because when we perceive it we have to realise that we are involved in the very act of looking, not just passive observers.

    Trystyn: That's probably a subject for another day.

    Steven: Probably. (To Anna) So how come he gets you to do the cooking?

    Anna: Skepticism only goes so far, you know. His cooking is awful.

    (Curtain. Fin.)
    Teaser Paragraph: Publish Date: 06/20/2005 Article Image:

By Paul Newall (2005)

    In this discussion we’ll look again at metaphysics, covering (in more depth) some of the same ground as the previous instalment but also considering some new aspects. In particular, we’ll study metaphysics insofar as it is the attempt to investigate Being – especially those categories into which philosophers have suggested everything that exists must fall. What are these categories, though? How do we distinguish between them? How should we characterise them? These are the kinds of questions we’ll examine, alongside classical and contemporary metaphysical problems.

    Being

    Since the Ancient Greeks, Being has not been considered the same as existence. The former was understood to include not just those things that exist but also the various categories that such a thing could have: being tired or being scared, for example. Perhaps the most famous treatment of Being was in the dispute between Parmenides and Heraclitus in Plato’s dialogues. According to the former (in Kenny’s translation):


From here a simple argument with devastating consequences could be developed. Being had to be, by definition. Likewise, non-Being could not be. With these agreed, what of change? If Being were to change, it would have to become non-Being – which cannot be. The conclusion had to be that there is no change. (A similar but later version of this argument would be used in theology: if there were a perfect being, such as God or the Absolute, how could He/it change? After all, a change from perfection would have to be to non-perfection.) We even find this problem in everyday life when asking someone "what are you thinking about?" and receiving the answer "oh, nothing". It was relied upon by philosophers then and now when asking about creation: how can nothing be? If we agree that it cannot, how can something come from nothing? Some, like Aristotle, relied on a version of this thinking to argue that the universe had to be eternal and uncreated.

These were – and remain – difficult arguments to counter. The basis of it all, Being as fixed, was opposed by Parmenides' contemporary Heraclitus, who insisted to the contrary that the fundamental nature of the universe was change: everything is forever in flux. This was famously stated as "you cannot step into the same river twice". These two positions formed a metaphysical battleground for subsequent philosophers. Plato tried to reconcile them, firstly by separating the universe into a realm of ideas which was timeless (or Parmenidean) and a realm of the senses which was in flux (or Heraclitean). This was the Theory of Ideas, subject to critique in the middle dialogue called Parmenides and separating the ideal ("Good", for example) from its approximations in the sensible world (such as conduct we call "good"). It was eventually superseded by the theory of forms, found in the Sophist, which added to Being four additional forms: same, difference, motion and rest. The second of these allowed the possibility of avoiding Parmenides' metaphysical straitjacket: when we talk of that which is not, we do not speak of non-Being but instead of something that differs from what is. The collection of all "non-x"s, such as non-righteous, non-circular, and so on, gives us non-Being, which is (by construction) just as real as the set making up Being. This clever solution nevertheless provided ample scope for continued study of Being, not least concerning the status of universals.

    The Problem of Universals

    Suppose we take a proposition like the following:


Hugo is wise. (1)
Suppose further that (1) is true. Recalling our previous discussion, the subject of (1) is "Hugo" and the predicate is "wise". Moreover, we say that "Hugo" refers to something: the subject, Hugo. What about "wise"? Does it refer to anything and, if so, what?

    Universals

Although there are many species of metaphysical realism, its basic contention is that the predicate "wise" also refers. In other words, the truth of (1) results from a match between a linguistic arrangement (the proposition) and a non-linguistic one (the way the universe is), the proposition in (1) picking out the circumstance that Hugo really is wise. Following on from this, we can straightforwardly say that there must be something called "Hugo" in the world for (1) to be true. Likewise, suggests the metaphysical realist, there must be something corresponding to what we describe by "wise".

    This is not quite the full story, however. Suppose we take another proposition:


    Paul is wise. (1*)
    Since the structures of (1) and (1*) are the same, the metaphysical realist concludes that both are pointing at the same thing: the quality of “being wise” that is present in both Hugo and Paul. Indeed, it is because of the existence of this quality that (1) and (1*) are true. However, what is it that the predicate in these propositions is referring to? The word “wise” does not name a referent (the thing it is pointing to) because what is actually at issue is a more general concept: wisdom. In that case, (1) should be read as


Hugo typifies wisdom. (1a)
The way to understand propositions like (1), then, is to adjust them slightly in this fashion and read them as stating a match (or approximation, perhaps) between the subject and predicate. Since we want to be able to say and make sense of propositions like (1), we are committed to the metaphysical machinery that allows us to – and that, says the metaphysical realist, involves accepting that "wisdom" and other concepts actually exist.

    There are many qualities that might take the place of “wisdom” in (1a) – such as folly or ineptitude, to give some more realistic examples – and these are what we term universals. A universal can be a property (as “wisdom” functions in (1a)), but there are other possibilities. Consider:


    Hugo is a male. (2)
    Hugo is the father of Trystyn. (3)
    (2) matches the subject (“Hugo”) with a kind (“male”, “human” or “rugby player”, for instance) while (3) gives a relation (“teacher”, “son” or “team mate”, say). For (3) to be true, are we committed to the existence of “fatherhood”? This is the question asked by the problem of universals and which metaphysical realists answer in the affirmative.

    A major criticism of this account, however, is that it leads to an infinite regress. For (1) to be true, we agreed that (1a) had to also be true; that is, that Hugo typify wisdom (in the literature this is sometimes called exemplifying or epitomising). This suggests that for (1a) to be true we further require another universal – typification – as a relation, leading to another proposition:


The typification in (1a) is a relation. (1b)
(1b) checks whether (1a) enters into the correct kind of typification, but to be sure that (1b) is true we would need a (1c), and so on: an infinite regress. This seems to commit us to an ontology of universals piled upon universals, which is unsatisfactory even to many realists.

    There are several ways around this objection. One is to bite the bullet and accept that it must be so, rather than lose the ability to make sense of propositions. Another is to deny that there is any infinite regress by saying that the initial analysis is all we need to understand what propositions mean. The subsequent levels, then, need not commit us to the existence of anything else because the process of typification does not require additional levels. This is tantamount to saying that the metaphysical realist’s account does not apply everywhere, so it has a restricted domain of validity. A third option is to suggest that (1a) is just another way of saying (1), so that the difference is only grammatical and does not require a separate round of analysis.

    An altogether different complaint against the metaphysical realism we have so far considered is to ask about predicates like “married” and “unmarried”. Suppose we take the following propositions:


    Hugo is unmarried. (4)
    Hugo is married. (5)
    If (4) is true then (5) is false, and vice versa (unless we adopt dialetheism from our discussion of logic). Why, then, do we need both “married” and “unmarried” to be universals to make sense of a proposition like (4)? If we agree that some universals are superfluous, however, how do we decide which ones are necessary and which are not? Some metaphysical realists (who are usually also scientific realists, which we will come to later) claim that the predicates we require are those needed for a final physical theory, but the objection made by Hempel’s dilemma (considered in our look at the Philosophy of Mind) makes this problematic.
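    The worry can be put in a simple schematic form (our own gloss, not part of the realist’s account): if


    Unmarried(x) ↔ ¬Married(x)
    holds by definition, then a universal corresponding to “being married” appears to do all the work, and a second universal of “unmarriedness” would multiply entities beyond necessity. The difficulty is that nothing in the realist account itself tells us which of the pair to keep.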

    Because of its association with several of Plato’s dialogues, metaphysical realism is also often called Platonism. An area of disagreement among realists is whether all universals necessarily exist; that is, are there universals that might have existed but do not, or do all universals exist regardless of whether we come across them in our universe? For example, take a proposition like:


    Hugo is a married bachelor. (6)
    Although we might say that (6) is false (or indeed meaningless – see our discussion of Analytic Philosophy) because there can be no such thing as a “married bachelor”, our metaphysical realist reading of (6) seems to imply that “Hugo” as subject and “married bachelor” as predicate must exist and not match. We arrive at what Aristotle called a two worlds ontology, wherein some universals are typified by particular instances and some are not, and we can ask how we can ever know anything about the latter or, more importantly, how there can be any connection between the two. Thoroughgoing Platonists suggest in response that we can learn about those universals that are not typified from our experience with those that are.

    Nominalism

    The problem of universals has a long and distinguished pedigree, having been studied by philosophers through the ages. Those who have rejected universals have traditionally been called nominalists (from the Latin nomen, meaning “name”), the most famous being William of Ockham. This historical link provides us with the main objection raised against universals by nominalists (in addition to some of those already considered): the principle now known as Ockham’s Razor.

    According to the nominalist, we can understand propositions like those we have considered above solely by reference to particulars. Although the realist account may seem convincing too, it requires additional entities: universals. The role played by these in explaining propositions like (1) may be interesting but it is inessential. Since a metaphysical theory without universals is available and Ockham’s Razor enjoins us to accept the most parsimonious option, the nominalist eliminates universals.

    Nominalism, then, involves the claim that a metaphysical theory is possible which involves only particulars; and this is a claim that needs to be justified. At this point the nominalists part company, offering differing accounts. An austere form suggests that the realist’s story does not achieve anything and that propositions like (1) are irreducible: “Hugo is wise” is true because Hugo is wise. To claim that (1) holds because it can be understood as (1a) adds nothing, since “Hugo typifies wisdom” means only that Hugo is wise. This apparently trivial reading is all that is required, says the austere nominalist, and the realist’s appeal to a universal is no less trivial.

    Problems with this approach arise as soon as we consider a proposition containing abstract concepts, such as:


    Honour is praiseworthy. (7)
    Here the nominalist may wish to translate (7) to make sense of it without appealing to universals. Suppose we do so and take the new proposition to be irreducible:


    Honourable people are praiseworthy. (7a)
    The nominalist may presume that (7a) requires no further analysis, so that it is true because honourable people are praiseworthy. However, we could imagine a person who is honourable but also a murderer, say – a quality we would likely agree is not at all praiseworthy. Thus it is possible for (7) to be true while (7a) is not; so (7a) cannot be an accurate translation of what we mean by (7).
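    The counterexample can be made vivid with an illustrative quantifier reading (our gloss): (7a) amounts to


    ∀x(Honourable(x) → Praiseworthy(x))
    and a single honourable murderer falsifies it, while (7) – a claim about honour itself rather than about every honourable person – may remain true. The two propositions therefore have different truth conditions, which is what defeats the translation.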

    Another possibility is the following:


    Other things being equal, honourable people are praiseworthy. (7b)
    This is easy to understand but much harder to clarify: these “other things” are precisely those the nominalist is proposing to eliminate as superfluous. How can this austere approach be simpler, then?

    Another approach takes the discussion so far to be mistaken in supposing that propositions are talking about non-linguistic entities; instead, they are just linguistic expressions that we employ to talk about sentences having similar forms. This, it turns out, is much the same as the view held by Roscelin and Abelard in the twelfth century, and later by Ockham himself, according to which universals are only names, not things (whence nominalism). A recent form of this meta-linguistic nominalism was detailed by Wilfrid Sellars, according to which the discussion of a universal is really only talk about linguistic expressions. For example, the use of “wise” in (1) should properly be understood as saying that all instances of this predicate are adjectives, describing a particular characteristic of Hugo. The correct way to analyse them is by their use in language, not by reference to universals. However, critics have noted that the function of “wise” in English is the same as that of (translated) terms in other languages, and have therefore suggested that Sellars’ account would commit him to the existence of “linguistic roles” as universals.

    Much of modern nominalism derives from the insights of Wittgenstein and the suggestion that words gain their meaning from their use, as hinted at above. The realist insists that a proposition like (1) requires the existence of “wisdom” in order to make sense of it, but the nominalist can disagree and say that we know what (1) means because we learn to understand terms like “wise”. To declare that Hugo is wise, then, is just to remark that his behaviour resembles that which we have come to call wise, and nothing more. The debate continues today, as it has for thousands of years: the problem of universals has lost little of its fascination.

    The Problem of Realism

    Since the development of science, the problem of universals has taken on a new aspect. Faced with an array of scientific theories that apparently work extremely well (insofar as they make correct predictions or allow us to control phenomena), philosophers of science have asked why this is so. Explaining this success is a genuine challenge, one possible response to which is to say that it is due to our theories accurately getting at reality.

    In basic terms, there is a division between those who believe that this reality exists independently of us and those who are not so sure. According to the realist, an explanation of planetary orbits invoking gravity works because there really are planets and a force we call gravity; and, moreover, this is so whether we are here to notice and remark on it or not. The realist accounts for this conception with a theory of meaning much the same as the one we have already covered, whereby true statements about the universe work because they get at real things – like quarks, aardvarks and philosophers.

    Anti-realism

    It is important to realise that opposition to scientific realism does not consist in the denial that reality exists. This suggestion is a straw man of a complex set of arguments, and it is hard to see why such a denial would be worth considering. Instead, anti-realists maintain that what we refer to as reality is made up at least in part by our perceptual apparatus or the way in which we experience it. They do this in a variety of ways, from Bishop Berkeley’s idealist dictum esse est percipi (“to be is to be perceived”) to the more recent semantic anti-realism of Hilary Putnam and Michael Dummett. The instrumentalist form relates more specifically to the philosophy of science, particularly in its connection with nominalism, and will thus be covered in our next essay.

    The realist account relies on the principle of bivalence, according to which the reality described by a statement either obtains or it does not. This is so regardless of our epistemological capabilities: if we say that bodies are attracted according to a law of gravitation described by a certain equation, then this is either true or false in the final analysis, whether or not we can ever know it to be so. The combination of this principle and the metaphysical apparatus discussed above in the section on universals is what the realist uses to ascertain the meaning of a statement.

    By contrast, the anti-realist employs a theory of meaning according to which we know the meaning of a statement insofar as we have a warrant for it; that is, we know what it would take for the statement to be justified as true or false. The obvious corollary, however, is that a statement which is impossible to justify in principle would be neither, violating the principle of bivalence. The rejection of this principle thus characterises the anti-realist – at least according to Dummett.
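    As a rough schematic contrast (our shorthand rather than Dummett’s own notation), where P is any statement:


    Realism: True(P) ∨ False(P), whatever our evidence (bivalence)
    Anti-realism: the meaning of P consists in what would warrant asserting or denying it
    For a statement that could not be warranted either way even in principle, the anti-realist declines to assert True(P) ∨ False(P), and bivalence fails.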

    Although this excursion into the philosophy of language may seem like hair-splitting, it is easy to find propositions to illustrate the difficulty. For example, consider the following:


    Plato enjoyed whistling. (8)
    For the realist, either (8) obtains or it does not. If true, its meaning is plain: Plato did enjoy whistling. This type of statement, however, is just the kind of (apparently) undecidable one that might be expected to cause us trouble. Its meaning is seemingly straightforward, but how can we say that Plato did (or did not) enjoy whistling when this is unjustifiable? We know its meaning implicitly, says the anti-realist, but we cannot do so explicitly unless we assume its truth beforehand (thereby begging the question) or become trapped in an infinite regress (as before with universals).

    Another version of anti-realism relies on the inscrutability of reference. Suppose we take a word from a new language which we are trying to translate into our own. If we point to the thing we believe it to denote, say, the speaker may nod enthusiastically but the precise meaning of the word is under-determined. This is because the native speaker may understand us to mean the object as a whole, the collection of its parts, the general concept it embodies, and so on; just as if someone indicated a tiger and said “cat?”

    The point of this for the anti-realist is that it suggests that a direct translation between language and reality is impossible, and that some kind of mediation is required. If this is so, we may have to give up the idea of speaking directly about a mind-independent reality. Realists respond by saying that if there is an inscrutability in talking of reference in realist terms, the same must apply to anti-realist conceptual schemes. The realist can also remark that the under-determinacy of reference might apply to some terms but not necessarily all. A distinction like this is made in the philosophy of science, which we will return to later in this series. Being the modern counterpart of the problem of universals, the problem of realism continues to be debated.

    Particulars

    Whatever the status of universals, another issue for metaphysics is the make-up of the particulars relied upon by both realist and nominalist alike.

    Bundle and Substratum Theories

    Particulars are, as we have said, “things” (people, objects, critters and the like), but what can we say about their structure? One ontological theory holds that a particular is constituted by the many properties we associate with it together with an underlying substratum, existing independently of the properties overlaying it. Bundle theorists, however, who have tended to be empiricists, deny that any substratum exists and suggest that particulars are no more than “bundles” of their properties, arguing that substrata have no empirical content (being beyond the reach of any experience in principle) or that there is no need to posit a substratum to explain particulars.

    There have been interesting objections made to bundle theories. Firstly, suppose that a particular changes. If the particular were but a bundle of its attributes then the changed particular would no longer be identical with the original: the Hugo of tomorrow would not be the Hugo of today. We will return to this difficulty shortly in considering time. Secondly, however, consider a list of propositions describing Hugo:


    Hugo is cowardly. (9)
    Hugo is slow. (10)
    Hugo is boring. (11)
    (etc…)
    We have discussed above the question of what exactly each of the predicates here is picking out (“cowardice”, “slowness”, and so on), but what about Hugo, the subject? If Hugo is no more than a collection of his attributes then we can recast each of the propositions as a tautology – “A cowardly, slow, boring … thing is cowardly” in (9), for instance – which thus does not really say anything about Hugo. What we require, according to the substratum theorist, is something underlying all these propositions about Hugo in order to make sense of them at all.

    The bundle theorist can respond that this is just as much a difficulty for an account relying on a substratum. Since nothing can be said about this fundamental character of a particular beyond its attributes, we are no closer to understanding (9). Moreover, why should we presuppose that we need to know everything about a particular in order to describe it via propositions like (9) – (11)?

    Another criticism of bundle theories relies on the identity of indiscernibles. If two particulars share all their attributes then this principle states that they must be identical. In that case, if Hugo and Paul alike satisfied the propositions above we would be forced to accept that they were not distinct individuals unless we allow that there is something additional about them – their substrata. This is a much more difficult objection, one which has led to much recent work in metaphysics. Nevertheless, the bundle theorist can ask what the substratum beyond attributes can be, since it has no attributes. How can we describe it, then? We appear stuck between an inability to discern individual particulars sharing the same attributes and the impossibility of characterising the supposed substratum that distinguishes them.
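    The principle in question has a standard second-order formulation (the symbols here are only an illustration), where F ranges over attributes and a and b are particulars:


    ∀F(Fa ↔ Fb) → a = b (identity of indiscernibles)
    a = b → ∀F(Fa ↔ Fb) (indiscernibility of identicals)
    The second direction is uncontroversial; it is the first that the bundle theorist needs, and that the case of Hugo and Paul sharing every attribute puts under pressure.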

    To avoid this dilemma, Aristotelians have attempted to demarcate kinds from properties. The former are what particulars belong to (so Hugo is a human, or a man) while the latter are what they have (Hugo being cowardly, and so on). We could then have several instances of the same kind (Hugo and Paul), sharing the same properties but nevertheless being distinct. Although it is difficult, perhaps, to see why this should solve the problem, the claim is that membership of a kind is what individuates particulars. While humans share properties, it is membership of a kind that marks them out as distinct. The elaboration of the full Aristotelian account, however, is beyond the scope of this introduction.

    Time

    The subject of time throws up a host of metaphysical questions, even before we get to the physics associated with its study. Does the future already exist, along with the past? Do we “live for the moment”, as many a romantic has suggested while crooning below a balcony? If the universals we considered above really do exist, do they do so forever? What about particulars, if we happen to be nominalists: when do they come into existence and subsequently pass away? Are we the same person as we were yesterday? If not, what happened to that person?

    Presentism and Eternalism

    There are typically taken to be two theories of time, upon which differing ontologies are based. Presentism is the view that only the present exists: the past has gone and the future is still to come. This kind of thinking is implicit in figures of speech, like saying “tomorrow has yet to pass”: the future passes the now and thereby becomes the past, the point at which it does so being the present. The presentist asks how “the past” and “the future” can meaningfully be said to exist in the way that we, now, do.

    The alternative view is called eternalism and denies that there is anything (ontologically) special about “now”. When we talk of the present, we merely provide a reference point to help us say that one event happened before another; and so Galileo exists just as surely as we do, only within a different context.

    Endurantism and Perdurantism

    Typically associated with these theories of time are two theories of the persistence of particulars. Endurantists hold that the Hugo of yesterday is identical with the Hugo of today and hence that the two Hugos are the same, persisting (or enduring) over time. Conversely, the perdurantist believes the Hugos to be different, often talking of stages, or of yesterday’s Hugo as a part of his temporal development. The consequences of these pairings of theories are significant: for the endurantist only the now truly exists, and hence talk of possible worlds or states of affairs is just that. The perdurantist, on the other hand, grants no metaphysical privilege to any specific time, and so possible worlds (of the past, the future or our imagination) are as real as the one we find ourselves in now. If there are any number of possible worlds, however, each slightly (or considerably) different to this one, what does that mean about the Hugos in them? Does it imply that each of them is real, or perhaps that they are all aspects of a universal Hugo?

    Since perdurantism is contrary to our commonsense view of the world, arguments against endurantism are required. They take the form of either an appeal to a four-dimensional view of existence drawn from physics, according to which existence is across time as well as space; or the suggestion that endurantists cannot account for change, especially when it involves the loss of a part of a whole. Perdurantists ask whether a person who has lost a leg in an accident, say, is the same person they were beforehand. If so, the suggestion is that this commits the endurantist to explaining why having legs mattered to identity in the first place; and so on until little is left.

    Whatever ontology we find tempting, the nature of time is so problematic that perhaps the search for its nature is mistaken to begin with. Even so, it is easy to see the relationship between these investigations of time, the metaphysical problems introduced above and the old Parmenides/Heraclitus dispute in Ancient Greece, which is why many philosophical questions are considered timeless (to employ an awful pun).


    Dialogue the Fifteenth

    The scene: Anna and Trystyn are sitting in the park, talking quietly.

    Anna: It’s amazing how much hurt we do unintentionally.

    Trystyn: Sometimes the truth takes us places we don’t want to go, I guess.

    Anna: (Shaking her head…) That’s not what I mean. We talk to one another but the translation is never quite right. We misunderstand, it gets amplified, and people get hurt.

    Trystyn: Can it be otherwise?

    Anna: Well, I wonder if there really is anything at base, grounding these things we struggle with.

    Trystyn: At base?

    Anna: I said you were wrong before – dishonest with Steven. Now I wonder what I was getting at.

    Trystyn: I thought it was for the best at the time.

    Anna: Maybe you didn’t think at all? (She sighs.) Anyway, what is dishonest? I invoked a match between your conduct and something I called “dishonesty”, but where is it? In my head or in the world?

    Trystyn: I don’t follow.

    Anna: Suppose that it really was wrong – what you did – and not just my opinion. Where are these things? “Right conduct”, I mean – which is what some call it, I think. When I call you dishonest, is it the same dishonesty as when I charge it of someone else?

    Trystyn: You mean a match between someone – me – and what you say about them?

    Anna: I suppose so.

    Trystyn: There’s a correspondence, they reckon, between the world and what we can say about it. Some insist that when you say “Trystyn was dishonest” you are just making a specific remark; but others that you hit on something universal – “Dishonesty” with a capital “d”, perhaps. What a person does when they’re dishonest is but a particular manifestation of a fundamental characteristic of the universe.

    Anna: But where are these universals?

    Trystyn: You want to test for them?

    Anna: Hardly.

    (There is a silence.)

    Trystyn: This is partly why they say there is no science without philosophy – or without metaphysics, really. We can say that something exists or doesn’t exist on the basis of experiment but why does experiment decide such things and what does existence mean in the first place?

    Anna: Does dishonesty exist? That’s what I’m asking.

    Trystyn: Sure, but the other question is prior. If dishonesty exists, where is it? If it is just an attribute we give to certain conduct, what makes it up? What attributes does it have? If existence is a collection of these properties, what’s left when we take them away one by one? Maybe this universal dishonesty you’re thinking of is the sum total of behaviour we would describe by the terms that make it up? And so it goes.

    Anna: I don’t know the answer to these questions. Stop trying to tie me in knots.

    Trystyn: (Quietly…) You asked.

    Anna: So say you don’t know.

    Trystyn: It’s not me. I got it wrong, but you don’t need an ultimate justification for saying so. If you want to hang me from that tree then you’ll have to be prepared to see the ground fall away beneath you.

    (Another long silence.)

    Anna: So what do you suggest?

    Trystyn: Hold on to something.

    Curtain. Fin.

    By Paul Newall (2005)

    History may not seem to have much to do with philosophy but—just as we have already seen with science, politics and art—it relies on philosophical assumptions and concepts as much as any other subject. In this discussion we'll introduce some of the philosophical issues within history and hence try to gain a deeper appreciation of it. First, however, we need to know what we're dealing with.

    What is History?

    This may seem like a straightforward question but often an equivocation is made between two distinct uses of the word:


    History as the past; and
    History as an account of the past.
    These are quite different. The first is what we mean when we say "it's all history now", which becomes obvious if we just rephrase it as "it's all in the past now". The second, on the other hand, is implied when we talk of the history of the Great War, say, or the history of science. This distinction is sometimes quite subtle: when we refer to the history of a period or event we mean not just what happened (the past) but also how and why. Some thinkers have suggested that a way to clear this up definitively is to use history for the second meaning and simply call the past the past.

    What is history, then? In the first instance, the past would seem to be just the past: what happened before, whether in a specific period or just generally before now. (An interesting related question is to ask whether the past exists or not.) The problem arises when we try to decide what history is in the second sense. According to the historian Elton:


    As a consequence of this perspective, we could say that history is the true account of the past. We have already seen that there are different understandings of truth, but in this case we are speaking of a correspondence between what actually happened in the past and an account of it. Later we will look at whether this conception of history stands up to scrutiny and, if not, what could replace it.

    Another question we could ask is "what is the purpose of history?" That is, what is it for? Why do we study history in the first place? There are several possible responses:


    For its own sake;
    To find out the truth about the past;
    To try to understand where we came from;
    To try to understand why a particular event happened;
    To find historical laws;
    To justify actions in the present.
    We will consider difficulties with some of these below.

    What is the Philosophy of History?

    The philosophy of history is concerned with the concepts, methods and theories used in history; on the other hand, historiography is the study of the writing of history. When we analyse these we can begin to say something about what history is, as well as what it is not or cannot be. A distinction is generally made between two branches of the philosophy of history: speculative and critical. The latter is concerned with investigating those things already mentioned, while the former tries to find a pattern behind historical events—hidden from sight, as it were, until the historian discovers it.

    To appreciate where the philosophy of history differs from and expands on history itself we can refer to Hayden White's explanation:


    Although this may seem confusing, the important part is the emphasis on "conceptual apparatus": according to White, the philosophy of history brings to light the implicit assumptions that historians rely on and that - more importantly, perhaps - have consequences for their accounts. We shall examine some of these now.

    Whose History?

    If we go into the history section of a good bookshop and look around, we tend to find plenty of titles on the same familiar subjects: wars, revolutions or other so-called defining moments. In a large or particularly high quality store we can see that there are histories of all sorts of things and all kinds of people (although we search in vain for a copy of the much sought after academic volume Funny Things Hugo Said). However, we do not see all of history: people, places, events and periods are left out—as they must be, given that there are only so many historians, so much time and so many records to look to. This is to say that history is always less than the past. After all, who is writing the history of what we are doing right now?

    How do we decide which histories are written, then? Obviously there are commercial considerations to bear in mind, but the academic papers that tend to be the basis for the more popular accounts are not so constrained. How do historians choose what to write about (and how to write it - historiography), apart from the straightforward criterion of something that interests them? For some historians this is an easy question: they work on significant issues from the past. Why the French Revolutionaries decided to act is significant, while what they ate for breakfast is probably not.

    An objection raised in recent times, especially by so-called postmodernists, is to ask who decides what is significant: who or what is worth the historian's attention? Although the example above may seem trivial, they say, not everything is so clear-cut and the allocation of significance is a value judgement. In particular, some groups are very much underrepresented—such as women and minorities. Indeed, given the sheer number of women who have lived in the past, it is hard to argue with feminist claims that women have been excluded from history in almost systematic fashion.

    Already, then, we can see that some of the high aspirations for history may not be so easy to maintain. Nevertheless, there is another issue that follows immediately: how do we address this imbalance in history, deliberate or otherwise? Feminist historians, for example, are trying to reappraise the role of women in the past; but this means that they are writing with a purpose in mind. Some philosophers of history suggest that this is not limited to marginalised perspectives but that ideological positions are inevitable. Later we'll consider some of the arguments for why this is so, but for the time being we can note that it would imply that our original "what is history?" becomes "what is the aim of a particular history?"

    Explanation and Description

    Another distinction made in the philosophy of history is between history as description and history as explanation. Those advocating the former suggest that the role of history is only to describe what happened in the past - this much and no further. Others say that history does (or must) do more: it must go beyond description and explain why an event happened as it did (or at all). Thus an account of what occurred in (and before) the French Revolution is not enough—it also has to explain why the Revolution happened at all, not least because there appears to be no contradiction or impossibility in supposing that it might not have.

    According to some such thinkers, history as description is like bookkeeping; but someone else has to come along and check the figures to see what the sales mean and to understand why people bought one thing and not another. Although the entries (or "what happened") are vital, they are not enough to be history.

    Historical Causes

    If we take it as given that the historian has to provide an explanation for an historical event, does it make sense to talk about historical causes? As we saw in our thirteenth discussion, causation is a difficult concept with many associated philosophical problems. Even so, one place we can start is to distinguish between necessary and sufficient causes via the more general notion of necessary and sufficient conditions.

    A necessary condition is one that must be satisfied before we can say that something belongs to a class. Much like a guessing game, then, if someone is thinking of an animal that happens to be a horse, we could ask lots of questions that give us the conditions that are necessary for something to be a horse. For instance, a horse has:


    Four legs;
    Hooves;
    A mane;
    ... and so on.
    If an animal is to be a horse, these conditions must be satisfied. An animal without hooves cannot be a horse (unless some notorious wit is thinking of a seahorse). A question like "does it have a mane?" answered in the negative would tell us that the animal cannot be a horse (or a male lion, and so on) because a necessary condition for being a horse is having a mane.

    A sufficient condition, on the other hand, is one that is enough to conclude immediately that we have—in this example—a horse. If someone asks, say, "does the animal compete with a rider in show jumping?" and receives an answer in the affirmative, we know it must be a horse without any need for further questions. Thus this answer suffices to conclude that we have a horse.

    This is a simplistic instance because we do not say that a horse with only three legs is no longer equine. In general, a necessary condition for x to be a y is one of potentially very many that have to be satisfied before we can say "x is a y", while a sufficient condition is one that includes all the necessary conditions and is enough on its own.
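    In the notation of elementary logic (purely as an illustration), with H for "is a horse", a necessary condition N and a sufficient condition S satisfy:


    ∀x(H(x) → N(x)) (N is necessary for being a horse)
    ∀x(S(x) → H(x)) (S is sufficient for being a horse)
    Chaining the arrows shows why a sufficient condition entails every necessary one: if S(x) then H(x), and if H(x) then N(x).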

    To return to historical causes, how far back do we need to go and how wide do we need to look before we can speak of what caused an event to happen? Suppose we take an example like the advent of science and ask, "what caused the rise of science?" Historians of science say that this is a vague question, but necessary causes would take the form of a list of things that were, in the judgement of the historian, required before science could develop. A sufficient cause, however, would be a single event that could bring about science on its own. Almost immediately we can see that the latter course is too ambitious: historical events, it would seem, are complex; that is, they are the result of many different factors, so that to look for just one as a cause is perhaps a mistake (although we might speak of more or less important factors).

    Nevertheless, another problem with historical causes is that the notion of causality has been brought into history from science and some philosophers of history feel that this was a mistake. The main difference, they say (apart from the epistemological problems we will come to later), is that the actions, motives and other foibles of people are involved in historical events, unlike causal chains in science. When we say that an illness was caused by a virus, for instance, we mean that there was a link between the two that did not depend on the political opinions or upbringing of the person getting sick, say. If, on the other hand, we want to say that the French Revolution was caused by Royal excess, it doesn't explain much. Why did Louis XVI act in one way and not another? What was the influence of his childhood, or his advisors? What of all the other people involved? And so on. The causal chain is rendered far more complex by the involvement of the human factor, or so the argument goes.

    Since history (or, more accurately, the past) is continuous, when can we stop and say that a cause has been found? The difficulty lies in ending the quest for causes in a way that is not arbitrary or according to the whim of the historian. One response is to suggest that we have a cause (or set of causes) when we have enough to offer an explanation of an event. The philosopher of history R.G. Collingwood proposed that a necessary cause in historical investigation is one such that without it the subsequent actions would make no sense. Similarly, a sufficient cause is one that would make the course of events that followed seem "rationally required". That means, for example, that a necessary cause of the Boer War would be one that any explanation of the war must include to be convincing, while a sufficient cause would be one that, once it happened, would seem to make the war inevitable.

    Historical Laws

    Expanding on the question of historical causes and continuing the parallels with science, some historians and philosophers of history have claimed that it is possible to find historical laws, meaning much the same as we do when we talk of scientific laws. An historical law might take the form "whenever x happens, y is bound to follow"; so that, for instance, it could be claimed that "states always turn to war when their resources are insufficient for their population" is an historical law. For those who suppose that it is meaningful to talk of such laws, historical investigation would be the way to check the claim.
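    Schematically (again, only an illustration of the form such claims take), an historical law would be a universally quantified conditional over historical situations s:


    ∀s(X(s) → Y(s)) ("whenever x happens, y is bound to follow")
    On this reading, checking the claim means searching the record for situations in which X obtained and seeing whether Y invariably followed; a single well-documented counterexample would refute the law.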

    Several objections have been made to the very idea of historical laws, of which Popper's The Poverty of Historicism is perhaps the most famous (historicism being, in this case, the belief that historical laws exist). We have already seen that some philosophers find laws to be problematic. Another complaint is to say, with Oakeshott, that history is always concerned with the particular, not the general. In reply, it is said that occurrences in science are no less unique; but what is sought is the general case that can be described with general concepts. Since history uses these just as science does—with terms like "revolution", "conflict", and so on—there is no reason to suppose that the search for laws must fail.

    A further criticism is to say—again—that history is concerned with the actions of people and that hence an historical law would have to account for the reasons why a person acted as they did. In response it is said that laws have the form "a person, acting in a rational way in situation A, will invariably do B". In this way A and B constitute the reasons for acting and the action itself. This is not to say that an irrational person may not do otherwise or that other reasons may change the situation, but only to generalise empirically.

    Karl Popper took a distinct line of attack. The error in supposing historical laws to exist, he suggested, lies in supposing history to be similar to science when it differs in one crucial respect: scientific laws apply to closed systems, whereas history—composed of the actions of individuals—is neither closed nor even a system at all. The growth of scientific knowledge adds to this point: since knowledge has an effect on human behaviour and hence on history, we can only predict history via laws if we can also predict the growth of knowledge. If we could predict what we will know tomorrow, however, we would already know it, which is a contradiction. As a result, there can be no historical laws.

    Facts in History

    Given the importance of "what really happened" to history, it makes sense to ask if matters are as clear-cut as perhaps some people (including historians) suppose. Here we'll look at the uses that facts in history are put to and if we can say that there are such facts in the first place.

    Facts and Interpretation

    It seems a commonplace that we have historical facts to work with, such as "there was a world war between 1939 and 1945". Even so, these apparently simple facts are not the business of history; instead, it is their combination as explanations that we have seen is taken (usually) to be the historian's task. However, a question asked by philosophers of history is how much of history is fact and how much interpretation? Since facts themselves are silent, goes the argument, the historian must interpret them to understand their meaning. This interpretive dimension is unavoidable and is added by the historian—it is not "already there", like the facts are supposed to be. This suggests that we can never get past interpretation to the ultimate meaning or definitive account of the past.

    Generally speaking, working historians tend to be unaware of this concern or remain unconvinced by its import. Although interpretation goes on, they say, most facts are not disputed or subject to contention and there is wide agreement about the majority of historical issues. When debate takes place amongst historians, it is at the margins—around a central core agreed by (almost) everyone. For example, most of the facts about the Second World War are known, with discussion not really calling much of this body of knowledge into doubt.

    The difficulty with this response is that it overlooks a glaring assumption: namely, that this centre is fixed. Instead, it lies on a spectrum of possible interpretations of the same facts. An example given by Jenkins is that of historical accounts in the old Soviet Union, in which the facts about the Second World War were interpreted from an agreed centre that differed significantly from the centre used by Western European historians. The mistake lies in supposing that a particular centre is the only possibility. The problem of interpretation comes up again on another level when we ask how one centre comes to dominate historical discourse, rather than another.

    Historical Facts

    A difficulty of an altogether different order arises when we begin to look closely at historical facts. To begin with, the term "facts" is loaded: what historians are actually confronted with are fragmentary accounts or traces of the past that are subsequently organised into facts. As we saw in our sixth discussion, facts are theory-laden; and for historians they are doubly so, as it were. The historian constructs an account of the past from other accounts, the evidence he or she refers to consisting in the accounts left by others. These accounts record not facts but what people in the past considered important, selected and interpreted from their particular perspective.

    We will dwell on this area because of its importance. Consider:


    The records we have of the past are incomplete and must always be so.
    People in the past did not record everything, any more than we do today.
    The historian relies on the observation and memories of others in the past for the accuracy of these records.
    The past has gone and hence cannot be recalled to check the accuracy of our accounts of it.
    The past is studied from a modern view, using contemporary concepts and understandings.
    Several of these are specific concerns that we will return to later.

    The problem for the historian is that there is no way around this epistemological issue. If he or she tries to check the truth of an account by its correspondence with "what actually happened", this appeal is found to be empty. Unlike science, where reference is made to reality, there is no historical reality within reach: all we have are traces of the past, accounts of others that may or may not be accurate. In the absence of any way to say whether they are or not, can it be meaningful to speak of historical truth? We will come to this question below, but for now we can note that the only way to check an historical account is by comparison with others. Thus the historian is forced, as it were, into retreating to a coherence theory of truth. The traces we have can function as limits to interpretation, such that any history has to take them into account (whether by incorporating them or discounting them, with reasons for both), but they cannot determine which of a multiplicity of possible histories within the boundary provided is more accurate. In a sense, then, we have the problem of under-determination from the philosophy of science that we studied before, only much worse.

    Language in History

    These philosophical concerns may be all very well, but do they really impact on history in a significant way? One way to see that they do is to look at the language used in historical accounts and ask if it is possible to use a neutral, value- (or theory-) free language to discuss the past. The answer, perhaps unsurprisingly, is no: the words we use reveal perspectives because of the epistemological problems identified above.

    A well-known example is the adage that "one man's terrorist is another man's freedom fighter". Should an historian call the crossing of an army from one state to another in the past a war, a disagreement, a liberation, or any number of other possibilities, none of which are theoretically neutral? Is an internal conflict an uprising, an insurrection or a revolution? Is calling it a conflict already to prejudge it? Even something as apparently straightforward as a World War is only obvious to those who share the interpretive framework and may not have the same meaning for everyone—Bushmen, for instance. We can say that the historian describes the event in a way enjoined upon him or her by the evidence, but—as we said before—the records from the past are silent and do not insist on any particular reading. Moreover, the same problem was present for those who recorded events in the first place.

    The historian can try to tread a fine line, attempting to avoid describing events from the past in loaded terms, but the very act of composing an account reveals choices made. Consider, for instance, an art historian: by deciding to give the history of a painting, he or she presupposes implicitly that the work is art—not trash. We have seen in our seventh piece, however, that deciding what is or is not art is far from simple. As soon as the historian opens his or her account, decisions are made about what to include or exclude. This leads us, then, to the question of historical method.

    Historical Method

    According to Hayden White:


    In this section we'll look at the situation within history and see if it is as bad as White insisted.

    What Method?

    When we look for the historian's method we are faced with the same problem as the similar quest for the scientific method: an overabundance of choices. Jenkins makes this painfully clear when he asks:


    Each of these (and more besides) is an example of a methodology that is consistent, gets results and is profitable for its users. Unfortunately, however, the epistemological difficulties identified above make a choice between them a tricky matter: what criteria should we use to decide which, if any, is the "best" method? We cannot compare their accuracy in getting at the past because there is no such beast.

    Unlike science, then, where we can at least try to say that experiment is better than guesswork by reference to something like reality, with history we have nothing to appeal to but other accounts. We might propose that the structuralists explain something better than the feminists, say, but that can only mean that the explanation accords with most or all of the available records of the relevant past and that the account "makes sense", explaining matters satisfactorily. None of these terms ("accords with", "makes sense" or "satisfactorily") can be given a rigorous definition precisely because a history can only convince subjectively within the boundary set by the traces of the past we have. It can never go beyond them and invite comparison with "what actually happened."

    In summary, there are historical methods but no historical method. The same goes for science and hence this should probably not be surprising, reflecting the breadth of history rather than a shortcoming.

    Ideologies

    Sometimes we hear the complaint that an historian is not ideologically neutral. What we can learn from the discussion of method, however, is that there is no neutral position from which to do history. It may be the case that an historian distorts (or outright lies about) his or her sources, thus going beyond the boundary set on his or her account by the records of the past, but otherwise history from one perspective is no closer to the past than from another. The complaint that a particular history is based on ideology is rather hollow, then.

    Perhaps a less ambitious understanding of the role of ideology in history is to note that people—not just historians—use history as a means to ground or legitimate themselves? Where we have come from can tell us where we are going or justify claims we want to make in the present. We see this practice often enough in attempts to validate the assertion that a country (or crown) justly belongs to one group and not another, or even in the popularity of family trees.

    We might want to call a Marxist history of Europe ideological, but why are the alternatives any different? Each seeks to understand the past from within an inevitable framework. As we touched on above, the choice of one word ("invasion", say) instead of another ("liberation") only makes sense within a perspective that leads us to choose one and not the other. Rather than dismiss certain ideologies, then, perhaps it would be better to examine them and hence try to counteract the unavoidable influence of our own?

    Empathy

    The historian has a potential way out of these concerns, however: empathy. By studying his or her sources in great depth and at length, it is said, the historian can begin to empathise with his or her subject(s) and gain an understanding from their perspective. This is the historical skill or tool that helps avoid many of the epistemological and other difficulties and grants the historian a privileged ability to say what motivated people in the past and why they acted as they did.

    There are several reasons why philosophers of history find this wholly unconvincing. The first is the general philosophical problem of other minds, in which it is asked how we can ever know the content of another mind; that is, what someone else is (or was) thinking. This is compounded by the distance between the past and the historian. Another objection is revealed by Croce's dictum that "all history is contemporary history", which is to say that although historical sources are from the past they must nevertheless be read in the present. This makes the historian a translator of meaning, but one who has to translate from his or her own perspective, a perspective that, as we have seen, is never neutral. In like fashion, Dewey wrote that "all history is necessarily written from the standpoint of the present". Given that the historian is using contemporary concepts, methodologies, epistemological assumptions, modern understandings of words, and so on, how can these be fully (or even partially) shed to empathise with those in the past?

    Anachronism

    A charge often made against historical accounts is that they are guilty of anachronism. Perhaps the best way to appreciate what this means is to use an example.

    Some historians of science point to the work of Newton and note that, in addition to his work on mechanics, mathematics and other areas for which he is famous, he also spent the better part of his time studying alchemy and biblical prophecy. According to some, this is at best a shame and at worst a tragedy: imagine what Newton could have achieved if he had not wasted his time on the latter subjects, putting all his efforts into the former.

    The problem here is that contemporary ideas or values are projected backwards: although we may think that alchemy is a hopeless endeavour (or we may not), that is not to say that Newton did. A similar question asked in his time ("think you alchemy a waste of time, sir?") may or may not have been answered differently, but since we do not know what he thought (except insofar as we could guess that his efforts suggest he would not agree) we cannot say that he should have acted otherwise without being anachronistic.

    From the discussion of empathy we can see that a certain amount of anachronism is unavoidable. Nevertheless, the value judgement that alchemy is worthless is not forced upon the historian by the records he or she has of the past; hence the objection that to say so is anachronistic.

    Truth in History

    At this point in our discussion, the notion of truth in history seems to have taken a battering. Now we'll look at possible ways to save it and see if we can breathe life back into it.

    Truth as a goal

    Earlier we learned that some historians consider their task to be the search for the truth. In spite of the apparent impossibility of ever achieving that, they still maintain that it is worth aiming for. However, if—as we have seen—the truth is not a meaningful concept in history, how can striving for it fare any better?

    Thinking back to our long look at truth in our tenth piece, what we see is that these historians are employing a correspondence theory—trying to match up the past and our accounts of it. Whatever we think of correspondence (or semantic) theories in general, it is at least clear that they are inappropriate for history. Instead, the realisation that the only way to test historical accounts is by comparison with others suggests that history requires a coherence theory, with Appleby, Hunt and Jacob calling for "well-documented and coherently argued interpretations that link internally generated meanings to external behaviour".

    Given that the historian is faced with nothing but traces of the past, combined and recombined into accounts but never any more than that, he or she can try to construct a new account that coheres with what is available. As further sources are found, the process begins anew and some previous accounts may be shown to be false. As we found when discussing truth, this gets the historian no closer to "what actually happened", but what it does do is follow the way he or she works with the available material.

    Critics of this understanding suggest that the historian is actually working with a pragmatic theory of truth. History is linked, like truth, to power, with accounts serving to support or undermine dominant or marginalised histories. On this view, truth and falsity serve to shut down interpretations that do not accord with what is useful for a society or group.

    Bias

    Another important concept in history is bias, the idea that traces of the past or accounts of it can be intentionally distorted to serve the purposes of the historian. However, bias only makes sense alongside the possible existence of unbiased accounts; that is, with the assumption that true stories exist that correspond to the past and from which biased versions differ. Since this has been thoroughly undermined, there being no neutral position from which to judge the degree of difference, where does that leave bias?

    In some sense, as we said, we can identify where an historian has gone beyond the limits of interpretation given by his or her sources. However, histories that do not rely on a correspondence theory of truth can speak of failing to cohere with other accounts or say that using history in different ways need not be biased but just a difference in goals or methods. In general, if the problem of bias is present within all histories then—again—perhaps a diversity of approaches can help appreciate what historians can achieve instead of striving after correspondence?

    Philosophies of History

    In our final section we come to speculative philosophies of history—attempts to find patterns in or a structure to history. We'll consider two general approaches to take to history and then look at two classes of theory in the philosophy of history.

    Historical Realism

    The notion of historical realism is analogous to its scientific counterpart and supposes that the concepts and theories employed in history get at reality—in this case, historical reality or "what really happened". In particular, the past exists independently of what we think of it. It relies, as we might expect, on a correspondence understanding of truth: even if a particular theory (or account) may not be true, it is more or less accurate by comparison and the aim of historians is (or should be) the truth.

    As we have seen above, and as a survey of the scholarly literature within historiography would show, historical realism is a thoroughly discredited position, often disparaged as naïve realism (in the pejorative sense). Nevertheless, there are still very many historians who adopt it and some philosophers of history have lambasted their unwillingness to face up to the failings of realism. However, still others advocate a much-reduced conception of the kind of objectivity that is possible ("defined anew as a commitment to honest investigation, open processes of research, and engaged public discussions of the meaning of historical facts" for Appleby, Hunt and Jacob) and point out that few practising historians today ever believed in this kind of realism in the first place.

    Historical Anti-representationalism

    Having accepted the criticisms given above, historical anti-representationalists contend, in opposition to the realists, that the correspondence theory of truth within history has to be given up and the constructs of historians understood as fictions, not closer and closer approximations of the past as it happened. They may suggest that a coherence theory of truth is more appropriate or that talk of truth should be dropped completely, "what actually happened" being ultimately meaningless within history since it is forever inaccessible. Historians' accounts are to be read as attempts to organise the available traces of the past in a coherent way, not to latch on to something that cannot be found.

    Much work is still to be done in responding to anti-representationalist ideas, particularly with questions relating to the ancient world. Anti-representationalists hope that a history that can come to terms with its limitations will provide us with more interesting and significant accounts of the past.

    Linear Theories

    Some philosophers of history, most notably Hegel, have proposed that history proceeds in a line—hence linear—and so is directional, or "going somewhere". For those holding to a linear theory, history is a process that unfolds towards a final goal. This is a progressive view in which what came before was in a sense more "primitive" than now, while what will follow will be an advancement, until such time as the limit is reached. A quote from Hegel that gives a nice example is his remark that:


    "The history of the world is none other than the progress of the consciousness of freedom."
    On this view, then, the development of the notion and application of freedom is an instance of a linear advancement.

    Although the concept of teleology (discussed in our fifteenth piece) has come in for much criticism when applied to life, many people do seem to feel that we can justifiably say that we have progressed from the past and, moreover, that this is likely to continue into the future. For linear theories this is an inevitability—the playing out of historical laws or plans—which is separate from the idea that progress is contingent: it has occurred but need not have. A further distinction is to ask whether we should say that progress is strictly linear or whether a civilisation (or history in general) can advance and regress, showing a pattern of progress overall but not necessarily in all specific periods. The objections made to historical laws also apply to any speculative philosophy of history.

    Cyclical Theories

    Another class of theories holds that history proceeds in cycles. The philosopher of history most commonly associated with cyclical theories is Toynbee, who suggested that all civilisations showed a similar pattern of growth, dominance and decay. Using examples from ancient history, he divided the past into several complete civilisations and tried to demonstrate that they each arose through responding to challenging circumstances and developed into fully-fledged societies before eventually crumbling. He used these case studies to look for patterns and hence derive historical laws.

    In criticising his work (which, at ten volumes, is far too extensive to summarise effectively here), it was pointed out that it is unreasonable to suppose that general laws could be found on the basis of at most thirty-two examples. Another, more significant problem is that civilisations—not clearly defined by Toynbee—do not exist in isolation, and the continuity between them is not accounted for in positing their demise. Perhaps the most damning aspect of his work, however, was his refusal to announce the doom of our own civilisation when his studies—if we accept their conclusions—pointed to precisely that fate with no likelihood of reprieve.


    Dialogue the Fourteenth

    The Scene: Trystyn, Steven and Peter are still deep in discussion, having moved to the park.

    Steven: So are you part of an order of some kind?

    Peter: I am. I could tell you about it, but...

    Steven: ... then you'd have to kill me? You swore an oath of silence? Meaning goes beyond the bounds of language?

    Trystyn: Heh.

    Peter: Actually, I was just going to say it'd be pretty boring.

    Steven: Maybe not. History is always interesting. Your order must have a story behind it, surely?

    Peter: It does, but there's quite a bit of dispute about it. There aren't many of us, few records from the old days, and we didn't come into contact much with other groups.

    Steven: Still, you could reconstruct the past from what you've got—as near as possible, anyway.

    Trystyn: No. History doesn't work that way.

    Steven: I thought it was pretty straightforward: to find out what the past was like you go to the documents and other sources and piece it all together?

    Trystyn: Well, lots of people tend to imagine it that way but it falls apart rather quickly under analysis.

    Steven: That accursed word again!

    Peter: (He wags a finger, grinning...) I told you so.

    Trystyn: Let's see how it unravels. Take this conversation we're having now and suppose that an historian is trying to give an account of it many years hence.

    Steven: Ah, you mean our inevitable biographers. I guess I'd better say something clever soon.

    Trystyn: Why break a habit? (He grins, too.) In any case, the historian could conceivably have several different records to use—let's say each of us wrote something in a journal about the talk and what happened.

    Steven: So I could make my contributions look weightier after the fact?

    Trystyn: Well, that's the point entirely: we would each remember different things and, unless we had exceptional memories, would record our individual perspectives. Perhaps I'd remember that you tried to avoid paying—again—while you would recall my asking far more "but what does that mean?" questions than I actually did. In any case, the historian would have distinct sources to work from, although all apparently describing the same event.

    Steven: What else?

    Trystyn: Next, there may be other sources available—perhaps fragmentary recollections from others here, written down long after the fact.

    Peter: After you've become famous, he means.

    Steven: Of course.

    Trystyn: The problem for the historian is to put together what actually transpired from these pieces, but at no stage can he or she compare what's been decided to what actually happened because the past has gone.

    Steven: What if all the accounts bar yours say "and then Steven said something incredibly witty that had everyone in stitches"? Isn't it reasonable to conclude that you were just bitter and distorted your version?

    Trystyn: Sure, but this is still an interpretation of the accounts—a coherent version of what they describe, based on other internal factors.

    Steven: Internal how?

    Trystyn: Internal, as in coherent with other accounts—like a diary entry in which I said "Steven is just not funny at all" and multiple entries from you saying, "I just can't fathom why he doesn't laugh at my jokes. It's probably because I keep beating him at pool."

    Steven: Ah.

    Trystyn: So the historian can call his or her version coherent, or say that it lies within the boundaries imposed on any interpretation by the documents to hand, but calling it a true account of what transpired is meaningless because it's never possible to compare the two—as you might compare a scientific theory with the world by testing it.

    Peter: Essentially, this opportunity of testing a theory isn't available to the historian.

    Trystyn: What's more, as we said, the sources are never "the facts" about what happened but always someone's view of it. This means that the historian is doubly damned, as it were: first, historians can't compare what they come up with to what actually happened in order to test it; and second, they can't do that for the sources they have to rely on either.

    Steven: So it's like theory-ladenness twice?

    Peter: Pretty much.

    Trystyn: Not only these...

    Steven: Uh oh.

    Trystyn: ... but the historian is also working backwards, bringing his or her own perspective unavoidably into play.

    Peter: He means that if they were a disciple of yours by then, they would read events differently than if they were one of his followers.

    Steven: Would anyone be so foolish?

    Trystyn: As hard as it may be for you to compute, the issue here is that our historian of the future would have to view the past through contemporary glasses, if you like. His or her assumptions would colour the issues, as would the way in which he or she understands terms we use that may have changed in meaning or that may have been given a special meaning by us. And so it goes on...

    Peter: In brief, the epistemological problems are too great. No matter how hard they try, historians are trapped inside these limitations.

    Steven: So they should just give it up and play rugby?

    Trystyn: It depends on how you view these difficulties. Are they unfortunate, something to be avoided or ignored or somehow worked around; or are they liberating instead? After all, once you realise that everyone has their own perspective and that there's no neutral one from which to pour scorn on yours, then, so long as you're not making things up that go beyond the bounds of what you have to work with, it seems more like history is something people do for a reason: to justify their place in life, where they've come from and where they're going.

    Peter: Thus the history of my order is not uniquely determined by the records we have, so we have a certain leeway to write it such that it provides us with opportunities for the future instead of trapping us in the past.

    Steven: What about my biography?

    Trystyn: I'll write it.

    Curtain. Fin.
    By Paul Newall (2005)

    When Steven Soderbergh's version of Stanislaw Lem's novel Solaris opens with the sight of Kris Kelvin sat on his bed, listening to the disembodied sound of his dead wife's voice, it is immediately clear that the script has departed from the text in significant ways.

    Kelvin has been in this position for an unspecified amount of time; his surroundings, including his apartment, are functional; and as we spend more time following his life it is apparent that he pays little attention to those around him – quite an irony, given that he works as a psychologist. He appears to be a man lost, making just enough effort to get by.

    Lem's 1961 book is his most famous, the subject of a previous film by the legendary Andrei Tarkovsky and part of the old tradition of science fiction that dealt with the ideas involved in – and consequences of – space exploration, rather than the new technologies actually or potentially associated with it. Lem's Solaris is an enigma, a planet that occupies a twin-sun system and has somehow achieved a stable orbit, ostensibly by its own efforts. The subject of a century of studies and exploration, its ocean is reckoned by some to be alive and conscious, as well as aware that it is being examined and reacting accordingly. Kris Kelvin, our narrator, is part of this tradition and explains it at length in the text. In Soderbergh's reworking, however, Solaris is only under assessment as a possible source of energy; there are no lengthy scientific digressions to set the scene, and this is the first of many departures. When Kelvin docks at the Prometheus station at the request of his friend Gibarian, two people are dead and one is missing – the dead including Gibarian himself, who has committed suicide. In Lem's book there were only three crew on board when Kelvin arrived – the late Gibarian, Snow and Sartorius (a male character, replaced by the female Gordon) – while Gibarian's son (whom Kelvin chases) is not mentioned at all. There are other discrepancies that will be noted as their relevance becomes apparent.

    There are several themes at work in Solaris, the first of which is a critique of the ideology of exploration that Lem had Snow expound upon at length, words that Soderbergh gives to Gibarian in his soliloquy intended to help Kelvin understand what is happening on the Prometheus. It is worth quoting in full:


    Writing at a time when the conquest of near-space was considered a "race", Lem's Snow saw it instead as a testimony to our arrogance at wanting to spread our human conceits unto the very edge of the universe rather than make contact with other forms of life in order to improve our own. This is central to Lem's work as he has Kelvin go over the various attempts by scientists to understand Solaris and attribute a status (alive and/or conscious?) to its ocean. For Soderbergh it is of lesser import, and yet he introduces the problem in a different way through Rheya's struggle in Kelvin's cabin to piece together her memories. She recalls a conversation with Gibarian and others, in which she was "talking about a higher form of intelligence". Gibarian had responded dismissively with "you're talking about something else. You're talking about a man with a white beard again. You are ascribing human characteristics to something that isn't human." As viewers, we can contrast his certainty here with his later realisation that the very character of his attempts to study Solaris had ensured he could never understand it, warning Kelvin that "there are no answers, only choices." At this early stage, though, Kelvin is himself quite sure that "given all the elements of the known universe, and enough time, our existence is inevitable. It's no more mysterious than trees, or sharks. We're a mathematical probability, and that's all." It seems we are again invited to note this confidence as we come to watch it erode over the course of the movie.

    In the novel, the dead Gibarian visits Kelvin – just as he does in the film version, but for longer. During their conversation he explains why there is and must remain a barrier to any understanding of the polytheres (his term for the apparitions):


    Here we see an extension of Lem's critique: just as we invariably talk about God from a human perspective, we extend the conceit that we can fathom the non-human to Solaris and indeed anything we might find as we advance the frontier through space. We assume that our reason is sufficient to comprehend what motivates God or an entity like Solaris but this presumption remains, for Lem, a barrier to any genuine understanding. Some have argued that we simply have no alternative but to judge the actions of gods or "higher forms of intelligence" by our own standards, but it seems Lem is making the further suggestion that it is our viewing life as a mystery to be solved that is the deeper problem (an idea found in Wittgenstein, most famously). This is the lesson of several so-called Eastern schools of thought, too, insofar as the immediate experience of life is broken into pieces in the attempt to make sense of it, creating a puzzle that was not inherent in the experience itself. The behaviour of Solaris defies the efforts of all the "Solarists" to comprehend it, as Kelvin discovers for himself when he tries to achieve the same via a polythere drawn from his own memories. All he can learn, it seems, is about himself (whence the suggestion some have made that if and when we finally complete the jigsaw puzzle we will be confronted with an image of ourselves looking out at us). When we reach the close of the movie, perhaps the only motivation we can guess at on the part of Solaris was to give Kelvin the chance to do so.

    The second aspect of both film and book is given by the lines spoken by Gibarian that follow those quoted above:


    The counter-argument to this position is stated forcefully by Gordon, who tells Kelvin that "we are in a situation that is beyond morality." The question is whether or not the polytheres are fully human, and even if they are not whether they should be accorded the same rights. (These issues are especially relevant given contemporary concerns about the consequences of cloning.) In particular, Rheya is identical with her "real" counterpart except that she is formed from Kelvin's memories of her, some of which may be inaccurate (a point we return to below). This is complicated by doubts over whether she can survive away from Solaris, but is an incomplete memory or the effect of the polytheres on the crew of the Prometheus sufficient to justify their destruction? By extension, Lem was asking the same of any forms of life we might encounter in our exploration of the cosmos. That Snow is eventually revealed to be a polythere himself (this does not occur in the novel) – having killed his progenitor by accident – renders the issue still more difficult in Soderbergh's reproduction because it strikes us that action is only mooted for those known to be copies. That is, there is an epistemic dimension: no one advocates the demise of Snow because they do not know that he is a polythere, nor have any reason to suspect it, but Gordon (and later Rheya herself) poses the question regarding Rheya and her own visitor because she is certain of their status.

    By making the change to Snow's character in such a significant way, Soderbergh seems to be undermining any possible case for destroying Rheya. Likewise, the scene involving Snow and Gordon in which the latter lets slip that Kelvin sent the previous version of Rheya away does not appear in Lem's work (indeed, in the book it is Snow who eventually uses the device on Rheya at her insistence), and we may ask why it was given such prominence. What it achieves is to force us to reflect on Kelvin's earlier choice and on how, rather than facing his failure to understand his situation, he dispelled it by banishing the problem, just as Gordon's desire to rid herself of her visitor seems driven more by her own discomfort than any desire to learn about how Solaris sustains them. That Rheya is upset upon discovering that she is a copy only serves to underline this failure to distinguish meaningfully between polythere and reality, which Kelvin comes to appreciate when she asks him "but am I really Rheya?" He replies, making the irrelevance of this demarcation plain, by saying "I don't know anymore. All I see is you."

    The third strand to the story is the fascinating question of how well we can ever know someone (sometimes part of the so-called "problem of other minds"). Although we the viewers are already aware that this Rheya is not really Kelvin's dead wife but an imitation, she is not and has to piece the realisation together herself. "I do remember things, but I don’t remember being there. I don’t remember experiencing those things." Obtaining her memories from Kelvin's, she has content but not context. From our privileged vantage point we watch as this inevitably leads to tragedy:


    Although we can observe that Kelvin would be expected to have largely negative memories of his wife given her suicide and the part he feels he played in it, there is something more important at stake in the inadequacy of his recollections; namely, that all memories are inherently incomplete – even those we have of ourselves. Given that Rheya is dead and therefore must be reconstructed from his memory, the question is not why this should happen but how it could be otherwise. The implication of the polytheres, it seems, is that there can be no total knowledge or understanding of another. Drawn as they are from the thoughts of the crew, they are nevertheless recognised as incomplete renderings by their "real" double; but rather than this being a comment on a supposed failure by Solaris to achieve a perfect copy, instead they speak of the failure of our own conceptions of others to match them. That is, it is we who fall short, not the polytheres or their originator.

    Kelvin appreciates this failure, at least in part, by rejecting the idea that his memories should dictate how life with the new Rheya must play out:


    He thus increasingly conceives of Rheya not as a copy of his wife but an opportunity to atone for his previous errors, with his admission ("all I see is you") pointing us in the direction of acknowledging that complete knowledge of others is both impossible and what we yearn for nonetheless. When Rheya says "I wish we could just live inside that feeling forever", it is difficult indeed to recall that she is supposed to be composed of Kelvin's memories and sustained by Solaris, rather than a new person in her own right. He is, as it were, on an almost level playing field with this Rheya because while she came into existence with an incomplete recollection of her past, Kelvin comes to realise that he is handicapped in exactly the same way as we all are. The desire of lovers to slow time or live in a perfect moment then becomes not a hopeless dream but exactly the response we should expect given that this feeling can neither be recorded as it is in our memories nor expressed in a way that has the same meaning to anyone else.

    This, of course, is the last and greatest theme of Solaris: love. Soderbergh has straightforwardly admitted that he intended to tell a love story and it is here that his version departs most significantly of all from Lem’s novel. Back from Solaris, Kelvin tries to describe what "home" means to him:


    This, however, is where the two tales diverge. Lem’s Kelvin declares that he will "find new interests and occupations" but "not give myself completely to them, as I shall never again give myself completely to anything or anybody", a profoundly negative lesson drawn from his experience with Solaris that he continues as follows:


    He closes by saying that he lives in expectation, persisting "in the faith that the time of cruel miracles was not past". Soderbergh's Kelvin, on the other hand, takes another route by ending his discussion of home with a desperate admission:


    Notwithstanding the incompleteness of his memories, Kelvin cannot shake the feeling that those things he did recall were inaccurate, born of his grief rather than a genuine reflection of what occurred between him and Rheya. Rinsing his finger under the tap, he realises that he did not cut it; turning to the fridge, he sees a picture that he did not remember was there (leading the new Rheya to form the wrong impression of their home); and so Kelvin learns that his grief is conditioned by his inability to let go of his guilt and focus on the beautiful aspects of their relationship. When, with a start, he realises that Rheya is with him and asks her if he is alive or dead, she replies:


    In Soderbergh's Solaris, it is not clear whether Kelvin and Rheya are really back on Earth or whether Kelvin is dreaming this scene as he dies on the Prometheus. What is apparent, though, is that Kelvin has been given another chance precisely because life ends but love does not.

    By Paul Newall (2005)

    Expanding on our fourth discussion, we'll now look at the kinds of moves—rhetorical or otherwise—that can be made when setting out or defending an idea and countering others. We'll also consider some common errors in reasoning that come up in philosophical arguments from time to time, like anywhere else. The purpose of this piece is to provide a toolbox of concepts to use or refer back to when reading through and evaluating pieces of philosophy.

    Making an argument

    Although often we make arguments to try to learn about and understand the world around us, sometimes we hope to persuade others of our ideas and convince them to try or believe them, just as they might want to do likewise with us. To achieve this we might use a good measure of rhetoric, knowingly or otherwise. The term itself dates back to Plato, who used it to differentiate philosophy from the kind of speech and writing that politicians and others used to persuade or influence opinion. Probably the most famous study of rhetoric was by Aristotle, Plato's pupil, and over the years philosophers have investigated it to try to discover the answer to questions like:


    What is the best (or most effective) way to persuade people of something?
    Is the most convincing argument also the best choice to make? Is there any link between the two?
    What are the ethical implications of rhetoric?
    Although we might take a dim view of some of the attempts by contemporary politicians to talk their way out of difficult situations with verbal manoeuvrings that stretch the meaning of words beyond recognition, hoping we'll forget what the original question was, nevertheless there are times when we need to make a decision and get others to agree with it. Since we don't always have the luxury of sitting down to discuss matters, we might have to be less than philosophical in our arguments to get what we want. This use of rhetoric comes with the instructional manual for any relationship and is par for the course in discussions of the relative merits of sporting teams.

    In a philosophical context, then, we need to bear in mind that arguments may be flawed and that rhetorical excesses can be used to make us overlook that fact. When trying to understand, strengthen or critique an idea, we can use a knowledge of common errors—deliberate or not—found in reasoning. We call these fallacies: arguments that come up frequently, go wrong in specific ways and are typically used to mislead someone into accepting a false conclusion (although sometimes they are just honest mistakes). Although fallacies have been studied since ancient times, as was said previously there has been something of a revival in recent years, and today people speak of critical thinking, whereby we approach arguments and thinking in general in a critical fashion (hence the name), looking to evaluate steps in reasoning and test conclusions for ourselves. Hopefully this guide will help in a small way.

    Fallacies

    As we discussed above, some mistakes in reasoning occur often enough that we now have almost a catalogue of them to consider. Here we'll look at those that turn up in everyday situations, whether uttered by politicians hoping to win our votes or the guy at the bar selling his theory as to why his team lost again as a result of poor refereeing. We already looked at a sample in our fourth discussion, so some of the content should be familiar.

    There are two kinds of fallacy: formal and informal. If we look back to the introduction to Logic, a formal fallacy is an argument wherein the structure reveals the flaw, while an informal fallacy is one wherein the structure may seem fine but the content is somewhere in error.
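    Because a formal fallacy is revealed by structure alone, validity can even be checked mechanically. The following is a minimal sketch in Python (the helper function and the encodings of the forms are ours, purely for illustration): it tries every truth assignment and asks whether the premises can all be true while the conclusion is false. It uses two forms we meet towards the end of this piece, modus ponens (valid) and affirming the consequent (a formal fallacy).

        from itertools import product

        def valid(premises, conclusion):
            """A form is valid iff no truth assignment makes every
            premise true while the conclusion is false."""
            for a, b in product([True, False], repeat=2):
                if all(p(a, b) for p in premises) and not conclusion(a, b):
                    return False  # found a counterexample assignment
            return True

        # Modus ponens: if A then B; A; therefore B.
        print(valid([lambda a, b: (not a) or b,   # P1: if A then B
                     lambda a, b: a],             # P2: A
                    lambda a, b: b))              # C: B -- prints True

        # Affirming the consequent: if A then B; B; therefore A.
        print(valid([lambda a, b: (not a) or b,   # P1: if A then B
                     lambda a, b: b],             # P2: B
                    lambda a, b: a))              # C: A -- prints False

    No comparable mechanical test exists for informal fallacies, where the structure may pass such a check and the error lies in the content.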

    The plan of this treatment will be as follows:


    An example of the fallacy
    An explanation as to what's wrong
    Another example
    A more technical explanation, where possible
    Hopefully by the end of the discussion these fallacies should be easier to spot, and you will probably then begin to find them all over the place. Although there is a certain amount of skill in noticing and countering them, they may also give us a grudging respect for those master rhetoricians who employ them with such cunning.

    Argumentum ad hominem

    This is a fallacy we studied before but it bears repeating, not least because it's perhaps the most frequently charged and least understood, in spite of its relative simplicity. Consider the following example:


    Now, whether or not the characterization of the so-called liberal's beliefs is accurate (that question will be asked when we look at another fallacy to come), the point is that it isn't relevant: either the plans really will leave the health service under-funded or they won't (or, perhaps, the situation may be considerably more complex), but the political persuasion of the person making that criticism doesn't impact on the claim itself. That means that the complaint against the liberal is against him or her, not the claim; and that is what the Latin phrase means: an argument against the man (or woman—more accurately, "argument to the person"), rather than an actual counter-argument. In general, there are three kinds of ad hominem:


    Abusive—the person is attacked instead of their argument
    Circumstantial—the person's circumstances in making the argument are discussed instead of the argument itself
    Tu Quoque—the person is said to not practice what he or she preaches
    Notice what the ad hominem is not: it doesn't say that the political beliefs of the liberal don't motivate his or her criticism in the first place, or that he or she wouldn't want to remove health care altogether (although it doesn't seem likely), but only that these things are not relevant to the point at issue. For this reason it is usually grouped as one of the fallacies of relevance. It also is not equivalent to an insult, as many people seem to suppose.

    Consider now some other examples:


    This is an ad hominem abusive, since it attacks a (perceived) quality of the claimant(s) instead of the claim itself. It has the form:


    P1: A claims B;
    P2: A is a C;
    C: Therefore, B is false.

    This is an ad hominem circumstantial, since it brings in the circumstances of the claimant when they are not relevant to the claim at issue (even if they might explain his or her interest). It has the form:


    P1: A claims B;
    P2: A is in circumstances C;
    C: Therefore, B is false.

    This is an ad hominem tu quoque, since it draws to our attention an inconsistency in the argument: if the claim is true, then the claimant should either change his or her ways or admit that the claim doesn't have to apply to everyone after all. It has the form:


    P1: A claims B;
    P2: A practices not-B;
    C: Therefore, B is inconsistent with A's actions.
    Note that this differs from the first two examples in that they are fallacious as they stand, while the third is sometimes an acceptable move to make in an argument. Pointing out an inconsistency in someone's thinking does not show their position to be mistaken but it may show their advocacy of it to be hypocritical. If we change the form slightly, it becomes fallacious:


    P1: A claims B;
    P2: A practices not-B;
    C: Therefore, B is false.
    That someone may be a hypocrite, of course, does not show their ideas to be false. The first form of tu quoque is fine but the second is fallacious.

    In summary, then, the ad hominem fallacy brings irrelevancies to a discussion and distracts from the real point at issue.

    Argumentum ad populum

    This is another instance of a fallacy of relevance. For example:


    The problem here is that the number of people believing in an idea has no impact on its truth. Another interesting example shows this nicely: a common presumption, it seems, is that people in the past almost universally believed the earth to be flat, while we now know that it isn't. The fact that so many people allegedly believed that it was flat didn't change the shape of the earth accordingly, and if someone in those days had asserted that "everyone says the earth is flat" in defence of that claim then we would say that this didn't make it so: no amount of belief in a false idea can make it true. The irony is that historical inquiry teaches us that this example is itself false: the belief in a flat earth was never widespread, and the studies of historians have overturned the myth even though plenty of people still hold to it.

    The general form is as follows:


    P1: A is claimed;
    P2: x many people believe that A is false, where x is large;
    C: Therefore, A is false.
    Reading beyond this argument, we can see that there are hidden assumptions to do with the ability of people to determine the truth of such questions on their own. For example:


    P1: A is claimed;
    P2: A majority of people is able to judge questions outside their area of expertise or knowledge with a high degree of validity;
    P3: It is possible to accurately gauge the collective opinion of people on such matters;
    P4: x many people believe that A is false, where x is a majority;
    C: Therefore, A is false.
    Even here there are still presuppositions that remain implicit and could be drawn out by further analysis. Appealing to the masses—which is what the Latin term means—is irrelevant to the truth or otherwise of the claim. There are more complicated examples we could consider, like this one:


    Here a normative moral claim ("you shouldn't use racist language") is justified by appealing to the number of people who agree with it. Is this an argumentum ad populum, though? As we saw in our discussion of ethics, some moral thinkers suggest that issues of right and wrong are decided by intersubjective agreement; in that case, the claim would actually read something like this:


    P1: Moral issues are decided by intersubjective agreement;
    P2: Intersubjective agreement suggests that racist language is wrong;
    C: Therefore, it is wrong to use racist language.
    Put in this form, it seems like a reasonable argument to make. For those who disagree about intersubjective agreement, however, P1 would be disputed and the attempt to justify the conclusion by appealing to P2 would be regarded as fallacious.

    A slightly different version of this fallacy is the appeal to tradition, where reference is made not to the number of people who hold a belief but the (alleged) fact that it has been believed for so long (or that the belief is an integral part of a society or culture) that to question it is folly. For example:


    Here the traditional belief among a significant number of people that war is a reality of life is used to justify a claim about defence requirements. However, this is not obvious and needs to be argued in turn; the fact (even if true) that people have always believed war to be an inevitability of life does not make it so, nor does the number of people who might believe it now or in the future. Once again, though, the matter is much more subtle: this could be a self-fulfilling prophecy, since if a majority of people feel war to be inevitable then they may be less likely to avoid it than those who are convinced just as surely that there is always a peaceful solution to any potential conflict. Appealing to tradition may be a reasonable thing to do, but only if the traditional belief is itself well-founded.

    In summary, the argumentum ad populum uses weight of numbers to support claims when numbers alone are insufficient to prove them.

    Argumentum ad verecundiam

    This is a move in argument that may or may not be fallacious, depending on the circumstances. It means an appeal to authority, an example of which could be thus:


    Here the speaker refers to the authority of the professor to counter the claim that philosophy is important. The problem is that the presumed authority may or may not be relevant: if the professor is (or was) a lifelong student of philosophy and decided after years working in the field that it really is a waste of time, then perhaps we should look into his reasons for saying so? On the other hand, if he is a professor of mineralogy, say, then—on the face of it—his opinion bears no more or less weight than anyone else's. It may be that additional factors are important: perhaps this professor has also studied philosophy or is known to us to be a particularly trustworthy and astute individual whose opinion we have come to value?

    In short, appealing to authority where the authority does know (or is expected to know) what he or she is talking about is a legitimate move in argument, but when the authority's expertise is not relevant then it is fallacious—indeed, a fallacy of relevance, as before.

    Matters are not always so clear-cut, though. Even if the authority in question really is an authority in the field, it may be that the question under consideration is one of much controversy among his or her fellow academics. In our example, other philosophy professors may be found who say that philosophy is important, so that appealing to authorities on one or other side of an argument does no more than apprise us of what they think. Take another instance:


    Here the implicit idea behind the criticism is that with only a finite amount of money to go around and other deserving causes in need of support, why should we support a quest that academics like Professor Y agree is very likely to fail? Is this argument fallacious? It depends: we would need to know more information, such as whether the professor is an expert in the appropriate area of biology and if there is any controversy among similar experts. If the professor's opinion is indicative of the relevant biological community, then perhaps this is information we should keep in mind when forming an opinion on the issue? On the other hand, if the professor is something of a maverick and the weight of biological opinion goes against him or her, then appealing to him or her as an authority could be seen as fallacious, distracting us from the point at issue. In general, we need to be careful in assessing the value of expert testimony, as well as its relevance.

    Argumentum ad baculum

    Consider the following argument:


    Here an appeal is being made to the consequences of not accepting the argument for raising taxes. The Latin name means an appeal to force (although here we also consider the argumentum ad consequentiam, the appeal to consequences, since the two are so similar), and here the claimant is implying that a consideration of what will (allegedly) follow from not raising taxes ought to force us to accept the proposal. That means the general form is thus:


    P1: Not doing A will result in B;
    P2: B is undesirable;
    C: Therefore, we should do A.
    The fallacy occurs when the threat is in fact not related to the proposed action; in this formulation, that would be challenging P1. In our example, perhaps not increasing taxes really would lead directly to the country falling apart (whatever that means), but it isn't obvious. Indeed, it sounds more like a rhetorical tactic to discount all the alternatives. What we want to know is if P1 is true; if not, then the argument is fallacious.

    Take another instance:


    Here, once again, the force of the undesirable consequences is intended to make us accept the argument that we should vote. Is this fallacious, though? If we were to put it into syllogistic form, this time P1 would seem much more plausible. The important point is that the threat appealed to must be relevant to the issue at hand.

    Argumentum ad misericordiam

    This fallacy is concerned with an appeal to pity, usually for the circumstances of the claimant. Consider this example:


    The problem here is that a bad idea is a bad idea whether it is the result of five minutes or five decades of effort; the fact that someone may have spent a great deal of time coming up with it says nothing at all about its truth or otherwise, so asking someone to take account of the particular factors that went into it, and the discouraging thought of so much time wasted, is simply irrelevant. One way we could set this out is as follows:


    P1: If A is false, all the work put into it would have been wasted;
    P2: Wasted effort is to be avoided;
    C: Therefore, A is not false.
    When we look at it this starkly, it seems obvious that the conclusion does not follow.

    Now take another example:


    Although this is close to another fallacy we'll consider later, we can see that here an appeal to pity (for the less fortunate countries) is intended to distract from the fact that there are other ways to help people, some or all of which may be better than donating aid. That some people may be in unfortunate circumstances does not imply that aid is the best way to help them, and indeed the fact that people elsewhere are in need of help is irrelevant to the question of whether aid is a good strategy, except insofar as it poses the problem in the first place. It may seem heartless to note this, but that is precisely what the appeal to pity intends to do: by hoping that we will want to avoid appearing overly concerned with the logic of argument instead of the people affected, the existence of alternatives is ignored.

    In general, then, we once again have a fallacy of relevance.

    Argumentum ad ignorantiam

    The argument from ignorance usually involves assuming that something is true because it has not yet been proven false. For example:


    The implicit idea at work is that since the existence of faeries has (allegedly) not been disproved, it follows that they do exist. This is not relevant, however: that this disproof has not been forthcoming says nothing about actual existence or otherwise. Even if nothing disproving faeries ever comes about, this cannot form the basis of a proof of their reality.

    To see some of the issues involved in the argument from ignorance, we can also look at a more complex example:


    Here the assumption is made that for evolution to be a successful theory it must be able to explain how life itself came about in the first place; since it is supposed that no one can do this at the moment, it follows (allegedly) that evolution fails. We can try to put this in syllogistic form:


    P1: A successful explanation of life must be able to account for the development of life itself;
    P2: Evolutionary theory cannot do so;
    C: Therefore, evolution is not a successful explanation.
    We can agree that P1 seems reasonable, but the problems lie with P2. It may be that evolutionary theory can provide an explanation, but that this is insufficiently understood by the person making the argument and hence thought to be unsuccessful. However, even if we suppose for the purpose of discussion that P2 does hold, the conclusion still need not follow. What we require is an additional premise, to the effect that evolutionary theory currently cannot provide an explanation and, moreover, that we have good reason to believe that it never will be able to.

    Here we arrive at the crux of the matter: even if evolutionary theory cannot help us at the present time, it may be that tomorrow, next week or in several years with more research and study that the hoped-for explanation can be found. That we are ignorant of such an explanation now is no reason to suppose that we always will. In the syllogism, then, we might have:


    P1: A successful explanation of life must be able to account for the development of life itself;
    P2: Evolutionary theory currently cannot do so;
    C1: Therefore, evolutionary theory can never do so.
    C2: Therefore, evolution is not a successful explanation.
    Viewed like this, we can readily see that C1 does not follow from P2. We would require another premise, such as:


    P3: There are strong reasons to suppose that evolutionary theory can never do so.

    This, of course, is just the kind of premise that would be disputed and it would require a good argument of its own. Without this expansion to show what is going on, the argument relies on current ignorance to justify a conclusion about the future.

    Post hoc, ergo propter hoc

    This Latin term means "after this, therefore because of this" and the fallacy involves mistaking a subsequent event for a consequent event. For example:

    There are plenty of other sporting superstitions like this one we could look at. Although one concern here is that if the lucky hat didn't "work" we might attribute the run of losses to something else, the main issue runs thus: after I found my lucky hat the losing streak stopped; therefore, it was because of it that the team started doing well again—post hoc, ergo propter hoc. We have two subsequent events—the finding of the hat and the ending of the losing streak—that are assumed to be consequent, the former causing the latter. There are plenty of other ways to account for events, though: perhaps the team was missing several key players, or playing away from home? The objection is to note that it need not follow that two subsequent events mean that one caused the other.
    Take another example:

    The argument here is that people are motivated to migrate to one country rather than another because of the assistance it can provide them with; the fact that the number of immigrants went up after the amount was increased is supposed to prove this theory. If we set it out clearly, we can see what is going on:

    P1: Benefit levels went up;
    P2: Immigration levels then increased;
    C: Therefore, immigrants chose which country to migrate to on the basis of benefit levels.
    In fact, we would expect the matter to be far more complex, with potential migrants—both those who choose to leave their home country and those who are forced to by circumstances—weighing up many factors. What is missing, then, is another premise—something like:


    P3: All other factors remained the same.

    If we take P1 and P2 as given, P3 still requires a strong argument of its own, especially since—on first inspection—it's hard to see how such dynamic factors could remain constant long enough to make this assessment.

    In general, the picture we have is as follows:


    P: B follows A;
    C: Therefore, A caused B.
    We could replace A and B with all manner of instances to see how plainly this argument fails; we would need that crucial additional premise that all other factors remained the same if we want to talk about causation. Since it assumes too much, this fallacy is usually called one of presumption.
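    The crucial missing premise can be made vivid with a small simulation. In this sketch (the numbers are invented and the "confounder" purely hypothetical), a third factor z drives both a and b, so the two vary together even though neither causes the other:

        import random

        random.seed(0)

        # A hypothetical confounder z drives both a and b.
        z = [random.gauss(0, 1) for _ in range(1000)]
        a = [zi + random.gauss(0, 0.5) for zi in z]
        b = [zi + random.gauss(0, 0.5) for zi in z]

        def pearson(x, y):
            """Sample Pearson correlation coefficient."""
            n = len(x)
            mx, my = sum(x) / n, sum(y) / n
            cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            vx = sum((xi - mx) ** 2 for xi in x)
            vy = sum((yi - my) ** 2 for yi in y)
            return cov / (vx * vy) ** 0.5

        print(round(pearson(a, b), 2))  # around 0.8: a strong association with no causation

    That b follows or varies with a tells us nothing about causation, then, until alternative explanations such as a common cause have been ruled out.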

    False dilemma

    This fallacy typically involves asking a question and providing only two possible answers when there are actually far more. It seems to be a favourite of politicians, especially when trying to win support for a none-too-plausible policy. Take this classic example:


    The implicit argument here is that two possible positions exist with regard to the matter at hand: in favour or opposed. If we are not in favour, then, it follows that we must be opposed; and vice versa. The use of such tactics often gives us the opportunity of appreciating fine—if overblown—rhetoric, too, like "do you support this war to defend our way of life or are you a cowardly, treasonous blackguard?" To expose the question as a false dilemma, all we need do is show that an alternative response exists. Other names for the same thing are the black and white fallacy, which immediately calls our attention to the shades of grey that are ignored, or the bifurcation fallacy.

    Take another example:


    The person presenting such a choice presumably advocates the lowering of taxes and is offering us a choice of two options. Since the second one seems unpalatable, he or she assumes we will lend our support to the policy. Taking the best possible reading of this situation, we might have the following:


    P1: We can lower taxes or the country can go to the devil;
    P2: No other options exist;
    C: Therefore, a person not agreeing with lowering taxes is content to see the country fall apart.
    Even this does not precisely address the statement as given; for instance, we could hold no opinion at all on the matter, or be insufficiently informed to do so sensibly. These are alternatives, so the choice given is a false dilemma. In the above formulation we could challenge P2, since it seems unlikely that only one policy has been proposed. A single alternative would again make the choice a false dilemma. As before, this is a fallacy of presumption.

    Slippery slope

    This fallacy occurs when a person is too quick with what they suppose to follow from various stages in their argument. Take this example:


    The slippery slope is supposed to run from the acceptance of restrictions on free speech to the arrival of a totalitarian regime, so that once we start on this road there is (allegedly) no turning back—totalitarianism would be inevitable.

    To check if the argument is fallacious we need to look at the initial premise and the conclusion and see if the latter follows. In our example this would give:


    P: Freedom of speech is to be restricted;
    C: Therefore, totalitarianism is inevitable.
    Put so starkly, it doesn't seem very convincing. Moreover, it is by no means obvious that the premise need lead to anything other than what it states; to show otherwise, the person making the argument would need to add more detail in the form of additional premises, explaining why the conclusion necessarily follows. Without that, the fallacy lies in claiming that a slippery slope exists where it doesn't.

    Complex question

    This fallacy occurs when two or more questions are asked at the same time as though they are related, when in fact they need not be. For example:


    Here we are asked two questions ("do you agree that we should lower taxes?" and "do you agree that we should increase prosperity?"), but they are linked together as though reducing taxes and increasing prosperity are the same thing. Sometimes, of course, that is the point: the questioner wants to say that lowering taxes will lead to increased prosperity, so the question is actually asking if we agree that one follows the other. Instead, we can separate the two and perhaps agree with one and not the other. For instance, we might want to increase prosperity but disagree that lowering taxes is the way to go about it.

    Often the rhetorical purpose of a complex question is to associate a proposed course of action that might be rejected with a desirable consequence, suggesting that the latter depends on the former. This challenges the reader/listener to reject both, which would be hard to do without accepting the loss of the desirable part. The way around this strategy is to separate them. Take another example:


    There are again two questions being asked here: "do you want to study philosophy?" and "do you want to waste your time?" The implication we are supposed to draw is that studying philosophy is a waste of time, but we can ask if it is possible to answer "yes" to one question and "no" to the other. In this case, we can: we might think that studying philosophy is not a waste of time, but agree that wasting time is something to be avoided. In that case, we can give the "yes" and "no" answers and hence we have a fallacy of complex question.

    In general, then, a complex question involves being asked something in the form "do you believe/agree with/disagree with A and B (and C, etc...)?" and realizing that the question can be separated into "do you agree with A?"; "do you agree with B?"; and so on. If A and B are related, then there may be no fallacy; but if it is possible to answer the separate questions with different answers, then a complex question has been used fallaciously.

    Accident

    The fallacy of accident is sometimes also called a sweeping generalization and this latter name for it gives an indication of what is going on. It occurs when a general rule is misapplied to a particular situation. Take an example:


    Here the argument is intended to show that the Biblical injunction is mistaken, since killing is unavoidable if we hope to survive. To untangle it and find where the error lies, we look for the general rule and try to see if it has been correctly applied or not. In this case, the rule is easy to spot: "thou shalt not kill". Next we need to ask where (or to whom) the rule is supposed to apply, and here we find the error: it is clear from the context that the rule is for humans and prevents them from killing other humans. Since it's possible to survive without needing to kill other people (although much of world history tends to suggest otherwise), to extend the rule to animals or plants, say, is to misapply it—to make a sweeping generalization that goes far beyond the original intent in an effort to defeat it.

    If we fill in the implicit suggestion and put the argument into a syllogism it immediately becomes clear:


    P1: Thou shalt not kill (other humans);
    P2: We need to kill other animals and/or plants to survive;
    C: Therefore, following the rule "thou shalt not kill" would prevent our survival.
    The conclusion simply does not follow.

    Consider now another example:


    Here the person is taking the same general rule and applying it to the particular situation in which (it is implied) we must "kill or be killed". Thus we have the same rule and the application seems to be reasonable, but this time the sweeping generalization lies in supposing the rule "thou shalt not kill" to read something like "thou shalt never kill, under any circumstances". By taking an uncharitable reading of the principle, the person has over-generalized the rule and applied it to areas not included in its original formulation.

    In summary, the fallacy of accident usually involves trying to disprove a generalization by finding a particular counter-example and assuming that the rule was supposed to apply universally. It occurs when we move too quickly from the general to the specific.

    Hasty generalisation

    This fallacy is often called the converse accident because it is the opposite to the fallacy of accident above; that is, it involves moving too quickly from the specific to the general. For example:


    If we replace "murder" by any other social ill and "men" by a minority group, we can see that we have the kind of argument that has historically been used to justify organized or individual violence against them. The fallacy lies in making a general rule of a few particular cases, hence the hasty generalization. In this case, we need only find a single counter-example to show that the general claim is false, such as a murder by a female.

    Another example could be as follows:


    As before, the single specific instance of a friend lying has been used to justify a general rule that all friends (or indeed anyone at all) are liars. One or more friends who are not liars would serve as counter-examples to defeat the claim. To avoid the hasty generalization we have to be careful not to come up with a general rule from too few particular cases.
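    The logic of the counter-example can be put in a line or two. In this sketch (the names and data are invented for illustration), a universal claim about a group is tested against each case, and a single exception is enough to defeat it:

        # Hypothetical data: True means that friend has lied to me.
        friends = {"Alice": False, "Bob": True, "Carol": False}

        # "All my friends are liars" survives only if every case fits the rule.
        counterexamples = [name for name, lied in friends.items() if not lied]
        print(counterexamples)  # ['Alice', 'Carol'] -- the generalisation is defeated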

    Red herring

    This is not an obscure delicacy but a fallacy that involves bringing irrelevant ideas to a discussion as though they can add to it. For example:


    Even though we could say that by suggesting that prisons are currently ineffective we are not saying that they should just be closed down and everyone inside let out (that would be another fallacy—a straw man), the point is that none of this is relevant to the issue at hand: if prisons do not work as they are, then that is so whether or not we have in mind some improvements, a better idea or are just making a criticism of an imperfect system. By introducing this objection, attention is drawn away from the prison question and onto something entirely different.

    In general, if a claim about A is countered by referring to B, the important question is to ask whether B is relevant to A. If so, it may be an objection worth considering; if not, the objection is a red herring.

    Straw man

    This fallacy takes its name from the image of someone stuffing some clothes with straw and then beating seven bells out of the resultant opponent, supposing thereby that they have somehow won a fight. The fallacy occurs when an argument is countered by taking a weaker form of it and showing where it fails, assuming that this means the original argument has also been defeated.

    Take an example:


    We could render this as a syllogism as follows:


    P1: Investing more in public services is equivalent to taking everyone's money and deciding how it should be spent for them;
    P2: This is equivalent to totalitarianism;
    P3: Totalitarianism has been refuted previously;
    C: Therefore, the idea of investing more in public services is refuted.
    Even if we accept P2 and P3, which we needn't, the important point is that P1 is false and does not accurately describe what was originally claimed. By making two different ideas equivalent the argument becomes easier to address but, since the refutation deals with one idea and the argument with another, nothing is actually accomplished. The argument is mischaracterized or misrepresented in order to make it easier to tackle, but by doing so it isn't tackled at all.

    Another example could be this:


    Here the idea of what the death penalty involves is mischaracterized (we would hope) by supposing that anyone advocating it is actually asking that people be publicly hung on meat hooks. Since (again, we would hope) this measure would not be accepted, the argument is considered defeated. A simplistic and deliberately repugnant version of the death penalty is used to discredit the idea when the person suggesting it probably said nothing of the sort; as a result, the refutation is unsuccessful.

    This fallacy is unfortunately very common and some politicians tend to be adept at its use. It can be used in humour but perhaps the most important lesson to learn from it is not to unwittingly or otherwise make straw men of other people's ideas ourselves.

    Equivocation

    The fallacy of equivocation occurs when an important term in an argument is used in two (or sometimes more) senses. An example might be:


    Here the word "kill" is being used in two different ways: the first time it is employed as a figure of speech, where "killing time" means to use up some spare moments in one way or another; in the second it takes on a more specific meaning, the kind we normally associate with it. The person asking the question has confused these, so that something else we could ask with the word would mean different things depending on which sense we adopted. For instance, we could inquire, "how did you kill time?" and "how did you kill the person?" The first would give us a reply that describes an action and could be all manner of things; the second, though, would have to specifically be about the way in which someone was murdered. Asking the question, then, shows a misunderstanding in the use of the word.

    In general, we can tell if someone has equivocated by finding a term used in two or more contexts, such that its meaning in one is different than in the other(s). Take another instance:


    This time the word "free" has been implicitly equivocated, with it meaning "free of charge" in the first instance but "free of restrictions" in the second, resulting in a confused argument. If we set it out again, this time removing the problematic term and replacing it with synonyms, we might get the following:


    P1: Tuition at my school does not cost students any money;
    P2: There are restrictions on course content, etc;
    C: Therefore, the tuition does cost money after all.

    The conclusion does not follow and the error is plain to see. Rewriting an argument in this way is sometimes the best way to note (or to demonstrate) that an equivocation has occurred.

    Affirming the consequent

    This is a fallacy we looked at in our sixth discussion, an example of which might be:

    "If it rains, I get wet. I'm wet, so it must have rained."

    The problem here is that there is an implicit assumption that the only way to have gotten wet is via the rain, when instead we could think of many other possibilities. For instance, suppose I had fallen into a swimming pool on a sunny day and, in order to give the impression that I was not embarrassed at all, I decided to start musing philosophically by making the above claim. We can immediately see that there is another possible reason for being wet, so the argument fails.

    The general form taken by affirming the consequent is as follows:


    P1: If A then B;
    P2: B;
    C: Therefore, A.

    This fails because, as with the example, we might have another possibility:


    P1: If A then B;
    P2: If C then B;
    P3: B;
    C: Therefore, A.
    The fact that we have B fails to tell us if we should suppose that we have A or C also, so we cannot make the decision either way on the basis of the information available. There could be more than two possibilities, of course. When someone makes an argument that seems to suffer from affirming the consequent (assuming they are not doing so deliberately) they are assuming an extra step, namely that there is only one possibility:


    P1: If A then B;
    P2: Only A can cause B;
    P3: B;
    C: Therefore, A.
    Unless P2 is true, though, the fallacy of affirming the consequent has occurred. A typical example from politics might be someone taking the credit for some positive news:

    "I promised that my policies would tackle unemployment, and the latest figures show unemployment going down. Clearly, my policies have been effective."
    The apparent claim here is that the policies were responsible for the lowering of unemployment, so we have:


    P1: If my policies are an effective measure for tackling unemployment, unemployment should go down;
    P2: Unemployment went down;
    C: Therefore, my policies were effective.
    As we know from experience, however, there are many factors at work in the economy and there could be several possible reasons for the change in employment figures; but a quick-thinking politician can perhaps hope that we are not paying attention and use the fallacy of affirming the consequent to take the plaudits.

    The opposite of this fallacy is affirming the antecedent, which is a valid argument. It takes the form:


    P1: If A then B;
    P2: A;
    C: Therefore, B.
    In the context of our example, this would be like saying "if my policies are effective, unemployment will come down. My policies are effective, so they will lead to a lowering of unemployment." In Latin, it is known as modus ponens.
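
    As an aside, the validity of these forms can be checked mechanically. What follows is a minimal sketch in the Lean proof assistant (the choice of tool is ours, purely for illustration); it shows modus ponens being accepted for arbitrary propositions A and B, while affirming the consequent is rejected because there is a countermodel in which A is false and B is true:

    -- Modus ponens: from "if A then B" together with "A", conclude "B".
    -- Lean accepts this for any propositions A and B, which is what it
    -- means for the form to be valid.
    example (A B : Prop) (h : A → B) (ha : A) : B := h ha

    -- Affirming the consequent is not valid. Countermodel: let A be False
    -- and B be True. Both premises then hold...
    example : (False → True) ∧ True := ⟨fun h => h.elim, trivial⟩
    -- ...but the conclusion A (here, False) cannot be proven; indeed its
    -- negation can be:
    example : ¬False := fun h => h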

    Denying the antecedent

    This fallacy looks similar to affirming the consequent. An example might be:

    "All tomatoes are red. This thing isn't a tomato, so it can't be red."

    The error here is immediate: the "thing" under discussion could be anything at all and is perhaps red; the fact that it isn't a tomato doesn't tell us anything about its colour, but only about one thing that it cannot be. We have:


    P1: All tomatoes are red;
    P2: This isn't a tomato;
    C: Therefore, it isn't red.
    The item being considered could be a UK postbox, say: the premises would both be true but the conclusion would be false. That suggests we have a formal fallacy. In general:


    P1: If A then B;
    P2: Not A;
    C: Therefore, not B.
    To use the political example above again, we could have another instance of the same thing:

    "If the government's policies were an effective measure for tackling unemployment, unemployment would go down. Their policies are ineffective, so unemployment will not go down."

    As we discussed, there could be several other reasons why unemployment might go down in spite of the bad policies, so the argument fails and is an example of denying the antecedent.

    The opposite of this is denying the consequent, a valid argument that takes the form:


    P1: If A then B;
    P2: Not B;
    C: Therefore, not A.
    For our example, this would give us something like "my policies will lead to a lowering of unemployment, but unemployment didn't go down so my policies were not effective." In Latin this is called modus tollens.
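
    The same mechanical check works here. Continuing the illustrative Lean sketch from above, modus tollens goes through for arbitrary propositions, while denying the antecedent fails on the same countermodel (A false, B true):

    -- Modus tollens: from "if A then B" together with "not B", conclude "not A".
    -- Lean accepts this for any propositions A and B.
    example (A B : Prop) (h : A → B) (hnb : ¬B) : ¬A := fun ha => hnb (h ha)

    -- Denying the antecedent is not valid. With A := False and B := True,
    -- the premises "A → B" and "not A" both hold...
    example : (False → True) ∧ ¬False := ⟨fun h => h.elim, fun h => h⟩
    -- ...but the conclusion "not B" fails, since B (here, True) holds:
    example : True := trivial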

    Begging the question

    Sometimes people use the conclusion of their argument to prove it, whether accidentally or not. For example:

    "Theft is illegal because it's against the law."

    This is called begging the question, or assuming what is to be proven in order to prove it. In Latin the fallacy is known as petitio principii. For this example, the question we could suppose was asked might be "why is theft illegal?" The person inquiring could be wondering why it is wrong to steal a loaf of bread to feed him- or herself, for instance. The reply states that theft is against the law, and hence illegal, which amounts to saying, "it's against the law because it's against the law"; so the conclusion (that theft is illegal) is used to answer the question ("why is theft illegal?").

    Another example could be:

    "My friend is completely reliable. How do I know? Because I trust him."

    Here, once again, the conclusion (that my friend is reliable) is assumed beforehand (I trust him). There is no attempt to show why my friend is reliable, other than—ultimately—to say that he is reliable, so we end up with "my friend is reliable because he is reliable". In general, if we can recast an argument in the form "A is so because A is so" then we have reasoning that goes around in a circle and hence begs the question.

    Unfortunately the phrase "begging the question" is frequently misused, particularly to mean "but this raises the question that..." This is something to be aware of and hopefully avoid.

    Composition

    The fallacy of composition occurs when the whole is assumed to have the same qualities as a part. For example:

    "Every player in the side is world class, so the team must be world class."

    As many sports fans know, a team full of world class players does not make a world class team; often they simply cannot play together, or don't get along. The mistake lies in supposing that the qualities of the individual players will be carried over to the team composed of them. Another example could be:

    "You can't drink hydrogen and you can't drink oxygen, so you can't drink water, since water is made of hydrogen and oxygen."

    As we all know, we can drink water and so this argument fails. It does so because it assumes that a quality shared by the two separate elements will be retained by their composition. Sometimes it happens that such qualities are carried over when a collection of individual facts is made into a group (for instance, individual racehorse owners typically have more horses than non-racehorse owners and we might expect the total number of horses owned to be higher for the former group than the latter), but there needs to be a convincing reason why the step can be made. Without justification we find the fallacy of composition.

    To conclude, there are many pitfalls to be on the lookout for when reading, writing or discussing philosophy, politics and other subjects. As we learn to recognise them and realise that they share a structure or form we can understand, however, they become easier to notice and address.
    Teaser Paragraph: Publish Date: 06/16/2005 Article Image:
    By Paul Newall (2005)

    (... continued from part 1...)

    Other issues in the Philosophy of Religion

    Wittgenstein

    An interesting current in the philosophy of religion concerns the philosopher Wittgenstein and how some of his remarks and ideas might apply to religion. Later in his life he suggested a conception of meaning via "language games", according to which the meaning of a term is decided by the use to which we put it within a given context. The word takes its meaning from this and so we have to be careful to appreciate which "game" we are playing when using a particular term. He wrote that:



    ...while we can ask questions about justification within a language game it is a mistake to ask about the justification of "playing" the game in question...

    Suppose, then, that we ask the question "does God exist?" The answer we give depends on which game we are playing and what the terms mean within it. Similarly, if we ask, "was Jesus God?", we would get a different answer from Christians, Jews and Muslims, as well as others. Even non-theists sometimes mean different things by the term "God" and we need to understand how it is being employed before we can make sense of the question. Likewise, if we say, "where is the evidence for God?", we have to remember that not everyone means the same thing by "evidence" or evaluates the same "facts" in an identical fashion.

    Philosophers taking a Wittgensteinian perspective argue that asking for justification of a religious belief can only be done from within the language game, since each game has its own standards for deciding what we can or cannot say, as well as what is meaningful or rational. It would be a mistake to apply the methods of the natural sciences to a religious proposition, then. In opposition, many people feel that religious claims do say something about the universe, irrespective of what language game we are playing. That would suggest that the question "Is there a God?" should be answered "yes" or "no" (or possibly "don't know" or "I'll tell you next week"), as well as understood to be making a claim about the universe, much like "there is a limit to how much of Hugo I can take"; and hence not avoided by saying that it means different things to different people.

    Prayer

    According to Jim Morrison, "you cannot petition the Lord with prayer." Nevertheless, many people do. Some pray for good health for themselves or others, or for world peace, or perhaps for the strength to cope with some particular adversity. However, the traditional understanding of prayer gives rise to several philosophical problems.

    Some claim that prayer requires a miracle on each occasion that something is prayed for, but that is not obvious. We discussed above the idea that miracles are impossible; nevertheless, many of the things people pray for do not require a large-scale intervention in the laws of nature (if we suppose there are any). A second concern is that if God already knows what we might pray for, as well as whether He will or will not bring it about, then what is the point of prayer? One response to this difficulty has been to say that God is outside of time, in which case it makes no sense to say that He has determined a course of action before the prayer. A counter to this is to ask how God can have an effect within time if He is outside of it, to which a rejoinder could be simply to ask, "why not?"

    Perhaps the most serious objection to prayer, though, is to wonder why we should pray at all to a God who is supposed to be benevolent. If God is good, as well as all-powerful, why would He create a world that is deficient in goodness to such an extent that people have to pray that it be made better? We can see that this objection is related to the problem of evil: if the prayer would make the world worse then He would not grant it; if it would make it better then why was His creation deficient? A possibility that has been suggested is that prayer bridges the distance between us and God, achieving something that even an all-powerful God could not otherwise manage: the good that comes from personal relationships with His creations.

    The Plurality of Religions

    It appears to be an empirical fact that there are many different religions. Indeed, not just separate religions, like Christianity, Judaism and Islam, but distinctions even within these such that two Christians, say, may disagree about many aspects of their faith. This plurality of belief leads to a problem: which of the forms of religious belief is the correct one?

    This difficulty is an important one. If Catholicism in its current form is true, for example, then every other belief, past and present, would seem to be straightforwardly wrong, whether wholly or in part. Nevertheless, most religions make their own truth-claims and no more than one of them can be correct in so doing (or so we would usually say, with the alternatives and what truth means considered in our tenth discussion). In general, people have a whole host of religious experiences that differ and diverge widely; can these possibly be the basis for believing that any of them actually gets at reality?

    One answer to this question is to say that it is rational to trust our experience of the world except for where we have reason not to, and that religious experience comes under this rubric like anything else. Thus, if we happen to have experiences that are explained by supposing Christianity to be true, then it is reasonable to suppose that it is and act accordingly. However, by exactly the same argument it would be reasonable for others to suppose that conflicting religious beliefs are also true if that is what their experience enjoins upon them. This is a severe problem for the idea that a given religion can be considered properly basic, or that it is rational to hold particular religious beliefs.

    The response that it is reasonable to stick with the religious beliefs we have, on the grounds that they form a guide to the world and can be expected to continue to do so, is undermined by the fact that it implies the religious experience of almost everyone else (that is, anyone not sharing our religious perspective) gives rise to false beliefs. We would start by insisting that it is reasonable to trust our experiences, and hence our religious experiences too, and finish by saying that actually we should only trust those that fit our religious perspective and distrust those that do not. To take an example, it would be reasonable to be a Christian because our experience can be characterised that way; but we only know that such experiences are to be trusted because they can be called Christian and not something else, those other characterisations being the ones not to be trusted. That this strange situation arises shows how troublesome the problem posed by plurality is.

    Several thinkers have, over the years, provided a rejoinder to this difficulty in various forms, each having a similar structure. According to this, there is only one true religion after all, but many circumstances combine to give the appearance of plurality. There is an esoteric (or "hidden") core to religions that is ultimately the same, but the exoteric (or "outer") forms differ because cultural concepts, practices and other factors mean that each of us interprets this reality in his or her own way, sometimes incompatibly so, even though the reality itself is actually the same. In this case, then, the conflict due to pluralism does not really come about, because each form of religious belief is just a different way of seeing the same thing.

    To conclude this discussion, we can see that why people believe is often as interesting as what they believe. Although disagreements on religious issues continue to feature significantly in contemporary politics and society, perhaps a philosophical approach has some value after all?

    Dialogue the Twelfth

    The Scene: Trystyn, Steven and Anna are walking to the university campus to hear a talk entitled "Is rugby more important than God?" A vocal minority is protesting the event and our intrepid philosophical threesome is accosted by a serious-looking individual in a habit who seems to be observing the protest but playing no part in it.

    Steven: Here we go...

    Brother Peter: Hail, friends. Are you going to listen to the talk in yonder building?

    Trystyn: We were considering it.

    Brother Peter: Do you think questions of religion are best tackled in this fashion? Look at the protest it's drawn.

    Steven: It's an interesting proposition. I understand the same guy has already proved rugby to be more important than sex.

    Anna: Sadly that's far too easy to believe. You men are too long on rhetoric and not...

    Brother Peter: (Cutting her off...) If I may... You're rather missing the point. This talk mocks beliefs that people hold to be very important. You can see that lots of local folk have taken offence to it. Why should our Lord be subject to ridicule in this way?

    Steven: Well, shouldn't we hear what he has to say first?

    Brother Peter: Perhaps. Do you believe in God, friend?

    Steven: No.

    Brother Peter: Do you mean that you just don't believe or that you've determined there is no God?

    Steven: I don't believe, but I don't say it's impossible. In any case, how come you're here? Are you protesting, too?

    Brother Peter: No; I'm just watching. I thought it might lead to a chance to discuss religious ideas. You can see from my garb that I rather make a hobby of it.

    Anna: How is it that you came to believe?

    Brother Peter: That's an interesting question. For me, personally, it isn't because of any one specific thing I can point to. I've read all the arguments for and against God, of course, and disputed them with others until I'm blue in the face. Still, you must understand that belief is something you come to, gradually as it were but then all of a sudden, as though it makes sense of everything else.

    Steven: What about all that's wrong in the world? Why does God let bad things happen—earthquakes that kill thousands, diseases, wars and famines that kill millions? Why does he let innocent children or animals be murdered? For what?

    Brother Peter: I appreciate what you're saying, friend, but that would be to misunderstand. I don't believe in denial of these things, but in spite of and because of them; because I seek to make sense of them and fathom whether the world can ultimately be a just and good one, even though we seem to make such a mess of it each day and even though it often seems so far away from how I feel it could be. Do you see how these things can lead to a person seeing the world in a different way?

    Trystyn: I can.

    Anna: Don't you take it all a bit too seriously if you worry about a talk like this, though? I mean, surely God exists or not, irrespective of whether some harmless fun is taken seriously?

    Brother Peter: Of course I take your point, and it may seem a triviality. Even so, I feel as though I would be helping others in showing them what I have found in the way I now see the world.

    Steven: Aren't you presuming to tell others what to think? What business of yours is it whether people turn up here tonight or not?

    Brother Peter: No—that is to miss the point entirely. Do you see the dilemma someone like me is placed in? I feel as though I've caught a glimpse of a profound truth that seems as though it would make the lives of others incalculably richer. At the same time, I want to respect their decisions and I hope that they can come to a similar realisation on their own. That leaves me trapped between a respect for your privacy and right to do as you choose, within reason—rights I very much accept as a member of society—and a desire to see everyone get the most they can out of this life, which—for me—includes helping them to understand God.

    Anna: Can we understand God? I seem to remember reading about uncertainty in this area.

    Steven: (Indicating Trystyn...) Probably heard it from him, I'll bet.

    Brother Peter: I was too quick there, friend, and you are quite right to pick me up for it. Insofar as I can know or be certain of anything, I feel that God exists. This belief is basic to me, and to my experience of the world. Even though I appreciate that there are problems with my understanding, and that any intellectual arguments I may call upon can fail to convince you, nevertheless my belief is somehow more than the sum of these parts that we might say make it up. (He pauses.) I guess it's hard to explain.

    Trystyn: Isn't that the point? If you could explain it, I rather suspect it would fall short.

    Brother Peter: (He is smiling.) I think you know exactly what I mean, friend.

    Steven: Shall we go and see this guy talk or are you stopping here, Trystyn?

    Anna: (To Brother Peter...) Would you mind?

    Brother Peter: Of course not. Perhaps I'll speak to you about it afterwards and go and see it myself. I just hope you think about what we've discussed, as I shall think about you. I hope we can learn from one another.

    Steven: Amen.

    Curtain. Fin.