
Publish Date: 06/15/2005

By Paul Newall (2005)

In this discussion we'll look at the philosophy of religion, along with some aspects of theology. The importance of this area of philosophy needs little introduction: people have struggled for many years to understand what religious ideas and experiences mean or do not mean, and this is so today just as surely as it was in the past and will likely remain so for the foreseeable future. Since we'll look at so-called Eastern philosophies later in this series, there will be an inevitable focus here on Western religious ideas.

    Philosophy of Religion

    The philosophy of religion looks at God and the gods, philosophical arguments for and against them and analyses of them as concepts. It also considers the meaning of religious ideas and experiences as well as what we can say about them. Claims about God are traditionally split into two areas: natural theology, according to which we can use reason to argue for the existence of God; and revealed theology, which holds that statements about God are revealed to us in religious experiences or scriptures. Sometimes there is an overlap, but this is a useful distinction to bear in mind.

    In this section we'll examine belief in God and its justification, looking at some of the main approaches to this issue.

    The justification of belief

As we saw in our discussion of truth, there are many ways to approach the question of whether a particular religious belief is true or not. We can try to refer to evidence that suggests a positive answer, or other evidence that speaks to the contrary; we can set out arguments that do likewise; and we can seek to explain why a belief coheres with what we already think we know, or why it makes sense of our other beliefs and provides a framework for them. We can also make a distinction between showing something to be categorically so and arguing that it is reasonable to believe it, even though there may still be good objections. Another alternative is to ask how probable it is that a belief is true.

Later in this discussion we'll consider some of the arguments for the existence of God, together with one of the most important that suggests otherwise. One thing it's important to understand, however, is that the philosophy of religion is far more subtle in its study of such arguments than some critics of religion appear to suppose: none of the potential justifications of belief in God is taken (or intended) to be proof; instead, religious beliefs are a complex interaction of ideas, and to suppose that a single argument could ground them all is not only unreasonable but contrary to the way in which we decide questions in everyday life. Thus the modern justification of belief is cumulative, and complaining that a particular argument fails to make the case for the entire network of beliefs is to miss the point. Indeed, although there is general agreement that the five main arguments fail to prove the existence of God, some philosophers of religion claim that proof is not what should be aimed at; instead, their combination makes it more likely than not that God exists.
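The arithmetic of a cumulative case can be sketched in Bayesian terms: each argument contributes a likelihood ratio, and the ratios multiply, so several individually weak arguments can together shift the odds past even. The numbers below are purely hypothetical, chosen only to illustrate the multiplication; nothing here is a claim about the actual strength of any argument.

```python
# Illustrative Bayesian sketch of a cumulative-case argument.
# All prior odds and likelihood ratios are hypothetical.

def update_odds(prior_odds, likelihood_ratios):
    """Multiply prior odds by each argument's likelihood ratio."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

def odds_to_probability(odds):
    """Convert odds (p / (1 - p)) back to a probability."""
    return odds / (1 + odds)

# Start from even prior odds (1:1), then apply three individually
# weak arguments, each nudging the odds by only a modest factor.
posterior_odds = update_odds(1.0, [1.5, 1.3, 1.4])
print(round(odds_to_probability(posterior_odds), 3))  # prints 0.732
```

No single factor here is persuasive on its own, yet jointly they move an even prior to roughly 73%, which is the structure of the "more likely than not" claim above.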

    Should belief be justified by proofs at all? When it comes to religion, some argue that it need not be. There are three main suggestions as to why it might be better to think otherwise:

    The rationality of belief
    Belief and faith
    The meaning of "God"
    In the first case it is asked what it means to say that an argument (or arguments) for the existence of God should convince a rational thinker. After all, what is a "rational person"? How do we determine what is rational and what isn't? Some philosophers, particularly Wittgenstein, have proposed that rationality depends on what we use as criteria for making decisions about ideas and arguments, noting that these can differ from person to person. Indeed, we saw in our sixth discussion that the theories we hold can affect how we interpret evidence, so the framework we approach a religious concept from can have an important influence.

    The second view objects that if we were to believe because of arguments, or even if we could show that the existence of God were certain or rationally justified, there would be no room left for faith. Religious belief is to be taken not as something that can be proven or disproven but instead as a boundary condition or principle through which we interpret life and our experiences. Critics of this perspective note that we do pay attention to experiences or arguments that purport to count against belief, so there must be some measure of considering the evidence and arguments for and against and deciding on the balance of probabilities. It is also suggested that God would not make it unreasonable for us to believe in Him, so there must be some value in the proofs of His existence, whether or not we find them convincing. Some take a probabilistic view in that belief in God is more likely than not (or vice versa) after considering the arguments and evidence for and against, with the result that discussion focuses on how best to evaluate and understand this probability.

    The third idea is that coming to believe in God adds nothing to our store of facts about the world but instead involves a different way of seeing the same things. That is, the existence of God is not a fact to be proven like other entities we take to exist, but a new way of understanding the universe. In that case, trying to prove existence is missing the point; when we say "God exists" we are not saying "x exists" but rather changing our way of thinking about everything else.


A still-popular current in the discussion of religion is evidentialism, the seemingly plausible epistemological idea that we are only justified in believing things we have evidence for. The most extreme form was set out by Clifford in 1877, when he famously asserted that "it is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence", so that not only would religious belief be unjustified, but it would also be wrong—a failing that (presumably) ought to be punished in some way. In fact, Clifford was not directing his arguments at religious belief, but plenty have since done so.

Not many philosophers of religion take Clifford or evidentialism seriously these days. One way to see why is to ask what evidence we would need to believe that we should do as Clifford said; that is, evidentialism seems to be self-refuting. Another is to look at trust, which poses very difficult questions for evidentialism: do we need evidence that a trusted person is trustworthy before we can justifiably trust them? More importantly, perhaps, we believe things every day without evidence, and if we extend the insistence that only propositions for which we have evidence may be believed from religion to our wider experience of the world, then there will be few things left to believe.

None of these objections means that no evidence is required to believe something, but instead that we need to ask whether "where's your evidence?" is even the appropriate question all the time, or whether the absence of evidence is a decisive refutation of a theory. For religious ideas, the way in which people justify their beliefs may be very different from the manner in which we interpret the results of an experiment. As the philosopher Alvin Plantinga wrote:

    We'll look at where this question leads next.

    Properly basic beliefs

A relatively new current in the philosophy of religion is reformed epistemology, originating with the writings of Plantinga, Wolterstorff and Alston. It objects to the foundationalist view in epistemology (covered in our fifth discussion) and its application to the question of whether or not it is rational to believe in God. On that view, a properly basic belief is one that is held in an immediate, basic way; not on the foundation of other beliefs but because it is certain for us. An example would be the belief that "it feels to me as though I am in love". Note that this is so whether or not the unlucky victim actually exists, whether I am confusing love with some other emotion, or whether—as targets of Hugo's affections invariably claim—there is no such thing as love. The belief is different from "I am in love with her" or "there is a girl with whom I am in love", both of which rely on other beliefs or assert things that are not immediate.

In the past it was held that a belief—such as belief in God—was rationally justified only if it could be justified on the basis of evidence and other beliefs. For instance, if the arguments for the existence of God (covered below) were found to be stronger than the counter-arguments, or the arguments against His existence, then it would be rationally justified to be a theist; and vice versa.

Reformed epistemology challenges this perspective by saying that belief in God can be properly basic. It does this in two ways: firstly, by disputing the claim of evidentialists like Clifford that we have a duty not to believe without sufficient evidence; and secondly, by asking how a person who believed in God—after considering the matter at length and perhaps taking into account significant religious experiences—could possibly not be following their duty to believe only what they feel is justified. After all, if a person holds basic beliefs in God and finds evidence of Him (such as religious experiences, moral order, or purpose) all around them, there could hardly be anything strange in their being rationally justified in believing. We might dispute the soundness of their experiences, of course, but it makes little sense to say that he or she is mistaken in their belief on that basis.

    The further condition that Plantinga found important in arriving at justified beliefs is warrant. When is a belief warranted? He identified four conditions that a warranted belief had to satisfy; namely, it would have to be:

    Produced by cognitive faculties (like memory and perception) that are working properly;
    Produced by these same faculties working in the proper environment;
Produced by faculties aiming at true beliefs; and
    Successfully reaching their target, or at least with a high probability of having been successful.
    The difficulty lies in evaluating these conditions. Suppose we assume that there is a God; in that case, we would have to assume that He has created us (whether via evolution or special creation, say) such that our faculties work correctly, thus enabling us to learn that He exists. It would seem to follow, then, that belief in God is warranted. Conversely, suppose God does not exist; then any perceptions, memories of religious experiences, and so on, would be mistaken, perhaps due to delusions or a failure of our faculties to work correctly. Belief in God would then not be warranted. Reformed epistemology points to the influence of our prior decision on the existence of God in this assessment: belief in God seems to be warranted/unwarranted if and only if it is true/false. While this means that we cannot say it is straightforward that belief is or is not properly basic, it appears that it can be for individuals.

    Forms of religious belief

People believe many different things about God, with some saying they do not believe at all. In this section we'll look at some of the attempts to say things about Him, including whether we can say anything at all.

    In the first place, how can we know God? If He is ineffable or indescribable, then how is it that people have sought to give accounts of Him within religious texts throughout the years? One answer is to say that we can take a negative approach and only say what God is not. To some, God is even too holy to be named; and perhaps He is beyond human language and its limits? Others suggest that God could be known from His effects, hence talk of His being all-powerful, just, all knowing, as well as the converse of these. More recent answers include calling religious language symbolic, such that it is not to be understood in the normal sense but as evocative of deeper meaning; as metaphor, so that we talk of God through metaphor; and myth, perhaps giving timeless insights into the human condition but often through the interpretations and context of a particular age. As we saw in our discussion of truth, it could be that religious language is intended to correspond to the world and hence tell us something about it; or instead that it coheres with our experiences and hence makes sense of them.

The basic form of belief is theism, the belief in God as traditionally understood in the monotheistic (that is, single-God) religions of Christianity, Islam and Judaism. Pantheism takes a different perspective in that God is identified with the universe, so that they are identical. Opinion is divided as to whether this makes pantheists true theists or atheists (see below): if God is no more and no less than the sum total of nature, then can we say, as theism does, that God exists? Polytheism holds that there are many gods, whether as a pantheon as in Ancient Greece or otherwise. Deism takes there to have been a God who created the universe and, as it were, "set it to running", but who otherwise plays no further part in it. Panentheism is perhaps best understood as taking God to be to the universe as the soul is to the body—more than the sum of its parts. We assume here that enough is known about particular forms of belief that they can be briefly introduced before passing to philosophical analysis.


There are several different theologies that provide unique perspectives on some of the problems within the philosophy of religion. One of the most significant in contemporary theology is process theology, which comes from the work of A.N. Whitehead and Charles Hartshorne. According to process theologians, God and the universe are interdependent: anything that happens is a result of cooperation between God and His creation. Although process theology tends to look nothing like the traditional understanding of God and His relationship with the world, it avoids issues like the problem of evil (see below) because He is considered already to be intervening as best He can, such that no further evil could be prevented.

    Another possibility is postmodern theology, which is obviously related to the issues we considered in our thirteenth discussion. It tries to take theology beyond the metaphysical and other assumptions we looked at there, which some so-called postmodernists find untenable.

    Liberation theology is both an approach to theology and a social movement—primarily within Latin America but also elsewhere throughout the world—that attempts to understand and expand on the implications of Christianity for personal and public life. It seeks to ask how the Church can be relevant to everyday life and get involved in liberating people from poverty and oppression.

Feminist theology tries to seek out any biases in religious stories and texts, asking whether their message is relevant to all people or in fact comes at the cost of women. In particular, issues such as the ordination of women within Christian churches or the role assigned to them in Islam are major concerns, with more emphasis—in general—being placed on liberation than on salvation.

    The phenomenology of religion asks whether religious phenomena can be distinguished from others in a meaningful way. What is it—if anything—that makes them different? Even if we perhaps cannot ultimately answer religious questions definitively, we need to be as clear as possible what it means to be religious if we are to choose one of the many religious ways of life.


    Not everyone believes in God. The etymological roots of the term show that atheism was originally understood as the denial of the existence of God; that is, a positive assertion. It was also historically used to denote believers in a different God. Another perspective, however, has come to prominence in more recent times according to which it is taken as a negative statement—merely an absence of belief in God. Often these two meanings are called strong and weak atheism respectively. The latter includes, say some atheists, those people who have never heard of or used the concept of God.

As it stands, weak atheism would be little more than autobiography; saying "I don't believe in God" seems much like declaring "I don't believe in true love" or—to take a more important example—"I don't believe Carlos Spencer has an equal". To make it mean more, then, weak atheists tend to understand it in terms of the burden of proof. Much like the way in which a defendant in a court case is—or is supposed to be—innocent until proven guilty, the weak atheist suggests that a person would not believe in God until a convincing argument (or arguments) has been made. After all, we wouldn't convict someone on a lack of evidence or without reason to suppose them guilty (although this statement may unfortunately appear naive in the "modern" world), so why—asks the atheist—would we do otherwise when it comes to belief in God?

As we have seen, we may criticise this approach via the idea that belief in God is properly basic. If philosophers in the reformed epistemology tradition are correct then belief requires no justification before it can rationally be held. If faced with a potential argument that would defeat their belief, such as the problem of evil (covered below), the believer would have to meet this challenge in order for their belief to remain justified. This defensive approach would only be required when a defeater is offered, however. Whether the atheist can maintain that the burden of proof is on the theist in the face of the challenge of reformed epistemology is the subject of much discussion.

    Strong or positive atheism makes the claim that God does not exist and hence offers reasons as to why we should reject Him. These might be the problem of evil, criticisms of specific (or general) theological ideas, or claims that the concept of God is meaningless, unsupported by evidence, a psychological flaw or simply unnecessary. Notice that the failure—if we judge it that way—of arguments for the existence of God to prove it does not lead to strong atheism, just as failing to prove guilt means the defendant is presumed innocent—not that they actually are. Whether we should accept it or not depends on how convincing we find these positive arguments.

On the face of it, there is no reason why an atheist should be any more or less rational than a theist, or indeed anyone else. Nevertheless, in order to give some content to atheism other than the absence of belief discussed above, many atheists prefer to view their perspective within a larger scheme of taking a skeptical and critical approach to claims about the world. Thus, they say, atheism should be characterised more by the way in which its adherents attempt to find out about the world than by a concern solely with the issue of God. Theists, of course, can just as easily—and generally do—advocate much the same things, and some suggest that a joint effort in this regard can best marginalise those who consider it irresponsible to believe/disbelieve in God and would tell others what they should or should not believe.


    The earliest known agnostic was Protagoras, who wrote that:

The term itself literally means "without knowledge" and was coined in 1869 by T.H. Huxley. Discussing his position on matters theological, he described his difficulty in summarising it for others:

    As a result, he decided to call himself an agnostic to draw attention to the fact that he did not have knowledge of whether God existed or not.

    Some people misunderstand agnosticism to be a "middle way" between theism and atheism: where one is supposed to say that God exists and the other that He does not, agnosticism is said to represent the thinker who has become tired of the struggle between two opponents battering each other when the bell shows no sign of ringing any time soon, deciding instead to offer a shrug of the shoulders and the honest response "I don't know". However, a theist and atheist alike may take the position that we cannot know whether or not God exists but that, on the balance of probabilities and the various arguments for and against, we can make an educated guess. Even a strong atheist or the most certain religious believer may admit that they cannot be absolutely certain of anything but that the possibility of error strikes them as small. It is perhaps better, then, as well as more accurate, to understand agnosticism as an epistemological position rather than something distinct from belief or non-belief.

    Arguments for the existence of God

    Although, as we've seen above, some thinkers do not believe that the existence of God can or needs to be justified, there are five traditional arguments that seek to do just that, some or all of which can be called upon by the believer to explain why he or she decided that God does indeed exist. We'll look at each in turn.

    The Ontological argument

This argument was first propounded by St. Anselm, Archbishop of Canterbury, in his Proslogion of 1077-78. Some consider that he intended it for those who were already theists, not necessarily to convince atheists. This distinction is important because his goals for the argument tell us how it was supposed to function: if it was meant for theists, to provide a rational basis for already-existing faith and hence work as a cumulative argument (as discussed above), then we might judge it differently than if it was supposed to prove definitively the existence of God. Anselm himself wrote:

    Given this context, we can now look at the argument itself. In basic form, it states that the definition of God entails His existence. For example:

P1: God is the greatest possible being, one than whom nothing greater can be conceived;
    P2: If God is just a concept and does not exist in reality then a greater being can be conceived, one that exists both as a concept and in reality;
    C1: This being would be greater than God, contradicting P1;
    C2: Therefore, God is not just a concept and must exist in reality.
    Thus the fact that we define God to be the greatest possible being means that He must exist, or else He would no longer be the greatest. Another way to understand the argument is to distinguish between a necessary being (that is, one that necessarily must exist) and a contingent one (that is, one that may or may not exist, depending on the circumstances); according to the ontological argument, then, it would be greater for God to exist as a necessary being than as a contingent one. Notice that this argument depends only on the definition, not any facts about the world. It is perhaps for this reason that many people find it unsatisfactory at first glance, since it doesn't seem right to be able to define God into existence. However, saying what is wrong with it has historically proved rather more difficult.

There are several criticisms that can be made of the ontological argument. In the first instance, does the notion of a "greatest possible being" make sense? Just as we wouldn't speak of the greatest possible morning or the greatest possible number, should we define a being in this way? Plantinga refined the argument in a way that hopes to avoid this issue by calling God "maximally excellent", meaning He has all the traditional attributes, like being all-knowing and all-powerful.

    Another approach is to challenge P2 and say that existence is not a quality that should make up "greatness". This was the line taken by Kant when he claimed that existence is not a predicate; that is, it does not tell us anything about an object or entity, but only that it is or is not. If we compare, say, two coins (as Kant did), one of which exists and one that does not, is anything added to the concept of the coin in the one case and not the other? In recent times, some thinkers have answered in the affirmative, saying that the existing coin has the property of purchasing power, while those non-existing conceptual coins do not. Whether we buy this argument or not is another thing.

    To return to the idea that we shouldn't be able to define something into existence, in Anselm's own time another version of his argument was offered by Gaunilo, a monk who used it to show that a greatest possible island must exist. The point of his criticism was to say that if the ontological argument could be used to prove the existence of God from His definition then we could do likewise for anything. Anselm responded that islands are contingent and hence do not have necessary existence as part of their definition, unlike God. In general, the objection is that while we might be able to go from a concept of what we imagine to exist to a concept of what actually exists, we cannot go from the former to saying what really does exist. Others challenge this by saying that we can say something about the non-existence of concepts like square circles or married bachelors, so why should we discount the possibility that we can also speak of the existence of a concept like God from the definition itself?

    The Cosmological argument

    According to Plato in his dialogue the Timaeus,

    This is the idea behind the cosmological argument, which infers the existence of God from the apparent fact that the universe and the phenomena in it exist when it seems that they need not do so (hence the question that occurs to many people, philosophers or not: "why is there something rather than nothing?"). The argument was later formulated in different ways by Aristotle, Aquinas and Leibniz, the idea being to note that the universe cannot account for its own existence—so it is claimed—and thus a cause is sought outside of it to explain the brute fact of existence.

    St. Thomas Aquinas used varying forms of the cosmological argument in three of his famous "Five Ways", these being proofs of the existence of God in his work Summa Theologica. The first runs as follows:

    P1: Everything that moves is moved by something else;
    P2: An infinite regress (that is, going back through a chain of movers forever) is impossible;
    C: Therefore, there must exist a first mover (i.e. God).
    The second proceeds in a similar fashion:

    P1: Every effect must have a cause;
    P2: An infinite regress (as before) is impossible;
    C: Therefore, there must be a first cause (i.e. God).
These two seem much the same, but the slight distinction is that the first focuses on motion (things being moved by other things), while the second concerns causation (the agents that bring effects about).

Several criticisms have been made of Aquinas' assumptions, as we would expect given the length of time since he first proposed them. As we saw in our fifth, tenth and thirteenth discussions, philosophers have challenged the idea that events are linked in a "chain" from one to the next, each resting, as it were, on those below. Another telling objection is to ask why there could not be more than one first cause or mover. Why could the chain not lead back to several ultimate causes, each somehow outside the universe? Not only that, but these two arguments could just as easily lead to two different Gods.

    The other argument Aquinas offered runs thus:

    P1: Contingent beings exist;
    P2: If a contingent being exists then a necessary being must also exist;
    C: Therefore, a necessary being exists (i.e. God).
We discussed necessary and contingent beings above, but the idea here is that if everything in the universe were contingent then there must have been some time when there were no contingent beings at all. In that case, how could the universe have come into being, since contingent beings would require a cause? This means that there must be some necessary being, which we take to be God.

    The problem again is that this third argument might be taken to imply another God, different from the other two. Others object that matter or energy are not contingent (although still others question this assumption), or that the contingency could run backwards in time as far as we like and "end" in the future.

Leibniz reformulated the cosmological argument in terms of the principle of sufficient reason. According to this principle, every fact or truth must have a sufficient reason to explain it. As we touched on above, the universe seems to fail to account for its own existence, with no sufficient reason within it, so Leibniz inferred that there must be a God to do so. In opposition to this it has been argued that the existence of the universe is just a brute fact, not in need of any explanation—it just is. Both Hume and Russell complained about the move from every event having a cause to the claim that the collection of events has a single cause. On the other hand, if we ask "why?" of individual events then why not of the universe, too?

    Another form is called the Kalam cosmological argument after the school of Islamic philosophy of the same name. In its basic form it claims that since the universe came to exist at some time, it follows that it must have a cause for its existence. That cause, of course, is God. However, it could instead be that the universe has always existed, either eternally in some form or expanding and contracting as some scientists suggest. Moreover, even if God did "start" the universe, the argument doesn't say He needs to have continued to exist.

    As with the ontological argument, the cosmological argument does not appear to be intended to convince non-theists that they should become theists but instead suggests the existence of God as a possibility, or an explanation of the brute fact of the existence of the universe. How convincing it is depends, apart from the opinions we might hold of the content of the argument, on whether we feel this fact is in need of explanation or not.

    The Teleological argument

    This argument points to the existence of purpose and order in the universe and supposes that if we see signs of design then there must have been a designer. Indeed, the word "teleology" comes from the Greek telos, meaning "purpose", "goal", or "end". Sometimes it is called the argument from design, or more properly the argument for design.

Perhaps the most famous version of the argument is due to William Paley, who argued by analogy. Imagine finding a watch abandoned on a deserted island, say. We examine the watch and its workings, and from the fact that it appears to be designed with a purpose in mind we infer that it must have had a designer. In particular, even if we were not familiar with watches at all, the complicated structure and the way in which the parts worked together to achieve a specific function indicate that it could not have come about by chance. Although it is often supposed that he intended his argument to convert non-theists, in fact it seems from his own testimony that he wanted to clarify the issue for those who already believed.

    The idea behind this argument by analogy is that effects that are analogous have analogous causes. That means that when we see evidence of design in the watch and in the universe and reason that the two circumstances are analogous, the fact that we infer a designer for the watch leads us to analogously infer a designer for the universe. Hume was critical of this approach, saying that we know that man-made structures were designed because we have seen them being built or heard about it. How can we be sure that the analogy holds? Moreover, why should the similar effects (that is, the appearance of being designed) not follow from different causes?

    Another objection made forcefully by Hume was that certain events in the world, such as natural disasters, would—if we follow the analogy—suggest that God didn't do a very good job of designing the universe. Indeed, if a watchmaker offered us similar workmanship, he suggested, we would reject it. In more recent times further scientific studies have made this complaint still more powerful, with many areas of the human body and natural world alike seeming to be very badly designed, if we want to maintain that they were designed at all. The success of evolutionary theory has also provided an alternative explanation as to where the order we see has come from, with the caveat that there is apparently no need to invoke purposive behaviour to account for it. This is not necessarily an objection against design, however, since many theists now suggest that evolution is the means used by God to achieve His goals.

    With developments in science continually suggesting new angles from which to view the argument, as well as refinements that point to the amount of beauty in the universe as opposed to just design, the teleological argument rumbles on, and its force perhaps once again depends on the perspective from which it is viewed. Some feel that the purported design can be explained in other ways, while others consider it not a proof of God's existence but again suggestive of its likelihood, explaining a quality of the universe that they see around them.

    The Religious Experience argument

    Perhaps the most interesting argument for the existence of God comes from the fact that very many people have experiences they characterise as religious. These tend to have different forms, but there is enough common ground to list a few of them that have been distilled as a result of work by people like William James and David Hay:

    The experience is hard (if not impossible) to describe.
    It is a feeling of oneness with God.
    It can also be a sense of being dependent on God.
    It may sometimes call attention to a painful separation from God.
    It can be experienced anywhere, in everyday situations.
    It can provide insight into otherwise inaccessible truths.
    The experience tends to be transient.
    There are other descriptions, of course, and the experience itself seems to be largely personal. The issue, then, is to explain these religious experiences in a satisfactory way. The religious experience argument, again, does not seek to prove that God exists but instead that it is reasonable to believe that He does because of the direct experience of Him. Moreover, the argument gives a motive for non-believers to also believe unless they can explain the experiences (which they may have for themselves) in another way. Indeed, we could say the argument is an inference to the best explanation:

    P1: People have religious experiences;
    P2: The existence of God explains these experiences;
    C: Therefore, God exists.
    There are several ways we could challenge this argument. Firstly, we could contest P1 and say that the experiences are not religious; rather, they are interpreted that way by religious people and differently by non-religious (or even those of another religion). However, can we find some way to determine what the true experience is supposed to be? It could just as easily be that the interpretations are different (even among believers in the same religion) because they are interpretations.

    Another potential criticism is to admit that many people do have religious experiences but point out that many others do not. The implicit suggestion here is that God would want us all to have such experiences, especially if He wanted us to become believers eventually. In reply, it could be that something like faith is required, particularly since it isn't obvious—either from religious texts or a little thought—that a non-believer should expect to undergo religious experiences with the same frequency as a believer.

    We could also look at P2 and say that there are other explanations for religious experiences. For example, the experiences could be deceptive; but the assumption that everyone testifying to them is an unreliable witness is perhaps less credible than the assumption that they are not. Alternatively, we could try to posit a naturalistic or psychological explanation. Either would need to also account for the sheer number and depth of the religious experiences, however, as well as showing why they are better explanations.

    In summary, the argument from religious experience does not prove existence definitively and depends in good measure on what our prior opinions of such experiences are. Nevertheless, it provides an explanation for a widespread phenomenon.

    The Moral argument

    The general idea behind moral arguments for God is given by Ivan and several other characters in Dostoevsky's masterpiece The Brothers Karamazov as "without God, everything is permitted", or something similar; that is, God provides the basis of moral order. Many people think or feel that such a moral order is—or should be—a fundamental aspect of our universe, not an incidental one that has come about but need not have done so. The idea that without God there would be no moral sanction to stop us doing as we pleased is explored in that work, along with possible rejoinders.

    Moral arguments have been overlooked by many thinkers, partly because of the misunderstanding discussed above according to which they fail to justify the existence of a God with specific attributes, as required by certain religious beliefs. The character of moral arguments is such that what is shown—if the argument is successful—is not "it is likely that God exists" but "it is likely that I ought to believe that God exists".

    One form of the moral argument is to point to the experience we have of there being moral actions, right and wrong. Although not everyone agrees about what is right or wrong, many do accept that these terms have meaning independently of us. Some understand this as implying that there is someone to whom we are responsible for our conduct, or that concepts like guilt only make sense if there is someone by whom our conduct is judged. Can there be moral laws without someone making them?

    The main criticism of this approach is to target the idea of morals existing independently of us. As we saw in our eleventh discussion, many thinkers have questioned this view and we looked at some alternative explanations for the existence of morals and the feeling that some things are right or wrong. To say that no other account of moral responsibility can be given is controversial and fails to justify God's existence.

    Another moral argument is due to Kant and suggested that being moral was a categorical imperative, according to which an action would be moral if we would wish it to be applied universally and immoral if the contrary. Noting that we experience moral obligation and that we desire to bring about the summum bonum or "highest good", Kant argued that ought must imply can: if there is no way that something can be achieved then it makes no sense to say that it ought to be. Since it is beyond our power to ensure that this highest good can be reached, it must be that God exists to make it so.

    This argument has been attacked from several directions, firstly by criticising the move from ought implies can to the actual existence of what can be. Why should something necessarily have to be, just because we decide that it both ought to be and can be? Secondly, why should it have to be God in particular who brings about the highest good? We can also argue against deontological theories, as we saw in our earlier discussion of ethics. Note that Kant's argument was not that no moral order is possible without God, but only that He was required to achieve the summum bonum.


    Miracles

    In the early seventeenth century, the friar Marin Mersenne was keen to advocate the new mechanistic philosophy that was taking hold at that time because he believed that a good account of natural law was necessary in order that miracles—over-riding or influencing what we would ordinarily expect to happen—could occur. Many believers hold that miracles can happen, have happened, and still do; the most important and famous tend to be the resurrection and the parting of the Red Sea. Still others say that miracles are nonsense and do not—or cannot—occur.

    As we have touched on previously in our discussions, not everyone agrees that there are such things as natural laws in the first place. However, even if we suppose that there are then it is still important to understand what a miracle is supposed to be. Often they are understood as violations of natural law, but this formulation is problematic: natural laws, by definition, are intended to account for events with natural causes, so it makes no sense to call an event with a supernatural or non-natural cause a violation of natural law. A better way to understand miracles, perhaps, is as events contrary to natural law. This would mean that an event with a non-natural cause might be noted as an exception to natural law, rather than as an instance that is supposed to refute it.

    The best-known argument against miracles comes from Hume, but it has been subject to much recent philosophical critique, not least because he seems to use the understanding we rejected above of a miracle as a violation of natural law. Here we will try to understand the basics behind his argument.

    The idea here is that we only have experience of natural events and causes. If we were told of a miracle happening and asked for justification for it, any evidence or argument analogous to what we experience every day would lead us to suspect that there might have been a natural cause for the incident, and so it wouldn't be a miracle after all. Indeed, any attempt to justify it by reference to non-natural causes would be to explain the miracle via other miracles.

    Hume gives an example of what he means: if it were told to him that the sky went dark across the earth for six whole days, with travellers and people from other countries testifying that the same event happened everywhere, he would have to believe it. Even though such an event would seem extraordinary, he thought it was sufficiently analogous to similar events (an eclipse, say) that it could reasonably be believed on the basis of so much testimony. On the other hand, he notes that if all historians, with an equal degree of testimony, agreed that Queen Elizabeth the First had died and subsequently risen from the dead, he would not believe it. He reasoned that this latter case could not be seen as analogous to anything else we experience.

    Why accept the one and not the other? As explained above, Hume thought we were never justified in supposing a miracle to have taken place because we experience natural events and hence have to look for natural causes. Even if something apparently supernatural were to occur, we would have to identify it via natural phenomena (for instance, water turning into wine) and hence are constrained by Hume's empiricism to look for natural causes. That is to say, as we only have past experience of natural phenomena to go on, along with testimony from witnesses and evidence, we have to make a decision about the likelihood of a miracle occurring on this basis alone.

    The problem with this, as some thinkers have identified, is that if Hume were to have stood watch over the water at the wedding in Cana in such a way that he could be sure no one had interfered with anything, and moreover that he had no reason to doubt his faculties, he would still have to deny that he had seen a miracle. We can make the same reduction of his argument to absurdity with any other example of an alleged miracle. Just when could it be said that a miracle had occurred?

    What we see from this is that there apparently could be no such thing as a miracle, even when well-attested and where there is no reason to doubt what we are experiencing. In short, we cannot assume that everything occurs due to natural processes and then claim that any exceptions that cannot be dismissed are in fact still natural events that will eventually be explained in natural terms. Indeed, it is almost as though the argument begs the question; that is, assumes what is to be proven in order to prove it. If we take it that all supernatural events are either examples of errors in testimony or our faculties, or—where these cannot be claimed—say that we cannot call these miracles because no supernatural event can be justified on the basis of natural phenomena, then we have defined the supernatural out of existence and miracles with it.

    In summary, philosophers of religion have shown that it is not irrational to believe in miracles, and that it is not impossible that one should happen. To say that a particular event could not have happened because it is contrary to natural law is to assume that there are no such exceptions, but that is what was supposed to be proved. Convincing someone who was not there to see it, however, is another matter.

    The Problem of Evil

    The traditional form of the problem of evil is due to Epicurus.

    In short, why does evil seem to happen if God is both good and capable of stopping it? This is considered by many people the most formidable objection to the existence of God, with some suggesting that it provides an argument for why a benevolent God does not exist. In one form, it amounts to considering the following two propositions logically inconsistent:

    P1: God exists and is omnipotent, omniscient and good.
    P2: Evil exists.
    This is known as the logical problem.

    It is important to point out that the problem of evil is by no means conceded to be prima facie a problem at all. To begin with, there are axiological difficulties: firstly, we note that the claim that God would disallow the existence of the intrinsically evil can only be justified within the context of a moral theory—such as consequentialism, as we discussed in our look at ethics—which may (with good reason) be rejected by a theist. The second complaint, indeed, is that any axiological version of the problem of evil must necessarily be incomplete because it cannot make explicit the move from noting that an evil state of affairs is not prevented to concluding that God has acted morally wrongly. Once again, the standard way to formalise this step is by reference to other ethical ideas that are anything but uncontroversial. The problem, at base, is the assumption of problematic (axiological) concepts such as goodness and desirability.

    To return to the argument, it has been suggested that P2 is not at all obvious. If we understand evil as what ought not to exist, particularly from the perspective of humans, we could ask if it can be said to have meaning distinct from human valuations, or indeed if it makes any sense at all to consider a world without evil as being more perfect than the one God is supposed, by the problem of evil, to be bound to bring about. Aquinas, for instance, argued along these lines.

    Aquinas' point is that it isn't necessarily clear that the world would be more perfect in the absence of evil; in fact, many of the concepts we might like to claim for a perfect world—such as justice, kindness or fairness—only have the prestige we attach to them because we imagine that other circumstances could have replaced them at each observed instance.

    Another remark on evil that should be made concerns the so-called Unknown Purpose Defence, which notes that although Dostoevsky's Ivan Karamazov could declare it absurd that the salvation of the world should cost the life of one young girl, human (epistemological) limitations might not permit us to guess the motivations of God, especially if, as some argue, He cannot be known directly, as we touched on above. Indeed, these thinkers suggest that the situation we find ourselves in—not knowing why evil should exist—is precisely that which we would expect to be in, given theism. Rowe proposed restricted standard theism as a counter-argument, in which all we say is that God has the properties defined in P1 above. However, this does not seem to refer to God as most people understand Him.

    As a result of these and other difficulties, it is generally conceded by philosophers of religion that the logical problem of evil has been laid to rest.

    Another version of the problem of evil, due to Hume, is called the empirical problem; it claims that the sight of evil should incline a person towards atheism.

    In spite of the (empirical) fact that people do see evil in the world and yet believe in God, sometimes even converting from atheism or another religion, we could set out this argument as follows:

    P1: Evil exists;
    P2: Person x holds no theological beliefs;
    C: Therefore, x will be an atheist.
    That is, the sight of the many evils of the world would lead a person to think, "Well, a God who is good and all-powerful cannot exist." We could object to this by saying that instead it might be that the apparent senselessness of some evil might force a person to seek an explanation for it, which might be God. Indeed, that would seem to be why a significant proportion of people believe, at least in part (as we saw in our discussion of the moral argument above). It seems that what we want to say is:

    P3: Persons holding no theological beliefs will be inclined by the existence of evil to adopt atheism.

    Unfortunately this assumes what is to be proven.

    Another approach to the problem is called the probabilistic argument from evil and is taken to be a positive argument for the non-existence of God. According to this argument, going back to our original propositions again, P2 counts as evidence against P1. In criticising this idea, Plantinga noted that the meaning of this claim depends on the probabilistic theory we hold to, the soundness of which is a question for the philosophy of mathematics. Each of the alternatives has difficulties associated with it, and so we cannot charitably assume them valid if we are also going to hold it against the moral argument that not everyone agrees that morality exists independently of us.
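
    The probabilistic claim can be given a concrete, if toy, illustration. Below is a minimal Bayesian sketch in Python; all the numbers are illustrative assumptions rather than values anyone in the debate has established, and the point is only to show what "P2 counts as evidence against P1" means formally: if evil is judged less likely given theism than given its negation, then conditionalising on evil lowers the probability of theism.

    ```python
    # Toy Bayesian sketch of "evil counts as evidence against theism".
    # All probabilities below are illustrative assumptions, not agreed values.

    def posterior(prior_h, p_e_given_h, p_e_given_not_h):
        """Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)."""
        p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
        return p_e_given_h * prior_h / p_e

    prior = 0.5               # agnostic prior for P1 (assumption)
    p_evil_if_theism = 0.2    # evil judged unlikely given P1 (assumption)
    p_evil_if_not = 0.9      # evil judged likely otherwise (assumption)

    post = posterior(prior, p_evil_if_theism, p_evil_if_not)
    print(round(post, 3))     # → 0.182, below the prior of 0.5
    ```

    This also makes Plantinga's complaint vivid: the conclusion is hostage to the likelihoods fed in, and it is precisely over those that the parties disagree.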

    A different way to address the problem of evil is to present a defence of God, called a theodicy. This is to accept that evil exists and that God is both good and able to remove evil, but to seek to explain why He does not. A well-known example is the free-will defence, according to which it was not possible for God to create a world with good but no evil because good could not exist without freedom, much as Aquinas suggested above. One form of the free-will defence might be put thus:

    P1: God's purposes for the universe require humans to have free will;
    P2: Humans with free will may act in an evil manner;
    P3: Evil exists;
    C: Therefore, God is not responsible for evil.
    In criticising P1, some argue either that the concept of free will is itself incoherent (which we considered in our twelfth discussion) or that God could just as easily have made the world such that we freely choose to be good all the time. Counter-arguments reply that we come back to Aquinas again: in such a world there could be no virtue, fairness or compassion, for these qualities exist only in contrast to their absence. Since these things are what we consider to be the very best human traits, it follows that this world would be no utopia at all.

    Another criticism seeks to strengthen P3 by saying that although we may accept that some evil is necessary to contrast with the good, there is still a disproportionate amount of it, especially if we point to the horrific wars and genocides of recent times. To many people, this seems a decisive point: why would God need millions of people to be killed at a time? Although it is hard to see why it should be any better that a single child should be murdered for the sake of everyone else, as Ivan Karamazov objected, other thinkers respond that we simply have no basis for comparison and hypothetical speculation can hardly be expected to settle the issue satisfactorily.

    Still another argument in this area concerns animals: given that God is good and omnipotent, why does He allow the suffering of animals? Free will is not an issue here, since it is generally assumed that animals do not have it. Since this is a deeply problematic area for many people, responses have again suggested that the purpose of such suffering may be unknown or that most of it occurs when we remove animals from their natural surroundings. Alternatively, it could be that we have the free will to try to do something about it.

    In summary, some formulations of the problem of evil are stronger than others and the difficulties it poses depend at least in part on the perspective we adopt towards evil and whether we view it as a decisive objection to the existence of God or something to weigh against the other arguments for and against.

    (... continued in part 2...)
    Teaser Paragraph: Publish Date: 06/14/2005 Article Image:

    By /index.php?/user/4-hugo-holbling/">Paul Newall (2005)

    The philosophy of mind has been a hot topic for several thousand years and over that time almost every philosopher has had something to say about it, for better or worse. The central issues it is concerned with are ones that most of us think about from time to time, even if we don't always use the same terminology. In this article we'll try to see why the subject has held such a fascination for thinkers over the years and what we can learn from their efforts.

    The mind/body problem

    It seems clear enough what we mean by a body: we see it, we understand it and we take out life insurance for the day it gives up on us. Whether this notion of body represents our own, or one we might prefer to have thanks to aggressive advertising, we have a general conception of what we mean when we talk about it. What is it that we call mind, though? We say things like "it's on my mind", "I've half a mind to", along with countless other examples, and are traditionally talking about somewhere that thinking goes on, together with deciding, musing, writing bad poetry on Valentine's day, and so on—the place where consciousness, the intellect and other assorted characters are supposed to reside. Descartes noted that if he cut off his foot, his mind did not seem to be affected. If we lopped off our heads instead, would we still have a mind? On either answer, we can still ask where it went as the axe fell—even in the absence of volunteers.

    The mind/body problem, in one of its aspects, concerns the relation between the two. Some people have thought that the mind and body are one and the same, the mind being just one aspect of the body and located in or identical to the brain (excepting those instances when our bodies seem to be governed by our stomachs or other regions): these are called monists (i.e. mind and body are one). On the other hand, some consider that they must be separate, either wholly or significantly, with the mind not being equivalent to the brain: these are called dualists (i.e. there are two things at work). These definitions are very basic, though, since we could ask "one (or two) kind(s) of what?" We'll look at some of the possible responses when we come to study both in more detail below.

    In addition to wondering how mind and body are related, there is the question of the influence of mind on how we observe our world. Is there a world at all, independently of our perceiving it? How much does mind shape what we see? How do we know that our memories reflect what really happened? Pain is another problematic issue, and not just for doctors or rugby players: if a hypochondriac says he or she is in pain, how can we know if they are or not? If we can find no problem with their body, does it follow that there is no pain? How is it that some people appear to be able to make themselves ill, especially around the time of examinations, and how is it that tough decisions can make people ill when there appears to be nothing at all wrong with their bodies? What about the problem of other minds? Can we ever know what other people are thinking, or how it feels to be them? Later we'll also come to the matter of changing our mind about something and ask how much choice we have in it, or if instead it is determined by circumstances beyond our control (much as we discussed in our previous look at the issue of free will). All of these are aspects of the same problem, hence the attention paid by philosophers today and throughout our history.


    Monism

    As we said above, monism (from monas, a Greek word meaning "one") tries to respond to the mind/body problem by saying that the two are not distinct after all. This is all very well, but that could mean that in fact there is only body, as we often suppose, or that there is only mind. The consequences of the two are quite different and there are several understandings of each, so we'll consider some examples.


    Physicalism

    To state it in plain terms, physicalism is the idea that everything is physical. This is not to deny that there are other aspects to our world, like morals and bad jokes, but only that, ultimately, these are physical (for the former, perhaps the result of our evolution, as we discussed in the eleventh essay). In the past, physicalism was identified with materialism, but it became difficult to call certain supposed physical features of the world "material" (like the force binding particles in a nucleus together). Physicalism is a metaphysical notion, although it is often associated with the so-called scientific approach.

    It's clear both that physicalism is an example of monism and that it provides a suggestion for how to approach the mind/body problem: if everything is physical, then it is probably in physical theories that we'll find the answers. However, there are several different forms of physicalism that approach this issue in different ways. In the first place, we have type/token physicalism, and the best way to understand the distinction is via an example. Consider these three terms:

    Auckland Blues, Auckland Blues, Auckland Blues

    In that line there are three references but only one thing referred to; or, in the words of every supporter, "there's only one Auckland Blues". We call each occurrence of "Auckland Blues" a token, of which there are three, but only one type of thing is mentioned. More generally, a token is a particular physical object, process or occurrence, whereas a type is a kind or property of which tokens are instances; token physicalism identifies each mental occurrence with some physical occurrence, while type physicalism identifies each mental property with a physical property.

    Another example will help us understand the difference in usage: if we take a rugby ball and point at it, we could say that the ball is a token, physically identical with what we mean when we say "give this ball a kick". On the other hand, what physical token can the Auckland Blues be identical with? We could say the stadium played at, but the club is more than that. If we add the players, we don't quite have it; likewise for the supporters. If instead we understand it as a type, then we don't have quite the same problem of trying to find a single physical thing with which it is matched. The distinction is important because, as we said above, if we want to say that everything is physical then we need to explain what that means; type and token physicalism are two possibilities.
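
    The token/type count in the Auckland Blues example can be sketched in a couple of lines of Python (a toy illustration of the distinction only, not a claim about physicalism itself):

    ```python
    # Three tokens of the phrase, but only one type: each list entry is a
    # separate occurrence (token), while the set collapses them to the
    # distinct kinds (types).
    tokens = ["Auckland Blues", "Auckland Blues", "Auckland Blues"]
    types = set(tokens)

    print(len(tokens))  # → 3 tokens
    print(len(types))   # → 1 type
    ```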

    A difficult concept in physicalism is the notion of supervenience. Consider two pictures on a computer screen: both are composed of pixels, and if they are different in any way then we know that they must differ somewhere in terms of the pixels that make them up. We say, then, that the pictures supervene on the pixels; the higher-level picture is a consequence of the arrangement of the pixels, but not the same. Somehow the levels are different: if we zoom in to see what makes up the picture, we then lose sight of what the picture was of, just as if we get too close to a painting to observe the brushstrokes we can no longer take in the whole scene. Thus the painting depends on the brushstrokes, but is not identical with them—it supervenes on them.
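
    The pixel example amounts to a simple conditional: any difference at the picture level requires some difference at the pixel level. Here is a minimal sketch, with tiny made-up 2x2 grids standing in for pictures:

    ```python
    # Supervenience in miniature: no higher-level (picture) difference
    # without a lower-level (pixel) difference.

    def same_pixels(grid_a, grid_b):
        # Identical pixel grids cannot yield different pictures.
        return grid_a == grid_b

    picture_a = [[0, 0], [0, 1]]
    picture_b = [[0, 0], [0, 1]]  # pixel-identical to picture_a
    picture_c = [[0, 0], [1, 1]]  # differs from picture_a in one pixel

    print(same_pixels(picture_a, picture_b))  # → True: same picture
    print(same_pixels(picture_a, picture_c))  # → False: pictures can differ
    ```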

    This brings us to reductive and non-reductive physicalism: the former says that every mental concept can be reduced, somehow or other, to a physical concept, while the latter relies on supervenience. Instead of trying to reduce the mental to the physical, we can say that the mental supervenes on the physical.

    An interesting problem for physicalism is Hempel's dilemma, in which we ask what physical means. If we want to define the term via contemporary physics, then it would appear that physicalism is straightforwardly false, since physics today is incomplete and very few people would claim it gives us the whole truth. On the other hand, if we instead try to define it by reference to what physics may become, some time in the near or distant future, then are we saying anything at all? No-one knows what form physics might take in the future and the history of science doesn't give us much confidence in saying what will be retained from what we have today.

    Identity theory

    This theory is easy to understand: it states that mental states and brain states are identical. Thus, if we feel a pain somewhere then this is tantamount to saying that the appropriate activity is going on in the brain; likewise, feeling love for someone is just the same as a certain brain state. This is an attractive proposition for those arranging blind dates or for dentists, but critics have asked what brain state is identical to the experience of a colour, say. If we experience a grey sky, does it mean the brain state is grey also? That hardly makes sense. Furthermore, other animals can experience the same grey sky but their brains are not identical to ours. Which brain state is the experience identical to?


    Functionalism

    A popular branch of the philosophy of mind is functionalism, in which the question is less "what is the mind?" (i.e. what kind of thing?) and more "what does it do?" Another way to make the same inquiry is to wonder what the function of the mind is, and to distinguish it from the body by saying that this function is different from those performed by the body.

    Consider, for instance, a bridge. Many different things can serve as a bridge, from the complex structure connecting downtown Auckland to the North Shore to a series of planks laid across stones that will get us from one side of a stream to the other. What's common here is the function: a bridge is defined by what it does, not its shape, design or what it's made of (although we may recognise common traits)—we can identify a bridge because of what it's used for.

    According to functionalism, we can think of the mind—or mental states—in this way while avoiding the earlier criticism of identity theory; indeed, functionalism was originally suggested to solve such difficulties. Functionalists claim, then, that mental states can be identified with the function they have on behaviour. Instead of worrying about what a mental state is (i.e. what it's composed of, or where it is), we call it mental because of what it does.
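
    The bridge analogy has a rough parallel in duck typing, where code treats anything that plays the right role as the right kind of thing. The sketch below is only an illustration of the functionalist idea, not something functionalists themselves say about programming, and the class names are invented:

    ```python
    # Duck-typing sketch of functionalism: a "bridge" is whatever performs
    # the bridge role, regardless of its material or structure.

    class HarbourBridge:
        def cross(self):
            return "crossed on steel and concrete"

    class PlankOverStream:
        def cross(self):
            return "crossed on wooden planks"

    def get_to_other_side(bridge):
        # Only the function matters here, not what the object is made of.
        return bridge.cross()

    for b in (HarbourBridge(), PlankOverStream()):
        print(get_to_other_side(b))
    ```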


    Eliminativism

    To understand eliminativism (often called Eliminative Materialism) it is useful to compare it to what the philosophers Paul and Patricia Churchland have called folk psychology: in the same way we call the collection of remedies built up by tradition and old wives' tales "folk medicine", we mean by folk psychology that similar group of pronouncements we make on questions of psychology to explain why people behave the way they do. When trying to give such explanations we often refer to factors like people's hopes, fears, upbringing, influences, problems at work, their local team having lost again, the price of petrol going up, and so on, all adding up to an elaborate system by which we describe what we suppose Carlos Spencer will do next, or other matters of lesser import.

    Eliminativists claim that folk psychology is hopelessly flawed and will eventually be replaced (eliminated) by an alternative, usually taken to be neuroscience (the study of the brain and nervous system). Eliminative materialism dates back to the 1960s and perhaps earlier, with Paul Feyerabend arguing via analogy with the history of science and Quine suggesting a physicalist approach, but the Churchlands have been its modern champions. To better understand the issue here, let's take an example.

    In seeking to explain why Hugo failed to arrive on time for an appointment and was instead seen in the company of a young lady, we could say that he has been known in the past to forget completely what he was thinking about or in the process of doing if a pretty girl passes by, and moreover that his chat-up lines consist entirely in philosophical witticisms that only he finds amusing; as a result, he missed the appointment because he was chasing after the unfortunate girl who was running away as quickly as possible.

    Now this theory may be wrong (Hugo could have met up with a cousin who needed help more urgently), but it is a theory nonetheless. Advocates of folk psychology claim that such theories function much like those in the sciences; after all, they explain behaviour, can be falsified and tested, and offer predictions—even novel ones. Thus the factors we described above, like hopes, fears and susceptibility to the fairer sex, are simply the mental states we use in such theories, even if they perhaps cannot actually be observed and so claimed to really exist in that way.

    By contrast, eliminativists might agree that folk psychology "works" to a certain satisfaction, but they claim it will be (gradually or otherwise) replaced. For instance, the existence of malevolent spirits was invoked to explain some mental disorders in the past, but now we usually say that this account has given way to psychological and other explanations. Thus we generally note that malevolent spirits turned out not to be real after all. In a similar way, notes the eliminativist, the folk psychologist's theories will give way soon enough because mental states do not exist.

    Eliminativists typically argue that folk psychology is untenable for one reason or another. Some have suggested that it is stagnant, or a "degenerative research programme" in our earlier terminology (cf. our sixth discussion), but others reply that this is no reason to assume it false or even hopeless. Another objection is that it fails to account for things like dreams, memories, some mental illnesses and consciousness (see below), but the rejoinder is that technical concerns about its completeness don't outweigh the fact that people use folk psychology all the time and it is successful. Indeed, a counter-criticism made against eliminativism is that it ignores just how successful folk psychology is.

    Idealism
    A form of monism that differs significantly from all those we've seen so far is idealism, in which it is supposed that instead of all mental concepts being actually physical, in whatever way, in fact the converse holds: only minds and mental concepts exist, with the physical being explained in terms of the mental. The most famous idealist was perhaps Bishop Berkeley, but there have been many people who were or are idealists and some suggest that if we took a headcount over history it would probably come out as by far the most popular theory in the philosophy of mind. That proves nothing, of course.

    Idealism solves the mind-body problem with ease: there is only the mental, so the problem of the interaction between mind and body is not a problem at all. Many of the counter-arguments advanced against idealism failed, and one of the interesting rejoinders that Berkeley provided was to note that idealism was at least as parsimonious (cf. our sixth discussion) as physicalism, saying that anything that could be explained on the assumption of the physical could just as easily be explained by reference to mental concepts only. Many people object to idealism on the grounds that it doesn't feel right, but—quite simply—it does feel right to lots of others, so this is not much of a complaint. Idealism is the subject of much study today.

    Criticisms of monism

    There are many critiques of monism, or physicalism in particular, including those we have already noted incidentally. However, there are two that are referred to often and so we'll look at them in more detail.

    The knowledge argument

    There are several forms of this famous argument, but the most common—due to Jackson—involves a scientist called Mary. For some reason, Mary has spent her life trapped in a room in which the only colours are black and white. She has access to television, computers, and so on, but the monitors are all also black and white. As a result, she has never seen another colour. She is able to get all the information there is via her computer and has thus studied the eye, light, what happens when light of different wavelengths arrives at the retina, what happens when we speak, and so on, so that when she says "the sky is blue" she has all possible information about what it means to say that.

    One day Mary is released from her room and she actually sees that the sky is blue (or some other colour, if she is unlucky and lives somewhere it always rains). According to the knowledge argument, Mary thus learns something new, namely what it's like to see the colour blue (indeed, we can relate this to the discussion of qualia below). Thus we have:

    Mary had all the information about the physical before she was released
    Mary learned some new information after she was released
    We conclude that not all information can be physical, and hence we have—seemingly—a strong objection to physicalism. There are plenty of other ways we could understand this problem: for instance, someone could know everything there is to know about rugby, including the rules, tactics, details of all past games, the physics of how the ball or human players could perform in all possible conditions, but they wouldn't know what it's like to play the game for themselves. The general form of the argument, at its strongest, is thus:

    P1: A person x has all physical information about y before release;
    C1: Therefore, x knows all physical facts about y prior to release;
    P2: There are new facts about y that x learns on release;
    C2: Therefore, x did not know all facts about y before release;
    C3: Therefore, there are facts about y that are non-physical.
    Some thinkers deny that new facts are learned on release. Although this is done in various ways, one basic idea is that it is possible to infer what it would be like to experience y from the information x has. Others deny C2 by saying that although new knowledge is gained, it is actually composed of old facts—precisely those that x already knew. Nevertheless, the interpretation of this argument and the objections to it are still keenly debated and now highly technical.

    Qualia
    A term used often by philosophers of mind is qualia (from the same root as "quality"), by which we mean the introspective character of an event; that is, what it is like to have an experience of something, whether it is a pain, bad poetry for Valentine's Day or a hospital pass. Since not many people deny that there are qualia, whatever they might be ultimately, we'll look at what mental states can be said to have qualia and what the nature of qualia might be, if anything.

    What states can be said to have qualia? We could consider a list of indicative examples:

    Seeing a blue sky
    Hearing a loud noise
    Cutting a finger on a knife
    Feeling tired
    Falling in love
    Feeling bored
    Smelling a rose
    What, then, are these qualia? Are they physical or non-physical, reducible or non-reducible? Some philosophers suggest that qualia are not new information; they can be derived from the physical facts we already have (as we saw with Mary above). A problem with this idea, though, is given by a thought experiment involving a zombie (whether or not zombies actually exist is not the issue here): when we look at a blue sky, we might feel happy at the nice weather, disappointed that the garden won't get the rain it needs, or any number of other experiences. An identical zombie may do likewise but has no experiences, even though it is the same in all other respects physically. How can the physicalist explain what is going on without supposing that qualia are non-physical? However, if qualia aren't physical then what are they?

    Dualism
    As we noted, and in contrast with monists, dualists suggest that mind and body are not the same. The idea dates back to Plato and we find it wherever the soul is distinguished from the body; more modern versions tend to originate with Descartes. There are several forms of dualism, though, so we'll begin by looking at the ways in which it can be stated before moving on to more specific issues.

    Forms of dualism

    Dualism is commonly divided into three forms:

    Predicate dualism
    Property dualism
    Substance dualism
    When we said that dualism involved two "things", we put off saying what kind of things we meant. The different forms taken by dualism fill in this information in different ways. Predicate dualism, to begin with, is the claim that more than one predicate is required to make sense of the world. A "predicate" in logic is what we say about the subject of a proposition (see the fourth and tenth parts in our series); thus "Hugo is boring" has "boring" as a predicate. Can the (psychological) experience of being bored be reduced to a physical predicate, such as one explaining it in terms of brain states, say? If not, we have predicate dualism.

    There are plenty of candidates for predicates that cannot be so reduced, like almost all psychological experiences (as we saw with the knowledge argument above), suggesting dualism. We could try, for example, to consider how we feel about learning philosophy at this very moment, and wonder if a description in physical terms could capture it. To many people, it seems unlikely.

    Property dualism is stronger, asserting that whatever there is in the world, it must have more than one property (such as the property of being physical, say). Perhaps, for instance, there really is only the physical; nevertheless, we may still be unable to account for the properties of what we find in purely physical terms. As before, the troublesome areas are psychological—especially the question of consciousness (see below).

    Stronger again is substance dualism. Substances are intended to be those things—whatever they are—that have properties. The mind, then, is perhaps not just thoughts, emotions and mental states, but that which has them. If psychological properties are non-physical, does that mean the mind experiencing them must be, too? If we suppose that it is, then we have substance dualism. On the other hand, if we think that all this goes on in the brain, then we don't—the substance would be the same, even if we still think the properties are dual.

    Bearing in mind these possibilities, we'll now consider the main problem for dualism and some attempts to avoid it.

    Interactionism
    If mind and body are separate, how is it that they interact? Most people would agree that there is some form of interaction: thinking in a negative way apparently influences the way we behave, while an experience in the world can change the way we think. How is it, then, that two separate or distinct things like mind and body, either as properties or substances, can interact as they seem to? If the body is physical but the mind is something else, where do they meet?

    Descartes thought that the pineal gland was the answer. In more recent times, philosophical issues with causation have come to the fore: how can the mind cause a change in the physical? If we suppose that there are laws of physics (a problematic issue in itself, as we saw previously), then we know that energy is conserved therein. If something outside of the physical brings about a change, then, what does this mean for the assumption of conservation? If physical laws are closed, as it seems, then surely interference from outside would contradict this? Some interpretations of the quantum theory have made this situation even more complex, but for now we'll look at a few of the suggestions that claim to escape the difficulties of interactionism.

    Epiphenomenalism
    The denial of interaction from the mental to the physical is called epiphenomenalism, in which it is supposed that although the physical can influence the mental (like the Auckland Blues in full flow giving rise to fear), it doesn't work the other way around; this avoids the worry about physical systems not being closed, but does it really help?

    The first objection is to point out that it certainly seems as though the mental can affect the physical: what about depression causing us to loll around in bed and write still more bad poetry, or fear of snakes stopping us from becoming intrepid archaeologists? Moreover, why would the mental have come about at all, if it does nothing? Lastly, what of the possibility of explaining how we act by reference to our mental states? To say "I know it didn't rhyme, but in my defence I was feeling upset about the row we had" might invite our valentine to reply with "causation in that direction is disallowed by epiphenomenalism, my sweet."

    Occasionalism
    Some of Descartes' followers, like Malebranche and Geulincx, agreed that the posited interaction was impossible, so that on every occasion in which it seemed to occur the intervention of God was required to explain it—hence occasionalism. Whatever our views on religious questions, it seems hard to believe that every instance of interaction should be credited to a miracle, so this idea has lost much credence. Nevertheless, some historians of science believe it may help to explain why science arose in those cultures that rejected such constant intervention, since the search for laws in nature is somewhat confounded by God interfering all the time, whereas a belief that He set everything in motion and subsequently left it all alone could encourage us to wonder what rails it is running on.

    Parallelism
    A solution adopted by Leibniz but of little use outside a theological perspective is parallelism, according to which the mental and physical don't interact at all, running—so to speak—in parallel. Obviously we could ask why it seems as though they do interact, but the only possible answers are that some kind of pre-established harmony between the two makes it look that way, or that it's just a coincidence that the world appears the way it does. The problem with both is that they don't seem to explain anything or give us a chance to learn anything else.

    Criticisms of dualism

    As we mentioned above, the main criticism of any dualist theory is to ask how the mind and body are supposed to interact if they are distinct. Another problem is to explain the apparent unity of the mind; that is, it seems as though the mind is a unity, but how does that come about if it is a collection of properties? Alternatively, if it is a unity then what substance explains this?

    Other issues

    There are myriad other aspects related to the philosophy of mind that we may consider briefly here.

    The explanatory gap

    We have a fair idea what goes on in the physical world, even if it is incomplete, and we also have a fair idea what it is like to have experiences. Nevertheless, there seems to still be quite a distance between the two that is called the explanatory gap, and some claim it can never be bridged. This is a tricky problem for philosophers because it isn't yet clear how this gap comes about, if it must remain a chasm or what it means for our theories of mind.

    Consciousness
    This term derives from the Latin conscius, meaning to "know something with others", and is understood in different ways. To try to appreciate what we mean by it, we can consider how we use it: when we say, for instance, "I was conscious that I wasn't paying attention", we refer to a kind of self-awareness—almost a kind of catching ourselves doing something other than what it appears we are doing. Sometimes we experience this kind of situation while driving: suddenly we realise that we weren't concentrating on the road because we were perhaps thinking intently about something else, almost driving on autopilot. Then we might say we weren't conscious of the driving, that we became conscious of this, and so on.

    What does it mean to be conscious, then? We could say it is to be aware of our own mental states. In that case, what kinds of thing can be conscious? Can animals, for example, be conscious or self-aware? What about computers, either now or in future? How is it that consciousness seems to be a unity (like mind, above), so that we are conscious of lots of things at once? To take but one of these issues, we'll ask if computers can become conscious as so many science fiction stories presuppose.

    Artificial intelligence

    The idea that computers can become intelligent or possess intelligence has been the subject of much research, especially with the famous victories over chess masters. Does this mean that the computer actually understands chess, though, and that it demonstrates intelligence, or is it just following a program unthinkingly, without any possibility of self-awareness? Some philosophers and scientists have thought that as technology advances computers will become as intelligent as humans, perhaps more so, or at any rate that there is no objection to this possibility in principle. Others have wondered if the kind of understanding a computer could have must differ fundamentally from what we call intelligence or becoming conscious.

    The mathematician Alan Turing proposed a test for whether or not we might say a computer can be said to be thinking, in which we would ask questions of x and y, housed in another room. x would be a person while y is a machine, or vice versa, and the object of the game would be to use our questions to try to determine which is which. If we cannot tell the difference, can we say that the computer is genuinely intelligent? Are we, in fact, just biological machines that can think, just as computers will eventually be able to?
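Turing's imitation game can be caricatured in a few lines of code (an illustrative toy of my own, not Turing's formulation—the questions and replies are invented): the interrogator sees only text, so any discrimination between the hidden players must rest on the answers alone.

```python
# Toy sketch of the imitation game: two hidden respondents answer over a
# text-only channel; the judge sees nothing but the transcripts.

CANNED = {
    "Do you like rugby?": "Of course.",
    "What is 7 x 8?": "56, I think.",
}

def human(question):
    return CANNED.get(question, "Hmm, hard to say.")

def machine(question):
    # A machine that imitates the human's replies exactly.
    return CANNED.get(question, "Hmm, hard to say.")

def interrogate(respondent_a, respondent_b, questions):
    """Return the two transcripts; these strings are all the judge gets."""
    return ([respondent_a(q) for q in questions],
            [respondent_b(q) for q in questions])

questions = ["Do you like rugby?", "What is 7 x 8?", "Write me a poem."]
a, b = interrogate(human, machine, questions)
# If the transcripts are indistinguishable, the judge can do no better
# than chance—Turing's criterion for saying the machine "thinks".
print(a == b)  # True
```

Of course, real conversation is open-ended where this table is not; the sketch only shows the shape of the test, not a machine that would pass it.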

    An objection to the possibility that machines could think lies in the requirement of some philosophers that the machine should be able to do things like writing a bad poem on Valentine's Day because of falling in love; until this kind of thing can be done, they say, machines cannot be said to think. Turing replied that he would be satisfied if we merely had as much reason to suppose the machine to be thinking as we have in supposing other people to think. Another problem is to say that thinking machines ought to be able to do many of the things we can, like have a sense of humour, make mistakes or get angry, but are we right to expect a thinking machine to think the same things as we do?

    One philosopher opposed to this idea of artificial intelligence is John Searle, who proposed another famous argument known as the Chinese Room argument. According to this thought experiment, a person knowing no Chinese is working inside a sealed room with Chinese symbols and an instruction manual for using them. As Chinese messages are passed in, the person follows the instructions and passes more Chinese symbols out. Unknown to this person, the messages coming in are questions and those going out are answers to them. Thus, says Searle, the person appears to understand Chinese by Turing's test but in fact understands no Chinese at all. Searle concludes that, in a similar fashion, a computer does not possess intelligence merely because of its ability to use and manipulate programs and data.
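The set-up can be mimicked with a simple lookup table (my own sketch, not Searle's; the "Chinese" here is romanised placeholder text): the operator matches the shape of the incoming string against a rule book and emits the paired output, with no access to meanings at any point.

```python
# A toy Chinese Room: pure symbol manipulation. The operator looks up
# the incoming string and returns whatever the rule book pairs with it,
# understanding none of it. Entries are romanised placeholders.

RULE_BOOK = {
    "ni hao ma?": "wo hen hao.",        # ("How are you?" -> "I am fine.")
    "ni xihuan zhexue ma?": "xihuan.",  # ("Do you like philosophy?" -> "Yes.")
}

def room_operator(message):
    """Match the symbol string, emit the paired string—syntax only."""
    return RULE_BOOK.get(message, "qing zai shuo yibian.")  # ("Please repeat.")

# From outside, the answers look competent...
print(room_operator("ni hao ma?"))  # wo hen hao.
# ...but nothing in the lookup involves what any string means, which is
# Searle's point: running the program is not the same as understanding.
```

The systems reply discussed below amounts to asking whether the understanding belongs not to `room_operator` but to the whole arrangement of rule book, room and operator together.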

    Strictly speaking, what Searle was actually hoping to do was provide an argument against so-called "strong AI", which is the idea that computers may genuinely understand languages and have other capacities that we humans have. He didn't deny that machines can think, since he considers our brains to be machines that think just fine. His thought experiment was designed to show that running a program does not equate to understanding.

    There have been many replies to Searle's argument, but one of the main criticisms is to say that the person in the room functions just like the CPU of a larger computer; in that case, although the person may not understand Chinese, the system does. After all, this system, comprising the room and everything in it, takes questions in Chinese and answers in Chinese. Doesn't that mean that the system understands Chinese after all? Searle replied that the person could memorise all the instructions and symbols, using them even outside the room, but still not understand any of it. Several philosophers have found this unconvincing. Another objection is to ask how we know that anyone understands Chinese, if not by asking them questions in that language and getting sensible replies back. In that case, the person would understand Chinese as well as anyone else.

    As with the other issues we've discussed already, this argument, criticisms of it and rejoinders to them are still very much the subject of contention—like many areas of the philosophy of mind.

    Intentionality
    The philosopher Franz Brentano revived this term from its medieval origin (the root is intendere, meaning to be aimed at a goal or purpose) to call attention to what he felt was a distinctive characteristic of mental states. He noted that such states are always about something; for instance:

    I am mad about injuries to key Blues players
    I am in love with rugby
    I am hopeful that next year will be better
    ... and so on. Without the additional information, these don't make much sense; after all, if someone said "I'm mad" and responded to the question "what about?" with "I'm just mad", we would probably be forced to leave the conversation by offering our congratulations and going on our way. According to Brentano, then, all mental states are characterised by being intentional. What this means for the philosophy of mind was covered in his work and that of several others influenced by his ideas.

    Mind what you say about mind

    To summarise our discussion, then, we have seen that there are many aspects to the philosophy of mind and many approaches to follow in tackling it, all of which have a certain plausibility on the surface but which present interesting problems when we probe deeper. Since there are complex philosophical issues involved and important questions to be answered that have a relevance to all of us, it seems this area will continue to be the focus of considerable work and argument. Whether the last person you spoke to was a zombie, an android, a rugby player or a Chinese speaker is perhaps something to bear in mind.

    Dialogue the Eleventh

    The Scene: Trystyn and Anna have met for coffee. They are seated across from one another.

    Anna: The thing that strikes me as a result of yesterday's farce is that you can never really know what someone else is thinking or feeling. You can guess, or just stumble on ahead without worrying much, but people can get hurt as a result.

    Trystyn: How can you know they get hurt, then?

    Anna: Don't try to be clever, Trystyn.

    Trystyn: (He sighs.) I'm not. The point is that if you can know that they get hurt, however you manage that, then you can probably make a fair stab at it the rest of the time.

    Anna: How?

    Trystyn: How do we do anything like this? We observe people, use what we know about their character, their ideas and influences, their mood, and so on. It all leads to a picture of them from which we make predictions or take explanations.

    Anna: So all these things correspond to a mental concept, like sadness?

    Trystyn: Not exactly, but we can infer that the person is sad. Usually we say they are sad about something, though, not that they're in some "state of sadness".

    Anna: Maybe, but people are wrong sometimes, or they misjudge.

    Trystyn: There are alternatives.

    Anna: Like brain states?

    Trystyn: So they say. Being sad corresponds to some physical state of the brain, the firing of neurons and so on. When we say, "I'm sad about what happened with so-and-so", we're saying no more than that at a certain time our brain is in a certain state.

    Anna: That doesn't tell us anything, though, about how to deal with anyone.

    Trystyn: Well, perhaps they will in the future, with more research, but this is why people are so reluctant to give up the tried and tested old ways: they work.

    (A long silence. Anna seems reluctant to say something.)

    Anna: So what does it mean to say that you're in love with someone, or attracted to them?

    Trystyn: What do you think? (He is avoiding looking directly at her.)

    Anna: A biological urge, I suppose—or so Steven would probably say if he was here.

    Trystyn: (He looks up suddenly.) I wouldn't be so sure.

    Anna: (Not really listening...) It seems to me that there's more to it; that it somehow misses too much, or fails to capture what it feels like. It's as though even if you were able to state the position of all particles in the universe, the laws governing them, the processes that occur physiologically, the biological origins, and so on, you still wouldn't know what it's like to fall for someone...

    Trystyn: ... or fall out of a tree. (Anna laughs.)

    Anna: Exactly. So there's information missing somehow.

    Trystyn: Maybe not. What if you were actually using the information you already had and just seeing it in a different way or context? Kind of like saying, "ah, so this is how it all fits together." That way you'd have learned something new, in one way of thinking about it, but you'd only have used the facts you already had.

    Anna: Don't you think that it would truly be new information? That the whole is greater than the sum of its parts? Something would still be missing.

    Trystyn: You mean the way the light changes when you're in the room?

    Anna: What?

    Trystyn: It's from a song.

    Anna: Oh.

    (Silence. Trystyn looks at Anna, but she looks away. He smiles. She looks back; he looks away. Silence again. She frowns.)

    What are you thinking?

    Curtain. Fin.
    Publish Date: 06/13/2005
    By Paul Newall (2005)

    Krzysztof Kieślowski's Three Colours trilogy is a monumental work that blends cinema, philosophy and music in a seamless whole. Its sheer depth poses a host of interpretational difficulties, but this paper seeks to unravel a few of the interwoven themes that form it.

    Perhaps the most philosophical of directors (and writers, with his long-time collaborator Krzysztof Piesiewicz), Kieślowski produced work whose meanings are elusive and not easy to pin down. He claimed that "knowing is not my business - not knowing is", and this is a sense we find throughout his creations: a lack of answers to questions that are explored rather than resolved. Although he had plans to work on a further trilogy (Heaven, since completed by Tom Tykwer, Hell and Purgatory), his death in 1996 meant that the Three Colours was his final gesture.

    Blue
    When Juliette Binoche first met Kieślowski, they discussed philosophy. This was a recurring trend for him, with Irène Jacob's "audition" for Red consisting solely in a philosophical conversation over coffee. Binoche would turn down Jurassic Park to be Kieślowski's Julie, remarking that she "would rather play a dinosaur than one of those characters". Reckoned by many to be the finest actress of her generation, she understood that Kieślowski was interested in details and prepared for her role accordingly. Asking to wear her own clothes on the principle that being familiar with costumes is necessary in order to forget them, she studied and was influenced by Anny Duperey's memoir Le Voile noir, which tells of the death of Duperey's parents when she was very young. Displaying no visible signs of bereavement, Duperey wrote that she had "suffered enough without having to show it as well."

    In Blue Binoche is Julie de Courcy, a woman who loses her composer husband and their daughter Anna in a car crash at the opening of the movie. Fleeing her old life and her lover Olivier, she tries to start over, taking an apartment in a working class area of Paris.

    There are several instances of close-ups in Blue, particularly the focus on Julie holding a sugar cube to let her coffee soak into it. Kieślowski was explicit on the importance of these passages:

    Kieślowski's intention here was to show Julie concentrating on tiny details "in order to be able to discard other things". He spoke of sending an assistant on a long search for the right sugar cube (one which would dissolve in five seconds – no more, no less), based on his conviction that the viewer would be patient enough to wait for just this long and understand the implication: that she "watches the sugar cube dissolve into the coffee in order to reject an offer she has just received from a man who loves her".

    The opening act in Kieślowski's trilogy is ostensibly concerned with liberté, the first of the ideals of the French Revolution. The subtlety in Blue that can easily be missed, however, is that the process Julie goes through is exactly the reverse of what is superficially occurring. Speaking of the part, Binoche said that "when you've lost everything, life is nothing"; but what we see made plain throughout the movie is that she has not lost everything. Olivier still loves her unconditionally and although she tries to remove all trace of her past, selling the family's belongings and eating the blue lollies that remind her of Anna, she keeps the mobile from the blue room and puts it up in her new apartment. Even after Lucille touches it and Julie recoils so slightly as to be almost imperceptible, a quite beautiful gesture on the part of an actress having achieved total mastery of her craft, it stays as a perpetual reminder of what she has lost. This is straightforwardly inexplicable in a woman who supposedly views memories as traps and seeks freedom from them, but it immediately makes sense if we understand her behaviour in a similar way to that of Patrick Bateman in American Psycho.

    Thus when we watch Julie trying to block out the music that reminds her of the past by curling up in a foetal position in the swimming pool, fingers in her ears (Binoche's idea), and yet returning to an apartment with the blue mobile, just as she destroys her husband's final composition even as she keeps on a scrap of paper in her handbag the motif that would tie it all together, we realise something: she is not free of all ties because life has lost its meaning, but instead wants to be free in order that life – and the past – will have no meaning. She is hiding from her pain inside a false liberty.

    During the meeting with Olivier, Julie recognises the music played on a recorder by the busker outside as her husband's. When she asks him where he heard it, he replies that he makes up all sorts of things. This is an instance of a theory of Kieślowski's that "different people, in different places, are thinking the same thing but for different reasons". With regard to music in particular, he held what might be characterised as a Platonic view according to which notes pre-exist and are picked out and assembled by people. That these can accord with one another is a sign of what connects people, or so he believed. Indeed, music played a vital role in all of Kieślowski's work, his relationship with Zbigniew Preisner being a unique one wherein the latter (a self-taught composer and graduate of philosophy) wrote the score before the movie, fitting the story to the music rather than the other way around (Kieślowski described it as "the film [being] an illustration of the music").

    Julie's mother has Alzheimer's and represents the ultimate end of any attempt to be free of memories, being unable to recall most details of her life. We notice, however, that when Julie discovers her rat infestation she goes to her mother to ask if she was afraid of rodents as a child. This scene again draws our attention to the inconsistency – or tension – between Julie's apparent desire to forget her past while at the same time needing it to make sense of her present. That she would turn to the one person who really is losing the power of retention is not so much ironic as tragic, demonstrating the absurdity of her predicament.

    There are four instances of the fade to blue accompanied by de Courcey's motif for the Concert for European Unity: at the hospital on the visit of the journalist; on the stairs when Julie locks herself out of her apartment; in the swimming pool; and when she learns of Patrice's affair and is asked by Olivier "what do you want to do?" When we realise that these notes are those left by her husband to complete the concert, his last work, they become symbolic of the ties that remain even though she has tried to make a clean break. Only when she finally accepts that she cannot run away from the love that endures does she cry, letting go and beginning again with hope – truly free at last.


    The second feature in the trilogy, White is concerned with equality. Many commentators have viewed it as the weakest of the films, passing over it in their haste to get to Red and describing it only as a black comedy. There is no doubt that it is funny, but unfortunately this misses the key points that Kieślowski was stressing and also its importance in making sense of the whole. Indeed, it is no exaggeration to say that White adds another dimension without which the overall effect would be considerably lessened.

    White is the story of Karol Karol, a Polish hairdresser (named by Kieślowski as a tribute to Charlie Chaplin) living in Paris and married to Dominique Vidal, a French coiffeuse. She leaves him because of his failure to consummate their union. The movie begins with a number of humiliations, as Karol loses his marriage, his finances and his dignity through the divorce hearing and his (now ex-) wife immediately taking a lover. On the steps of the courtroom Karol is the target of a pigeon with a good aim, so that moments after taking a small pleasure in the bird's flight he is "humiliated through his naivety in the face of nature", as Kieślowski put it. This sets the scene for what is to follow, a point on which Kieślowski insisted:

    In a continuation of the motif from Blue, Karol grins at an old man struggling to use a bottle bank, taking a perverse pleasure in someone apparently being worse off than him. However, he salvages something from his humiliation, demanding the return of his two franc piece (which he subsequently keeps with him) and finding beauty in the bust he notices in a shop window and carefully restores in Poland.

    There are two instances of foreshadowing in White. In the opening credits we watch as a bulky suitcase makes its way around an airport carousel, without knowing what this means as yet. Only later do we understand that it contained Karol, on his way home by being smuggled as part of Mikolaj's luggage. Later Karol spins his two franc piece as we watch Dominique enter a hotel room, looking exhausted. This, we discover, is where she will meet the "reborn" Karol after his funeral.

    The subplot involving Mikolaj, a Pole whom Karol meets in the Paris Métro and who helps him get back to Poland, is itself one of rebirth or resurrection. Wanting to kill himself but being unable to do so, Mikolaj offers Karol money to help him. At the fateful moment, Karol shoots him in the chest with a blank before telling him that the next one is real and asking "are you sure?" Mikolaj is not, and the two men abandon the plan, running onto a frozen lake like children instead. For Mikolaj now, "everything is possible".

    The remainder of the movie follows Karol's rise to prosperity in a Poland opened up to capitalism. Reaching a point of financial security, he is able to fake his own death having first changed his will to make it seem that his ex-wife was involved in the handsome settlement she receives, resulting in her being jailed. Karol is thus able to erase his humiliation and attain equality with Dominique, but it comes at a cost of its own that neither can afford.

    Indeed, Kieślowski's real point in White is that the maxim "these days, you can buy anything" is false: love cannot be bought and cannot be described in terms of equality. Moreover, it is much more than consummation, which is hinted at by Dominique early in the film when she tells Karol "you don't understand that I want you. You don't understand that I need you." Only at his funeral does Karol realise that something is wrong with his attempts to achieve parity when he observes that Dominique is genuinely upset. Delpy herself noted that the result of his machinations was that "both characters are locked up in their own prisons – his because people think he is dead. They still love each other, though, and hence there is hope."

    There is thus in White a critique of viewing equality in economic terms – or as a matter of power – as both Karol and Dominique come to appreciate that neither has dominion over love, just as Julie learned that she could not free herself from it in Blue.


    The final movie he completed before his death, Red is acknowledged as Kieślowski's masterpiece. Its meaning is so elusive that overinterpretation is a constant danger. A good example is the name of the heroine, Valentine, which has been the subject of much speculation. Kieślowski recounted, however, that he had simply asked Irène Jacob what she would like to be called, which she responded to by selecting her favourite name as a child. Jacob herself noted that Kieślowski was "distrustful of any message, of a moral", but this is not to imply that interpretation is unlimited.

    The most obvious aspect of the film is its daring use of colour. In contrast to the muted greys of Genève, anything of any significance is saturated with or marked in red – from the ribbon on Valentine's telephone to Auguste's jeep. When Valentine stops at a traffic light in front of an empty billboard framed in red, then, moments before Auguste crosses the same road and drops his books (the elastic having broken), we are aware that some kind of foreshadowing is taking place.

    The subject of Red is fraternity, at least on the surface, but Kieślowski himself stated that "the essential question the film asked is: is it possible to repair a mistake which was committed somewhere high above?" The meaning of this apparently cryptic allusion becomes clearer as the movie progresses, particularly if we pay attention to the many pointers scattered throughout. When we notice the camera lingering over a picture of a ballet dancer in Auguste's apartment, for instance, and then watch Valentine struggling to hold the very same pose at her class shortly thereafter, the room dripping in reds, we expect to find a connection between them. Nevertheless, they seem to keep missing each other, such as when Auguste moves to the window when Valentine's car alarm is sounding, only for his girlfriend to appear and distract his attention.

    When she knocks down his dog Rita, Valentine finds herself at the home of Joseph Kern, a retired judge who appears only under this description in the credits. Ostensibly indifferent to Rita's accident, he dismisses Valentine and yet comes to the window to look at her again. He then sends her money to pay the veterinarian's bill – far too much, as it happens, and apparently quite deliberately. Although this scene passes quickly and our attention is focused on Valentine's winning on the fruit machine – red cherries – and her understanding it as relating to her brother’s appearance in the newspaper, the question neither asked nor answered is how the judge was able to post any form of payment without knowing who Valentine is or where she lives…

    In any case, the plain implication is that Kern paid too much in order to draw Valentine to him again, and revisit she does. He lacks the correct change for her when she gives back the overpayment, so he disappears inside and fails to come back. Valentine, of course, follows her curiosity and seeks him out, discovering him eavesdropping on his neighbours. Before we come to this strand, however, we notice that the judge points to the thirty francs rather than passing them to Valentine. When she picks them up, we see that they were resting on what seems to be a picture. This is actually a record sleeve, the artist being van den Budenmayer. (The composer was a fictional creation of Kieślowski's and Preisner's, used in La Double Vie de Véronique, although it was taken seriously by some music critics.) Again the camera lingers and we have another example of foreshadowing: later we see Valentine listening to a CD which we recognise by the cover as van den Budenmayer, her interest having been piqued by seeing it at Kern's house. Next to her are Auguste and his girlfriend Karin who buy the last copy, as though the judge had arranged for their paths to cross again.

    For Kieślowski, the dialogue involving Valentine and the judge was between "experience which can know disappointment and youth which has yet to face it". Faced with her disgust at his behaviour, Kern challenges Valentine to fix the problems he listens in on and asks her if she acts to help others or instead just to make herself feel better. Rising to the bait, she visits the home of one neighbour only to find the daughter already listening in on her father's conversation with his lover while his wife is cheerfully oblivious. Much as we learned from many of the shorter pieces in the Decalogue, situations like these are too complex for easy answers. No such attempt to help is involved when Valentine learns that another resident controls much of the Genevan heroin trade and wishes death upon him, her brother firmly in her thoughts.

    Some commentators have suggested that the judge is – or represents – God. He asks Valentine to stay, telling her "the light is beautiful" just as he is bathed in it. They listen to Auguste's conversation in which he tosses a coin to decide between bowling or studying penal code and Kern does likewise beforehand, resulting in tails and still another lingering close-up. Somehow we know that Auguste will obtain the same result, and – more foreshadowing – that is where Valentine will end up, too. Again and again we find Auguste and Valentine passing one another without meeting, as though the judge is contriving to make it happen. Indeed, he tells her that Auguste has not yet met the right woman; and when she asks how he knows this, he replies "I watch them from my window". This in itself engenders the realisation that much of the movie involves glass, usually as a barrier between a spectator such as Kern and the world outside.

    The second discussion between Valentine and Kern takes place after he has turned himself in. He did this, he says, to see what Valentine would do. He mentions that he had to write to his neighbours using a pencil since the pen he had used all his life would not work, while the previous scene had involved Auguste being given a very similar one as a gift. Even so, he remarks that Valentine may have been very close to Auguste when she went bowling, which piques her interest. She notices that he seems happy about the couple breaking up and demands to know if he provoked it; and indirectly, of course, he did.

    It is at this point that we witness the development in both characters. Valentine wants to help her brother but has come to realise that there are no easy answers. When she asks the judge if there is anything she can do, he replies "be". Asking for clarification, he repeats himself: "just that: be". Kern then admits that he had made judgements in the past that he now believes to have been wrong, acquitting a guilty man – a sailor – who had since led a good life. Valentine tells him that he had therefore been saved, but Kern wonders about how many others he might have judged differently, even others who were guilty. "Deciding what is true and what isn’t", he says, "now seems to me a lack of modesty." This passage is key to understanding the trilogy and Kieślowski’s oeuvre as a whole: all judgements are too soon and everyone can be saved by the smallest of gestures. This lesson applies particularly to Kern, whose liberation from the confines of his objectivity is symbolised by the breaking of the glass we see him trapped behind on many occasions. After the fashion show he places his hand on the window of his car, a gesture she reciprocates which indicates that their connection has transcended the boundary between them.

    Standing at his window, however, Kern states that the difference between him and those he judged is only contingent. In reply, Valentine asks him if there is someone he loves. Jacob herself was clear on the meaning of her character's question:

    Here, of course, we come full circle to the Concert for the Unification of Europe from Blue and its chorus drawn from 1 Corinthians 13.

    The final dialogue occurs after the fashion show, to which Valentine invites the judge. She asks him to tell her again about the dream he described, involving her waking up happy beside someone. When she asks if this is what will happen, he is unequivocal. As though the realisation is unfolding slowly at that moment, she inquires of him, "what else do you know? Who are you?" and states that she feels something important is happening around her. We appreciate what this something is when the judge recounts his experience of visiting the fashion show years earlier and dropping one of his books from his balcony seat when the elastic binding them broke (captured in a beautiful sweeping shot by Piotr Sobocinski), falling to the ground open on a particular page. Just as we had seen for Auguste in the present day, the passage indicated by this "accident" was the one that came up in the subsequent test. As if this were not enough, he then remarks that he had to recharge his car battery.

    What we learn, then, is that Auguste is somehow living the judge's life over again, with more than a hint of implication that Kern is directing it. There have been numerous chances for Valentine and Auguste to meet without doing so, but this time the judge will ensure it happens. The details are identical, down to Kern's description of his lover, her betrayal and following the couple across the English Channel. He calls Valentine the woman he never met and explains that his last judgement was on a case involving Hugo Holbling, the man who had taken his only love from him. This was his last act, taking early retirement. There follows the joining of hands and Valentine's noticing an old lady struggling at a bottle bank, the same motif we have found in each element of the trilogy. She helps her, completing the cycle and saving the world in a moment.

    It remains only to note the breathtaking close of Red, in which the threads of all the movies are drawn together by the tragedy of the ferry sinking in a storm that also claims a yacht – one we understand to contain Hugo Holbling from the pictures he had shown Karin and the closing of her weather service in order to travel across the Channel. The only survivors are Julie and Olivier from Blue, Karol and Dominique from White (these pairings indicating that both couples remain together and also, by the use of his name, that Karol has given up his pretence of being dead), Valentine and Auguste from Red, along with a barman. (There is no mention of this character, Steven Killian, at any point in the trilogy, which may imply a final lesson from Kieślowski against overinterpretation, leaving a detail that cannot be explained.) This is the moment at which Valentine is captured in exactly the pose of her billboard advertisement, a stroke of genius by Sobocinski that can only be experienced since words fail to convey the sheer power of the shot. The camera then cuts to the judge gazing through a broken window, smiling quietly. Kieślowski has answered his own question, the mistake rewritten and absolved by the love that never fails.

    Finis vitae sed non amoris


    1 Corinthians 13

    Though I speak with the tongues of men and of angels, but have not love, I have become sounding brass or a clanging cymbal.
    And though I have the gift of prophecy, and understand all mysteries and all knowledge, and though I have all faith, so that I could remove mountains, but have not love, I am nothing.
    And though I bestow all my goods to feed the poor, and though I give my body to be burned, but have not love, it profits me nothing.
    Love suffers long and is kind; love does not envy; love does not parade itself, is not puffed up;
    Does not behave rudely, does not seek its own, is not provoked, thinks no evil;

    Does not rejoice in iniquity, but rejoices in the truth;
    Bears all things, believes all things, hopes all things, endures all things.
    Love never fails. But whether there are prophecies, they will fail; whether there are tongues, they will cease; whether there is knowledge, it will vanish away.
    For we know in part and we prophesy in part.
    But when that which is perfect has come, then that which is in part will be done away.
    When I was a child, I spoke as a child, I understood as a child, I thought as a child; but when I became a man, I put away childish things.
    For now we see in a mirror, dimly, but then face to face. Now I know in part, but then I shall know just as I also am known.
    And now abide faith, hope, love, these three; but the greatest of these is love.
    Publish Date: 06/13/2005
    By Paul Newall (2005)

    Bret Easton Ellis’s masterpiece American Psycho is typically described as a satire ("a black-hearted satire on the terrible power of money" said Jenny Turner in the Scotsman) and, in particular, a savage indictment of a (Western) society caught in the iron grip of commercialism, greed and superficiality.

    Ellis' decision to quote Talking Heads ("And as things fell apart / nobody paid much attention") on the inlay before the story begins would seem, on the face of it, to set us up for just such a reading. At the same point, Ellis also excerpts Dostoevsky's Notes from Underground:

    We are thus led, as it were, to viewing the work from its outset as a commentary on a society gone wrong, in which the protagonist is perhaps incidental to the purpose at hand.

    American Psycho is also, as is well known, a brutal read, as hard on the stomach as it is on the mind. The numbing detail of chapters cataloguing the recording exploits of Genesis, Whitney Houston and Huey Lewis and The News, combined with incessant information as to which combination of designer labels the characters are dressed in, are upstaged only by the detached and almost itemised descriptions of the many episodes of violence, murder and sexual abuse throughout the text. Ellis spent a considerable amount of time researching these, trying to understand exactly how much punishment a human could take without expiring. This has lent weight to the assumption that his purpose was bloodshed for the sake of it, or to hammer home the lesson of what will surely come of a culture built upon corruption and apathy. In this essay I shall offer an interpretation that goes beyond these basic observations to find a deeper, more philosophical and more romantic story hidden behind the easy option of mere satire.

    All that happens in American Psycho is sandwiched between two related remarks, one scrawled in graffiti and the other a neon sign in a cinema. They are, respectively, "ABANDON ALL HOPE YE WHO ENTER HERE" and "THIS IS NOT AN EXIT", both capitalised. Having noticed the former, the novel begins in the third person as the central character, Patrick Bateman, describes a colleague, Timothy Price, before shifting into the first person where, with a few (and significant) exceptions, he will stay. Price is one of three crucial people who will shock Bateman out of the reasoning he has constructed around himself, trapping him and thus leaving him without an exit.

    The thesis ostensibly explored in American Psycho is that examined at great length by Dostoevsky in The Brothers Karamazov. Although stated in various (contentious) ways, for our purposes it may be given as "if life is pointless then anything is permitted". Godless or otherwise, a society that has lost its moral rudder makes the existence of psychopaths like Bateman almost an inevitability, or so it is implied. What I want to suggest is that this proposition is precisely backwards: it is not that life is pointless and therefore Bateman does evil, but instead that he does evil to prove (to himself) that life is pointless.

    This counterintuitive reading is difficult to appreciate at first but becomes more readily apparent as we compare Bateman's differing reactions to the situations and characters he meets. With his friends, workmates, acquaintances, girls or those people he comes across from day to day at the gym or serving his drinks at restaurants, he is on a kind of autopilot: detached, uninvolved and noting what goes on largely as a spectator. Witness the laconic way in which he tells a girl, Daisy, that he has hurt people before and may do so again with her; or the mechanical discourse on world and US politics at Evelyn's. This is the empty life he has fashioned, comfortable because it is predictable. Nevertheless, the story we read is on one level an extended test as he attempts, with increasing risk, to show that “nobody pa[ys] much attention” to his inhumane behaviour. This, he feels, demonstrates that people just do not care. Perhaps the most memorable occasions are Killing Child at Zoo ("I feel empty, hardly here at all […] and I walked away, my hands soaked with blood, uncaught") and Chase Manhattan. The latter, in particular, involves a change in the narrative as Bateman moves from his usual first-person telling to "Patrick shoots him in the face" and then back again, when "calm is eventually restored". What hope can there be for anyone in such a world?

    The problem for Bateman is that he is trapped in this thinking, with each instance of not being called to account convincing him still further that no act can have any meaning. That is the point. At dinner with Jean, he attempts to set this out in detail in a monologue:

    While it is an easy matter to point to passages like this as indicative of an emptiness in Bateman, he himself contradicts all of it moments later, only to fight against it:

    In spite of the effort he makes to turn this feeling away and reject it, the chapter ends rather differently:

    Earlier, at the previous dinner, Bateman had imagined "running around Central Park on a cool spring afternoon with Jean, laughing, holding hands." This is followed immediately by the most important line in the entire book: "We buy balloons, we let them go." This is the "taking pleasure in a look or a feeling or a gesture" that has supposedly never occurred to him, and which is said to achieve nothing. As quickly as he experiences these isolated moments, however, Bateman talks himself out of his optimism and back into the solace of the meaningless, where his failure becomes the norm again.

    His position is thus one of knowing there is a way out but being too afraid to take it. If everything is meaningless, of course, then there is no shame in not letting the balloon go simply for the sake of it. The suspicion that there is something more is what Bateman attempts, over and over, to kill – to remove the doubt that nags at him and asks why Bethany left him, a circumstance that bothers him so much that, typically, he has to murder her to make it go away. It is when things go differently that his confidence and detachment evaporate, whether trying to strangle Luis Carruthers and finding himself immobilised by not having predicted the outcome or genuinely worried that he does not know how much Tim Price makes or where he went when he disappeared down the tunnel. It is easier for Bateman to believe that nothing is of any consequence and to prove it by acting with seeming impunity than it is to face up to his emptiness on the inside and admit that Jean makes him lose control, not knowing what will happen next.

    American Psycho, then, is a story about a man so afraid of the uncertainty in the world around him that he finds solace in an idea; namely, that there is no meaning and no one really cares. This at once renders him no different than anyone else and excuses his failure to take any kind of risk. Never having to worry about making his way in life, he seeks out and destroys meaning wherever he finds or suspects it to be hiding to soothe his worry that he has somehow fallen short. Faced with a friend who takes (non-violent) directions he dare not, a colleague whose sexual orientation he was unable to judge and a secretary who will love him unconditionally, he backs away, unable to cope. This fundamental inadequacy, the certainty – buried far beneath the violence – that he is scared of not knowing what will happen next, is why Bateman is trapped in the sure knowledge that there is no exit external to him to take and why he ends the book by sighing again, crushed under the realisation that he will have to find the answers within himself.

    The book is a tragedy, not a satire.
    Publish Date: 06/12/2005
    By Paul Newall (2005)
    In this article we'll consider the problems associated with free will and determinism, starting by explaining the terms involved, the difficulty (if there is one), and then trying to understand the proposed solutions. The importance of the topic is plain enough: it comes up often, in many contexts, and is one that people can easily understand the relevance of; which is only to say that it isn't just for the philosophers.
    The terms
    What do philosophers and laymen mean when they start worrying about whether we have free will, or what the consequences of determinism must be? We'll begin by making sure we know what is at issue before we worry about the implications.
    The idea of determinism is easy enough to explain in a simple fashion but considerably more complex if we want to use it to make an argument about free will, whatever that is. Initially, then, we'll define it in the common sense fashion: determinism states that the way things will be is a result of how things are and the work of natural laws. That is only to say that if we know exactly how things are at the present moment and the laws that govern how the world (or the universe) works, then we can derive how things will be at some future time. Bearing in mind that we've skirted over some problematic issues that we'll come back to, let's consider some examples.
    Take a simplistic case first. We know that denizens of Auckland are keen on their rugby, and moreover that if their team wins then they are going to be happy. Conversely, if they lose then they are going to be morose in the special way that only rugby fans can be when the finest attacking side on the globe somehow manage to undo themselves yet again. Add to this the fact that Auckland have just won a game, and we can say that Aucklanders will be happy. We use the way things are (that Auckland have won again), together with the law (that Aucklanders are happy if their team wins), to determine that the Aucklanders will be happy again, at least until they throw another cross-field pass under their own sticks for the umpteenth time.
    Now consider something that many people hope eventually to achieve, namely that all the laws of physics, or of nature, have been discovered and understood, along with the (hypothetical) situation wherein we know the position and other characteristics of all fundamental particles (whatever they may be) in the universe. We can then apply the laws to (again) determine how everything that follows for these particles will play out over time. (Whether this is possible in light of certain other theories in physics and elsewhere is not important for the purposes of our example.)
    In general, then, we have an iterative process: we take the state of the world at some time t, a formula (or law) that tells us how to get from t to t+1, and hence we know how the world will be subsequently. In these simple terms, it is little different from figuring out how much money we'll have in our savings account at the end of the year by knowing how much we had and how to work out how it will change over the period of investment.
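    The savings-account analogy can be made concrete in a few lines of code. This is only an illustrative sketch: the function names and the 5% interest rate are invented for the example, and the "law" here is simply the rule for compounding interest once per year.

```python
def step(balance, rate=0.05):
    """One application of the 'law': how the state at time t yields the state at t+1."""
    return balance * (1 + rate)

def state_at(initial, years, rate=0.05):
    """Iterate the law: given the initial state and the rule, every later state is fixed."""
    balance = initial
    for _ in range(years):
        balance = step(balance, rate)
    return balance

# Starting from the same state and applying the same law, the outcome
# is fully determined: there is nothing left for chance to decide.
print(f"{state_at(1000.0, 3):.2f}")
```

Note that the same two ingredients do all the work here as in the philosophical case: a complete description of the present state (the balance) and a deterministic rule for getting from one moment to the next (the interest formula).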
    Now we'll look at tightening this definition. We said that determinism involved several aspects:
    The way things are; 
    Natural laws; 
    The way things will be. 
    Taking the first, what do we mean by "the way things are"? We can say that we are concerned with the state of the entire world, or universe, but why not take only a small portion instead? That would give us, say, determinism for a small region (or even particle) based on knowledge of how it will behave in future—a decidedly less ambitious endeavour. The problem here, though, is restricting the domain in this way: can we even do so at all? Although it may seem plausible initially, there are myriad factors that might influence the area we're looking at, especially if we're talking about laws of nature that are presumed to apply everywhere.
    Let's look at our examples again. If we try to restrict our concerns to Auckland only, we fail to acknowledge that news from elsewhere is coming in, affecting the mood of Aucklanders. We also assumed—quite reasonably—that rugby is the only important thing to Aucklanders (indeed, the entire world), but there may yet be some people who have failed to heed the gospel and want to consider other matters, too, for their sins. It seems that our attempt to determine the mood of Aucklanders is doomed to fail because our restriction was impractical.
    Now take our second example. If we want to consider only a portion of the universe—say the earth, or an area of it—we need to bear in mind that the laws of physics will apply everywhere. If we look at a law like gravity, for instance, we suppose that it applies everywhere, such that the other particles, planets, stars, and so on, in the universe will have an effect, however small, on the earth or the region we've taken. How can we determine the future behaviour of a restricted area like this if we explicitly discount the influence of the other regions of the universe?
    In general, once again, we seem to be forced to take the whole of the universe in order not to miss the impact of whatever we leave out. That means that determinism will have to apply everywhere.
    Looking now at natural laws, we can first say that there are some objections that can be made to the very possibility, which we'll come to later. For now, what do we require of these laws such that determinism makes sense? For one thing, it won't help to have natural laws that only hold for a certain length of time: if Auckland eventually becomes populated by people who don't appreciate their rugby, our attempts to determine their mood following games will be useless. To take a less horrifying prospect, if it so happens that the laws of physics only apply until, say, ten years in the future, at which point dropped items float in the air and tourists are no longer annoying, then our efforts to determine the future behaviour of the universe will be dashed. Plainly, then, we require these laws to hold at all times.
    Secondly, suppose that our observations on Aucklanders only apply to those from Ponsonby; that would render a prediction (based on our determinism) for someone from Parnell useless (we simplify the actual rugby circumstances somewhat here). Similarly, if the laws of physics work well enough on earth and in regions nearby, but behave in a completely different fashion in some far away galaxy (perhaps a region of the universe in which all politicians are honest), then our determinism will fail. The moral of the story here is that we require the laws to hold everywhere.
    To summarise, then, we want the natural laws under consideration to be universal (as befits the first part of our tale), applying everywhere and always. We also want the laws themselves to be deterministic: it wouldn't help much if they weren't, since a well-determined present plus indeterministic laws would make the future state indetermined, too.
    Lastly, consider what we mean by "the way things will be". One thing we could ask is why determinism should function in only one direction. Why not, for instance, require that if we know the laws of nature and the way things are then we can also determine the way things were? Is the past not as fixed as the future under these circumstances? If so, it would seem that taking the "arrow of time" in the direction we're accustomed to is moot: if the past, as far back as we care to consider, is determined, then it seems trivial that what follows will be also. Quite often we find that talk of determinism ignores or minimises this notion.
    This brings us to the question of how long the universe need be determined for: if we take it, say, that the way things are, together with natural laws, determines how the universe will be for perhaps the next five hundred years but no further, it seems that although the universe is not truly deterministic, it is for the time we'll be alive to think about it. It would appear, then, that we don't need to have a deterministic universe forever, although this is generally what we're thinking about when we worry about the implications for free will.
    To conclude our study so far, we have the idea of determinism as being able to give the future state of the universe from its present state and the laws of nature governing it. Note, however, that although we have treated this thus far as an epistemological issue, determinism is very much a metaphysical thesis: determinists hold that fixed laws determine the future state of the universe even if we don't know about it and perhaps can't know. If we knew the required information about the state of the universe and the laws governing it, then we could determine the future state, but that state is still determined in the same way even if we don't.
    Let's now take a look at free will, so we can begin to appreciate the problem that will arise.
    Free will
    Much like determinism, what we mean by free will seems obvious enough (which is usually a good reason to suppose that it isn't). Take some examples:
    If I live in Auckland, I choose whether to support their rugby team or not 
    If I'm interested in philosophy (or not), I choose whether I read this article or not 
    It's up to me what my favourite colour is 
    I choose what to pick from the menu (my finances notwithstanding) 
    We could expand such a list ad nauseam; in general, the decisions we make are ours to make in the first place: we choose according to our will. Obviously there are pressures put upon us (from our peers, our upbringing, our circumstances, and so on), but ultimately the choice is ours.
    Alas, the matter is not so simple. To start with science fiction (or not, depending on our state of paranoia), what if our decisions are influenced by others? We could, for example, be swayed by the wrath of Khan, some kind of government mind control designed to make us vote Republican (an implant of sorts) or else an alien influence. Are the choices we make then still examples of our free will? It hardly seems reasonable to say so. What, though, of the impact of the pressures we considered above? Some people are well able to ignore the advice of their parents and lose their teenage years in a fog of alcohol, as though they're the first and only ones to think of so doing, but various culture-specific (and other) influences are not so easy to escape. Take the following hypothetical list of circumstances:
    Everyone I know supports Auckland 
    Everyone I know thinks that to support anyone else is tantamount to asking to be institutionalised 
    My parents and grandparents support Auckland 
    The few people I know who don't support Auckland are ridiculed endlessly 
    I was brought up to support Auckland 
    If I then decide to support Auckland, was it really an expression of free will? (We could just as well ask the same question if I decided not to.)
    Clearly we've been concerned here with the extent to which a freely willed decision is really ours. It does no good to say that free will is what we have when we choose one option over the others while beyond the influence of such circumstances, because situations like that are few and far between (indeed, we could argue that they don't exist at all). What we want to say, then, seems to be that, insofar as we can, free will is when we are able to choose more than one option and do so by our own volition.
    Consider now, though, not just external influences but those from "within". If I have a sweet tooth and choose candy instead of an apple, did I exercise my free will? What if someone who suffers from kleptomania steals something? (To hint at what will come, was it my fault, or theirs, respectively?) The issue here is whether such decisions can be called examples of free will.
    We can add to these the various desires we often have that are referred to (although not by everyone) as "base", such as opting to watch more television instead of studying for a test, or choosing to eat a forbidden food when on a diet. As we discussed in the ninth article, some thinkers want to discount such choices and consider free will to be what we would decide on if we were in full possession of the facts and, as it were, our own masters. The problem, as before, is to ask if we're ever in such a situation: when can I say my choice was freely made? We can approximate it, but when are we truly free of base desires and influences that would lead us to choose wrongly? More importantly, perhaps, how do we determine in the first place which is the "right" choice?
    Although there are many more angles we could take, to draw these strands together we can look at the definition of free will that Aristotle advanced many years ago:
    ...when the origin of the actions is in him, it is also up to him to do them or not to do them.

    Based on what we've considered above, can it ever be the case that we're in a situation simplistic enough to satisfy this? If not, what does it mean for the notion of free will before we've even reached the problem?
    Regardless of the difficulties we've considered above, we now come to the issue of compatibility. Here we have a fair idea of what's at issue: we talk of couples being incompatible if there's something about them that will make quarrels inevitable, say, or of supporting Auckland being incompatible with supporting North Harbour: that is, the one excludes the other, or the one circumstance must lead to the failure of the other. In the context of this discussion, we have two concepts: free will and determinism. Compatibilism suggests that the two can coexist; incompatibilism that they cannot.
    The problem
    Hopefully the problem should by now have become obvious, assuming it wasn't already known of beforehand: how can we have free will if everything is determined? Conversely, if everything is determined, how can we have free will? Still another version is to start with free will and ask how, then, can determinism be true?
    Often this problem is framed in an ethical context: if the kleptomaniac we considered above steals something, we usually attribute blame and, if we catch him or her, we assign some form of punishment; at the very least, we generally take a dim view of theft. If the future state of the universe is determined by its past and the laws of nature, though, then the kleptomaniac was surely bound to steal; can we then blame him or her for so doing? It hardly seems fair to complain at something that had to happen, besides bemoaning that it did. After all, were we to fall from a tall building and find ourselves plummeting to the ground rather too quickly (assuming, of course, laws such as gravity), we may utter a good many things, mostly remarking on the unfortunate circumstances, but it would scarcely make any sense to declare "how strange that I am falling down instead of floating; who is to blame for this?"
    Another aspect of the problem concerns accomplishment: suppose that someone has tried unsuccessfully to achieve some dream—like playing for Auckland, to take a realistic example. By dint of sheer effort, perpetual practice and—ironically enough—a great deal of determination, they eventually manage it and turn out at Eden Park. Similar stories abound and we hear about them every day in one context or another; usually we agree, readily, that the person is due our praise and congratulations. What, however, of determinism? If their achievement was bound to happen, on account (again) of the past and the inevitable laws of nature, then why pat them on the back at all? Why not, instead, say "so what? You could hardly have done otherwise?"
    This problem is one that cuts against our common beliefs both that crimes should be punished (in some way or other) and that accomplishments (and many other things) should be praised. We can extend it to cover others, especially with a fertile imagination, but one area in which it has traditionally been of great importance is theology, where the question of how much freedom we can have if God is all-powerful and all-knowing has caused much debate. To what extent are we free to sin or not to sin, say, if God already knows what we'll decide and has made it so in the first place? Rather than take this issue specifically, we'll cover it in the general case.
    Arguments for incompatibilism
    The basic form of argument for incompatibilism seems straightforward enough from the above examples, but it's important to note that neither compatibilists nor incompatibilists dispute the fact that we make decisions: the first issue lies in whether we cause our choices to come about in the right way; i.e. if we determine them for ourselves sufficiently (as we discussed above when looking at free will) or not. The second way to look at the matter is (as we also remarked on before) to wonder if determinism takes something away from us—namely, the power to choose one way or another, and hence the issues of moral responsibility that go with it.
    One popular and easily understood argument is a form of the moral problem from before: suppose we've robbed a bank dressed in a particularly bad costume, leading to an embarrassing arrest; at the resulting trial, we're being defended by Lionel Hutz, the famously inept lawyer. Instead of mounting any kind of defence or providing a reasonable plea, he maintains that, as a result of determinism, we had no free will and hence could not have done otherwise. The syllogism here runs as follows:
    P1: We are only guilty of (or responsible for) robbing the bank if we chose to do so; 
    P2: We could not have chosen otherwise, because of determinism; 
    C: Therefore, we are not guilty (or responsible). 
    It's not clear that this is an argument for incompatibilism, since it rather suggests that there can be no such thing as responsibility at all—we could replace the bank robbery with any other action and get the same result—but that is hardly what most incompatibilists want to imply. Moreover, even if we strengthen the syllogism somewhat, we run again into the problem of defining free will: when, exactly, are we ultimately responsible for a choice, given all the influences and factors playing a role?
    Another incompatibilist argument is to wonder about what we could have done in the past. We can rue our decisions where we now think we went wrong, but in the absence of a time machine we can seemingly do very little about it. If the past is fixed, though, and the future is determined from it on the basis of natural laws, then can the future be "open" at all? On the face of it, it would seem not.
    The difficulty here is that causation—a very troublesome and mysterious subject in itself—appears to only run in one direction, from past to future. In that case, there is nothing that we can do to change the past, irrespective of determinism. The choices that we make may end up determining the future, but they can never have an influence on the past. What's going on here is very subtle: the past is closed because causation only works forwards, and for that reason the future is not.
    This may not be convincing, so we can strengthen the incompatibilist position: take the decision to read through this article, and consider the propositions:
    (1) I decided to read this article. 
    (2) I decided not to read this article. 
    Since we're reading, it seems that (1) is true. Now according to determinism, we have the past state of the universe and extant laws of nature to account for why (1) occurred and not (2)—we had no choice but to read. If we wanted to say that we had the possibility of choosing (2), then we would be suggesting that either we could alter the past state of the universe or the laws of nature, both of which seem rather beyond our meagre powers. Does this mean the incompatibilist is right?
    Perhaps not; surely the compatibilist is not suggesting that the possibility of choosing otherwise requires a miracle every time we suppose we've done so? What the compatibilist is instead saying is that if we chose differently then either the past state of the universe or the laws of nature must have been different also. Once again, this is quite subtle, but all we're saying is that we might have done otherwise if the circumstances had not been as they were: we chose to read the article, but if the state of the universe had been slightly different at the time, we might not have. This is not merely a clever way to absolve your narrator of blame for boredom, of course.
    A much easier way to argue for incompatibilism is to show that determinism is false, or that indeterminism must hold. Below we'll remark briefly on the prospects for both by considering the evidence and our physical theories to date. It won't help much, however, to find determinism on shaky ground if the notion of free will is in as much trouble.
    Arguments for compatibilism
    Taking now the other possibility, what of arguments for compatibilism? In our discussion immediately above, we've already seen some of the ways in which compatibilism appeals by answering the incompatibilist's ideas, as we'd expect: if it isn't obvious that the two concepts are incompatible, then we would invite some kind of justification to that effect. That said, the problem does seem to have something to it, even if we aren't yet convinced by the arguments.
    One suggestion was advanced by Hume when he said that some kind of determinism is required if we want to have free will. After all, if we want to be free to decide for ourselves and hence make plans and choices, we expect the same action or cause to lead to the same result or effect each time—otherwise what use is free will if we never know what will come of our decisions?
    Another way to think of compatibilism is to question the assumption of the past being "fixed" in some way, since some results in the sciences seem to cast it into doubt. Some physicists, as well as some so-called eastern philosophies, have suggested instead that determinism may be a relationship wherein every aspect of the universe has an influence on (or determines) every other part, with the links being more like a web than a chain.
    Problems with the problem
    We've already seen that there are a good many difficulties with defining free will in the first place. What other problems are there?
    Natural laws
    Several philosophers are skeptical of the existence of natural laws and have given some very powerful arguments against them, including particularly Bas van Fraassen, John Dupré, and Nancy Cartwright. How, for instance, would we prove that a proposed law was in fact what it claimed to be? Many such "laws" in the past have been found to be mistaken, so maybe the same will happen to what we suggest nowadays—does that follow? Not really (cf. our fifth article), but it should perhaps at least give us caution. If we want to assert that such laws exist, though, we might want to explain how we know this and why they should—a difficult task, to say the least.
    Another aspect to this problem, which some might consider even more important, is saying what these laws are. An obvious candidate would be the laws of physics as we currently understand them, but that isn't entirely helpful. As we noted above, what if they're wrong? Once again, theories in the past that seemed virtually certain have proven to be mistakes, and the theory that many physicists take to be our best yet (the quantum theory) is said to be indeterministic (although that is open to severe critique in itself). If we could say that we're approaching some "final theory" that some people aim or hope for, then perhaps we could base an argument for or against determinism on the form it takes (for example, something similar to the quantum theory), but where does that leave us now?
    Determining determinism
    There is a major epistemological issue with determinism. As we said above, some physicists (and others) interpret the quantum theory to be indeterministic, which, if it turns out to be the final theory or closely related to it, would cast much doubt on determinism. Advocates of determinism respond that this interpretation is doubtful (indeed, we could argue that such a reading is methodologically untenable), and that, even if it wasn't, the universe on a larger scale (i.e. not the microscopic quantum level) behaves deterministically enough. Indeterminists are not convinced. Other theories give support to determinism, but in general we can say that it is as yet far from clear what physics tells us about the deterministic nature of the universe or otherwise.
    Back to the start
    To conclude our discussion, then, the issues of free will and determinism, along with their relationship, are thorny ones. They have vexed philosophers for many thousands of years and involve considerations from other areas of philosophy and science alike. More important, perhaps, is the question of whether you decided to read this article yourself or not.
    Dialogue the Tenth
    The Scene: Trystyn and Steven have met for coffee and are discussing the events of the previous evening.
    Steven: You know, all this has got me thinking.
    Trystyn: How so? You seem to have calmed down a good deal anyway, whatever the reason.
    Steven: Well, it occurred to me this morning that much of that stuff was pretty much bound to happen.
    Trystyn: "Bound" how? Which parts of it?
    Steven: (Takes a sip of coffee before replacing his cup carefully.) Think of it this way: you're a quiet guy, almost all the time. I should've guessed that you wouldn't just blurt anything out, and I shouldn't have expected it. Hell, I doubt you would've, even if you'd wanted to, right? (He doesn't stop for an answer.) Similarly, I always act like an idiot when I first meet someone—I don't stop to think, or to look from a different perspective. It never really occurs to me to wonder about what other people might want, I guess, so I blunder on regardless.
    Trystyn: Even supposing that's all true, why does that mean it was bound to happen?
    Steven: I'm sure I don't know or understand the philosophical ins and outs of it, but it seems to me that most of the decisions we make aren't really ours after all; at least a good part of them are already decided by other factors.
    Trystyn: Like what?
    Steven: Like your upbringing, or your social circle, or the way you behaved as a kid. If you didn't say much when you were little, you probably won't grow up to be the sort of guy who volunteers information; far more likely that you'll only talk when someone asks you something, at least until you get to know them better or become more confident. What's more, it doesn't seem fair to blame, say, you for not telling me something; after all, if I couldn't really have expected anything else, then it doesn't make much sense to get annoyed at you for acting just as I should've supposed you would—in fact, in the way that made you my friend in the first place.
    Trystyn: Similarly, then, I can't really complain at you knocking books out of my hand?
    Steven: Heh, not really. If I'm an idiot then you've just got to get used to me, I suppose. You see my point, though?
    Trystyn: Sure, but how much of it is decided beforehand? Surely we still have some kind of responsibility?
    Steven: Well, that's what I was thinking about this morning, instead of wondering what I'd have for breakfast. If it was all decided for you, I don't see how we could hold anyone accountable for whatever they did, however wrong it may seem. That's the end of the line for jails, or so it seems.
    Trystyn: It'd be the same for whatever they did right, too.
    Steven: Huh? How's that?
    Trystyn: It's just the same thing in reverse: if I can't blame you for screwing up because you had to, neither can I praise you for doing well—you had to do that as well.
    Steven: There goes my dissertation—don't tell my professor.
    Trystyn: We'll keep it between the two of us—we'd have to.
    Steven: Heh—fair enough. I don't see any way around it all, though.
    Trystyn: There are several ways we could try.
    Steven: It figures that you'd say that.
    Trystyn: Of course. The first is to ask whether there really is a problem: if everything is determined, does it follow that we have no choice in any matter?
    Steven: It seems clear enough that we don't.
    Trystyn: All the more reason to suppose that we do, then. (He winks.) If everything is fixed in advance then things had to turn out the way they have; nevertheless, if they'd been fixed in a different way then they'd have turned out differently just the same. When we say that we chose differently, it just means that we could've done differently if the facts had been otherwise, but they weren't.
    Steven: Um... what?
    Trystyn: It took me some time to figure that out myself—just mull it over. If I wasn't so quiet, I might've spoken up; but I'm not, so I didn't. If the circumstances had been different, I'd have chosen differently.
    Trystyn: Another way is to say that either the concept of free will is in trouble, or determinism itself, or both. There are some good arguments for either.
    Steven: In trouble how?
    Trystyn: Take free will: given that "no man is an island"—with there being so many different influences on us, our thoughts, feelings, ideas and behaviour, all the time—can any decision we make really be said to have been a "free" choice, of our own volition? On the other hand, if everything is determined by what came before, how is it that they're determined? You yourself probably know that scientific laws are not so clear-cut...
    Steven: Alas, we're usually wrong.
    Trystyn: Right (he winks again), and some people say the quantum theory is indeterministic while others insist to the contrary.
    Steven: Don't let's start that off again. (He looks around the café...)
    Trystyn: We could just wonder if this purported incompatibility really exists at all. It could be that every action influences and is influenced by every other, making things a whole lot more complicated—and beautiful—than all this talk allows.
    Steven: I want to know if this means you're buying me lunch.
    Trystyn: Only your horoscope knows...
    Curtain. Fin.
    Publish Date: 06/12/2005
    By Paul Newall (2005)

    Philosophy has always been involved in archaeology. This is an argument we will develop by considering the so-called New Archaeology and the debates surrounding it that have taken place over the past thirty years or so.

    Archaeology is generally taken to have become established in the nineteenth century with Jacques Boucher de Perthes' discovery of chipped stones in Somme river gravel quarries, alongside the bones of now-extinct animals. He interpreted the former as human artifacts (such as hand axes) and hence claimed that humans had existed for much longer than Biblical accounts apparently allowed, since the finds appeared far older than anything mentioned in the Bible. Although there was some skepticism to begin with, the antiquity of humanity was accepted soon thereafter. Notice, of course, the inevitable speculative dimensions to this early conclusion: how do we know when we have a genuine artifact and when a mere product of nature? How do we date them?

    One of the reasons for the relative ease with which archaeology took root was its association with two other intellectual currents. Firstly, James Hutton and later Charles Lyell had investigated rock formations and used stratification to support a principle of uniformitarianism, according to which conditions in the past were the same as (hence uniform with) our own, enabling us to infer things about the past from current geological arrangements. It was also argued that the processes that could account for stratification are still in operation today, such that the Earth had to be far more ancient than was otherwise believed.

    Secondly, Charles Darwin's On the Origin of Species was published in 1859 and also implied a lengthy process by which modern humans evolved, giving archaeologists the chance to look for signs of this in the material remains. Evolution hinted, too, that perhaps cultures developed in a fashion similar to plants and animals, with this possibility influencing anthropology and (later) social theory. The confluence of these three strands helped archaeology move from speculation (as it was denigrated by some) to the firmer foundation of the assumption of antiquity and the principles of uniformity and evolution. Moreover, as the world became smaller with travel over long distances the new peoples discovered were beginning to be investigated by curious scholars, especially given the (widespread) assumption that so-called "primitive" cultures could provide us with insights into how our own culture had changed over time. The idea that progress from savagery to civilisation was possible and had occurred in the past again informed social theories and suggested that we could come to understand the way we are today by searching for our origins in the archaeological record. The discoveries made in Egypt by Napoleon's teams and the increasing excavations in Mesopotamia also provided archaeologists with plenty of new data on which to test their ideas, their efforts motivated in large part by Biblical accounts and the Ancient Greek poets.

    If early archaeologists had been content to classify the remains they found and attempt to build a chronology from them, the influence of V. Gordon Childe meant that questions were being asked about why collections of artifacts were located in one place and not another, or what it meant to assume that such a collection implied a group had existed there previously. In short, archaeologists began to realise that a chronological ordering - even if "correct" (whatever that means and however we would determine it) - would tell us nothing about the past unless an interpretive step was added. In 1949 radiocarbon dating (C14) was invented by Willard Libby and suddenly archaeology seemingly had a scientific basis to back up the placing of finds in historical order, although it took some time for the consequences to become clear. Now archaeologists apparently had an independent means to determine the age of a site without needing to resort to written records (that may not exist) or comparisons with other cultures, removing speculation altogether and providing a firmer footing for chronologies. Other scientific techniques were employed, including chemical analyses, and soon the number of methods involved exploded. Unfortunately for this spirit of optimism, however, the surety of radiocarbon dating soon gave way to arguments over its application, validity and the approach as a whole. We will return to this controversy in due course.

    As a result of these scientific aids, some archaeologists claimed that the discipline was no longer plagued by questions of dating and began to be dissatisfied with the conclusions that were drawn by their colleagues. After all, providing a timeframe for a site was one thing but it became more important to explain what had happened. Gordon Willey and Philip Phillips suggested a processual approach, wherein the archaeologist should look at the processes involved in the histories of cultures. This is where a major turning point in the philosophy of archaeology occurred: impressed with the superior epistemological model that science appeared to provide, some sought to set archaeology on a surer footing by reproducing the supposed "scientific method".

    Following the lead of Lewis Binford, several archaeologists in the late 1960s began to argue for what came to be called the New Archaeology. Inspired by developments within the philosophy of science, they wanted to do more than just describe and believed that genuine explanations could be achieved by changing direction in archaeology. In the past, they claimed, archaeologists had made inductive inferences, collecting pieces of evidence and trying to infer conclusions from them. There was (and is) a significant problem with this, however: given that the archaeological record is incomplete, how can inferences be accurate? One possible response is to wait until all the evidence is in, but this is impractical (or rather impossible, if we accept that the record must inevitably be incomplete); another is to give up making inferences at all, yet this leaves archaeology as a solely descriptive enterprise. In general, this was the well known problem of induction in action once again.

    Nevertheless, according to the New Archaeologists the alternative was to adopt the methodology of science and formulate hypotheses, deriving their consequences deductively and - most importantly - testing them. This model, based largely on Hempelian deductivism, was a form of logical positivism. The New Archaeology also included functional approaches, wherein generalisations are made about changes in political, social and economic systems and how these cultural processes can aid in explanation - hence Processual Archaeology, another term often used in discussion.

    Whatever the merits of the New Archaeology, one thing at least was clear: archaeology could never go back to what has since been called its "state of innocence". In order to practice archaeology it would be necessary to question presuppositions that had previously been implicit and reject them if required, as well as to consider the impact that the philosophy of science would have as debates therein changed the image of archaeology that had been crafted to date. This did not mean that philosophy came to archaeology from without, but rather that it had always been involved (although previously with little or no appreciation of the impact of assumptions). As Alison Wylie put it, "[w]hat you find, archaeologically, has everything to do with what you look for, with the questions you ask and the conceptual resources you bring to bear in attempting to answer them." This was a familiar point in science, especially in physics thanks to the philosophical investigations of Einstein and others following the advent of quantum theory, but now it would affect archaeology, too.

    The New Archaeology introduced several currents, including the realisation that perhaps the best way to arrive at explanations would be to study societies of today - hence the advent of ethnoarchaeology, which added an archaeological aspect to ethnography. Even so, no sooner had archaeology seen an influx of positivistic thinking than criticisms in the philosophy of science began to chip away at the influence of the Vienna Circle thinkers and the earlier confidence began to look jaded. The most famous work of this period is probably Thomas S. Kuhn's The Structure of Scientific Revolutions, but others can justifiably be said to have played an even greater role. Then, as now, there were some archaeologists who paid little attention to theoretical debates and believed (along with some physicists and biologists) that it is possible to do science without worrying about the tortured timewasting of philosophers. This seemed especially so for field archaeologists, apparently far removed from debates in academia. We will see, however, that this separation becomes increasingly difficult to sustain.

    Criticisms were made of the New Archaeology from several directions. Firstly, the philosophers of science N.R. Hanson and Paul Feyerabend discussed the supposed objectivity of science, one of its greatest assumed virtues, via the concept of theory-ladenness. They stressed that there can be no neutral observation statements; in archaeological terms, this is to say that data recovered from sites do not form a class of "facts" independent of the observer (or archaeologist) but are already filtered by us. Indeed, this insight was later developed by Paul Churchland in neurological terms such that this inevitable layer (there might be several) of interpretation is what makes our perception cognitive in the first place. The earlier example of Jacques Boucher de Perthes is a case in point: he did not unearth human artifacts as such but some stones that he understood in this way. Through this reading of the ostensibly objective data, they became evidence for the claim that humans had existed for far longer than had previously been thought. There was nothing about the stones themselves that made this conclusion a certainty, however; had he assumed that a natural process could occasion the same results, say, he might scarcely have given them a second glance. Instead, he came upon the stones with many preconceptions, not all of them explicit, including the belief that stones cannot be weathered in this fashion; and so on. Today we might consider this a commonplace but the point is that we do not just arrive at virgin archaeological data but unavoidably bring our other ideas with us, without which we could not make sense of anything in the first place. This seemed catastrophic for the goal of objectivity.

    Secondly, and again via the philosophy of science, Ian Hodder and others emphasised that interpretations of archaeological data are underdetermined; that is, many readings are possible (some would say an infinity of them) for the same set of data and hence our choices between them must rely on extra-empirical factors. This is to say that appealing to the evidence alone is inadequate to account for our decisions (hence the death of the more naive forms of empiricism - see here for more detail) and so we inevitably go beyond it in arguing for our conclusions. Much like theory-ladenness, this may seem obvious with the benefit of hindsight but it is important to remember that at the time some philosophers of science and New Archaeologists with them were contending that the sciences could avoid the arbitrary and in particular solve the demarcation problem, or how to separate science and non-science. There was (and is) no shortage of people claiming that this task can be done, in spite of the difficulties with the criteria proposed. What theory-ladenness and underdetermination did not mean was that archaeology - along with science as a whole - would have to descend into relativism, that poorly understood bugbear and the stuff of rationalist nightmares, even if drawing a boundary between archaeology (and history, too) and mere stories became somewhat troublesome.

    If objectivity was problematic, that did not mean that it could not serve as a rhetorical trope. More importantly, though, its apparent demise led other archaeologists to consider what else had been lying hidden in their discipline, unexamined. This helped bring about what came to be known as postprocessual archaeology, in spite of there being much debate over whether the critiques of the New Archaeology could properly be said to have superseded it or rather complemented it. Nevertheless, neo-Marxists and others emphasised that the responsibility of the archaeologist should not just be to describe or explain the past but to use whatever they gained from both to make positive contributions to the present, striving to make the contemporary world a better place. For them it was not enough to promote archaeology as an objective science going about its business while politicians worried about poverty and power structures. In a manner similar in some aspects to Feyerabend previously, archaeologists such as Christopher Tilley went further and suggested that science is itself part of a system of economic and social hegemony, set up as an ideal without sufficient inquiry into whether or not its benefits would outweigh the costs to the individual or whether scientific (and thence archaeological) work actually aided power structures throughout the world. Once again, this was a criticism of the supposed objectivity of science by pointing to its (potential or actual) consequences, especially for the poor and disadvantaged. At the very least, archaeology led to some difficult questions that continue to be asked today. What happens, for example, when some remains are found on land ostensibly "belonging" to native inhabitants of a country? Do archaeologists have the right to investigate? What if the tribal group, say, forbids unearthing anything? 
That instances of just this issue have occurred recently (such as the case of Kennewick Man) shows that archaeology is not able to pursue a neutral path, avoiding political and cultural conflicts, particularly when its results may impact on the wider world and social debates.

    In any event, one of the results of the debates in the philosophy of science and in postprocessual archaeology was widespread agreement that there is no such thing as the "scientific method". Just as physicists do not behave as biologists or geologists do, with significant differences within these disciplines, too (compare organismic and molecular biology, say, or condensed matter and particle physics), so do archaeologists employ a range of methods. This does not mean that "anything goes", possibly the least understood argument in the philosophy of science, but only that the myth of science as a unified epistemology has had to be discarded in favour of a far more nuanced appreciation of what scientists actually do - aided, in a nice irony, by sociological investigations of their actual behaviour. It has also not stopped the so-called scientific method being used rhetorically to deny credibility to some ideas, in legal as well as public discourse, but it is interesting to consider how these issues impact upon archaeology.

    We have looked at some of the philosophical difficulties faced by the New Archaeology and now we will expand on them in turn, considering the problems confronting archaeology and how the New Archaeologists proposed to solve them. In so doing we will again come to appreciate that philosophy was not a distraction from the business of archaeology proper but an inevitable part of the discipline that could not be ignored.

    As we have seen, developments in the philosophy of science had led some archaeologists by the 1960s and 70s to question what archaeology is and how its practice should be understood. Note, however, that the clean break with the past suggested by the very name for this so-called movement – the New Archaeology – has been subject to skepticism itself. In 1955 B.J. Meggers had written of "the coming of age of American Archaeology" and raised many of the same philosophical issues that were discussed at length in the following decades, while criticism of the supposed foundations of archaeology was already well advanced in the late 1930s and the 40s in the writings of Kluckhohn and Bennett, amongst others. (See Wylie's How New is the New Archaeology? for an extended analysis.) Indeed, it may be that the view of the New Archaeology as a revolutionary break with archaeology thus far was (and is) itself influenced by currents in the philosophy of science, particularly Thomas S. Kuhn's famous The Structure of Scientific Revolutions. The most forceful objection to Kuhn's ideas (due to Lakatos, Feyerabend and others) was that the periods of "normal science" – in which scientists working within a paradigm are resistant to anomalies until it is finally toppled after the fashion of a revolution – never really existed in the first place. In archaeological terms, this is to say that there was no period of philosophical naivety wherein archaeologists ploughed ahead with scant concern for wider intellectual debates, but rather a continual internal dialogue and questioning of assumptions.

    In any case, the problems with what was called traditional archaeology were threefold: two philosophical (specifically epistemological) and one methodological. Firstly, the New Archaeologists complained that the traditional version relied on a form of empiricism that was hopelessly outdated, confining itself to the observable (i.e. archaeological data) and to systematising it. This had been shown to be untenable. Secondly, and resulting from this, the study of cultures - which requires inferring motivating beliefs from material remains - was placed out of reach. After all, people of the past are unobservable, too, and hence if archaeology were limited to the narrow empiricism the New Archaeologists opposed then it would be impossible to set anthropology on an archaeological basis. Lastly, and perhaps most significantly, the belief that these were methodological restrictions meant that archaeologists could not go beyond the resulting form of archaeology, whether that was considered to be antiquarian cataloguing or unbridled speculation. Where a method is assumed to be fixed, of course, it is not long before its advocates claim that their investigations are therefore neutral and use their "method" as a rhetorical tool to deny recognition and support to alternative ideas not following it.

    It is worth dwelling on these concerns because it was not that empiricism was the problem, or philosophy in general, but bad philosophy. No one lamented the intrusion of philosophy itself but rather that concepts that had been shown to be flawed by the philosophers were still alive in the social sciences. As Kluckhohn wrote in 1939, it was not a case of choosing

    This indispensability of theory combined with the necessity of continually challenging and reassessing the concepts we employ in the sciences was also recommended by Einstein. When archaeologists or scientists at large proceed as though philosophical issues are irrelevant to their work, then, they do so not because they have examined their method at length and found it to be neutral but through ignorance and without realising that the assumptions we bring to inquiry can influence what we find. This insight was formalised as the problem of theory-ladenness: when the clear distinction between facts and theories failed, the New Archaeologists insisted that their discipline could no longer be viewed as the collection of "facts" and that the presuppositions that implicitly guide research should be stated openly and questioned forcefully at every available opportunity. Moreover, archaeologists would have to recognise that they are involved in their investigations rather than passive collectors of these "facts". Indeed, from the 1930s an increasing number of critics were noting that the huge volume of data accumulated was not matched by a richness of interpretation.

    A nice example of a debate entered into by the New Archaeologists (although it was being addressed well beforehand) was that surrounding classification. Are the categories used by archaeologists to sort material – systematising it – inherent in the remains or just instruments to help us make sense of it? This is a particular instance of the problem of universals, an issue in metaphysics that has been discussed for thousands of years. When archaeologists develop taxonomies by which data are sorted, are they arbitrary or are they to some extent forced by a "natural order", so to speak? For the New Archaeologists the claim that classifications could exist with or without archaeologists to employ them was just another instance of the idea that archaeology was a neutral science with the preconceptions of its practitioners not affecting the conclusions reached, as opposed to facts unavoidably being theory-laden.

    For the New Archaeologists, then, traditional archaeology was crippled by philosophical problems. In order to avoid the complaint (often levelled at them) that the discipline was thus entirely subjective, they sought to replace naïve empiricism with a far stronger theoretical basis. This could be achieved, they thought, by accepting that interpretations of the archaeological data are underdetermined but that nevertheless a form of empiricism can be used because the evidence can be appealed to when judging between different hypotheses. Data could then be admitted to be theory-laden but still used to test claims – or so they hoped. In addition to their criticisms of traditional archaeology there was also an effort to develop alternatives, including an argument that archaeology should move away from descriptive accounts and provide explanatory ones. It was here that the New Archaeologists appealed to the work of the logical positivists, especially C.G. Hempel, insisting that an archaeological explanation must be governed by laws. For Binford, this meant going beyond the particular to the general, aiming ultimately at understanding "the total range of physical and cultural similarities and differences characteristic of the entire span of man's existence".

    As we have noted, however, this was occurring just as positivist ideas in the philosophy of science were beginning to crumble. When he appealed to a Hempelian form of explanation, in which an observed event is said to be explained if it fits an already established regularity such that we should have expected it to happen, Binford did not realise that he was invoking exactly the restrictions of naïve empiricism by relying on observed regularities. In short, his version of positivism was as bad as the philosophical approach he had objected to. Moreover, Binford advocated Hempel's hypothetico-deductive model – that is, offering an explanatory hypothesis, deducing its consequences and testing for them – even though it was quickly shown to be flawed. The most common example used in this context is a proposed law such as "all swans are white": if the law holds, it follows deductively that any swan we observe will be white, but observing white swans can never conclusively confirm the law, since to do so we would have to check every swan. In archaeological terms, this would be much like explaining the collapse of civilisations by a specific set of circumstances: in order to confirm that we have a law, we would have to check all civilisations anywhere and at any time. The hypothetico-deductive model can thus only apply in restricted (usually trivial) domains. Perhaps even more significantly, though, even if it were possible to arrive at laws explaining present conditions and behaviour, to infer anything about past cultures from the (relatively scant) material data available to us requires inductive steps – precisely what the New Archaeologists were supposed to be moving away from.

    There is a rhetorical dimension to the New Archaeology that is worth considering, too. It has been suggested that moves toward positivism in archaeology were but an extension of a larger endeavour on the part of naturalists to make the social sciences harder, invoking models that might apply to physics, traditionally the "hardest" science of all (hence the conceit that its methods are the ideals to be aimed at everywhere else). This was a reaction to a widespread concern that Enlightenment ideals of reason and civilisation were under threat from relativists and subjectivists, and particularly philosophers of science who claimed that science was irrational and no better than astrology or voodoo. These charges were – and are – empty but it is rare to lose money betting on how zealously people will defend simplistic models of science if civilisation itself is alleged to be under attack and losing ground. (Indeed, it is not difficult to find instances of this behaviour today in several contexts.) Nevertheless, by placing the social sciences on a positivistic base it was hoped that they could be saved from degeneration and contrasted with so-called pseudoscience and groundless speculation. The great irony was – and again, still is – that those who so desperately wanted to improve on a naive and untenable empiricism in order to counter a crude version of archaeology (and science in general) and to oppose "cranks" resorted to a positivism that was just as unsophisticated and relied on just the same uncritical empirical foundations. The problem was that too many archaeologists were caught up in a (false) dilemma that defined the rhetorical backdrop to the debate: either archaeology had to follow the other sciences and be rigorously recast along positivist lines or the entire game was lost and would unravel into skepticism.
This is a tactic that – once more – is still employed today for much the same reasons, but a bad idea does not become a good one just because the only alternative provided is supposed to be even worse.

    Meanwhile, the insistence that archaeologists should seek explanations rather than descriptions led to a great deal of discussion concerning what form such explanations should take. While New Archaeologists were advocating a Hempelian theory of covering laws, critics like Merrilee Salmon came up with counterexamples in the form of generalisations that would cover a phenomenon but not explain it. For instance, men who take birth control pills do not get pregnant; this generalisation covers any particular case of a man failing to become pregnant, yet we would not accept it as an explanation. Likewise, prisoners who are deprived of writing materials do not reproduce verbatim the works of Shakespeare. Why not? There are (implicit) generalisations here that cover and yet we do not accept them as explanations. We thus see that covering is not enough; it is necessary, we might say, but not sufficient and on its own gives us no guidance as to whether or not the true explanation has been found. Some theorists responded by appealing to higher-level regularities, but this led to a regress problem: resorting to a presumed regularity of a different order to explain which of several possible regularities was the correct one raised the same question on the new level, and so on. Indeed, whenever we can explain a phenomenon because we have good (empirical) reasons to suppose the existence of the causes invoked by the explanation, we can always ask where these causes came from and how they operate – requiring explanation all over again.

    Perhaps the most important aspect of traditional and New Archaeology and the criticisms of both is that everyone involved (along with others in the philosophy of science) was directly or otherwise addressing the questions of what science is and what it aims at. Some opponents of positivism advocated a realist perspective, which they thought could improve on the philosophical dead ends, and hence we come full circle to having to understand the role of philosophy and the part it invariably plays in the discussion and practice of archaeology.


    Selected References:

    Binford, L.R., In Pursuit of the Past: Decoding the Archaeological Record (Berkeley: University of California Press, 1983).
    Feyerabend, P.K., Against Method (London: Verso, 1975).
    Hodder, I., Archaeological Theory Today (Cambridge: Polity Press, 2001).
    Kuhn, T.S., The Structure of Scientific Revolutions (Chicago: University of Chicago Press, 1962).
    Renfrew, C. and Bahn, P., Archaeology: Theories, Methods and Practices (London: Thames & Hudson, 2004).
    Wylie, A., Thinking From Things: essays in the philosophy of archaeology (Berkeley: University of California Press, 2002).
    Publish Date: 06/12/2005

    By Paul Newall (2005)

    In the study of philosophy we eventually come up against postmodernism, however hard we may try to avoid it. Typically the context is someone uttering the familiar refrain "that postmodern nonsense", but sometimes it can be heard as a description of art or society. In this piece we'll try to get a grip on what it means, what we can use it for, what we can learn from it and why some people are wont to insist that only troglodytes partake of it.

    What is Postmodernism?

    The first place we run into trouble when discussing postmodernism is in defining the term itself. The thinkers and ideas often referred to as postmodern disagree amongst themselves—usually significantly—as well as with dictionary versions, while opponents may not always be fair in their characterisations. With this in mind, can we even speak of postmodernism in the first place? To try to make sense of it, we can attempt several approaches.

    The word itself

    The term "postmodern" is a recent one, as we might expect. The furthest it has been traced is to 1932 or thereabouts, when it was used to describe the contrast in Hispanic poetry between Borges (and others) and newer work that seemed to be a reaction to modernism (or ultramodernismo, as it was called). Toynbee called the period from 1875 to the present (in 1940, when he wrote) "postmodern", while poets and artists began to employ it to talk of challenges to modernism. Some writers prefer to distinguish between two senses of the word: on the one hand, we have post-modern (with a hyphen) to denote the continuation of modernism, perhaps in new directions (hence the post-modern, or after modernism); on the other, postmodern (with the hyphen gone) signifies something different (postmodern, or after modernism and separate from it—replacing it).


    Given that all this talk involves modernism in some way, we need to understand this notion if we hope to appreciate what came after or replaced it. The difficulty—yet again—is that this term is itself used to denote a wide spectrum of directions, tendencies and influences in literature and art, as well as a philosophical idea; indeed, it also appears to differ in meaning in many countries, even if only slightly. Before we get any further, then, we can say that one of the main problems with postmodernism is that not everyone means the same thing by it: it could be that a person rejects a claim characterized as postmodern when the listener does not even think of it as such. Perhaps the proper response, then, to someone who exclaims "not more postmodern rubbish!" is to ask "what do you mean by postmodern?" It may be worth ducking if the rejoinder is a swift clip around the ear, though.

    In order to attempt a rescue of this situation, we can focus not on the many specific differences in understanding but on the general tendency described by Jürgen Habermas and others whereby modernism is synonymous with or much the same as the Enlightenment project; that is, those ideas that came about (roughly) at the time of the Enlightenment (the seventeenth and eighteenth centuries), often also called the Age of Reason. This was when the first encyclopedias were being compiled and thinkers were critical of forms of traditional knowledge or authority, especially religious or political ones. Broadly speaking, the hope was that the search for truth by means of reason and the natural sciences would replace superstition, irrationalism and fear and lead to an ordered world in which men thought for themselves instead of following custom or the beliefs that had been held unquestioningly for generations. Kant offered a motto as defining the Enlightenment, saying "Sapere aude: have courage to use your own understanding." Goya rendered this as "El sueño de la razón produce monstruos", or "the sleep of reason produces monsters".

    While it is easy to see the attraction in the progressive enlightening that would follow the march of reason, Weber called this process the "disenchantment of the world"; many of the religious ideas, superstitions and folk tales that provided explanations or comfort of one kind or another would not stand up to scrutiny, but the rational picture that replaced them could seem cold, impersonal and just as imprisoning. Habermas' opinion is that although this process may be flawed in some ways, it is not yet finished: although much has been accomplished, the potential in this approach has still to be realized. Postmodernism, then, is on this view rather an anti-modernism that would give up this reasoned effort in favour of an irrational one that is skeptical of the very possibilities encouraged by the Enlightenment.

    Whether we accept this characterization or not, we could say that postmodernism is skeptical of theoretical viewpoints that are foundational (as we discussed in our fifth article) or grounded in some way, and critical of theory in general. Sometimes a distinction is made along the following lines:
    Affirmative postmodernists: theory needs to be changed, rather than rejected
    Skeptical postmodernists: theory should be rejected, or at least subject to severe critique
    There are other ways to appreciate what postmodernism involves: by looking at some of the ideas and understandings proposed by various important thinkers, as well as by comparing some of the trends in modernism with how they have become viewed in a postmodern context. This is what we'll do shortly.

    After modernism?

    Before we get to some of the characteristics of postmodernism, it would be meaningful to ask if any of them are new or radically different from anything that came before. Is postmodernism really after modernism? The answer to this question appears to be in the negative: all the features we see below have been spoken of or held before in ages past. We could try to insist that never before have thinkers assumed them in a systematic fashion, but that is also not the case today—as we said previously.

    Some writers have suggested that the very notion of defining periods (as "modern", "postmodern" or anything else) is merely a rhetorical device: a means of comparing the present to something different (usually to show the more recent in a favourable light) by constructing some other time in history that was perhaps not so enlightened as our own. For example, we have already seen the contrast between so-called "traditional" ways and modernism or the rise of the Age of Reason. Were traditional times really as backward as they are sometimes portrayed, though? If not, then it seems fairer to say that succeeding views brought to light those features that were already there but perhaps neglected or ignored. As we saw in earlier pieces, some of the "new" ideas proposed by philosophers and others have in fact been little different from (or the same as) those in the past; the only change might be that circumstances became more favourable to their acceptance.

    Comparing the two

    Bearing these remarks in mind, we can now contrast modern and postmodern thinking on some illustrative areas and questions, taking each respectively. Although we must be careful to avoid overgeneralisation or oversimplification, opposing modern to postmodern we have:
    Structure opposed to anarchy
    Construction opposed to deconstruction
    Theory opposed to anti-theory
    Interpretation opposed to hostility toward definite interpretation
    Meaning opposed to the play of meaning or a refusal to pin down
    Metanarratives opposed to hostility toward narratives
    The search for underlying meaning opposed to a suspicion (or certainty) that this is impossible
    Progress opposed to a doubt that progress is possible
    Order opposed to subversion
    Encyclopedic knowledge opposed to a web of understanding
    Some of these will be considered in greater depth as we continue.

    Elements and influences


    One of the most important thinkers on postmodernism, referred to often, is Jean-François Lyotard. In discussing postmodernism, he wrote:

    "I define postmodern as incredulity toward metanarratives..."

    Now some people are not too convinced about Santa's existence either and may be incredulous toward him (hence explaining the lumps of coal in their stockings), but at least we know what we mean by him. What are metanarratives?

    A narrative is usually another way of saying a "story" or a description of some turn of events, so a metanarrative (sometimes also called a Grand Narrative, with capitals for effect) is a narrative that explains (or perhaps contains) all others. For example, there are various narratives all over the world that explain the creation of the universe and everything in it; if a particular story is claimed to be the ultimate one that explains matters properly or accurately, it could be characterised as a metanarrative. The Enlightenment narrative that we have discussed above, to take another instance, says that reason and the natural sciences will help to free the world from superstition and ignorance, bringing us to (or closer to) true knowledge of our universe. Metanarratives can be and are used to translate other narratives into their own form, subsuming them as they must if they are to explain all other accounts in their own terms.

    According to Lyotard, then, postmodernism is at least skeptical of this tendency, if not outright "incredulous" at the very possibility of finding one story that explains the world and all others. It is easy to see where this suspicion could come from: we could make the argument that since all attempts so far (that we know of) to find a grand narrative have failed, it follows that the thing just cannot be done. That does not follow, of course, as we saw in our fifth article, but it might at least incline us to be doubtful of the chances of success.

    Some critics have suggested that in talking of the "death" or failure of all metanarratives, we are merely offering yet another metanarrative in their place, one that talks of this universal failure and tells us we have to accept it as the final story. Another point of objection concerns those narratives that have not yet failed; for Habermas, as we saw, modernism has not fulfilled its potential, while other cultures have their own narratives that cannot easily be dismissed just because Anglo-European ones are said to be doomed.

    Another way to look at this issue is by way of foundationalism, which we considered in our fifth article on epistemology. The search for a metanarrative, according to Gianni Vattimo, is much the same as the quest for a foundation underlying our knowledge; this assumption that we require a foundation, though, is called into question. Instead, Vattimo suggests the metaphor used by Jorge Luis Borges in his famous story The Library of Babel, in which the universe is described as an infinite library. When we wander through it looking at the books, we find that they each refer to other books—never an external authority, or the "catalogue of catalogues", as Borges terms it. Rather than appealing to foundations, then, or something else to ground our knowledge, we instead have to be satisfied with the library, or an interlocking web of ideas and beliefs.

    A philosopher who has looked at this question in much depth is Richard Rorty, who is very critical of foundationalism (see our /index.php?/page/resources?record=17">fifth article) and much of classical epistemology. In his early work he opposed the notion that knowledge somehow "reflects" or "mirrors" the world around us. If knowledge does no such thing, then it would make more sense for us to give up looking for an overarching language or narrative to understand all others in and instead just translate between them, much like Vattimo. Antifoundationalism is a rejection of the earlier ideas in favour of other understandings of knowledge, some of which we considered previously. Rorty suggests that we employ our concepts as tools to accomplish whatever goals we have, not as a means of hooking onto the world as it really is.

    Another epistemological perspective that has seen much activity in recent years and which often comes up in the context of postmodernism is constructivism. According to this idea, we don't receive knowledge through our senses or through discussion; instead, we build it up for ourselves from these and other inputs—we construct knowledge, rather than discover it. A slightly different way to say this is that we adapt our knowledge to organize what we experience, as opposed to using it to explore an external reality. This is quite a contrast with foundationalist approaches; according to some constructivists, we come up with many models to guide us toward whatever goals we have and all that reality can do is help us accept or reject those that are unsuccessful. We could say that we're devising better and better maps to get us where we're going, not exploring the territory.

    An obvious criticism of constructivism is to ask how it can select between alternative models if not by reference to a world that already exists and is not just constructed by us? Can we really say that we built up the fact that we can't breathe underwater, or was it instead forced upon us by the way the world happens to be? We find in our everyday experience that not every model is as good as any other when trying to accomplish a specific task, so many constructivists point to coherence or pragmatic concerns (cf. our /index.php?/page/resources?record=22">tenth article) instead of verifying ideas by testing them against the world.

    The notion of metanarratives and their rejection or acceptance thus involves many aspects, including epistemology and metaphysics. If Lyotard's definition of postmodernism is anything to go by then our opinions of these issues can go some way to determining how we view the subject.

    Power and knowledge

    In our /index.php?/page/resources?record=18">sixth piece we looked at the power that can be associated with terms like "knowledge" and "truth". Some thinkers characterized as postmodern worry about this and feel that some legitimate areas or methods of inquiry—or indeed modes of life—could be restricted. To take a simple example, if a certain method of farming is known to be the most efficient, it may be that some people insist that everyone adopt it—after all, there are a lot of hungry people. Nevertheless, should we allow this knowledge to force others to live in a way they do not wish to?

    On another level, some people consider that "primitive" groups should be civilized for their own benefit, but critics say that this assumes that what is good for one is good for everyone. This is partly a question of ethics (see the /index.php?/page/resources?record=23">previous piece), of course: should we point to the successes of a particular way of doing something or insist that others adopt it to, say, increase their health or life-span? The concern is that the sanction of calling something the truth endows it with a power that makes it easier to force people to do or accept things they otherwise might not.

    Another example of this kind concerns madness or insanity, the history of which was studied by Michel Foucault and others. According to a certain understanding of this phenomenon, popularized by a group known as the anti-psychiatrists, it is very difficult indeed to define what we mean by "insane", say, unless by comparison to "normal" behaviour; what, though, is normal? Nowadays more complex methods are used in this process but it is clear that in the past it would be a relatively easy matter to define conduct that we disapprove of as abnormal or insane and legislate for the (forcible) treatment of people displaying it. If a certain group has the power to decide who is mad and who isn't, then their actions could have terrible consequences, as we have seen throughout history with the sterilization of so-called simpletons in the US or the concentration camps in Germany.

    The principle behind these and other instances is to be aware of the power and influence associated with defining terms or making distinctions between people; the way we understand concepts has consequences—the pen being mightier than the sword on occasion—so we have to be aware of this and act accordingly.

    Poststructuralism


    A term that comes up often in discussions of postmodernism or thinkers associated with it is poststructuralism. Much like our opening remarks on postmodernism, this is also a difficult concept to define and involves the same notion of after-structuralism, so we need to look at this as well. Structuralism, then, is sometimes described as the attempt to bring all our attempts to understand the human condition under one model or structure, with a single methodology, all derived from the linguistics (the study of languages) of a Swiss theorist called Ferdinand de Saussure. There are many other influences but this is often said to be the main one.

    Much work and controversy is associated with Saussure's studies and that which followed, but the basic and important point is that language is conceived of not just as a way of expressing our needs and ideas but as something required before we can even think or have social interaction. The meaning of a story, say, is thus to be found in its structure; by analysing this and the language used, we can come to understand it.

    As structuralism became more important, particularly in Europe, poststructuralism emerged as a challenge to it. Is the meaning of a word really fixed or is it instead, to consider an alternative, actually defined by the use we want to put it to? What if the words we employ to refer to some fixed structure in fact miss their mark and never quite provide us with a bedrock structure to base everything on? Poststructuralism suggests instead that meaning is always unstable; when we use a word to point to a concept, it never quite gets there—reaching instead to another word, and thence to another, and so on. This is another challenge to the possibility of metanarratives and the Enlightenment ideas in particular.

    Hermeneutics


    When we read a story, we sometimes take it for granted that the author is explaining to us what happened to the characters, what they thought about and—often—what the moral of the tale is. We could think of it as a fireside chat, in which the writer talks and we listen; in some detective stories, say, we are hoping to find out who did it, how and why. In some books, though, the moral isn't so obvious, and with poetry or movies it can be even worse; sometimes two people can see the same film and understand it in completely different ways. In that case, the issue is one of interpretation: who has appreciated the point of the piece most accurately?

    One way to answer this would be to ask the author, if he or she is still alive. Having said that, why should they necessarily be the one to decide? If we have a favourite poem that we read as having a particular meaning for us, should we allow that there are more authoritative ways of approaching it? Given that there may be very many understandings of the same piece, some of which may seem a lot more sophisticated than what the writer apparently intended, can it make sense to call one legitimate and the others not?

    Hermeneutics is the study of interpretation, named according to some after the Greek god Hermes (Mercury in the Roman pantheon), the patron of interpreters (among other things) who also lent his name to hermeticism. In the past it was associated with the interpretation of scriptures; some holy books warn against over-interpretation while others attribute many distinct layers of meaning to the same text, particularly in some Judaic works and the Hermetic oeuvre. Works by Homer, Dante or Shakespeare have been studied on many levels, but the prime example remains the religious texts: commentaries on commentaries had so much become the standard that in the fifteen hundreds Luther declared his famous maxim sola scriptura (or "by Scripture alone"), intending to strip away all the interpretations that had gone before and hence influenced the reader and instead start anew.

    In more recent times, Jacques Derrida declared "il n'y a pas de hors-texte"—there is nothing outside the text. One way to understand this is to take it that there is no guidance or adjudication to be found when considering a piece save within it; thus, when we try to decide what the correct interpretation of a poem is, we can only use the poem itself and not point to something external that would settle the matter for us. Indeed, one writer (Dilthey) said that the purpose of hermeneutics is "to understand the author better than he understood himself"; perhaps the writer unconsciously included aspects or influences in a text that he or she is not aware of and that can only be brought to light by interpretation by others? This led some to proclaim the "death of the author", but at the very least we have the author, the text itself and the reader all having an input into how the text is read.

    Deconstruction


    One form of interpretation or analysis of texts that is associated with Derrida and the so-called Yale school of Paul de Man, Harold Bloom, Hillis Miller and Geoffrey Hartman is deconstruction. It has had more of an impact on philosophy and literary theory in Continental Europe, but its influence has been felt widely. It can be traced back to Nietzsche but the problem with explaining or understanding it is that its proponents often insist that there is no deconstructionist method; that is, it isn't just another systematic approach to be applied that can be defined by explicit steps or principles. Even so, we can list some general guidelines that will help:
    Add nothing to the text: The piece (it could be anything) under consideration should fall apart from its own flaws without needing to look outside it.
    Look for unstated assumptions: By reading closely, we may be able to find presuppositions that the author relies on implicitly but doesn't argue for or explain; by pointing these out and criticising them, the purpose of the text may fail.
    Reverse the terms: It may be that by changing some of the terms in a piece to their polar opposites, exactly the reverse argument is made. For example, a racist text may be just as sound (or otherwise) with "white" swapped for "black" (or vice versa); but if it applied to any group, it wouldn't be making a point at all.
    Look for multiple interpretations: Rather than allowing one reading of the text to be privileged, try to find others—particularly those that may contradict or be entirely opposed to others. If a piece can support so many, perhaps its conclusions or premises should be called into question?
    Look for limitations: What can the text not include or describe? What has been explicitly or implicitly excluded from it in order to make the points or arguments therein?
    A major criticism levelled at deconstructionism is that its proponents seldom attack their own work in the same way; why not deconstruct a deconstruction, for instance? There are also obvious limitations to which texts can be deconstructed: although some think it can apply to anything, it is hard to see how it can address mathematical or (some) scientific papers without the knowledge of these areas that most deconstructionists lack or without tackling the philosophical problems associated with them first.

    Another objection to deconstruction comes from a different perspective on language. According to Wittgenstein, rather than representing a correspondence between propositions and reality (cf. our /index.php?/page/resources?record=22">tenth article), language is a series of games or practices that enable us to achieve whatever goals we have in a situation; thus, as we said earlier, meaning is defined by use. On these terms, deconstructionism is simply beside the point: language adapts to its use and pulling a text apart fails to take account of this.

    Queer and feminist theory

    "Queer" was originally a derogatory mode of address for homosexuals but was adopted in a positive sense in the 1990s by some militants. Based partly on Foucault's writings on sexuality, queer theory is concerned with sexual identity and particularly the idea that fixed categories (such as "masculine" and "feminine") are insufficient to describe the diversity we see in our world. Foucault noted that a vague grouping of actions was replaced by a group of sexual categories and questioned whether this was justified or meaningful; is it enough to speak of heterosexual and homosexual, or is this binary either/or unable to account for the varieties of human behaviour? Even if we add other designations, the same question remains: are we describing divisions that actually exist or instead forcing individuals into moulds that they do not fit? What are the consequences of the latter, especially for those questioning their sexuality? Queer theory studies these and other similar questions.

    In a similar way, feminist theory considers the role and influence of gender and of ideas defining the role of women in society. For instance, is knowledge asexual? Some propose a radical feminist epistemology wherein knowledge claims depend on who is making them. Did biological differences determine, wholly or in part, the historically restricted role of women or were social and other prejudices to blame? Does the portrayal of women in the media, art or literature have a positive effect or does it merely reinforce old stereotypes? Should women work for equality or the celebration of difference? Whatever the answers to these questions, the main point raised by feminist theory is that the relationship between the sexes is not one of fairness and equal standing but instead a narrative of oppression and inequality. Whether this is so, who or what is to blame and how to remedy it is still the subject of much discussion today.

    Postcolonial theory

    Although influenced by Edward Said's early work, postcolonial theory is relatively recent and seeks to study those cultures affected by colonialism. One way to define it is as those political, economic, social and cultural practices that evolve as a result of or response to colonialism. A potential problem for any look at a former colony is seeing it from a Western perspective and judging accordingly; when people from within the culture decide to describe it for themselves, why should they adopt this perspective instead of their own? What is the effect of using the former colonial language, say, as opposed to the native tongue(s)? Does self-description come naturally or is it a reaction or resistance to being discussed on another's terms? How did the interaction between coloniser and colonised affect both?

    One consequence identified relates to the Western use of the term "Orient" (or, today, the "Middle East"); according to some theorists, this had the connotation of "exotic" or different and hence instilled a view whereby other parts of the world were talked of as "us and them" or "here and there", a practice that continues today and which prevents or makes it difficult for the "us" to understand "them". In addition, "they" might have had to alter their feelings of identity as a result of the pressures of colonisation. Postcolonial theory looks at these issues and tries to increase our appreciation of this history and its impact on our ability to learn about others when we implicitly suppose them to be different before we even start.

    Criticisms of postmodernism


    Postmodernism (and its related aspects) is not without its critics, of course. Several different complaints have been raised, the importance of which depend on how a particular idea has been stated:
    Although postmodernism focuses on irrational tendencies and appears to celebrate them, it still uses reason as a tool.
    Postmodernists mock the inconsistencies of modernism but are not consistent themselves.
    Rejecting criteria for judging questions is not enough; alternatives have to be provided.
    Postmodernists call for interdisciplinary work and not taking subjects in isolation, but they do this themselves in their own criticisms, failing to learn enough about other subjects to be in a position to comment.

    The first three are often forms of ad hominem tu quoque, a logical fallacy in which an argument is questioned because the proponent doesn't seem to hold him or herself to it; if the positions are explained carefully, though, there is no requirement for a postmodernist to be consistent if his or her objective is only to show that an idea is flawed. One way to think of this is as a substantial shrug of the shoulders: if someone demands to know what we have to offer instead of their suggestions, we can say "I don't know, but yours are still wrong"; afterwards we can ask what we need to conclude from this (for instance, is it better to have bad ideas than no ideas at all?). There are some thinkers, of course, that do offer explicit statements that can be addressed by the above criticisms (such as saying "we should not use reason to decide things" and then offering argument in support), but our discussion in the /index.php?/page/resources?record=20">eighth article entreats us to be careful and not to avoid interesting postmodern ideas that are not beaten so easily.

    The remark that much of postmodernist thinking demonstrates a lack of knowledge of other disciplines—leading to weak criticisms thereof—is one we could make about most subjects but has more importance in this context. Is it sensible to complain about the relationship between power and knowledge, say, without knowing how physicists and biologists claim to come by the latter, particularly given the diversity of approaches even in these (cf. our /index.php?/page/resources?record=18">sixth piece)? A situation to be avoided if possible is one in which no-one really knows what anyone else is doing but criticises them all the same. The problem of realism that we looked at before is very significant to the kinds of ideas postmodernists have put forward, which is why we find it being addressed by some of them. Opponents of postmodernism find it doubtful that the search for facts or truth need oppress anyone; although it is possible to use knowledge as power, they say, this has nothing to do with the facts themselves and everything to do with interpretation and the people doing the interpreting.

    Another telling criticism is to note that to be anti-theory is still to have a theory; that is, the theory that we shouldn't have a theory. Rejecting the need for criteria (whatever their purpose) is still a criterion. Is it possible to be as playful as some suggest, not holding beliefs or methodological approaches and instead refusing to define or pin down narratives? How lightly can we hold our ideas before we end up either holding nothing at all or become certain of them without realising it?

    One point raised against postmodernism concerns the language used in many works, which can seem tangled and abstruse at the best of times. Are long, complicated words being used as part of a specialist language or because postmodernists have nothing of consequence to say and want to hide this fact behind their rhetoric? Often the answer is a matter of opinion, or of saying that even a difficult writer can sometimes offer a comment clearly enough to raise an eyebrow before plunging back into a thicket of terminology. Since a key assumption of this series is that anything worth saying can be said clearly, it may be that some people are reluctant to wade into postmodernist thinking for fear that their time will be wasted; unless the writer is composing his thoughts merely for the amusement of himself and a few select friends, this is a difficulty that still restricts the impact that postmodern ideas can have.

    The limits of interpretation

    One thinker critical of the idea that meaning is forever deferred or that interpretation can go on and on without ever reaching an end is the semiotician Umberto Eco. In his work Interpretation and Overinterpretation he asked if instead there are limits to how much interpretation we can do with a given text. For example, suppose we take Dostoevsky's novel The Brothers Karamazov, the tale of a father's murder, apparently by his own son, and with much discussion of philosophical and theological issues. We can each read it in a different way, understanding some lines, sections or characters in disparate ways and maybe even disagreeing vehemently about the moral of the story (if any); however, it seems ridiculous to say that we could interpret it as a manual explaining how to survive on Mars in the event of a global shortage of apples—some readings are too far beyond the text to be able to claim much (or any) support from it.

    In addition to apparently baseless interpretations, we can also overinterpret and see things that aren't there. An especially rich source of examples can be found in conspiracy theory, wherein the search for links between events and the hidden motivations of individuals or groups can result in speculations that, while they have some basis in fact, go too far. We see this also in the hunt for codes in Shakespeare and Marlowe: the former is believed by some to have left clues to the real authorship of his work while the latter was a spy and peppered his writing with anti-masonic comments. Eco himself gives the instance of the "Followers of the Veil" who read Dante's erotic references as coded criticism of the Church. Too much interpretation can lead us to see what we want to, rather than the (sometimes) quite specific intention of the author.

    Eco's main point is not that a text can tell us how it should be read but that it restricts what we can say. Even if we can entertain an infinity of different understandings, they are not equal: some of them will be supported by the text while others will not. In this respect, his remarks are much like the criticisms that were raised against older forms of empiricism (cf. our /index.php?/page/resources?record=17">fifth and /index.php?/page/resources?record=18">sixth pieces): we can't just appeal to our own ideas of what there is in the world but neither can we test them against that world without further ado; instead, we have to accept that our assumptions, goals and hopes can influence what we see, but we can still check our thinking to see if it has any support in the very thing we are trying to understand. Thus we can accept that there may be no final reading or fact to be found without giving up the possibility that some readings are more "far-fetched" than others. In terms of metanarratives, it may be the case that none of the possibilities yet or to come can succeed entirely, but we can still say that some are better than others.

    To summarise, postmodernism is made up of too many elements and thinkers who very often disagree with each other to permit any simplistic assessment of it. We have to take each idea as it comes and treat it on its own merits, even while it remains fashionable to employ "postmodern" as a synonym for muddleheaded.

    Dialogue the Ninth

    The Scene: The next day. Trystyn and Steven are walking beside the river, discussing the previous night's events. Both seem down.

    Steven: Why didn't you tell me she was already taken?

    Trystyn: She isn't "taken".

    Steven: What? Of course she is.

    Trystyn: You should think about the consequences of the words you use, even when upset. She's not an object; she's in a relationship.

    Steven: Which you failed to tell me about.

    Trystyn: What could I have said? It's not for me to define what she has and what she means by it. Perhaps she views it differently to me, or to you?

    Steven: You know very well what I mean.

    Trystyn: Perhaps, but not what she means.

    Steven: (Exasperated...) What? Meaning is fixed.

    Trystyn: No, it isn't. Lots of people use words in different ways, or understand them differently to how you might. Meaning is flexible this way, according to how you want to use a word. Maybe her relationships are flexible, too?

    Steven: Mine are not. In any case, if you intend to use a word in conversation or anything else—if you want to communicate—then it has to mean the same or nearly the same to the other party. I'm sick and tired of this postmodern nonsense where people avoid any kind of responsibility by claiming that there are just too many interpretations to call any of them valid. If you talk to someone then you have to consider what they'll think or feel; look at their behaviour, the situation you're in and the circumstances. It's just like taking a bunch of theories and testing them; it's not enough to take your own interpretation and call it equally valid to any other, or better because it's yours.

    Trystyn: You can see, though, that she might've assumed you knew?

    Steven: Why would I? How easy it'd be if we all accepted that nothing can be known at all; we can't pin meaning down because it always eludes us or remains indeterminate. You know who does that? People who are afraid to say "this is what I mean, and nothing else". You can read a book any way you like but there are boundaries to it forced upon you by the author's intentions, the characters and their goals, possibilities in the story; you can add to it, but the structure is already there to build on. If you move too far away from the context then you're just talking to yourself, making yourself look ridiculous.

    Trystyn: I guess the point of it all is to prevent one perspective from gaining power over others, or to stop it from being considered correct at the expense of all others. We know what happens when people are certain of themselves and decide to convince everyone else.

    Steven: (Shaking his head...) This kind of tyranny isn't associated with everything. I just wanted to walk her home. An author pens a story and doesn't necessarily intend to subvert the human condition or hide his motives so that some guy with no knowledge of his subject can pull it to pieces and coin a few words while he's at it. The way around problems with meaning isn't to render everything meaningless.

    Trystyn: Wow.

    Steven: (Under a full head of steam...) Of course I know that perceptions differ; that meanings vary between theories; that sometimes pinning something down can kill it. What's the solution? We have to be a lot more careful. We can take account of the problems and try to be clearer, or more cautious, but what we can't do is take our toys and go home. What does that achieve?

    Trystyn: Not much, I guess.

    Steven: Suppose it can't be done—that we can't find all the answers. Suppose even that every attempt to do so is tainted by our biases or the use we hope to make of it, or even that meaning will forever elude us. Won't we still try?

    Trystyn: I'm sorry I didn't.

    Steven: I didn't expect to know her mind, or for her to fall at my feet. It just wasn't too much to ask that you both pay some attention to me—after all, I'm hardly the most complicated of fools—and consider the consequences of what I would find meaning in.


    Curtain. Fin
    Publish Date: 06/11/2005

    By /index.php?/user/4-hugo-holbling/">Paul Newall (2005)

    In this installment of our series we'll consider ethics, looking at what we mean by this term, what use we have for it and thereby attempting to understand why this aspect of philosophy is so important to everyone. Along the way we'll examine some of the philosophical assumptions made or that need to be considered when constructing or deciding upon an ethical system and finish by looking at some contemporary problems that may be approached with the benefit of the perspectives introduced.

    What do we mean by ethics?

    In simple terms, morality is the right or wrong (or otherwise) of an action, a way of life or a decision, while ethics is the study of such standards as we use or propose to judge such things. Thus abortion may be moral or immoral according to the code we employ but ethics tells us why we call it so and how we made up our minds. As a result, ethics is sometimes called moral philosophy; we use it to criticise, defend, promote, justify and suggest moral concepts and to answer questions of morality, such as:

    How should we live and treat one another?
    What are right and wrong?
    How can we know or decide?
    Where do our ethical ideas come from?
    What are rights? Who or what has them?
    Should we coerce one another?
    Can we find an ethical system that applies to everyone?
    What do we mean by duty, justice and other similar concepts?
    There are many such issues that are typically studied according to the separation of ethics into three sub-branches:

    Metaethics: the study of where ethical notions came from and what they mean; in particular, whether there is an ethical system independent of our own opinions that could be applied to any situation at any time or place.
    Normative ethics: the search for a principle (or principles) that guide or regulate human conduct—that tell us what is right or wrong. A norm is just another way of saying "standard", so normative ethics is the attempt to find a single test or criterion for what constitutes moral behaviour—and what does not.
    Applied ethics: the study of specific problems or issues with the use or application of moral ideas investigated in normative ethics and based on the lessons of metaethics. Applied ethics may sometimes coincide with political or social questions but always involves a moral dimension.

    The distinctions between these will become clearer by example as we consider them each in turn. For the time being, we could note that the question "what do we mean by good?" would be metaethical, "what should we do to be good?" would be in the domain of normative ethics, while "is abortion moral?" would be the province of applied ethics.

    Why study ethics?

    Of all the areas of philosophy, ethics is the one that seems most pertinent to us and it is no exaggeration to say that everyone is engaged in ethical thought at most times in their lives, knowingly or otherwise. Moreover, it is quite mistaken to suppose that philosophers have a monopoly on deep ethical ideas while the rest of us bumble along, blissfully unaware of the import of the questions we suggested above; instead, a glance at the newspapers, television, internet, as well as books, films, plays, together with conversations on every street corner or in public houses and cafés, shows that each day we are confronted with ethical problems and have to make ethical decisions.

    We discuss these matters all the time, then; in this piece, we'll try to see how a philosophical treatment can aid us in this endeavour. How well do the ideas we currently use hold up to scrutiny? Are they based on sound assumptions, or could we think otherwise? Are we applying them correctly, or as best we could? Perhaps most importantly, are there alternatives we have not yet considered?

    Some historical perspectives

    If ethical thought is universal, as we suggested above, then it should come as no surprise that there were many thinkers in the past who put forward their ideas and tried to improve on what came before them. Many conceptions of ethics in the ancient world were based on or influenced by the Greeks, particularly Plato and Aristotle. The former thought that people were inclined to be good and desired happiness; the problem was to know what would bring about that good in the first place. If they acted wrongly, it was due to not understanding how they should go about achieving happiness in the best way—not because they wanted to act wrongly or badly. In that case, ethical difficulties were epistemological ones; wrong came from error, not intent.

    Plato also suggested four virtues: wisdom, courage, justice and temperance; Aristotle agreed but added others, like generosity, truthfulness, friendliness and prudence. However, Aristotle went further than his former tutor and said in his Nicomachean Ethics that goodness is in the actor, not the action; that is, an act is virtuous because of the manner in which a person has chosen it—having done so through sound knowledge and by holding oneself in a kind of equilibrium, making the decision for specific reasons and not at a whim—and thus not because the act is good in itself. This is an important distinction to grasp: the idea was that something we do is virtuous because we choose it when calm and collected, aiming for the best, as opposed to anything specific about the deed. That would mean that one answer to the question "how shall we live?" could be "by being good", instead of "by doing good".

    Another point of note is that neither Plato nor Aristotle specified what we would now call a normative ethic; it is one thing to say "acting in such and such a manner, you will choose the good", but quite another to be able to say exactly what that good consists in. Nevertheless, this was a common trait in the ancient world: in the Homeric epics and the stories and plays thereafter, the virtues were displayed practically. Concepts like honour or courage were defined by their use, showing a character being honourable and courageous but also demonstrating when these became foolhardy or even failings. Once again, this was what we might consider a fairly loose explanation of ethical conduct; a hero was honourable because he acted in a way called honourable, not because honour was defined and his conduct matched the description. Moreover, even the gods made mistakes and these showed that virtue was to be lived, not explained.

    From the point of view of normative ethics, the Greek ideas lacked explicit rules by which to discern how to live and answer moral questions. Various religious texts were able to provide these but were open to different interpretations. Given the acceptance of whatever sanction was claimed, though, a guide for conduct was provided; even then, however, they were not as free from ambiguity as we might suppose and required application by both religious and legal authorities.

    In spite of the increasing sophistication of ethical ideas and the legal precedents that were often based upon them, from roughly Enlightenment times onwards a great deal of moral theorising took place. According to Berlin, this was due in large part to the resurfacing of three old assumptions:

    that to all genuine questions there is one true answer and one only, all others being deviations from the truth and therefore false, and that this applies to questions of conduct and feeling, that is, to practice, as well as to questions of theory or observation—to questions of value no less than to those of fact;
    that the true answers to such questions as these are in principle knowable;
    that these true answers cannot clash with one another, for one true proposition cannot be incompatible with another; that together these answers must form a harmonious whole...
    The result of such assumptions was, for some, the building of ethical systems, often elaborate but occasionally simple, that would tell us the true way to govern conduct because, as Berlin's points note, the perfect ethical system exists, is knowable (that is, we can find it) and—much like the Highlander—there can be only one. Some thinkers used God as their foundation, others reason and still others both, but the trend throughout was that the aim was achievable.

    In the meantime, there were a few skeptics like Bayle and Huet who were casting doubt on the whole enterprise and—especially for the former—influencing generations to come. They criticised these assumptions and doubted the efforts of the system builders and theorists like Descartes and Hobbes. Vico was also opposed to some enlightenment ideas and criticised the possibility of finding the nature of anything, man or good included. The history of this time is too complex for our purposes here and Schneewind's study is more than enough; suffice to say that this trend continued: thinkers explicitly or implicitly convinced by the three assumptions tried to construct systems while those who were not so convinced opposed them, sometimes with other suggestions. An understanding of right and wrong based on duty came to be contrasted with one based on consequences, alongside evolutionary ideas and some new applications of Aristotle; we shall look at these below after first considering the metaethical questions that they all rely upon.


    Metaethics
    As we remarked above, in metaethics we look at the principles that underlie ethical systems and their applications. If we take a question like "is it wrong to be mean to Hugo?" for instance, the metaethical aspects we would first need to clear up might be to ask:

    What do we mean by wrong?
    How do we determine it?
    Who does it apply to?
    Is the definition of wrong at our discretion or does it apply according to a fixed standard independent of our opinions?
    What does a correctly identified wrong action imply, if anything?
    As we pass through some of the areas of metaethics below, we'll see how each of these questions could be answered.

    Where do ethics come from?

    It seems unlikely that storks also bring ethical ideas with them, perhaps slipped in as reading material for a baby bored with waiting on a doorstep; instead, we could expect some kind of foundation or justification of a rule or suggestion such that we are both inclined to accept it and appreciate why we should. There are several possible candidates:

    From God.
    From an abstract world where concepts exist in some way.
    From agreement between people.
    From a consideration of duty, or virtue.
    From a consideration of the consequences of various actions.
    We may be able to think of others. At this early stage we can make an initial distinction by suggesting two general answers to our question: on the one hand, ethics are already decided but need to be discovered—whether they be created by someone or something, or just "waiting" to be found; on the other, they are not set in stone but are discussed in one way or another and arrived at through agreement, with due regard for practicalities. We'll now consider each element of these in turn.

    To what or to whom do ethics apply?

    The answer to this question is not at all obvious. Over the course of history, ethical systems have been presumed to be relevant only to free men and not slaves, or to white men and not black, or civilised men and not savages, or to men and not animals and environments. Sometimes there were such codes for all these groups, but they were different or separate. Why should there not be a single system for all? On the contrary, though, why should there be?

    One way to view this issue is via the concept of rights, which are subject to much the same criticisms. Typically a right is granted by a government or authority and represents some principle that—one way or another—is to be considered inviolable or not to be taken away, such as the right to life. Some people think rights are decided upon, perhaps by suggesting that everyone should be entitled to live without perpetual fear of being murdered for no particular reason; others think these rights are the consequences of eternally existing ethical codes discovered by reason or granted by God. Of course such a right does not imply that no-one will try to hit Hugo a glancing blow about the head, but only that doing so will have consequences.

    Should such rights apply equally to everyone? Although the egalitarian spirit would seem to suggest so, in fact matters are complicated every day by circumstances—particularly dilemmas like kill or be killed. In that case, perhaps we should be more sophisticated in our use of ethics?

    Some people think that there is little or no justification for seeking and applying ethical codes to humans and not to animals. On the contrary, say others, animals do not understand the concepts of ethics and rights and hence cannot take part in a society employing them. If that were so, says the counter-argument, neither could they be granted to infants and the mentally incapacitated. One reason for proposing a wider use of ethics to cover animals too is the idea that rights can in principle belong to anything—even an environment.

    Clearly the question of who or what can have rights or ethical value has consequences for the codes we may draw up. Some argue for the attribution of value on the basis of it being self-evident that people (or animals) have, for example, the right to live in decent conditions; others that there is a practical justification for so doing.

    Ethical realism and anti-realism

    As we discussed in the sixth article and have seen above, there is much dispute and disagreement as to whether ethical values exist independently of human ideas; for example, would it be wrong to steal even if there is no-one around to do so or nothing to take? If we answer "yes" to such questions then we are ethical realists, holding values to be a part of reality (or cognitive claims about it) in some way; if we respond "no" then we are ethical anti-realists (or non-cognitivists), supposing to the contrary that—whatever they might be—they are not.

    Objective, subjective and intersubjective

    An issue that tends to come up frequently in debates on ethics concerns the objectivity or otherwise of moral laws or values; in fact, this is another way of understanding the question of ethical realism. There are three usual positions advocated; ethical values could be:

    Objective: depending only on the object of inquiry, and hence independent of what we think, hope or expect to find.
    Subjective: depending on the subject doing the inquiring.
    Intersubjective: depending on agreement between subjects.
    As with most things, the best way to gain an appreciation of what these distinctions mean and imply is by way of examples. Suppose we take the proposition "it is wrong to hit Hugo" and try to understand it from each position: according to the objective reading, the proposition is true or false whether or not we approve of it (a reasonable point of issue), and even if no-one but Hugo was around (another typical circumstance). Indeed, we could interpret it as saying that it would be wrong to hit Hugo even if he did not yet or ever exist; the important thing is that the truth or falsity is not dependent on what we think of Hugo, what time of day it is, how we are feeling and whether it is raining, but only on the facts or reasoning—whatever they may be—that decide the truth value.

    On the subjective reading, the truth or otherwise of the proposition rests in part or wholly on the judgement of the subject suggesting it. In this case, the wrongness of hitting Hugo would be decided by the moral ideas the subject had happened to decide upon, according to his or her conscience, often because the notion of an objective choice on the matter has been rejected or criticised.

    For the intersubjective version, it might be that an agreement to agree can be reached—in whatever way—whereby it is decided that the proposition will be considered true or false. This is not objective, because it depends initially on the opinions of those concerned and hence is not independent of them—the same group may choose to say otherwise at another time, for instance; neither is it subjective, since the decision is taken as an ethical standard to apply across the group, not just individually.

    Another example could be the adoption of a declaration of rights, whether it be that of the French in 1789 or the more recent United Nations version. An objective critique might say that not all the rights claimed can actually exist or be supposed to be independent of us, while a subjective opposition might view the notion of rights as meaningless in the first place; nevertheless, an intersubjective agreement between governments could effectively say "we are going to take it as given that these rights exist and act accordingly". That, of course, is what we generally do—the best we can, while taking into account philosophical arguments.

    A significant problem in ethical debate occurs when participants employ either objective or subjective understandings of morality and are using different assumptions before they even lock horns properly. A person who does not believe in the possibility of objective ethical values would find it difficult to achieve any common ground with someone who does; it may be that there is no way to reduce the one to the other. This, we might suppose, is often the attraction of intersubjective reasoning. Generally speaking, though, subjective and intersubjective ethical ideas are often mistaken for one another.

    Here again we see one of the cautions of the philosophical approach: the person we are speaking to may not share our starting point of view, so we need both to examine it and try to see our own ideas from their perspective if we hope to get the most from our discussion.

    Relativism and pluralism

    Aristotle wrote in his Nicomachean Ethics:

    According to Protagoras:

    These are the beginnings of relativism in ethics: ethical relativism takes note of the apparent fact that ideas about what conduct we call good and bad, or acceptable and unacceptable, have varied across time and between societies—in short, that some people call things "wrong" that others do not have any problem with.

    Individual relativism is the idea that people create their own moral codes, separately from anyone else; this is found in Nietzsche, and also in the character Raskolnikov in Dostoevsky's Crime and Punishment, who wrestles with the notion that great men—like Napoleon, in his example—are not subject to the same rules as the rest of us but decide their own. Cultural relativism holds that moral values are relative to the cultures they are found in. Support for the latter is found in studies comparing societies or just from the simple experiment of travel: some countries are tolerant or approve of homosexuality, polygamy, dressing provocatively or prostitution; others are not. Some countries employ the death penalty within a prison environment while others do so publicly and still others refuse to consider the possibility of such a punishment. In some nations of Europe, not partaking of a coffee after a meal is tantamount to declaring oneself to be a barbarian, while others do not judge so harshly.

    Given the wide diversity of these positions, relativists ask if it can make any sense to suppose that ethics can be everywhere and at all times the same; instead, they vary relative to the circumstances and period in which they arise and are employed. It is important to realise that this does not imply that "anything goes"—a misunderstood methodological point—but rather that values are not independent of the many factors that impact upon their use.

    Pluralism is not the same as relativism; it was advocated by Mill and we looked at some of his arguments in the eighth part of this series. Berlin described it as follows:

    Thus it may be that—as with relativism—many different cultures advocate different ethical ideas, but—unlike relativism—they may each see their ideas as objective and it may not be possible to compare them and decide which are true and which are not. In that case, perhaps it is as well that many attempts are being made and that there is a variation in ethical systems? After all, Mill's advice was to allow people to live as they choose in order that we learn by experiment what works and what does not. Pluralism, then, takes note of these circumstances and suggests that we build around them—allowing people to live according to the values they decide upon so long as they do not harm others in so doing. It values tolerance and diversity, whether because they are believed to be important in themselves or because of their consequences.

    Moral skepticism

    Hamlet was not convinced of the existence of objective moral guidelines or principles and expressed his doubt thus:

    The epistemological claim that we can know nothing of the existence or nature of such principles is called moral skepticism, which sometimes also includes the ontological claim that there can be no such thing. The moral skeptic asks where these guidelines are—in some abstract realm? If so, then:

    how can we know anything about them?
    how are we supposed to interact with something separate from us?
    These questions are powerful arguments against objective principles. The first questions the second of the three assumptions Berlin noticed that we looked at before; the other asks about the seemingly strange world that objective ideas presumably inhabit. Another argument points out the wide range of ethical ideas that people have had before and have today in different parts of the world; where do these come from if in fact there is only one true set of rules to be found? A possible answer could be that the correct rules have been distorted by our attempts to try to find them when starting from such diverse positions and within different cultures, but skeptics do not find this very convincing.

    Metaethical problems

    There are other difficulties that arise in metaethics that we can consider here. The first is the is/ought problem, according to which we ask how a statement about what we ought to do can ever be logically derived from a statement about what is. For example, suppose we consider the proposition "we ought to do more about people who do not have enough to eat"; in the form we studied in the fourth introduction, this would read:

    P: Some people do not have enough to eat;
    C: Therefore, we ought to do something about it.
    The first line (the premise) talks about what is, namely the unfortunate fact that some people go hungry every day; the second (the conclusion) says that we ought to act in some way. We can see the aspect of such arguments that the is/ought problem identifies quite clearly here, because premises are missing:

    P1: Some people do not have enough to eat;
    P2: People not having enough to eat is a bad thing;
    P3: Bad things should be acted upon to make them better;
    C: Therefore, we ought to do something about it.
    The second premise is implicit in the first argument, but not stated; the conclusion does not follow without it, because all we are doing is connecting an is statement with an ought without offering any reason or justification. Note that the transition from the second to third premises is also subject to the is/ought problem.

    The basic point at issue is that is/ought statements appear to involve unstated ethical ideas that do not make the step from what is to what ought to be any more valid, unless we assume that the ethical ideas are true in the first place—but that is precisely what we are supposed to be showing. In our example, the sheer number of hungry people alters nothing about the logical step involved.
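    The logical gap described above can be made concrete with a small brute-force validity check. The propositional encoding below is our own simplification (the atoms "hungry", "bad" and "ought" and the helper `valid` are illustrative, not part of the original argument): an argument is valid only if every assignment of truth values that makes the premises true also makes the conclusion true.

    ```python
    from itertools import product

    def valid(premises, conclusion, atoms):
        """An argument is valid iff every truth assignment that makes
        all the premises true also makes the conclusion true."""
        for values in product([True, False], repeat=len(atoms)):
            env = dict(zip(atoms, values))
            if all(p(env) for p in premises) and not conclusion(env):
                return False  # counterexample found: premises true, conclusion false
        return True

    atoms = ["hungry", "bad", "ought"]
    conclusion = lambda e: e["ought"]

    # The original enthymeme: only the "is" premise is stated.
    p1 = lambda e: e["hungry"]
    print(valid([p1], conclusion, atoms))  # False: "ought" does not follow

    # With the implicit evaluative premises made explicit:
    p2 = lambda e: not e["hungry"] or e["bad"]    # hungry -> bad
    p3 = lambda e: not e["bad"] or e["ought"]     # bad -> ought
    print(valid([p1, p2, p3], conclusion, atoms))  # True
    ```

    The check captures the point in the text: adding more facts of the same kind as P1 changes nothing, since the counterexample survives; only the evaluative premises close the gap, and they are exactly what was to be justified.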

    Another example could be the so-called "problem of evil", in which it is asked how a good and all-powerful God could allow the existence of evil, or sometimes a specific case that is supposed to be unambiguous—like the death of a small child. The same is/ought difficulty is at work here:

    P1: God is good and all-powerful;
    P2: Evil exists;
    C: God ought to prevent or remove evil.
    The premises talk about what is, while the conclusion addresses what ought to be; however, the implicit premises are missing again: even if we made a long list of every evil thing we could think of, it would still not follow that we can say what God ought to do without presupposing the ethical point we are trying to make.

    (There are other aspects to this problem—the supposed logical conflict between the two premises and the question of whether our ethical ideas are close to or a necessary approximation of God's—that we shall consider in the philosophy of religion introduction to come.)

    The is/ought problem is a major one for ethics and leads some to think that the derivation of an "ought" from an "is" simply cannot be achieved. Others suggest that facts about our world and about human nature can lead to a resolution, as we shall see below.

    Another issue to look at is the naturalistic fallacy, proposed by the philosopher G.E. Moore as a criticism of naturalist or evolutionary approaches to ethics (see below also). He claimed that "good" is a simple property, in that it cannot be defined by reference to anything more simple or broken down further. The attempt to do so is called the naturalistic fallacy.

    To see how this is used, suppose we are trying to find a way around the is/ought problem and look at some feature of the natural world to help us—the sociability of groups, say, or cooperation. We could say that since we have evolved to be social animals, an action that promotes or develops cooperation is good while one that does otherwise is bad; unfortunately, according to the naturalistic fallacy, "goodness" is not the same as "sociability" and so the effort fails. Other examples would do likewise. It is easy to see that a way to criticise Moore's thinking would be to address his claim that "good" is a simple property in the first place, and that is what many thinkers have done.

    Another metaethical topic that bears considering is the motivation behind an action: according to psychological egoism, all acts are selfish in intent; we may help people, for instance, but ultimately it is to make us feel better. Altruism, on the other hand, suggests that unselfish actions are possible whereby we are solely intent on some benefit for others with no concern for ourselves. Some thinkers argue that all purportedly altruistic deeds are in fact reducible to egoistic motivations, while altruists point to problematic instances like a soldier throwing himself on a grenade to save his comrades or a mother sacrificing herself for her child.

    A last problem to study here concerns moral dilemmas, situations in which we are faced with two possible courses of action where both seem to violate whatever ethical ideas we may hold. For example, we might know that a woman is very likely to die in childbirth, but that the only other option is to abort the baby; in that case, we would be faced with the unenviable choice of whether to allow the death of the mother or the baby, both of which seem wrong. Alternatively, we might allow a person to believe something about our feelings for them that is actually mistaken, or different; then we could either allow them to continue believing a lie or tell them and perhaps upset them. This latter is a moral dilemma that comes up often. Lastly, we might be asked to support a war or conflict in which civilian casualties will occur but which will remove some kind of purportedly greater evil; do we go along with it, knowing that innocents will perish, or oppose it and allow the evil to persist?

    Some situations may of course be more complex and have more than two solutions, but cases like these are common enough. How do we resolve the dilemma, if it can be done at all? One answer could be to say that no such genuine dilemmas exist, since we can always analyse them and find a solution. Along similar lines, if we think that there can only be one correct judgement to be made in all cases of moral difficulties, then one of the options must be true and the other false. The problem with both is that it is not always clear how a dilemma can be resolved and to assert that they always can be means that a methodology for so doing is needed.

    On the other hand, we could say that dilemmas can be resolved but not by picking the true course of action; instead, we have to go with making the best of a bad situation and trying to find some kind of accommodation between the two. However, if we deny that all dilemmas can be decisively resolved then we imply that the idea of a moral certainty in action is open to challenge: what of the notion of duty, or obligation, then, if there is no true or correct way we should act in a given circumstance? Some of the possibilities as to how to judge between alternatives when it is not obvious which would be best will now be considered.

    Normative ethics

    We come thus to the systems themselves, or the search for them. In this section we'll look at some of the ideas that have been put forward to guide our conduct and help us determine right or wrong, as well as whether or not we should throw a snowball at a defenceless Hugo.

    The Golden Rule

    Variations on this famous ethical principle have been proposed at many different times, perhaps most memorably by Jesus; simply put, it states that we should do unto others as we would have others do unto us. Thus, before we decide to launch a blizzard of grit-filled snowballs at Hugo, we ask "would we like Hugo to do the same to us?" If the answer is "yes", then fire away; if "no", then perhaps we should hold off? This example is flawed because the chance to bury Holblingian rhetoric may prove too tempting and lead to the rejection of the golden rule, but we can think of many more: for instance, shall we break into a home and steal the treasured jewellery of the occupants? Probably not, since we would not approve of them doing likewise to us; and so it goes.

    Notice, though, that there is a slight conflation of the rule in these instances. To avoid it (and demonstrate the point at issue) we can distinguish between two forms of the rule:

    The positive golden rule: do unto others as you would have others do unto you.
    The negative golden rule: do not do unto others as you would not have them do to you.
    The negative form, then, is what we appeal to when we say "I won't hit Hugo because I wouldn't like him to hit me"—in an alternative universe where this kind of prospect would actually scare us; the positive, on the other hand, is what we employ when we muse "I'm going to be nice to Hugo because I want him to be nice to me".

    The golden rule, in either form, is easy to apply and provides a guide by which we can all live without needing to use any special kind of reasoning or understanding, hence part of its attraction. How can we criticise it?

    Firstly, the golden rule tells us nothing of how we should treat animals, or babies, or the mentally incapacitated, since none of these can do as we would have them do, or not as we would have them not. Secondly, how do we know if we are not asking too much of others? Thirdly, what if we would like others to treat us in particular ways? Using our fertile imaginations, we could come up with certain practices that we might enjoy, but should we then do likewise to others who may not be of a similar disposition? Fourthly, we could find problematic instances: we would like it if a lottery winner gave us half of their winnings, so should we give them the same amount of money, or do we have to win as well first? Perhaps we would like a friend to give us their new car; shall we give them ours? Fifth, what of limitations? Suppose someone is unable to treat us as we expect, but does the best they can all the same? Shall we treat them equally, or do the best we can instead? What of a person trapped in some kind of self-critical circle, wherein they are hard on themselves constantly, and pathologically? Should they turn this on everyone else?

    A general complaint about the golden rule is that it is wholly subjective—it depends upon what the subject wants or would not want. We can try to get around some of these difficulties by saying that we should do as others ought to do to us, but then we rely on other ethical standards to determine what that "ought" implies.

    In summary, the golden rule is a useful tool and easy to implement, but we need to be careful not to be too simplistic when adopting it and bear in mind its limitations.

    Deontological theories

    The term "deon" comes from Greek and means duty, so in the general sense a deontological theory is concerned with our duties, obligations and responsibilities to others. In that case, moral conduct consists in following the normative guide provided by those duties; the problem would be in finding out which duties are the correct ones.

    By way of an example, it could be that it is our duty to not steal from others, even if we are hungry and cannot afford to eat; indeed, if it were the duty of others to help those in need, we would not starve. A network of such duties could conceivably allow for us all to get along tolerably well, even where we disagree about important matters like who should play at first five-eighth for the All Blacks.

    Deontological theories have a long history through thinkers like Grotius, Pufendorf, Locke and Kant to the present day where some people still refer to the notion of "things one just does not do". Part of their appeal lies in the apparent fact that some things seem "self-evident", such as to not go around killing people or hurting children.

    Kant offered a slightly different suggestion for how we should act, called the categorical imperative. This was based not on what we might want or desire but a single principle that he thought should apply at any time or place. He gave several versions of it, but the two most important are:

    "Act as if the maxim of your action were to become through your will a universal law of nature."
    "Act in such a way that you always treat humanity, whether in your own person or in the person of any other, never simply as a means, but always at the same time as an end."
    The first tells us to examine a possible course of action and say "what would happen if everyone were to do this?" In the case of murder, then, society would probably fall apart in short order; for giving charity, it might become a better place for all. In particular, if we take to insulting Hugo and everyone else was to join in, life might become not worth the effort for him.

    The second entreats us never to employ other people as mere tools for our own ends; in short, not to use them. If we make friends with a person, say, not because we value their company or conversation but solely to help our career progression in some way, then we are treating them as something we use to get somewhere and not as a person just like us, deserving of the same dignity.

    The main issue to contend with when looking to criticise deontological theories is how we come by knowledge of what our duty is: how do we know we have the true or proper duty, instead of a false one? Such theories are usually in one sense or another foundational—that is, ultimately based on some supposition or other that is at the base of our structure of knowledge. In other words, when we try to justify our ideas we eventually have to stop somewhere; at this point, we have our assumptions upon which to build everything else and hence they cannot be explained further.

    The first problem, then, is that what counts as self-evident has differed through the ages, and differs between cultures and individual people today. Who is to say which of the many proposed duties are the right ones to adopt? At one time it was considered a duty to condemn suicide, for instance, but the practice now meets with wider tolerance. A second problem is how to decide between duties that may conflict—if two people are deserving of our charity but we only have enough to help one, which do we choose?

    An answer to the second point is often given in terms of intuition; in times of such conflict, we will somehow know or feel which is the stronger and hence act accordingly. Alternatively, or additionally, we must use our judgement of the circumstances and specific factors to help us. Whether this is a convincing reply is another matter.

    Consequentialist theories

    There are several ethical theories that may be broadly called consequentialist, meaning that the morality or otherwise of an action is determined by its consequences. A division is usually made according to the answer we give to the question "consequences to whom?" and runs as follows—an action is morally sound according to:

    Utilitarianism: if the consequences are positive for everyone;
    Ethical Altruism: if the consequences are positive for others;
    Ethical Egoism: if the consequences are positive for the individual.
    Clearly the important issues here are what we mean by positive (or a similar term) and how we decide when consequences are to be so described. Part of the attraction of such theories, though, is that they appeal to experience to justify our ethical ideas, instead of something more vague like intuition or duty.

    Suppose that one evening we are sat comfortably, enjoying reading a copy of How to make your snowballs hurt people, when we hear a ruckus outside; upon investigation, it turns out that someone is being beaten up by a gang of youths. We could call the police, but by the time they arrive it may be too late—what would be the right thing to do?

    If we wade in to help, we may find only that another person takes a pounding; from an ethical egoist point of view, then, it may be best to stay out of it (unless we have studied under Seagal). On the other hand, that is not going to help the victim—and we should be concerned at the consequences for him or her if we are ethical altruists, as well as those for the offenders. Even if we get battered in short order, at least it will give the victim a break while perhaps someone else calls the police. As utilitarians, we ought instead to somehow add up all these considerations and decide what to do.

    How do we decide, then? According to early formulations of utilitarianism, we take each case individually—act utilitarianism—and measure the pleasure against the pain involved in an act (hence the name); to be charitable, the understanding of these terms should be broad. Later the measure was benefit to society or some similar concept. The problem here was that taking an afternoon nap, for instance, does not contribute as much to society as a few hours of voluntary work in the local community—hence the nap is wrong. This does not seem to make much sense to the well-fed post-Christmas armchair pilot. An alternative is rule utilitarianism: this time we consider whether the implementation of an action as a rule would be beneficial to society. Killing someone, for example, would be catastrophic for society if turned into a rule.

    There are several criticisms we can make of consequentialist ideas, or utilitarianism in particular. A typical example of a problematic issue is slavery: if a small proportion of the population were used to support the majority, perhaps a great benefit could accrue to society overall as a result? However, slavery still seems wrong to many people; act utilitarianism appears to fail. Nor does framing the matter as a rule obviously help, since we could adopt two separate rules, one for slaves and one for non-slaves. Another similar point concerns a situation in which the current circumstances are unfair or undesirable but no change is proposed; in that case, a bad system could perpetuate itself. Rule utilitarianism does not necessarily help us because there may be two or more possible rules that seem equally good; how are we to choose between them if their consequences do not differ in any significant way? Consequentialism is also said to fall victim to the naturalistic fallacy.

    There are other aspects to attack but perhaps the most important and difficult to deal with is the epistemological question: how do we know what the consequences are or will be? Usually we can make an educated guess, considering as many factors as possible, but everyone is aware of guesses that missed the mark or were completely wrong—like the weather forecasts. Is a best approximation enough to allow an action or rule that may be wrong when its consequences have played out? What about those affected while this process is ongoing?

    In recent times a new perspective has been added to consequentialist ideas in the form of cost-benefit analysis, which consists in part in applying some economic ideas to ethical and political actions by weighing the perceived benefit of an action or policy against its expected costs. This has some obvious quantitative attraction but has drawn criticism insofar as such a method seems to ignore the kinds of decisions made by people for ethical reasons. The building of a mall, say, may be subject to such an analysis but does not appear to many people to address their preference for keeping the area as open fields or with local shops. No matter how clear the benefit in such a situation may be, the people in a community may still claim that they prefer to buy their newspaper from the same store, the proprietor of which they may have known for very many years. How do we judge this on the basis of consequences?
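    Both the quantitative appeal and the blind spot can be seen in a toy sketch of the mall example. Every figure below is entirely hypothetical—the point is only that whatever cannot be priced never enters the sum:

```python
# A minimal cost-benefit sketch with invented figures for the mall decision.
# The community's non-monetary valuations (the familiar shopkeeper, the open
# fields as such) have no entry in either dictionary, so the analysis is
# blind to them by construction.
costs = {"construction": 5_000_000, "lost_amenity_estimate": 250_000}
benefits = {"jobs": 3_000_000, "tax_revenue": 2_500_000, "consumer_savings": 1_000_000}

net_benefit = sum(benefits.values()) - sum(costs.values())
print(net_benefit)      # 1250000
print(net_benefit > 0)  # True: the analysis recommends building
```

    However carefully the estimates are refined, the method can only compare what has been assigned a number—which is precisely the criticism above.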

    Indeed, another sense in which consequentialism simply does not feel right to many people is the way in which it operates: do we really judge actions by their results? Sometimes, perhaps, but we also say beforehand what we think. For example, if someone talks of the consequences of hurting a child and on that basis calls it wrong, there appears to be something strange about even considering the matter in this way; that is, it does not take into account the psychological process going on when someone says "hurting children is wrong". Consider, for instance, how we would react to someone offering the proposition "hurting children is wrong because the consequences of this act as a rule for society would be undesirable". It may be correct, but it somehow misses the point entirely.

    Virtue theories

    Virtue ethics dates back to Aristotle and beyond, as we saw above. Instead of defining ethics by rules that ought to govern our conduct, virtue theorists prefer to advocate the learning and development of character habits. The Greeks noted that a kind of middle way was possible; a self-respecting man, for example, could become vain if he had too high an opinion of himself, just as he could become desperate if he lacked the trait completely. The same could apply to prudence in financial matters, where too much could lead to living like a pauper in the midst of riches while too little could result in genuine poverty.

    Although this seems to reflect the way we often think or talk about ethical conduct, there are a number of shortcomings. Firstly, habits of character or admirable traits do not tell us how to deal with moral dilemmas or those instances of applied ethics that come up regularly, like abortion or the death penalty. It is unclear how we are to deal with lapses in conduct; suppose a normally brave soldier is cowardly once—how should we judge this? What, also, of specific acts like the murder of a child? Should we pass over a temporary period of failure in the hope that a person's conduct will improve in the long run? Is there any sense in saying we have found the true habits of character that should be advocated, or do they differ and depend on circumstances? Lastly, what of the wide variety of cultures and the different modes of conduct they each value?

    Evolutionary ethics

    Some of the insights gained from evolutionary theory have led to the consideration of ethics from this perspective, with E.O. Wilson coining the term "sociobiology" for the "study of the biological basis of social behaviour". If we use our intellect to determine ethical conduct and this intellect (and its physical seat in the hypothalamus and limbic system, as Wilson notes) has evolved, then it makes sense to ask what this insight can tell us about our ideas of right and wrong.

    Take, as a standard example, a society in which all the individuals are selfish and concerned only with themselves. It seems fair to say that such a society would not last long unless some kind of cooperation developed. In that case, notions like helping others, honesty and of course cooperation would be likely to be selected while the converses would not; that is, these traits would have a biological explanation.

    This means that the question "why do we behave ethically?" can be answered by saying that we have evolved to be that way. The other—perhaps more important—matter of what we mean by good is not addressed, though, but it may not necessarily need to be. If we ask "is it right to hurt people for no reason?", we could consider the question in evolutionary terms and reply that we have, say, evolved in general not to do so; that tells us that we are going to be inclined to think—and act—as though it is not, regardless of some ultimate answer to the question. It could be, then, that the issue is not "what is good?" but rather "why do we act as we do?"

    Reflective equilibria

    The late John Rawls is considered by some to be the most important political thinker of the last century, and with good reason—not least his Lectures on the History of Moral Philosophy. His ideas have a bearing on our studies here because the fount of all his work is the notion of justice as fairness. His two principles hold, roughly, that each person is to have an equal right to the most extensive basic liberty compatible with a similar liberty for others; and that social and economic inequalities are to be arranged so that they can reasonably be expected to work to everyone's advantage and are attached to positions open to all.

    The first point says, in part, that if anyone is going to be free, they can only be so free as everyone else may also be—fairness; the second remarks that if there are going to be inequalities then we must expect something good to come of them eventually that is of benefit to everyone—fairness, again.

    How, then, are we to determine what is fair? Rawls proposed a methodology for doing so that he called the veil of ignorance, meaning the attempt to investigate a possible action as if we knew nothing of our own or others' circumstances. Take, for example, the idea that a minority of people should work as slaves for the majority; for simplicity, suppose there are three of us—living on a desert island, perhaps—and one will be the slave. Under the veil of ignorance we set aside any information we might have to hand, such as which one of us is going to be the slave; if we knew that, we could easily be swayed in our decision by reflecting on the consequences for us personally. All we know is that one of the three of us will be a slave, and this does not seem fair at all: it could be us, for no reason other than drawing the short straw.

    Clearly we do have personal biases and other information to help us decide such things, and to deal with these Rawls offered what he called the reflective equilibrium. The idea is that we study a question under the veil of ignorance to try to understand what the fair response should be, before opening it to further arguments and comparing it with what we already think about justice. After this process we have reached a kind of balance or equilibrium, but we nevertheless continue to reflect on it and take account of whatever new information or circumstances come along. Hence we have a constructivist ethical approach—one that is built up and, like a house, open to adjustment while still remaining grounded in the concept of fairness.

    Rawls expanded on this outline at great length but it will be useful for us to consider an example here of how we can apply this basic insight. Suppose that we look at restricting the freedom of speech: under the veil of ignorance, it could be proposed that some of us not be allowed to speak as we choose while others are, even if they use that freedom to babble about philosophy instead of Frisbee. This hardly seems fair, and we can see no reason as yet why this unfair application would result in benefit for all somewhere down the line, nor why the liberty to speak as we choose should not be extended to all.

    Now we reconsider this decision in the light of other information. Should freedom of speech be unrestricted? Apparently not, given the strong arguments to the contrary that are available elsewhere—so the "level of generality" is very much an issue. What have we learned from previous times when freedom of speech was or was not restricted? And so on. We eventually arrive at a decision that represents a balancing of all available factors but which acknowledges that things may yet change and have to be looked at again.

    Applied ethics

    Earlier we said that applied ethics is the study of those issues we see every day with the application (hence the name) of the ideas we have looked at above. The key feature of an applied ethics problem is that it concerns something controversial and undecided; we do not, for instance, look at whether killing children is good because everyone agrees that it is not, regardless of how they come to it.

    Topics to consider

    By listing some of the ethical issues recently or still under debate, we can understand the importance of this area of philosophy and why all the techniques we have seen to date in our series have a bearing on our everyday lives. Consider:

    Should we permit stem cell research?
    Should we allow abortion (whether at all or at different times)?
    Should we keep information confidential?
    Should we tell the truth to worried relatives or dying patients?
    Should parents be allowed to decide what education their children receive?
    Should religious beliefs overrule doctors when choosing treatment?
    Should we stop people from committing suicide if we can, or discourage it?
    Should we allow or support assisted suicide?
    Should we allow animal research?
    Should we allow capital punishment?
    Should we support marriage as an institution?
    Should people be monogamous?
    Should we allow gun ownership?
    Should we act pre-emptively against potential criminals?
    This is just a sample of the matters we can see in the news or hear about while queuing in the post office and wondering about the morality of bending someone's ear when they only want a stamp or two.

    Philosophical techniques

    We can use the advice we saw in our second and eighth essays when approaching ethical issues, as well as the understanding of terms gained throughout the series so far. In particular, if someone proposes an argument for or against some action and we find it to be flawed, we may still be able to improve it or learn something from it. We can now finish our discussion by looking at an example of a way we could approach an issue—not the only way, though.

    An example

    Consider the matter of interaction on the internet. Now that many people have access to the wide and diverse community provided by internet resources, we have had to adjust to a new way of dealing with others. How should we behave towards those we meet online?

    There are probably as many possibilities as there are people, but we could look at some of them. For example, we could behave:

    As we would do in person.
    With greater care, since we cannot use body language or emphasis to clarify what we mean.
    However we choose, thanks to the anonymity.
    And so on. In these and other cases, we could ask why we ought to act in one way and not another. Instead of trying to see the issue from one perspective only, we can use those we studied above and see if we can arrive at any kind of conclusion as a result. Note that we assume some kind of idea about how to treat people in person and are asking how to extend it (or if we should) to interactions on the internet.

    Take first a duty-based approach: we have a responsibility to behave in a certain way towards others, whether it be due to a religious moral code, the categorical imperative or a stiff-upper-lipped injunction as to what a gentleman should or should not do; should it be extended? Suppose we say no; in that case, an injunction to act in the specified manner no longer holds simply because of the distance between people. After all, it could be that the person at the other end of the connection is halfway around the world, but they could be in our town, or in the next room. Does our duty to them alter as they move further away? That seems like a difficult claim to justify.

    Now suppose we say yes; are the duties we have magnified by the additional difficulties we face or do they remain the same? The latter appears a little simplistic, since we rely on many additional cues in our everyday interactions beyond what we write. Indeed, we could think of it as akin to composing a letter to someone; we have to bear in mind how our words may be interpreted and be more careful than we would if speaking to them—especially if we are trying to make something clear or get across some important information (a Dear John letter, perhaps).

    Secondly, we could adopt a consequentialist approach: what would be the result of each possibility? In this case, it very much depends on the goal we are aiming at and which form of consequentialism we employ. If we want to use the anonymity we are afforded to be free of some of the ethical constraints we might otherwise face, we have to consider whether what follows from this would be desirable for us, for others, and for society as a whole. The same would apply to the alternatives.

    A virtue perspective would follow closely the remarks we made for duty theories. If there are certain habits of character that are to be followed or encouraged, then the distance between people hardly seems like a good reason to abandon them. We do not necessarily know how to behave at particular moments, but the general advice would apply for the same reasons as before.

    The other areas we studied above can also help us here; indeed, the golden rule in both its forms would seem to be excellent advice and we would do well to bear in mind that different cultures have different ideas about how we should act. As a result of these considerations, we can come to a tentative conclusion about how we ought to act based on each of the methods. All of them may be open to further criticism and may change if and when other information comes to hand. Rather than choose between normative codes, though, we can see that looking at a problem from all angles (or as many as we can manage) is a useful way to approach applied ethics.

    This is a basic discussion and can be made a lot deeper with more thought and application. Already there are many such studies available (on the internet, no less) that try to understand how ethics apply to new situations like this one that arise as our world changes. The questions we ask about how to relate to one another in circumstances that are rarely—if ever—the same ensure that this aspect of philosophy will remain relevant to us all.

    Dialogue the Eighth

    The Scene: Trystyn and Anna are walking home, having left Steven and Jennifer. They appear to be proceeding at leisure, not taking the shortest route.

    Anna: How do you think Steven is doing?

    Trystyn: Not so good.

    Anna: Why not? They seemed to be getting on well.

    Trystyn: Perhaps some other time, but she's already involved.

    Anna: Oh. (An uncomfortable pause.) Does he know?

    Trystyn: I don't think so.

    Anna: And you didn't tell him?

    Trystyn: (Quietly...) No. (Another pause.)

    Anna: Why not? (She doesn't sound very happy.)

    Trystyn: I guess I didn't think of it.

    Anna: Did you think he'd figure it out for himself eventually?

    Trystyn: I thought she'd tell him.

    Anna: When? You could see he liked her. It's a bit late to leave it until he makes a move and gets shot down. You knew it wasn't possible but said nothing.

    (Trystyn sighs.)

    Anna: It was pretty obvious what would happen and you could have prevented it. That's wrong, whichever way you look at it.

    Trystyn: What ways are there? He's not a fool; I can't hold his hand. I didn't tell him how to feel and I didn't even introduce them.

    Anna: Nonsense. You had plenty of opportunities to have a quiet word.

    Trystyn: I guess so.

    Anna: There's no guessing about it. The consequences of allowing him to believe he had a chance were pretty clear at the outset. You wouldn't want him to do the same to you, so why do it to him? Don't you think you have some kind of duty to your friends to keep them from being upset if you can? What if everyone acted this way?

    Trystyn: I had other things on my mind.

    Anna: What difference does that make? Questions of how to treat people don't come along at convenient times so that you
    Teaser Paragraph: Publish Date: 06/10/2005 Article Image:
    Michael Ruse is Lucyle T. Werkmeister Professor of Philosophy and Director of the Program in the History and Philosophy of Science in the Department of Philosophy at Florida State University, Tallahassee. He kindly agreed to be interviewed on the subject of the philosophy of biology and its part in the creationism debate, as well as on some wider, related issues. He is the author of many works, including the recent Can a Darwinian be a Christian?, looking at the relationship between science and religion.

    - Interviewed by Paul Newall (2005)

    PN: You recently worked with William Dembski on the volume Debating Design. Some commentators have complained that in so doing you have afforded Intelligent Design credibility it does not deserve. Why do you think engaging ID is a better option than ignoring it, as other academics like Gould or Dawkins have done?

    MR: Well, let us face up to it, I am neither Gould nor Dawkins, so what I have to say or to do has nothing like the effect that surrounds them. In one sense, I think I might be giving ID credibility, but in another sense not. I am very critical of ID and never conceal this fact. I put together a volume that has others very critical of ID. I am convinced of the power of reason - it is what I stand for - and by putting the stuff together I hope that some people might read and make the right decisions. Also, I am very much aware that before the Arkansas trial, we evolutionists did nothing, and then things happened. So this time, I try to do something! I did ask Dawkins to participate, and he refused and I respected that decision - I still do.

    PN: You have said with regard to Creationists that you "think our arguments are better than theirs and hence I am willing to be judged alongside them". One objection to debating creationists, however, is that skilful rhetoric and well-directed sound bites may win the day over reasoned argument, especially given the limited time allotted to most such encounters. How do you answer this criticism?

    MR: True, sound bites do succeed too well. The last presidential election showed this. But I would like to think that there might also be a place for reason and this is where I try to come in. If reason does not count, then close down the universities, especially the philosophy departments.

    PN: One of your own complaints against evolutionists is that some of them have been "too damn busy doing their research while Rome burns around them". What are the issues surrounding Creationism that have you so concerned and why are they important enough that academics should put aside their work to address them?

    MR: I think that creationism is fundamentalist religion of a particularly silly kind - rapture and Armageddon and all of that. It is also dangerous, as we see in the blind support of Israel by creationists. (Don't read me as being anti-Israel, I am not. I am anti-anti- any kind of critical discussion of the Israel-Palestinian problems, and I think the settlements are just dreadful and wrong. I also feel shame as a European that we made Europe so awful for the Jews that they felt they had to leave.) I see the Iraq invasion as part and parcel of the simplistic Christian attitude that other religions are bad and that we can distinguish black from white and clean things up readily. Saddam Hussein was an evil man, but sometimes in this world one has to be more subtle about dealing with evil.

    PN: What role does the philosophy of science (and in particular the philosophy of biology) have to play in the Creationism debate and the wider issues surrounding it?

    MR: Look at my collection But is it Science?, or my new book, The Evolution-Creation Struggle. I think we can try to understand things, both historically and conceptually.

    PN: Following your experiences in Arkansas in 1981, Larry Laudan was critical of the approach that sought to define science, thereby leading to Creationism being judged non-science and hence not suitable for the classroom. Why did you feel this was the best course to take and do you still think the same in hindsight?

    MR: We were fighting a court case. The US constitution bars the teaching of religion in science classes; it does not bar the teaching of bad science! We here at FSU are debating the possible existence of a school of chiropractic that the legislature has pushed on us. Many of us are arguing against it, but not on constitutional grounds. Larry Laudan is a Monday morning quarterback.

    PN: Other than Creationism, what are the major issues concerning the philosophy of biology today in your opinion?

    MR: The usual - species, reduction, the nature of evolutionary theory, teleology, and so forth. Evo-devo (evolutionary development) is the big topic in evolutionary biology at the moment and rightly attracting the attention of the philosophers.

    PN: How would you explain the relevance of the philosophy of biology to laymen and, more specifically, those who are hostile to philosophy in general?

    MR: Don't bother - I have other things to do. I defend my job as a teacher by showing that I am educating my students about important issues and teaching them how to write and so forth. For the rest, I simply say that man does not live by bread alone and leave it at that. (That is not quite true, because obviously I am interested in social issues - but I don't justify the doing of philosophy of biology as such in pragmatic terms)

    PN: What would you say the representative opinion of the philosophy of biology is among biologists? In general, do scientists appreciate the input of philosophers of science?

    MR: I could not care less, and that does not bother me at all. I think the biggest mistake a philosopher can make is to try to be a handmaiden for others. I have my problems and biologists have their problems. Leave it at that.

    PN: What projects are you currently working on?

    MR: I am trying to understand the natures of science and religion compared to each other. I am working now on a book about change and innovation in science and religion and whether they have similarities. We face in America today a big divide between science and religion, and I want to see if this is just contingent or necessary. I will not solve all of the problems in my lifetime, but I can try! I have another book coming out, Darwinism and its Discontents, that takes on the anti-Darwinians - religious and otherwise.
    Teaser Paragraph: Publish Date: 06/10/2005 Article Image:

    By /index.php?/user/4-hugo-holbling/">Paul Newall (2005)

    The notion of truth comes up in many contexts, not just philosophical, but very often a discussion can come to a grinding halt when it becomes apparent that differing understandings of the term are being employed and the dreaded question rears its genuinely ugly head: what is truth? Pilate found that no-one could provide him with an answer and, as we shall see, the answers we may give have consequences for both how we may tackle problems of wiseacring and what we are ultimately aiming at when wondering in the first place.

    What is truth?

    Before we can move on to consider the various theories of truth offered, we need first to look at the basic aspects any version relies on and some of the initial difficulties that turn up. To begin with, we have truth values: the truth or falsity (or otherwise) of something. Here we mean "value" in the same sense as in a sum of how many times we've given up on a debate as soon as truth was mentioned. A truth value could thus be "true" or "false", but other options exist in different logics (as we saw in the fourth article in this series) or the value could be "indeterminate"—like saying "don't know" in response to "true or false?", or even "can't know".

    What kind of things can be true?

    There are many options we could consider when asking what exactly has a truth value; for example, beliefs, statements, sentences, propositions and theories. These are all candidates but some have proven problematic; take sentences or beliefs, for instance. When we ask if a sentence or belief is true, do we mean a specific instance or a general one?

    In the first case, we could have a remark like "Hugo is dull", which is most certainly a candidate for being true, but what if there is no-one around to make up such sentences? The same would go for a comment like "philosophers cannot fly"; it seems like the kind of thing we might want to say is always true, but if there are no philosophers or skeptics around to compose the sentence or utter the belief (or if no-one chooses to do so) then it cannot bear a truth value. Another problem potentially could be that there are (presumably) far more true sentences or beliefs than could ever be stated or written, leaving us falling well short.

    In the general case, the concern lies in the very generality itself. If we want to say "I think Hugo is duller than a winter graveyard" then it may be true for some people when they take the place of the "I", as in "I, Count Duckworth-Smedley of Ditchwater, agree with the aforementioned sentiment", but not others—Hugo's mother, hopefully.

    In order to get around these issues, some people have suggested instead that propositions are what we assign truth values to. One benefit of this is that several sentences in different languages all end up describing the same proposition, instead of being distinct statements, sentences or beliefs. The hope for propositions is that they express something outside of time and not dependent on the existence of human (or other) observers; thus, "the earth orbits the sun in just over 365 days" would be true (or later false, in the event of some cosmic occurrence) whether or not people exist to think of or write down this particular notion, whereas "honestly, Hugo really takes the cake" depends on the existence of several persons and their valuations.

    Problems and paradoxes

    Propositions are subject to some of the same criticisms as we saw above and some thinkers find them deeply problematic, but they are typically employed in discussions of truth and so we shall do likewise to get any further in our analysis. In order to do so, though, we need to give a little more thought to the distinction between what is or is not a proposition.

    Usually we separate declarations, instructions and questions, saying that the first (declarative statements like "it was raining when Wilkinson dropped the winning goal") express propositions while the other two (imperative comments like "please land that drop goal" or interrogative remarks like "what was the weather like when Wilkinson kicked it?") do not. However, some famous examples of difficult declarative statements that may or may not do so have led to disputes and paradoxes.

    Suppose a proposition employs a term that does not refer to any extant circumstances. The oft-used example is Russell's "the present king of France is bald"; there is no king of France today, so can we still say that the proposition is true or false, or is it meaningless? We could find other similar instances, like the properties of (supposedly) mythical creatures: can the propositions "griffins have bad tempers" and "pixies are helpful" have truth values? Some thinkers argued that these do not give us propositions, while Russell thought that they did—only false ones.

    Another issue studied by Russell (and others) concerned the liar paradox and similar "liar sentences" in general. If we consider the proposition "I am lying to you", we run into difficulties when we try to assign a truth value: if true, then the speaker really is lying, so what is said must be false; if false, then the speaker is not lying after all and is telling the truth, which makes the statement true. This is a paradox that seems to trap us every way we look at it.
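    The deadlock can be put mechanically. A minimal sketch (an illustration of the puzzle, not a formal treatment): any truth value assigned to the liar sentence would have to equal its own negation, and no Boolean value does.

```python
# The liar sentence asserts its own falsity, so a truth value v
# assigned to it would have to satisfy v == (not v).
# Neither candidate works, which is the paradox in miniature.
candidates = [True, False]
consistent = [v for v in candidates if v == (not v)]
print(consistent)  # an empty list: no self-consistent assignment exists
```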

    Although much ink has been spilled on this issue, one way around it that we have already touched on is simply to say that the original statement was not a proposition after all. Another could be to note when paradoxes come up but still use the approach where it works for the vast majority of cases.

    A third area to look at concerns ethical or aesthetic statements, such as "it is wrong to use animals in medical research" or "Beethoven's symphonies are more valuable than Elvis' work". Do these express propositions? Can we assign truth values? Some thinkers have argued in the negative, insofar as such remarks merely tell us the opinions of the person saying so. On the other hand, those who consider that ethical or aesthetic values can be determined in some way (for example, moral realists—as we saw in the sixth article and will touch on again soon) suggest to the contrary; after all, if we can decide what is right and wrong (to also be discussed in the next piece) then it will be a relatively simple matter to compose ethical propositions that are true or false.

    Lastly, what of propositions concerning those events that may or may not have happened in the past or might happen in the future? We glanced at this matter in the fifth article of our series, but for now we could consider the proposition "Caesar sneezed when crossing the Rubicon", or something similar: what can we say about its truth value? It's trivially apparent that if he did sneeze then it's true, or false if he didn't; what, though, if we don't have access to the information to help us decide? Short of a time machine, it's hard to see how we could come by anything to help us.

    Much the same occurs when talking about the future: take the proposition "Hugo will be more interesting tomorrow than he is today"; how can we know whether this will be the case or not? It certainly seems unlikely, but that doesn't help us assign a truth value—unless we are happy with "indeterminate" or something along those lines.

    One way around this is to note the way we actually reason about future events. We might say: if it rains tomorrow, we had better do the washing today; moreover, we have reason to believe that it will rain—a weather forecast, for instance. If someone then asks "why are you doing the washing today?" we could reply with the proposition "[because] it will rain tomorrow". This seems like a valid argument (and one that we use often enough in a similar form) but the proposition may not have a determinate truth value; it seems folly, though, to discard the argument on these grounds.

    Forms of truth

    So far we have discussed truth values and what may be true (or false), but what do we mean by truth itself? There have been many versions of truth put forward, each subject to critique in general terms or in favour of another that purports to address these shortcomings. We cannot cover them all, but in this section we'll take some possibilities and consider their strengths and weaknesses, trying to understand why we might choose one or more of them.

    The correspondence theory

    Many people use a form of the correspondence theory when speaking of truth: a proposition is true if it corresponds to the facts, or reality, or how things actually are. There is a subtle difference here, though: are "facts" and "how things actually are" the same thing? If we look at how we would state that a proposition is false, we can see the distinction: in the first case, the proposition would be false if it does not correspond to any fact; in the second, it would be false if it corresponds to how things are not. In the latter, then, we have a kind of comparison to something that doesn't exist—the way things are not.

    Although there is some dispute on this issue, we won't consider it in any further depth here. What, though, are facts? Looking back to our third article, we could ask what the ontological status of facts is supposed to be: take a proposition, say, like "England beat Australia in the 2003 World Cup Final"; is there a fact to which it corresponds? If so, is it the same fact as the one corresponding to the proposition "Australia were beaten by England..."? If so, does the first proposition also correspond to the facts that "[team x] were not beaten by England", for any other team x? And so on. When we consider instead "the way things are", we have other difficulties. If we say that a proposition corresponds to "the way things actually are", surely there is only one such way, to which all true propositions must correspond. This doesn't appear to say much beyond a triviality.

    Another objection could be to question the nature of the correspondence relation itself, which seems mysterious. It hardly seems likely that the individual words in a proposition correspond to the facts; rather, the proposition as a whole does. In and of itself, the correspondence theory doesn't appear to say much until we expand on it and note what it means: in our example, we want to say both that the proposition means what it says and that England actually did beat Australia. Lastly, is the correspondence theory true itself and, if so, what does it correspond to?

    The semantic theory

    In 1944, Alfred Tarski proposed his semantic theory as a successor to the correspondence theory, expanding on it somewhat but dropping the problematic concepts of facts and correspondence. He suggested that a proposition is true if and only if the claim it makes about the world holds. Thus, the proposition "Hugo is dull" is true if, in fact, Hugo really is dull; conversely, if Hugo is dull then the proposition "Hugo is dull" is true. More generally, we have "'p' is true if and only if p", where p stands for some proposition and the quotation marks around the first occurrence signal that we are naming the proposition rather than using it. A similar rendering would apply to falsity.

    This is an improvement on the correspondence theory because we can write the condition as follows:

    The proposition ("Hugo is dull") is true (1)
    if and only if (2)
    Hugo is dull. (3)
    In this layout, only (1) is talking about truth, while (3) is a claim (which may or may not be accurate) about the world; any reference to facts or correspondence is gone. Note that we are not saying "Hugo is dull if and only if Hugo is dull", which is trivially so (a tautology) but says nothing about truth. Tarski was concerned to separate what he called the object language (the part in quotations, describing the object of discussion) and the metalanguage (the rest of the sentence, in which we talk about the object language).
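    In logical notation this biconditional is often set out as Tarski's T-schema; a standard rendering (the notation is conventional, not the article's own, with corner quotes naming the sentence) is:

```latex
% Tarski's T-schema: the named (quoted) sentence is true
% if and only if what it says is the case.
T(\ulcorner p \urcorner) \leftrightarrow p
% Instance: T(\text{``Hugo is dull''}) \leftrightarrow \text{Hugo is dull}.
```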

    The semantic theory is a good deal more complex and involved than this sketch explains, but one of the other issues considered is that of contingent and non-contingent truths: the former are those that may or may not be true, while the latter are necessarily true. Examples could be the proposition we looked at before, "Australia were beaten by England..." which may or may not have been so, contrasted with "twice two is four" which must be (leaving aside certain special number systems). Can the semantic theory account for non-contingent truths, which appear to be true by definition?

    It seems that a distinction between these two may be drawn insofar as we can imagine a world in which Australia beat England (for instance, one in which Wilkinson was born in Perth), but twice two seems the kind of thing that must be four in any world—or all possible worlds, as the terminology often goes. How we go about finding out whether a proposition is true in each case differs slightly (i.e. we need not appeal to experience to justify mathematics, or so some thinkers say) but it does not follow that there is a differing form of truth at work. Moreover, the semantic theory is not telling us anything about how to go about such things, but only that the success of non-contingent truths like those of mathematics or logic may be due to their accurately describing our world.

    The coherence theory

    The main contender to the semantic theory of Tarski is the coherence theory. In general terms, the theory says that a proposition is true if it coheres (or agrees) with other propositions we already hold to be true.

    The easiest way to appreciate what this means is to consider an example: suppose a person drops an expensive vase when browsing in an antique shop and is asked to pay for it. Instead, the person offers the explanation or proposition "I dropped it because an African elephant knocked it from my grasp, since we were arguing over who should buy it". Why might we not accept this story?

    African elephants are not known to talk.
    African elephants are not known to be patrons of antique shops.
    African elephants are not found in this part of the world.
    No elephant was known to be within a certain number of kilometres of the shop.
    No-one else in the shop saw the elephant.
    And so on. Each item in the list is some other proposition we already hold to be true, or approximately so. Given, then, that the person's claim conflicts (or fails to cohere) with the set of propositions we have previously accepted, we reject it and call it false.
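    The test being applied here can be caricatured in a few lines; a toy sketch (the propositions and the conflict check are invented for illustration, and real coherence is of course far richer than string matching):

```python
# Toy coherence check: beliefs we already accept are recorded with
# their truth values, and a new claim is rejected when we already
# hold its opposite.
accepted = {
    "an elephant was near the shop": False,
    "elephants argue over antiques": False,
}

def coheres(claim, value, beliefs):
    """A claim coheres unless the beliefs already assign it the opposite value."""
    return beliefs.get(claim, value) == value

# The customer's story presupposes an elephant near the shop:
print(coheres("an elephant was near the shop", True, accepted))  # False: rejected
print(coheres("the vase was expensive", True, accepted))         # True: no conflict
```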

    Note that this is much the same way as we usually come by knowledge, especially on a day-to-day basis. Moreover, this has nothing at all to do with Ockham's Razor or the likelihood of different explanations; indeed, it seems that when people appeal to Ockham they are usually employing a coherence theory instead.

    To make the coherence theory general we say that "a proposition is true if and only if it coheres with x"; the problematic aspect of the theory—and the resulting critiques—come from what to use in place of x and what we mean by "cohere". The first—and obvious—objection, though, is that a proposition may still be true (in another sense) even if it fails to cohere; perhaps the propositions we already accept are mistaken and need to be rejected in favour of the new one? This has happened very many times in the history of ideas, as we saw earlier in the series, and often we need to see common facts in a new light in order to reinterpret or discard them altogether in favour of a different approach. Insisting on a coherence theory in this way, then, would be poor methodological advice and would have halted some of the changes in our knowledge that we now tend to regard as progress.

    Now we come to the question "cohere with what?" If we answer "those things we already know" then who decides what we already know? After all, there is hardly agreement about lots of things, least of all what truth and knowledge mean in the first place. What if individual people have conflicting sets of beliefs or ideas that they hold to be true and against which they compare new propositions? For instance, the claim "Australia lost because they had a bad game" might cohere with the prior proposition "Australia are too good to lose to England unless they have a bad game" which is accepted by one person (Campese, perhaps) but not another; in that case, we would have a proposition which is true for one and false for the other, which hardly makes sense if we want to maintain the notion that a proposition is either true or false.

    We could say instead that we mean cohere with the majority of people's ideas, or the judgement of experts, but why should we expect either to be a good choice? Moreover, many people believe contradictory things—like some of those we saw in the aesthetics introduction—and the idea of coherence with a group of propositions that are themselves contradictory scarcely makes any more sense. We could try suggesting that we use those propositions that are consistent and believed by the largest number of people, but those people may still be wrong. Alternatively, we could say the same thing but with the caveat that the propositions accepted are those that would be arrived at when the limit of inquiry has been reached, a position put forward by Hilary Putnam; of course, we then have to decide how we know when this limit has been reached. Another approach could be to appeal to the set of propositions that we would accept if we were omniscient, or which would be used by an omniscient being, but we have the same difficulty. If we do not agree that this set—even if it is unattainable—is the one we must be aiming at, then we have to reject the idea that a proposition is, in the final analysis, either true or false.

    Lastly, what do we mean when we use the term "cohere" in this way? We could respond that it means "agree" or "consistent with", but what then do these mean in the context of truth? We want to avoid having to say that two propositions cohere because they may both be true together, since then we already assume the concept of truth in trying to define or explain it. In general, does the coherence theory help us to answer the question "what is truth?" or does it just give us a way to test for it?

    The pragmatic theory

    The principal proponents of the pragmatic theory were William James and C.S. Peirce; according to this version of truth, a proposition is true if it is useful to believe it. Another way of putting this is to say that those propositions that best justify what we do and help us to achieve what we are aiming at are true.

    For example, many people wonder if there is a God—however they understand the idea—and are wont to insult one another on the subject rather than discuss it. Instead of throwing toys and running around the playground calling each other names, though, we could say that it is useful for some (indeed, many) people to believe in God; perhaps they want to make sense of their lives, or justify a moral code, or understand why a loved one has been lost. In this case, the proposition "God exists" would be true. Alternatively, they might choose to explain the way they live their lives or the goals they hope to attain on the basis of God's existence, much as an artist might; then, also, it could be true that God exists.

    One difficulty with this conception is that not everyone finds it useful to believe in God, for whatever reason. This would mean that some people find the notion useful while others do not, rendering the proposition both true and false—a disagreeable prospect just as before. Another criticism is to note that a belief we know to be false could still be useful: for instance, we could tell a dying patient that she is going to get better, or anything at all that might help ease her passing. This renders a false belief true, which is absurd. Lastly, and as with coherence, the pragmatic theory gets us no closer to understanding what truth is.

    The deflationary theory

    Another version of truth that has been advanced or defended by many recent thinkers is the deflationary theory. It has many forms that differ slightly from one another, such as the disappearance, redundancy, minimalist and disquotational theories, but the basic idea is that we can deflate the notion of truth: to say that a proposition like "Australia were beaten by England" is true is just to say "Australia were beaten by England", and no more.

    Some deflationists consider that our attempts to puzzle out the nature of truth are never going to get anywhere because they are based on the assumption that such a nature exists; in fact, truth is just another piece of conceptual baggage that adds nothing to our understanding. Others say that the theory is to be favoured because it shows that a great philosophical problem can have the air taken out of it, so to speak, showing that there was no puzzle after all.

    It is easy to see the appeal of the deflationary theory. When we say that the proposition "twice two is four" is true, we just mean that twice two is four—there is no need to talk about truth at all, it seems. It is also useful, though, insofar as we can use it to generalise over a whole series of specific propositions. For example, suppose we want to say that the current England side will beat any opponent; to do this in propositional form, we would have to say something like "if England played France, England would win; and if England played Australia, they would win; [etc...]", which is much the same as "the proposition ‘England would beat France' is true, and the proposition ‘England would beat Australia' is true, and [etc...]". For some such propositions, we would be at the task for a very long time, especially if the intention was to involve an infinite number of teams (for instance, any team past or future).

    On the contrary, the deflationary theory allows us to reduce this to the common sense (and as we would actually say it) proposition "the current England side would beat any opponent". Moreover, it tells us the total content of the proposition without having to write it all out and without needing to involve any notion of the nature of truth.

    One way of formulating the deflationary theory is via a schema, so-called:

    x is true if and only if y

    where x is a name for the proposition y; the schema then reduces to "y is true if and only if y". For instance, x could be the quoted sentence "twice two is four" and y the circumstance that twice two is four. This would imply that we could describe falsity in a similar fashion:

    x is false if and only if x is not true; or
    x is false if and only if not-x is true.
    A strong objection to the deflationary theory arises from these and concerns those problematic areas we looked at earlier, particularly the possibility of propositions that lack a truth value. Take an ethical proposition, say, that does not have a truth value; that is, it is neither true nor false. In that case, following the schema above, the proposition is neither true nor not true, which is a contradiction. To avoid this we could dispense with the deflationary version of falsity, but it hardly makes sense to accept the deflationary account of truth while so doing. There are other difficulties for the deflationary theory that are still being investigated.
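    The clash can be seen in a couple of lines; a toy check (an illustration only, with the "gappy" proposition represented by marking it neither true nor false):

```python
# A proposition with no truth value is neither true nor false.
# The deflationary falsity schema, however, demands that "false"
# coincide with "not true": here it cannot.
is_true, is_false = False, False   # the gappy case: neither value holds
schema_satisfied = is_false == (not is_true)
print(schema_satisfied)  # False: the falsity schema fails for such propositions
```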

    Other theories

    Some of the many additional theories under study include the revision theory, the identity theory and various versions of the deflationary theory. We have considered the main ones and some of the arguments for and against them, but work in this field continues to advance our understanding of what we mean by truth and what we can do with the concept, if anything.

    Postmodern perspectives

    Later in this series we'll look at postmodernism, so-called, but for now we may consider one of the points made on the subject of truth by thinkers associated with what has since become known as the postmodern.

    It has bothered some people that the notion of truth tends to have a kind of power to it, insofar as it could be interpreted as saying "this is true; your ideas are not". That is to say, truth is power just as much as knowledge is. This would be important politically and socially if one or more groups intended to supplant the ideas of another, or force their own on them; the sanction of truth being applied to them may make this easier, or at the least give the groups a justification for their actions that may convince them to be less than scrupulous. Alternatively, the ideas of certain experts or people of influence may have a prestige attached to them that may not be due to their merits. Thus the acceptance of what is true depends on many social and other factors; moreover, what we accept and hence becomes the consensus is what is true: if everyone believes that we are and always have been at war with Eurasia then who cares if an omniscient being would know otherwise?

    The main criticism aimed at this thinking is that it is a good deal more plausible in certain areas than others. It seems easy to agree, for instance, that a thinker with many vocal supporters who shout down or ridicule their opponents may come to enjoy a greater standing in the intellectual community than a fair appraisal of their ideas might otherwise permit. We might even accept that some areas of research have been chosen at the expense of others because of factors like envy, dislike or friendship with those controlling funding, leading to a true theory being neglected for study of a false one. However, this does not imply that we cannot fly simply because jealous academics have prevented the study of superhero properties in individuals. The standing of an idea in the social sciences, say, is a lot more likely to be due to factors other than its truth than one in physics or biology.

    Truth as a goal

    What is the goal of our efforts to learn about our world, in whatever way we choose to do so? Are we aiming for the truth after all? Perhaps instead we could try for useful ideas, or just those that help us get by according to the notions we happen to hold at a particular time?

    As we saw in our discussions of epistemology and the philosophy of science, there is by no means an agreement on this issue amongst philosophers, scientists or most other investigators. Our theories may be only approximately true, if they are not actually false anyway and shown to be so somewhere down the line. If we can in fact be content with usefulness, or theories that are adequate for the purposes we have, then should we worry about truth or finding true theories? This question is not easily answered and appears to depend on the valuations of the answerer.


    Truthlikeness

    Even if truth is problematic to define or explain, or even not really required, we still have the vague idea that some theories are better than others—closer to the truth, whatever it is, or less wrong. This is what we mean by truthlikeness: stating the degree of truth of a theory rather than its truth or falsity; in Popper's terminology, as we saw previously, it is called verisimilitude.

    Consider the problem of discovering the temperature at which water boils at sea level, along with two estimates: 105 and 150 degrees. The propositions "the boiling point is 105 degrees" and "the boiling point is 150 degrees" are both false, but it seems that this doesn't say enough; in fact, 105 is a better guess, and so to be preferred (we would think). Can we analyse this notion in a way that makes it a meaningful tool to use in our studies?
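    A crude numerical gloss makes the intuition concrete; a sketch only (serious accounts of verisimilitude are far subtler than distance on a number line):

```python
# Treat distance from the true boiling point (100 degrees Celsius
# at sea level) as the degree of error of each false estimate.
true_value = 100
estimates = [105, 150]
errors = {e: abs(e - true_value) for e in estimates}
print(errors)  # {105: 5, 150: 50}: both guesses are false, but 105 is closer
```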

    The approaches taken towards this issue by various thinkers have generally been too complex to go into here, but we can understand how to go about attempting to answer this question. An immediate problem would be that in order to say how far away from the truth a given suggestion is, would we not first need to know the truth itself? If so, we would have no requirement for truthlikeness anymore. On the other hand, if we do not (and perhaps cannot) know the truth, then how are we to measure the difference between it and the suggestion?

    Several possibilities have been considered that use mathematics to model the situation and to try to describe the notion of getting closer to the truth without necessarily knowing what it is. It was mooted in the past that a discussion of the content of two theories could help decide which was the better, but this is now recognised to be insufficient. Much work today is ongoing in the study of truthlikeness and it appears to be demonstrating the manner in which various disciplines such as mathematics, science and philosophy are interdependent.

    In summary

    As we said at the beginning, truth is a concept that comes up more often than we may suppose and upon which many a philosophical ship has foundered. Although, like most important ideas, it is subject to dispute and its nature is far from clear (if, indeed, it has a nature at all), it is one of those terms that we use all the time even in everyday speech and hence is sure to be the focus of much analysis for the foreseeable future. That proposition, of course, is quite true.

    Dialogue the Seventh

    The Scene: Now outside The Drunken Bishop, Steven has offered to walk Jennifer home. Trystyn and Anna head off in the opposite direction.

    Steven: Whereabouts do you live?

    Jennifer: Over by the Ferris wheel. It's a long walk.

    Steven: I don't mind. (He is looking at her awkwardly on occasion, trying to avoid her noticing.)

    (A short silence.)

    Jennifer: How do you think those two will get on?

    Steven: Pretty good, I guess; she was curious about him before they even met.

    Jennifer: Oh? How's that? Would it have anything to do with you?

    Steven: I don't know what you mean...

    Jennifer: He's my cousin, remember.

    Steven: Well, he doesn't do much of anything except reading those books.

    Jennifer: ... so you decided to give him a helping hand?

    Steven: I guess so, but it's up to them. She's a nice girl; maybe they'll hit it off.

    Jennifer: ... and fall in love, do you think?

    Steven: I don't know about that—I doubt he even believes in it.

    Jennifer: Love? I don't follow you.

    Steven: All that bunkum about "true love", I mean.

    Jennifer: You don't believe in it, then?

    Steven: (Sighing...) I can't see any reason to, but all the same I hold out hope.

    Jennifer: What's true love anyway?

    Steven: The real deal; the genuine article. I'm sure you know.

    Jennifer: Perhaps I think I do, but that usually means I don't. How do you know when it's true love, and when it's just love? What makes it the real deal, as you say?

    Steven: I guess you just feel it; sometimes it's accompanied by the swelling of the score, if you're in a movie.

    Jennifer: (She smiles.) Surely "feeling it" isn't much help, though; suppose the other person doesn't feel it—then you'd have one person saying "this is true love" and the other saying "I assure you it isn't", or something. Those statements are contradictory, so they can't both be true.

    Steven: I guess it depends on what you mean by "true", then. (He shrugs.)

    Jennifer: Probably. What do you mean?

    Steven: Hmm. I'm learning not to take you philosophical types on when it comes to questions like that. What options do I have?

    Jennifer: Well, you could say it means something that accords well with what you already know, or think you know. That way, when you say "it's true that I love you" you're saying that the love is consistent with what you already have—like respect, admiration, devotion; that kind of thing.

    Steven: Okay. What else? (He is still stealing furtive glances.)

    Jennifer: You could say that truth is an agreement with the facts, whatever they might be. This time, then, you say "it's true that I love you" because the fact is that you really do love the person; that means the statement is true by virtue of agreeing with this fact.

    Steven: It seems that I'd rather just say "I love you"; what's the point of the additional worry about the truth of it if I mean it and say so?

    Jennifer: That's a possibility too, and a pretty plausible one. In that case, then, truth would have nothing to do with it.

    Steven: It'd just be a rhetorical flourish.

    Jennifer: Exactly.

    Steven: Still, it might make you feel better, or even both of you...

    Jennifer: Sure: if a philosophical analysis fails, it doesn't mean some mean-spirited academics are likely to turn up at your door every time you use a word they say is meaningless or flawed in whatever way.

    Steven: I guess I'd be a mute if that ever happened.

    Jennifer: (Smiling...) All of us, probably.

    Steven: What else?

    Jennifer: You could say that truth is determined by the circumstances, or the use you want to put something to. That's often what people have in mind, I think, especially when it comes to love.

    Steven: So I'd define true love for myself?

    Jennifer: Right. Otherwise we have the difficulty of distinguishing between love, true love and somewhere-in-between-but-not-quite-there love. What measure or test are we going to use? Take true love, which someone presumably knows the definition of, and compare yours to it to find out if you have the real deal, a close approximation or just a poor imitation—common or garden love.

    Steven: It defeats the object of it and kills the notion, I think.

    Jennifer: You're right, I'd say; people aren't talking in such terms when they say "I love you" or "it's true that I love you". Probably you have an idea in mind of what you mean when you say "this is true love" and it becomes the truth by being in accordance with the use you have for it, or the circumstances in which you're going to employ it.

    Steven: What if the significant other has a different idea in mind?

    Jennifer: That's the problem, isn't it? Do you say something and risk the other person having a completely different idea of what you see in your relationship, or do you take a chance on it? What are you aiming at anyway? Do you have to have exactly the same idea, or is there a compromise to be made? Perhaps your version of the truth is close enough to theirs to be compatible, and that will suffice?

    Steven: It's quite a step to take, though: what if you both have completely different understandings but you talk about true love as though you're speaking the same language—when, of course, you aren't?

    Jennifer: That's probably part of the attraction of the very idea of truth in the first place: no grey areas, or dispute, or uncertainty—this is the truth, and none other. In the context of relationships, it's quite comforting to think that there's a match out there somewhere, that one other to complete you. It's far less romantic to suppose there are plenty of people who'll do the job.

    Steven: No kidding.


    Jennifer: Here we are.

    Steven: Oh. Well, what do you think of all this?

    Jennifer: We never really talk about it.

    Steven: We?

    Jennifer: (She sighs.) I should've said something.

    Steven: Oh. (A pause.) Nevermind. Thanks for talking to me tonight; maybe I'll see you again sometime.

    Jennifer: Sure; I hope so.

    Steven: Goodnight. (He turns and walks away quickly.)

    Curtain. Fin.