
    Publish Date: 06/09/2005
    By Paul Newall (2005)

    In this article we'll discuss Political Philosophy, from what we mean by the term and what it's good for, through some historical ideas and perspectives, to the common divisions employed today. We'll also consider some of the philosophical issues behind politics, including the approaches used or assumed before we even get to arguing which party is dragging us to hell in a hand cart quickest.

    What is Political Philosophy?

    There are many questions studied by political philosophy that come up so often that we hardly notice that philosophy is involved at all when considering them. What should be the relationship between individuals and society? What are the limits of freedom? Is freedom of speech a good idea, or freedom of action between consenting adults? When may government act against the will of a citizen, and when should a citizen act against his or her government? What is the purpose of government? What characterises a good government? And so on. Not everyone is interested in these things, of course, but they'll be answered in one way or another—affecting us all. Everyone has a political philosophy, we could say, whether it is thought out in detail or not.

    Political Philosophy is the study of these and other matters, more generally the first—the relationship between individuals and society. Sometimes the subject is nicely encapsulated in the question "how are we to live?" That is: given that few people live entirely alone, we may ask how best to govern our interactions. What responsibilities do we have to each other? Can we do as we please? Is society more important than the individuals that make it up? Political philosophy doesn't exist in a vacuum, though; the answers we might give will depend in turn on our ethical ideas, as well as what kind of world we think we live in and what we may consider the purpose of our time here, if any.

    Historical considerations

    There have been so many political theorists and theories over the years that we cannot hope to cover them all here. Instead we'll look at a few representative and important notions that vexed wiseacres of the past.

    When kings enjoyed absolute (or near-absolute) rule and used their positions of authority to dress like girls and sleep with their sisters, a major concern was how to check or limit the power of sovereigns. It could be a good thing to have someone above or beyond the law to ensure that everyone else was held accountable for their actions, but quis custodiet ipsos custodes? That is, who guards the guardians themselves? It was realised that power could corrupt those who wield it and hence that there should be a means of ensuring it did not, or at least of minimising the possibility of abuse.

    A way to address this (and other abuses) was to advocate the rule of law, not of men. In that case, it would not merely be left to the whim of a king or magistrate to do as they pleased; instead, they would be accountable to laws—a well-known contribution being the Magna Carta. One benefit of codifying expected conduct was that it would show clearly when violations had occurred and hence the contempt of the ruler for the ruled. Some thinkers suggested that any form of government could only exist with the consent of the governed, so even kings realised that they would have to regulate their behaviour according to law or potentially lose their heads.

    It is interesting to inquire how societies developed in the first place. One model proposed that men exist in a state of nature until they decide to join together based on the greater productivity of the division of labour; that is, more can be accomplished together than by individuals acting alone. In order to secure such an arrangement, it would be necessary to develop some form of agreement whereby people respect each other's person and (perhaps) property to improve their lot through co-operation—a kind of social contract. Another possibility mooted is that the political apparatus—rather than the society itself—is the result of the conquest of one group by another.

    A contrast could also be made between the desire to formulate a civil law or constitution—defining and demarcating the nature and scope of government and the rights to be enjoyed by people living under it—and the practice of amending laws and societal arrangements on a case-by-case basis, as some countries did according to common law. More generally, an important question was (and still is): can we achieve the benefits of setting down rules to describe what will and will not be acceptable in relations between people while at the same time taking into account the ever-changing content of and influences on those relations?

    A final issue to look at here is the understanding of what makes a society or political circumstance good or bad. Is strong state control important to safeguard the people, or is that government best which governs least? What middle ground may be found?

    Types of Freedom

    What do we mean by freedom? In a famous lecture of 1958, Isaiah Berlin proposed that freedom could be understood in two senses: positive and negative. By negative freedom, he meant freedom from intervention, while positive freedom is the freedom to do something. In the former case we are unfree insofar as other people can prevent us from doing what we otherwise might want to; in the latter, we are unfree insofar as the opportunity exists to do something but we lack the capacity to achieve it.

    The best way to understand and criticise these conceptions is by examples. Negative freedom, then, would be leaving consenting adults to do as they please in the privacy of their own homes; that is, they would be free from any intervention from government (or anyone else, for that matter) since their activities are no-one else's business. They would be unfree in this sense if, say, a law had been passed making homosexuality illegal even in these circumstances. Notice that we would be unfree in this context even if we are not homosexual, or prefer to watch the rugby on a particular day; the machinations of government have restricted our freedom, whether we choose to exercise it or not.

    Another example would be an unscrupulous landowner blocking access to a public right-of-way. This would be another restriction of our negative freedom because we could otherwise take a stroll and muse on whether philosophy makes the sunset any prettier, whether or not (again) we actually decide to go or prefer to stay in and watch the rugby repeat.

    Berlin explained negative freedom as follows:

    Thus we see that if certain of these doors are closed to us—perhaps because of our sex, religious opinions, colour, and so on—then our negative freedom is the less. A door is not closed to us if there is no way we could actually go through it: for instance, if door 1 is marked "fly to the North Pole with the sole aid of a red cape", our freedom is not restricted by its barring because we do not appear to be able to fly. Note that this negative sense of freedom is what people often mean when they use the word.

    Moving on to positive freedom, Berlin described it in these terms:

    It could be, then, that we want to master philosophy but there is just too much rugby to watch; our inability to discipline ourselves and stick to the task we want to complete means we are not our own masters. Any similar circumstances where we feel let down by being unable to attend to a goal because other desires that we cannot control get in the way (sometimes people refer to a distinction between their "higher" and "lower" selves in this regard) would represent a restriction of our positive freedom.

    A lack of freedom in the positive sense is thus associated with a disparity between what we truly want and what we actually end up with, thanks to a failure to become our own master. Berlin went on to discuss the history of these two concepts of freedom and noted that in the past the positive sense has led to forms of oppression and tyranny more often than the negative, calling this the misuse of positive freedom. The argument runs as follows:

    First it is noticed that there is a difference between the higher and lower selves that may make sense to those afflicted. Before long, though, groups with some form of political power may decide that they know what represents higher and lower better than particular individuals do, and take it upon themselves to insist upon definitions and impose them on those who disagree. It does no good to complain that in fact we want something other than what we are told we want, because this complaint is taken as the result of our lower selves opposing what is actually good for us, and so on—a self-fulfilling prophecy. Even if we fight with every fibre of our being against the imposition of the better or more reasonable idea, this still only represents our lower selves struggling; the truth, we are told, is that forcing us to think otherwise will help us, and hence intervention is not only justified but in our best interest: in the long run, we will learn to appreciate what has been done for our benefit.

    Berlin's survey of the history of ideas suggested to him that positive freedom had been abused more than negative, but there are criticisms that can still be made of the latter. For example, it could be that medical care is available to everyone irrespective of any distinctions and hence people are free to use it; however, if such care is prohibitively expensive, this freedom is beneficial to only a few. The others are not their own masters because even though the door to healthcare is wide open and no one is blocking their path, their financial situation prevents them from going through it. It may even be that these circumstances are no-one's fault, but the beggar still cannot go through, even if he chose to be a beggar, and hence his negative freedom in this context is worthless.

    Is there any way of reconciling the two or preventing the abuse of either? Berlin did not think so and considered the bringing together of the myriad goals people have to be an impossible task.

    Approaches to political philosophy


    There are several ways we can approach political philosophy and they have an effect both on how we perceive problems and how we propose to solve them. A metaphysical decision is taken as to what to study, as well as an epistemological choice as to how to go about it. There are also ethical ideas that contribute, whether explicitly or as implicit assumptions.

    Holism and Individualism

    In the first case, then, we can distinguish between individualists and holists: a methodological individualist is concerned with the individuals that make up a society or group, while a methodological holist (also called collectivist) considers the whole greater than the sum of its various parts. Suppose, for example, we take a statement like "it would be good for society to do x"; to a methodological individualist, this would make no sense at all unless it was understood as "it would benefit the members of society if x was done".

    What is the best way to approach political problems? The answer is not clear, and it appears difficult to reduce either of these methodologies to the other. On the one hand, any ideas we have or decisions we take are going to affect individuals—not a collective noun like Danes (although some of the individuals may have the particular merit of being Danish); on the other, we might want to use such terms to describe trends or actions—especially since a theory of how every individual behaves would no longer be general, as well as being a tall order in any case.

    The point at issue is whether a society (or any other grouping) is made up of its parts (the individuals) or greater than their sum. We can try to find explanations that refer to what individuals did, or groups did; perhaps more helpfully, though, we could use both approaches to see what they suggest.

    Rationalism in political philosophy

    Another (epistemological) question to consider is the extent to which reason is (or should be) involved in political philosophy. Should we search, for example, for an account of how we should behave that everyone would have to submit to, or do the sometimes irrational desires that people have get in the way? To what extent do people employ reason in their political (and other) thinking in any case? Are they instead more inclined to listen to their passions, their social groupings, cultural or religious ideas, and so on?

    The difficulty for a reasoned political philosophy is thus to take note of all those apparently unreasonable things we do. Some thinkers have worried that too much theorising about how to construct a rational utopia could lead to forcing people into a framework that doesn't allow for the subtle or overt differences between them and hence to a form of tyranny. Others have pointed to the diverse ways of living that have developed throughout history all over the world and wondered if it is fair or meaningful to judge them from the point of view of only one of them—for example, the so-called Western way.

    From Berlin's analysis, we could be concerned that if we suppose there to be only one correct manner of living—whatever it is—we might also be more inclined to support the idea of enforcing it on others, ostensibly for their own good. John Stuart Mill recognised this possibility and suggested that what he called "experiments in living" should be supported. In his work On Liberty, which we have already touched upon elsewhere in this series, Mill said:

    He justifies his position in the following way:

    Here he is noting that one form of life (a rationalist utopia, perhaps) is only superior to others insofar as anyone was and is free to try another way and show by example whether it betters the former or not. This is quite a subtle point: consider, for example, the statement "it is better to live in England today than in Australia"—however we choose to define "better" (it could be by reference to drop goals). If we remove the second part—leaving "it is better to live in England"—then it no longer makes any sense: better than what? When we add the reference to Australia, it only supports the statement if we have some kind of information to go on; perhaps we lived there for a time, or know someone who has. Without performing the experiment of living there, though, we have no idea if it is better or not—a kind of certainty through ignorance. Even appealing to measures of some kind is based on the same thing.

    According to Mill, then, it is only by testing a way of life against others that we can appreciate whether one is preferable to another for whatever purposes we might have. If, on the other hand, we believe that it is possible to find criteria by which to judge which experiment in living is superior that no-one can reasonably argue with, then we may after all be able to discuss utopia and bringing it to our world. These criteria, of course, are what have been argued over for many centuries.

    Environmentalism


    In much political discourse the question is the same one that we began with: what is (or should be) the relationship between individuals and society? However, recent work in ethics (along with older perspectives) has suggested that we should not leave our environment out of our considerations: what about the relationship between individuals and their world, or societies and the world that supports them? Perhaps we have obligations to our fellow humans, but do we have similar responsibilities to our environment?

    Generally speaking, then, environmentalism invites us to take account of more than just human concerns when deciding on a political philosophy. We need to search for that arrangement of our affairs that is most beneficial to humans and those other areas that some thinkers consider to have rights or intrinsic value. The problem lies, of course, in just how to achieve that: is there a political system that can be adapted, and an economic one? Can the world remain largely as it is? Some environmentalists, for instance, have suggested that we need to return to a more basic form of existence—sometimes called "primitive", although it need not mean running around naked and clubbing each other. Critics say nothing of the kind is possible; adjusting to such a lifestyle would result in the deaths of very many people who can only be supported by our modern methods—the same methods that are supposed to cause environmental issues in the first place.

    Behind much of environmentalism lies the ethical work that treats of what rights animals and other non-human life have; this will be discussed in a later article.

    The Harm Principle

    Again in On Liberty, Mill suggested his famous Harm Principle in the following terms:

    Using Berlin's terminology, this is a negative conception of how free we should be. The principle states that we may do as we please as long as we do not harm anyone else. By way of example, then, we may get a tattoo if we choose to because it harms no-one else. If the strain of reading still more Holblingian prose gets too much, we could also take flight from a tall building. Although in both cases we do ourselves a (somewhat differing) measure of physical harm, no-one has the right to stop us; by the same token, the immorality or otherwise of our actions is no reason to step in either.

    Although at first glance the principle may seem plausible and is similar to what many people have in mind when they think of how we should interact with each other, it is not difficult to draw out some criticisms. The main problem is what we mean by harm: where do we end and others start when we are considering the harm done by an action? Depending on what tattoo we get, for instance, we could cause a great deal of offence to some people and it is not at all obvious that this shouldn't count as harm. Alternatively, we would probably cause a lot of harm to our family and much strain on the members of the emergency services who have to pick what's left of us off the pavement after our swan dive, assuming that gravity applied on this occasion.

    The general point is that it is not so easy to split the world up into discrete individuals who exist separate from one another; instead, every action, however small or apparently insignificant, has an effect of one kind or another. How are we to determine whether something harms someone else in any case, except by his or her own testimony? If someone says "all this philosophy is making my head hurt", who are we to say otherwise? Similarly, how do we decide which claims of harm are genuine and thus require action to prevent?

    In spite of these difficulties, can we salvage anything from the harm principle? We can. If we are not so concerned with concepts like harm standing up to close scrutiny, and prefer instead to employ them on the basis of intersubjective agreement (that is, an agreement between those using the term as to what it means on different occasions, rather than a fixed definition), then we can say that an action causes harm by considering cases on their individual merits. When we propose to do something and someone else reports that it will (or later does) result in harming them, we can talk it over, investigate a little and decide if in this instance any harm has been caused, even if in the final analysis there may be some people who vehemently insist that it has and others that it hasn't. Thus a more charitable interpretation of the principle leads to something that can be used in everyday life, which is probably what Mill intended.

    Political Philosophies

    There are a wide variety of political philosophies, of which we can only consider a few here. Although many of them may be familiar, we can apply the concepts discussed above to them and perhaps see them in a new or different light. Below, then, we'll look at some of the philosophical aspects only. The standard division runs as follows:

    Socialism


    A great many political ideas may come under the broad banner of socialism, but generally speaking there is an economic commitment: the ownership of the means of production, and the planning of their use, should be held centrally and publicly in some way, rather than privately. Often this position is based on a critique of capitalism, the idea being that public ownership is more ethical or beneficial to people living under such arrangements than private ownership. It is important to remember that not all socialists have a red hue and live under the beds of decent, right-thinking people.

    There are degrees to which socialism is preferred to some form of market economy. Given the failure of some attempts to control economies centrally, some have instead opted to allow a market to operate while maintaining control of certain areas that may be seen as fundamental, such as health services, travel networks and so on.

    The principal philosophical difficulty for socialism is how to distribute resources fairly. If we hope to give to people according to their needs, what do we mean by a need? How do we distinguish between true and false claims of need? Moreover, if we don't continue to impose controls on the distribution of these resources, wouldn't they eventually become unequally distributed?

    In the face of such problems, it is often useful to ask what we're aiming at with a political philosophy: if the answer for socialism is a more just or fair world then even if these concepts prove impossible to attain, we may still choose to at least try.

    Liberalism


    A distinction is often made between modern and classical liberals, owing to the change in meaning that occurred during the nineteenth century. Before that time, liberalism was concerned with—as the word suggests—liberty; that is, providing for toleration of ideas and ways of life, as well as granting as much freedom as possible. This was a negative understanding of freedom, but more recently some liberals began to pay more attention to the notion of positive freedom and sought to provide for fairness and justice. By way of analogy, we could say that early liberals wanted to ensure a level playing field while their heirs wanted to see that everyone had the chance to get a game. Some classical liberals suggest that the latter are not liberals at all, since their plans call for intervention on the part of government.

    Despite their differences, the liberal hope in general is to provide that form of government that best allows people to work towards their goals and adopt the form of life that they choose. What do we mean by "best" here, though? How do we provide the most level playing field when it seems that a purely negative conception of freedom is problematic, as we saw earlier? If we want everyone to get a game, it seems that some people will need more help than others. How much help should they be given before we are being unfair to others who could perhaps use the time and resources to excel or to address some other issue?

    Conservatism


    If it ain't broke, don't give it to Hugo—or so runs the ancient wisdom and any handbook to a modern appliance. Conservatives note that many of our political (and other) ideas have developed over time; those that didn't work or were no longer of any help tended to fall out of use of their own accord. As a result, they are generally reluctant to accept change for the sake of it and want to know why a new notion is going to be of benefit to us.

    As with all of these overview positions, there are now very many variants of conservatism that disagree amongst themselves, but similar criticisms may be made of them all. It is not obvious that political institutions survive because they work or have proved their mettle over time; on the contrary, they may have been imposed on people in the first place, or too few alternatives considered. How long should we give a new idea to establish itself before the conservative finds it worth defending? Conversely, how long should we wait if the idea is too important to delay?

    Anarchism


    Although often used in a pejorative sense, anarchism means a political system without a hierarchy—not a lawless free-for-all of Durdenesque proportions. That does not imply a complete lack of social structures, though; instead, people may voluntarily choose to live according to certain rules or ideas and may similarly choose to do otherwise at a later date.

    Anarchists have the difficulty of determining which structures are natural and which are imposed, a distinction that need not be readily apparent. There is also the question of security: how does the anarchist society protect itself against those states that do not share its ideas and would conquer or otherwise oppose it?

    There are lots of forms of anarchism, of course—some more radical than others. The easiest way to learn the content and differences is to try the experiment of telling several of them that their ideas are ridiculous and then discovering rapidly that they are not.

    Economic issues

    Many of the issues in political philosophy now turn on, or depend in many ways upon, economic analyses—the best way to provide and allocate resources being an example. Nevertheless, these analyses may themselves have been influenced by political and philosophical ideas, so there is an interdependence at play between them. To ignore either is problematic: we need to know the best way to achieve our aims, but we also have to decide what to aim for in the first place and what forms of solution we are inclined to accept.

    In summary, political philosophy is of concern to everyone and affects our lives whether we like it or not, and whether we play a part or take an interest in political ideas or not. Asking how we should interact with each other and our environment occurs in all cultures and at all times, and is probably far too important to leave to the politicians.

    Dialogue the Sixth

    The Scene: Our philosophical friends are back at table, where Steven is hoping to discuss more philosophy with Jennifer—touching on aesthetics a little, perhaps.

    Jennifer: (To Trystyn...) Jeremy is up at the bar. (She motions with her head...)

    Trystyn: (Looking...) Oh dear. (Someone waves at him from across the room and he is forced to smile weakly and wave back.)

    Steven: Who's Jeremy?

    Jennifer: I went to school with him. He's training to be a politician, apparently.

    Steven: Training? How do you do that?

    Trystyn: Just start talking and don't stop to catch a breath or a thought.

    Anna: Here he comes now.

    Steven: (Looking longingly at Jennifer...) What's his problem? Why can't this character leave us alone? We want to talk about philosophy, not politics. (He sighs dramatically.)

    Anna: I'm not so sure there's a separation.

    Trystyn: Let's find out... (Glancing up...) Hello, Jeremy.

    Jeremy: Greetings to all. I spy potential voters.

    Jennifer: What are you standing for?

    Jeremy: Well, you haven't offered me a chair.

    Steven: Look... we were kind of having a discussion...

    Jeremy: (Extending a hand...) Well met, friend—and who might you be? Have you voted? (He pulls up a chair and sits down.)

    Jennifer: This is Steven... and Anna; friends of ours. They're both studying at the physical sciences campus.

    Steven: Voted for what?

    Jeremy: A good question, Mr. Steven, and well asked: President of the Students' Union, of course. You'll have read my position paper, no doubt. The other candidates have all but conceded. I don't envy them—it was an impossible task. The gracious thing would be to bow out now.

    Trystyn: Still honing the rhetoric, I see.

    Jeremy: At least I know I can count on you to do the right thing, dear Trystyn. There are no sidelines when it comes to the issues facing students today.

    Anna: What issues are they?

    Jeremy: Tuition fees, funding, interest on loans...

    Anna: These are all financial matters...

    Trystyn: Students are hard done by.

    Jeremy: As a matter of fact, they are; in any case, students are the future of this country. We don't have time to worry about where the next meal is coming from—students need to be free to exercise their intellect as it takes them.

    Trystyn: You can see that there are a lot of poor students out drinking tonight.

    Steven: (Quietly) Please leave.

    Anna: I don't understand. Why should we be free to do that? Don't we have responsibilities to the people paying for our education, or providing the opportunity for us to have one with their taxes?

    Jeremy: Nonsense. Students are the future.

    Jennifer: You already said that. How about addressing Anna's point? Students aren't a class of superior beings, to be supported by the underlings. If they want financial assistance with their studies then they have responsibilities to those paying.

    Anna: What do you mean by freedom in this context anyway? Why should we be free to waste taxpayers' money on useless courses?

    Jeremy: Yikes! More philosophy... (He looks at Trystyn.)

    Jennifer: These are philosophical questions because you bandy around concepts like freedom without any understanding of them, and political because they concern the interactions between people and society. If you want votes then you'll have to address them.

    Steven: Or you could just leave... (He is looking at Jennifer.)

    Jeremy: Of course I've thought about them, but we need action—not mere words. Students want a fair deal.

    Anna: What's a fair deal? What makes a deal unfair?

    Jennifer: You haven't answered Anna's question about freedom.

    Jeremy: (To Trystyn...) Help me out.

    Trystyn: You should know better.

    Steven: I heard the couple at the next table talking about voting...

    Jeremy: Look—students need to be free from interference—whether it be financial intrusion or some moralistic nonsense. We all know what I'm talking about.

    Anna: Financial intrusion?

    Jeremy: Some people are suggesting that we should pay for our education—all of it.

    Trystyn: It's preposterous...

    Jeremy: Exactly! (He rubs his hands together and appears to be ready to launch into a monologue.)

    Jennifer: What moralistic nonsense?

    Jeremy: Eh? The point is that students must be free of any interference. Would you want anyone telling you what you can or can't study?

    Anna: Suppose that we have this freedom you're talking about—what then? It doesn't mean we'll achieve anything; in fact, if we can do as we please then probably many of us will do as little as possible and come out with a qualification all the same.

    Jennifer: How can you ensure that removing any restrictions will lead to a positive result?

    Trystyn: Instead of leading to the bar...

    Jeremy: This is just talk. I don't see how this philosophical mumbo-jumbo has any point at all.

    Steven: I guess you could leave, then.

    Anna: What about some positive incentives for us to get the most out of our time? Staying in bed all day is just a waste of time and money.

    Jennifer: Perhaps paying for our education might prompt us to take an interest in getting more from our time? The removal of restrictions alone doesn't imply that studies will go any better for us.

    Anna: It seems just as plausible as your "students save the world and make it home in time for tea" notion that allowing students as much money as they like won't have any positive effect at all.

    Jennifer: You haven't answered the other point yet, either. What is the relationship between students and the rest of society, or what should it be? You seem to be taking us in splendid isolation, but we have obligations like everyone else. What's your position on this?

    Trystyn: Perhaps this is just more talk?

    Jeremy: It is indeed. While you all sit around musing, someone has to act to help people.

    Anna: You don't get it, do you? Acting without giving your ideas a basic critique is going to leave you acting on bad advice or achieving the opposite of what you want. There's no separation between thought and action anyway: we act because of what we think and we amend what we think as a result of our actions.

    Jennifer: Meaning there's more to politics than just rhetoric. Relying on people not having enough time to vote against you is all you have, though.

    Jeremy: So are you going to vote or not?

    Steven: No.

    Jeremy: It's as I thought. Consider this, though: for every principled objection or person critical of whatever ideas I or anyone else may have, there are others who vote and decide for you. Any of you can think what you like about me, but come the weekend you'll have a new president all the same. Are you going to have a say in it or not?


    Maybe I should leave now...

    Steven: Well, I already said...

    Jeremy: (To a girl walking past the table...) Excuse me, friend—have you voted? (He moves away.)

    Curtain. Fin.
    Publish Date: 06/09/2005
    By Paul Newall (2005)

    In his The Possessed (also known as Demons) and other works, Dostoevsky employed an advocatus diaboli device, familiar to and used by the Schoolmen and the Church, whereby he offered and defended in detail those notions he wished subsequently to challenge, taking care to develop them to their strongest possible form before attempting to show why they are flawed. This is nowhere more evident than in the case of the character Kirilov (or Kirillov).

    In an early chapter, during the first description of his thinking (entitled "Another man’s sins" for a good reason), Kirilov explained some of his ideas and received the retort:

    The scorn of the narrator here is a marker intended to caution us against a simplistic disregard. Kirilov replied:

    The deception discussed here is quite subtle and is explained in response to the narrator’s remark that people love their lives because they are afraid of death. Instead, said Kirilov, life is fearful and unhappy – precisely because it seems to have no meaning – until men become afraid of death and transfer their fear to it, rendering life something to be loved. He did not find this impressive because he wanted to learn whether life can be loved on its own terms, given its apparent absurdity. This is the question that Camus wrestled with, asking if we could conclude anything from the seeming meaninglessness of life. It may be that the fear Kirilov spoke of comes of the silence that answers our attempts to find meaning.

    Later on, in the chapter "A very busy night", Kirilov expanded on his thinking in syllogistic form when talking with Verkhovensky:

    (Verkhovensky’s comments have been removed.)

    In order to explain why he concludes as he does, Kirilov set out some of the consequences of these premises:

    These comments are replete with religious imagery. The general case of the problem that Kirilov was discussing is that of meaning: it does not appear that any meaning exists for our lives (or at least none that is not defeated by the fact of death), but we apparently require that meaning to cope; as a result, mankind has invented meaning – not once, but very many times – in order to avoid the dilemma that meaning is needed but can never be found.

    If we take the problem in its most general form, it goes back at least to Ecclesiastes:

    It seems that all significance is stripped from life by the fact of death. Camus’s attempted solution was to rebel against this thinking and say that we should live in spite of the absence of meaning, refusing to allow this argument to have any power over us. This does not answer the problem so much as advise going on regardless. Others have since insisted that we can give our own meaning to our actions, but this is not the meaning Dostoevsky was considering nor is it clear how it can survive the challenge of Ecclesiastes. Kirilov’s point, instead, was that inventing God had permitted people to dodge the issue entirely by creating a contrast between death (to be feared) and life (to therefore be loved). In order to reject the dilemma, however, someone would first have to show that it could have no dominion over man. The extent of the freedom thus granted could never be clear until someone expressed it fully by choosing to reject it.

    In many ways, Stavrogin is the most fascinating character of Dostoevsky’s oeuvre and one who shows by his actions in The Possessed the thinking that Kirilov tried to explain. He, too, found life without meaning and refused to invent a fiction to save it – eventually killing himself quietly and without fuss. While he lived, he frequently sought out ridiculous situations in which he acted in such a manner as to confound expectation and cause trouble for himself, because this was the only way he could feel alive after his rejection of the power of the dilemma. Kirilov, of course, had figured this out, and had Verkhovensky spell-bound when he explained:

    When he eventually hanged himself, Stavrogin left a note saying "no one is to blame, I did it myself."
    Publish Date: 06/08/2005
    By Paul Newall (2005)

    Although he had claimed upon its completion that the Three Colours Trilogy would be his final work, Krzysztof Kieślowski was writing (with his long-time collaborator Krzysztof Piesiewicz) a second trilogy at the time of his death, to include films entitled Heaven, Hell and Purgatory.

    When the Polish master died, the script for the first was passed to the German director Tom Tykwer who had himself already plumbed the subjects of fate and coincidence, as well as "the relationship between the two", and was therefore a perfect choice to interpret a work sitting squarely in Kieślowskian territory.

    The plot of Heaven is quite straightforward. Philippa Paccard, an English teacher working in Italy, is distraught at the extent of the local drug trade and the impact it is having on her students, one of whom has just hanged herself leaving a note saying simply "throw me out with the trash". She is recently widowed, her husband having himself overdosed and been involved with a man named Vendice whom she believes to be controlling much of the trafficking. She has been writing to the Carabinieri over an extended period but they have done nothing, due (as we later learn) to at least one of their officers being involved himself. Having discovered a bomb in her apartment constructed (apparently) by her dead husband, she decides to take matters into her own hands and manages to plant it in a wastepaper bin in Vendice's office. At the last moment a cleaner arrives and empties the contents, making her way to a lift to continue with her duties on another floor. A man and his two young daughters already in the lift are killed along with the cleaner when the bomb detonates. Paccard is arrested (having confessed by telephone) and interrogated, believed by the investigators to be part of a larger terrorist network.

    At this point we meet Filippo, an officer in the Carabinieri and son of the former head for Turin. Acting as a scribe for the case, he offers to interpret when Philippa insists on testifying in English. She tells the Carabinieri that she has records of all her correspondence with them, explaining the drug problem and her suspicions, but none are found (due, it is implied, to a Maggiore Pini destroying them to cover his own tracks). Philippa believes she has accomplished what she set out to do until she is told that she killed innocents instead, at which point she breaks down (a spellbinding performance by Cate Blanchett, it should be said). She faints and Filippo rushes to her aid, whereupon she wakes up gripping his hand – a shot that Tykwer lingers over just as surely as Kieślowski would have. Filippo resolves to help her escape, later giving as his reasoning that his younger brother Ariel was in her class and she was his favourite teacher. He passes her recorded instructions, which Pini is able to eavesdrop on. The latter confers with Vendice and plans to let her go, in order to bring about her death on recapture and hence keep their involvement secret, but Filippo changes his plan at the last moment and they lure Vendice to Pini’s office while the Carabinieri are searching for them. Philippa shoots and kills Vendice with a gun Filippo provides, whereupon the pair go on the run and ultimately escape. Where they escape to, however, is the important detail.

    Tykwer has said of Heaven that "the basic theme is redemption". It is how this comes about that provides the depth of the movie, in which the action and the dialogue – even between the main characters, which may not be obvious on first viewing – are minimal. In his message to Philippa, Filippo explains that once they have been able to break her out of custody,

    Even before this, after watching Philippa cry on learning of the deaths she did not intend, Filippo has admitted to his father that he is in love. At first, however, she is not convinced that anything special is occurring and tells him that she agreed to escape not to avoid punishment, which she fully deserves, but only to kill Vendice.

    It is important to realise the situation in which the viewer finds him- or herself at this early stage in proceedings: Philippa has slain four innocent people, even if unintentionally, including two children. Tykwer is careful to spend time with them beforehand as they chat tenderly with their father; in the lift itself, they count the floors as they travel upwards. There can be no suggestion that Tykwer is minimising the extent of what Philippa has wrought, or presenting an apologetic for it. She then goes on to kill Vendice, a man who could easily have been portrayed as the embodiment of evil but instead is given a scene in which he calls his partner to lament his being called away and hence arriving home to her late. Moreover, when Philippa shoots Vendice it is with Filippo's help, the latter holding the door closed to prevent Vendice avoiding his death. For the more observant, too, we can notice Philippa touching wood for luck on her way to plant the bomb and valuing her own life when she is almost knocked down at a road crossing even as she is about to take that of another. She also calls Vendice's receptionist to ensure no one other than her target is hurt, which demonstrates how calculating and considered her actions are.

    What we find, then, is that both Philippa and Filippo are perhaps as far from our sympathies as they could be, and yet there is something wonderful about his unquestioning and immediate love for her in spite of everything that makes us curious about what will happen to them, or how matters can possibly be saved or put right. Tykwer hints at this in a typically Kieślowskian shot (recalling Delpy in the hotel room in White) when the two wake up together in their hiding place, staring into one another’s eyes in silence. It is easy to regard Filippo's behaviour as simplistic, or as a moral failure on his part to realise the magnitude of what she has done and that, straightforwardly, she should be brought to justice for it. Nothing was straightforward for Kieślowski, though, particularly moral issues. Where others had and have the confidence to pronounce on what should or should not be, Kieślowski explored the grey areas where easy answers were seldom (if ever) to be found. The question prompted for us – as viewers – to answer is: given these circumstances, complicated by Carabinieri corruption but where guilt is nevertheless clear-cut, can Philippa be saved?

    Another aspect of life that fascinated Kieślowski was synchronicity, and often in his work lives that ostensibly were disconnected would meet and prove to be intimately related, the most detailed example being the mirroring of Joseph Kern's mistakes in August Brunner in Red, giving the latter the chance to choose differently and hence save the former. This theme of salvation not through grace or faith but rather the simplest of gestures runs through Kieślowski's oeuvre and we pick it up again in Heaven. The synchronicity involved in achieving this becomes apparent when Philippa and Filippo are on a train bound eventually for Montepulciano and she asks him his age. It turns out that their birthdays match, Filippo having come into the world at the moment when Philippa was receiving her first holy communion. We notice (if we have not already) that their clothing is identical, and begin to realise why Kieślowski had Philippa say to Filippo "I don't even know your name", as if there could be any doubt. When they visit the barber and together have their heads shaved, the implied is made concrete.

    With the benefit of hindsight, Giovanni Ribisi was perhaps cast perfectly for the role of Filippo, his constant look of untroubled innocence founded on the certainty of love helping the viewer to suspend disbelief and realise that the congruence of these two lives in so many details is crucial to the exploration taking place before our eyes. Arriving at Montepulciano, Philippa remarks that "it's as if nothing ever happened" and we understand in a flash, as it were, that this line summarises events so far because of Filippo's intervention. When she meets her friend in the middle of the wedding celebrations going on around them, Philippa is slapped in the face and then hugged, forgiven in spite of what she has done. Likewise, Filippo's father meets with them covertly and embraces his son silently, forgiving him, too, as he comments – apparently proudly – "I do know you a little". Both have sinned, whether in the religious or moral sense, but are forgiven in an instant by people who love them.

    Religious aspects were central to Kieślowski's work, particularly after his Decalogue series on the ten commandments. In the Montepulciano church, Philippa engages in what we recognise to be a confession (an impression Tykwer emphasises by opening the shot on the boxes themselves, curtains drawn as she speaks). Sat beside Filippo, she lays out her sins in detail and tells him that she has "ceased to believe in sense, justice and life". Head bowed, he hears her out before looking into her eyes and saying simply "I love you". Later, when Filippo’s father asks her if she loves his son, too, she begins to shake her head and tries to say no, wanting him to take Filippo with him and not allow her to drag him down with her needlessly, but she is unable to. Is it a coincidence, then, that Filippo – her saviour – was born on the day she entered the Church, only to hear her confession years later?

    Given somewhere to stay for the night by her friend, the two venture out into the Tuscan sunset and there follows surely one of the most beautiful pieces of cinematography ever conceived. Shot from helicopter, Tykwer captures the two as they shed their clothes and embrace, silhouetted against the burning sky and symbolic of the angelic Filippo purifying his double. There is no music and no sound save the gentle rustling of the trees. The landscape becomes the canvas on which this individual act of redemption is painted. This, for Tykwer, is what we have witnessed: "somebody who is completely lost is taken out of the darkness and brought into the light".

    Heaven opens with Filippo flying a simulated helicopter, apparently undertaking lessons. In a moment of difficulty he evades danger by taking the craft upwards, to the limit of the program. "In a real helicopter you can't just keep flying higher", his teacher complains, to which he replies "how high can I fly?" The scene seems to make no sense until the very end: the Carabinieri descend on the farmhouse where the two had been sheltered, but they avoid them initially because they had spent the night on the hills under the stars. As a helicopter swoops and lands, they make their way back and stop at the fence, hands clasped together tightly. The attention of the Carabinieri is directed elsewhere and the pilot steps out, seemingly curious at the events unfolding in front of him. Filippo looks slowly at Philippa and asks her something: "now?" She nods, and they run to the helicopter, which Filippo pilots upwards. We watch from beneath as it rises higher and higher, the Carabinieri shooting in vain. The music stops and the shot lingers, the image becoming smaller and smaller until we can only see the sky into which it has faded.

    At Montepulciano, Filippo's father had posed a rhetorical question, frustrated at himself: "why can we never do anything at the important moments?" He did not realise that he had done everything possible, absolving his son in a moment just as Filippo would save Philippa and himself in the process. We see that the question has been posed and answered: can a person find salvation through love? Dostoevsky’s Raskolnikov was saved by Sonya, and the Russian master's work was a huge influence on Kieślowski, covering similar ground. Philippa and Filippo have found one another and found forgiveness, ending their journey by ascending to heaven.
    Publish Date: 06/08/2005
    By Paul Newall (2005)

    In this article we'll expand on the second discussion in our series on Doing Philosophy and take a look at reading. Although we can all presumably already read tolerably well, it is well known that some philosophical writing is so dense as to seem impenetrable and requires a great deal of patience to tackle, let alone understand. With that in mind, then, we'll explore the different tools and tactics we can employ when faced with a philosophical text or argument and see how they can help us get the most out of a piece, as well as improve our own work. It goes without saying that everything below should be obvious, but it's all too easy to find frequent instances of it all being thrown out the window (and not just to test if gravity applies to rhetoric).

    Reading Philosophy

    Ploughing through a piece of philosophical bluster may seem little different from any other reading and often it isn't; however, we assume that the point of reading philosophy—at least in part—is to learn something, even if we only discover that your narrator is not convincing anyone. Perhaps some people hope to belittle their opponent in a debate or win an argument at all costs, but what else is gained from refuting a position that we know (or suspect) could be made stronger and altogether more interesting?


    Much like trying to beat the English at rugby in recent years or trying to win the heart of a reluctant other, charging ahead regardless of circumstances may not be the optimum strategy to employ when faced with a piece of philosophy. There are, for example, several questions we could ask of a passage before we even set to reading it:

    What is the author's subject?
    What are the author's conclusions?
    What arguments does the author employ?
    What is the purpose of the piece?

    The importance of these is that they provide a context for our reading that may aid our understanding. To that end, the first thing we could try is to skim the text with these considerations in mind, looking for answers to them. The first two should be easy to find, even if the writer is so obtuse that the answers scarcely make sense. The third may be more difficult, but we can gain a fair idea of the points of attack, or where they most strongly support the conclusions. The last is somewhat more subtle: perhaps the arguments made will eventually be found to stand up to scrutiny, but if they do not have any bearing on the purpose for which the piece was written then we may not even need to spend any time on it at all.


    Having established a framework for the text under consideration, we can now read through it in greater detail. As we pass along, we may spot remarks that seem fallacious (using the resources we discussed earlier and will cover again in more detail later); in that case, we could make a note of them to return to later. However, the existence of fallacies need not end our investigations, especially if we hope to take anything from the experience.

    The Principle of Charity

    In conjunction with its two companions (see below), the principle of charity is perhaps the most important tool to master in any situation where we are approaching an argument (or arguments) critically. It's a method: a way of working with philosophy that tells us to proceed in certain ways if we hope to get the most from a piece; it advises us to take the fairest, most plausible and reasonable interpretation that we can. It could apply to the questions we looked at above as follows:

    1. The subject: If the topic is one we have no interest in, or on which we have strong opinions, we may be inclined not to read the author as carefully as we could, or as carefully as someone not so disposed would. In such circumstances, we need to make an effort to employ the principle of charity in order that we not dismiss decent arguments; without it, an opportunity to learn something may be lost.

    2. The conclusions: In a similar fashion to the subject, we may find the results of a discussion distasteful or in conflict with what we think we already know. It may be, however, that the author has a new argument to present, or else that a deeper flaw can be found that will work with other similar positions. The principle of charity should apply as before.

    3. The arguments: An uncharitable approach to arguments may result in weak criticisms or—alternatively—not developing them as far as they could be.

    4. The purpose: Dismissing an argument because we don't approve of what we assume it will later be used for isn't very charitable, nor much of a criticism; a sound argument doesn't become flawed by virtue of being used for nefarious ends.

    The idea is not to foster some kind of emotional detachment, but rather to keep in mind that we already have ideas before we look at an argument and because of that try to minimise (not exclude) their influence. A reading that employs the principle of charity will not dismiss an author because he or she makes what seem like huge errors, or because the point argued for appears to be futile; instead, we can take early steps to avoid rejecting an author's work unfairly. The principle, then, is a methodological one, whereby we realise that we can only criticise an argument when we have adequately understood it.

    Author's Advocate

    The next step on from trying to read an author charitably is to attempt to advocate his or her ideas for ourselves, along with the converse (to be covered next). Suppose an interesting point has been raised but in an unclear way; how, then, can we clarify it? Perhaps the author's argument is flawed; is there any way we can strengthen it, or build on implications that the author may have missed?

    The idea here is to think through and dispute the point on behalf of the author in order to provide ourselves with the strongest possible case to counter. Moreover, we can continue the process: we may try to rebut the newly constructed position and then adopt the author's perspective again, and so on. The question to ask at all times is: how would the author respond to this? In this way we arrive at the most detailed understanding of both the conclusions and arguments leading to them, which is a long way from throwing out a notion just because of a minor error.

    Advocatus Diaboli

    The Devil's Advocate was a device used by the Church to argue against the beatification or canonisation of saints, ensuring that every possible objection was heard before agreeing that the procedure could go ahead. When reading philosophy, we use much the same approach to provide the converse of the author's advocate. It may be, for example, that in employing the principle of charity we have been too kind to the author, or else that we agree too much with what he or she is saying to be properly critical of their arguments; if so, we play Devil's Advocate and look for any detail, however large or small, that may prove to be a flaw or error in the author's writing.

    Once again, our aim is to end up with the situation we may learn the most from. We can use the two advocacy methods to do this: first, by asking "how can we make this argument better?"; second, by responding with "how can we critique it?"; third, by wondering "how can we reply and salvage the argument?"; and so on. In this way we improve both the author's ideas and our own counterarguments at each turn and give ourselves the opportunity to understand how convincing the former are by viewing them in their best light.


    To understand how these remarks apply to reading and doing philosophy, we'll now look at two pieces and try to apply what we've learned and see what difference it makes. In the first we'll break everything down and over-emphasise the process, while in the second we'll try to approach the text as we would normally.

    1. Mill's arguments for proliferation

    Consider the following excerpt from Mill's On Liberty:

    This is only the first paragraph of an extended section, but the first thing we need to do is skim the piece, looking for answers to our four questions. Thus:

    Looking at the sections in bold, then, we have:

    What is the author's subject? Mill seems to be discussing the freedom of expression and the possibility of silencing it.

    What are the author's conclusions? Mill appears to be asserting strongly that any silencing of opinion is a bad idea and to be opposed.

    What arguments does the author employ? Mill is offering here some of the points he will later expand on: if we try to stop a viewpoint from being heard, it may turn out to be true; people are not certain to be right and so may well be wrong; being sure about something as far as we are concerned is not the same as being absolutely sure of it; and so on.

    What is the purpose of the piece? It seems Mill is intending to make some political points and wants to show, in particular, the folly of stopping people from expressing an opinion.
    These give us a context for the rest of the piece, and for studying the arguments therein. Now we can try to read through this section again, continuing to the following passages; starting with the one we already have, we can also employ the methods we looked at above.

    One of the first things to consider is our initial attitude to the subject matter at hand; do we have prior opinions on the suppression of ideas? Your humble narrator does, and is inclined to be overly charitable to Mill. If we take one paragraph at a time, we can see how each of our methods can be applied.

    The first point, then, is that any opinion we decide to silence may in fact be true. To put this point into a more recent focus, we might want to prevent creationism being taught alongside evolution in schools, but we should note—on this argument—that the former may be true; if so, it would appear to be a bad idea to exclude it. Playing devil's advocate, though, we all know—don't we?—that creationism has been shown to be a hopeless notion, while evolutionary theory is one of the most successful we have, after the quantum theory; it would be folly, then, to suppose that creationism has anything to offer.

    In the next line, Mill responds to just such a point: those people who have decided that creationism is untenable could still be wrong, and even if they aren't they still have no right to decide the matter for everyone: why shouldn't I, for instance, have my children taught creationism if that's what I want? To oppose him again, the question could instead be to ask what right I have to insist on my children being taught ridiculous ideas? Shouldn't I prefer that they be taught the best ideas we have available, and if I want to keep them ignorant then perhaps the education of my children shouldn't be my business? Have I any right to insist that my idiocy in all matters be preserved for future generations?

    Mill suggests we are conflating (or confusing) two different versions of certainty here: the fact that all the work done to date suggests one theory over another does not imply an absolute judgement that the one is better than the other, or will always be. If we prevent creationism from being heard in schools, we are assuming that our own certainty is absolute certainty. To counter him, though, we could note that it must always be borne in mind that children have to be educated now, doing the best we can for them; it may be that what we teach them turns out to be wrong, while the excluded topics or theories are later shown to have been right after all, but we don't have time to wait around for absolute certainty—we have to teach them today what we think are the best ideas.

    How would Mill respond to this criticism? Perhaps he would say that the important point he is making here is only that we should not assume our ideas to be infallible; of course we have to teach something, but why not try teaching why we judge creationism to be not worth our while, and the contrary for evolution? In this way, we would be showing children how to learn for themselves, rather than instructing them in what they should learn as facts and what they shouldn't.

    Acting again as devil's advocate, we could say that learning facts may not be the only thing to education but it's still important, and that some facts are so far beyond doubt—as far as we can tell—that they ought to be taught as such. On the other hand, we could say that children may benefit from the alternative approach, but they would have to be old enough first to cope with it; in the meantime, they need to be taught the best information we have to hand, even if we acknowledge that it could be flawed.

    Mill could say that the first point is just an opinion, while the second is not obvious and relies on information we don't have to hand here. We could look at what teachers' experience tells us about the prospects for this idea, or else investigate it further.

    Let's now move on to the next paragraph, which is a lot longer and will be easier if we split it up:

    What point is Mill making here in this lengthy sentence? It seems he is noting that while most people admit that in theory an opinion may be wrong—even one of their own—they in practice rarely allow for the possibility. Excluding your narrator, is that a fair characterisation? From the devil's advocate perspective, it seems uncharitable; perhaps we don't spend all our time bearing in mind that our opinions could be mistaken, but we still take precautions to avoid it. Moreover, who is to say when and how often we have such thoughts in mind?

    How could Mill answer? He could say that it isn't obvious that people pay much attention to this problem, but it isn't any more obvious that they don't—we're at something of an impasse and comments here seem to be largely rhetoric. On the other hand, he could have been speaking generally, but that carries little weight in a philosophical argument. It seems hard to do anything more with this passage.

    Here Mill declares first that those who rarely find their opinions challenged are unlikely to worry about the possibility of their being wrong. Is that a charitable reading of him? He does temper his statement by saying "usually", but it seems to be an extension of his previous comments. If we oppose him again, we could say that many examples speak to the contrary: there were and are mathematicians, say, whose work can only be understood by a few other people in the world, meaning that contrary opinion would be hard to come by; nevertheless, they still have their doubts about their ideas. The same applied, for example, to Aurelius: he was a very powerful man, but he still spent the better part of his time musing on matters, questioning himself and his understanding of the world. What of those who are at the very top of their chosen field, to whom others defer? It hardly follows that they are any less concerned at their fallibility than the rest of us.

    Can we advocate Mill's position against these criticisms? We could clarify this argument and make it more plausible by saying that people whose opinions are not tested regularly are less likely to consider them fallible than the converse; that, it seems, is the sense of his following sentences. This appears stronger, but we could still raise the same objection that it doesn't necessarily follow, particularly for examples we could find. A way around this could be to add the remark ceteris paribus—all other things being equal (we add it in Latin here because that is how we sometimes find it in texts): thus, someone whose opinions are not tested regularly is, all other things being equal, less likely to question them than if they were. This is now a general argument, but apparently a good one; although particular counter-examples exist, it does suggest that subjecting our ideas to frequent criticism is a way to avoid stagnation in our thinking.

    We can now use the more charitable interpretation so developed as we proceed. The other argument Mill makes in this section is—in the terms we have established above—that we are less likely, other things being equal, to question those ideas supported by the vast majority of those in similar circumstances to us—whether it be our political party, class, church, workmates, and so on—than those that are not. To support Mill further, we could add that this seems to be one of the reasons that communities form in the first place: shared ideas are of course less likely to be subject to challenge than new or unfamiliar ones. However, this is no reason to suppose them less fallible, so we should be all the more on our guard against those commonalities that are rarely—or unwillingly—called into question.

    Since it is a general point, as before, it is not as easy to find reasons to oppose Mill here. We could counter that he assumes the truth of an opinion to be more important than the society that excludes it: why should we adapt our needs to the truth, rather than the other way around? This is a deep question, though, that is better left to another time.

    In this section, Mill is making two very important arguments that have applications elsewhere. The first is that what we believe—and how we came to believe it—is often influenced by the circumstances of our upbringing and the environment it occurred in. It does not mean that the truth is decided by such things (although that is a subject of much discussion in philosophy today), but rather that some of the ideas we hold to be self-evident are not at all obvious to those in different cultures, and vice versa. The second is that most of the beliefs held in the past have turned out to be mistaken, so it could be that those we hold today may go the same way.

    Let's try to take the most charitable readings of these points and render them in their strongest form. For the first, it would be a weak criticism to say that just because other people believe other things, it doesn't necessarily mean our beliefs are wrong; that is one way of understanding Mill, but we can do better. We could also note that he is unfair with his remark "it never occurs…", but, again, we want to avoid dismissing him too quickly. Suppose, then, that we read the argument as saying that the fact that different cultures believe different things should make us more cautious in accepting our own as the truth of the matter; not only does this provide us with something to consider (i.e. not everyone can be right), but it also offers advice (i.e. we should remember that other factors influence what we believe, not just whether they're true or not).

    How can we criticise Mill here, having taken a more charitable version of his comments? He doesn't insist that the existence of contrary views renders ours more or less likely to be wrong, or that we must be cautious, but only that the multiplicity of ideas should give us pause. We could say that this is too general to be of any use, but is it? By acting as author's advocate, we have improved the passage considerably and made it a good deal more interesting.

    For the second point, we could note that this argument is inductive: many ideas have been wrong in the past; therefore, most (if not all) of ours today will probably go the way of the dinosaurs also. Given that inductive arguments are problematic (as we've already seen earlier in the series), we could call this unimpressive and leave the matter here; that would violate our principles, though, so let's try to understand it in another way. We could say, for example, that the fact of most other notions having been replaced suggests the possibility that many current ones will have to be also; this, again, would be to provide methodological advice: be careful not to assume we've finally got to the truth of the matter, since we could be wrong like everyone else was. We could also say that the historical failings would lead us to believe that today's ideas are less likely to be true than if we had a better record in the past.

    The second rendering is open to severe criticism because probabilistic moves in epistemology have been fraught with (philosophical) danger, and it would need a lot more argument to make this convincing—argument that Mill doesn't provide. It could be a topic for further discussion, but we'll concentrate on what we have in the text here. Instead, we could take the first version and note that it says something similar to the other point; combining them, we get more succinct and stronger advice: different people in different cultures have believed different things, both now and throughout history, so we should perhaps be more cautious in assuming the accuracy of our own ideas today.

    Mill himself considers some possible counter-arguments in the next paragraph:

    We note here that Mill is not satisfied with providing only his own thoughts; he adds a lengthy list of remarks that could be made against him, playing devil's advocate for himself. Some of these we have covered, and others not, but we could easily draw a list from the passage:

    1. Public authorities are no more or less likely to be wrong when disallowing bad ideas than in any other decision.

    2. We have to act now, not when all argument has ceased; so, fallible or not, we have to do the best we can with the information we have.

    3. Stopping, say, creationism from being taught is not assuming that we know what is true and what isn't; rather, it's doing what we are duty-bound to do—trying to do the best for our children, even though we may be completely wrong.

    4. If we never act until we are absolutely sure, we would never act at all.

    5. Since we are never going to be certain of our ideas, we must do the best we can—even though our best may have failed in the past and may yet fail again.

    We could extend the list, or else come up with further points of our own, but Mill has done just what we explained at the start of this article: put forward his own ideas, then criticise them. He then attempts to answer these, and so make his arguments the stronger for having withstood the best objections he could find:

    Since we cannot study all of Mill's subsequent discussion, this answer may not read as convincingly as it could; nevertheless, it's a very interesting remark indeed: disallowing any idea prevents it from subsequently showing its worth, whereas supposing another to be true because all objections to date have failed is a working hypothesis—not the end of the matter. To use creationism again to put this into context, there's a significant difference between the quite reasonable assumption that creationism is false because it's failed to convince us to date, having had many opportunities to explain why it's superior to evolution as an explanation, and preventing it altogether from having the opportunity to do any better. This doesn't only apply to education: if people vocally insist that only imbeciles could possibly believe in creationism these days, the prophecy is likely to be self-fulfilling (and include your narrator). In Mill's conception, we can of course teach evolution; what we cannot do is bar creationism from being taught or studied, because then it has no opportunity to develop.

    Acting as devil's advocate, we could say that Mill's advice is interesting but not convincing: just how long are we supposed to give ideas that fail on every occasion they're tried? We only have so much time and so many resources to devote to education, investigation and the like, so why waste any energy on ventures like creationism? Isn't there a practical limit here to how much failure we're prepared to put up with? Isn't Mill also advocating a freedom from intervention that is itself open to critique?

    How could Mill reply? He does so in the succeeding sections, which we'll leave to further study for those interested (the whole piece can be found here), but one way pursued by later thinkers was to point to other ideas that took a very long time to develop—recall our example in a previous article of atomism, which needed around 2000 years. One of the reasons why Mill's On Liberty has been referred to as "immortal" by some thinkers is that it leads to so many other questions like this. Your own study may turn up other avenues, or else be critical of this preliminary one.

    2. Hobbes' ideas on our natural condition

    As a second example, let's consider a few paragraphs from chapter thirteen of Hobbes' Leviathan, found here, in which he discusses "the Natural Condition of Mankind as Concerning Their Felicity and Misery". This time we'll use less detail but still try to draw out the salient points using the methods we used before:

    Although we join the chapter mid-way through, this passage sets the scene. Hobbes is discussing what follows from living without authority to enforce laws—what would life be like without laws and rules? He concludes that we would be at war with one another, and offers the argument that in the absence of political power of some kind we live with the possibility of conflict hanging over us. His wider purpose in writing appears to be political: if the consequences of life without government can be shown to be undesirable, it would seem to follow that some form will be necessary—and that Hobbes has an idea or two about it. This is our context.

    What points are being made here? Firstly, Hobbes says that if we live without any security other than what we get from our own strength—or lack thereof—it is the same as being at literal war with each other. Is that a fair interpretation? Hobbes is apparently quite explicit: "the same is consequent", he says. Can we oppose him? On the face of it, it doesn't follow at all that just because we don't have an over-arching political power to govern our interactions with each other, we are all at war. Plenty of people get by without the saving grace of security from afar—what about people in an actual war-zone, for example, who might be beyond the reach of any enforcement of laws but who still get along without communities or relationships disintegrating? Consider also two people marooned on a desert island—must they be at war with each other? They could be, but it hardly seems likely.

    How could Hobbes respond? He could say that he didn't intend his comments to be taken literally; rather than implying that everyone would be at each other's throats, he meant that they have no reason not to go back on any agreement they could make, save for the strength of the other party—that without a political power to enforce laws, there would be no reason why we shouldn't obey them one moment and not the next. We could counter again that even with laws, we have no-one standing by at every instant to ensure we follow them; they act as a deterrent, not a guarantee. How could Hobbes reply to this?

    Hobbes makes another point here: he says that in the unfortunate circumstances he describes, there would be no efforts made to improve the conditions of life because the outcome would be uncertain, and that as a result there would be no arts, crafts, sciences and so on. Are we being charitable in this rendering? He is again quite emphatic—no knowledge, no arts. Let us try instead to critique him, then, and thus to improve his position as a result.

    Does it follow that the lack of certainty in an endeavour implies that no-one would try it? The argument looks like this:

    P1. There is no certainty in doing x;
    P2. People only act when certain of the results;
    P3. Only government can secure the results of our endeavours;
    C. Therefore, no-one will act (in the ways he describes) without government to secure their efforts.

    It seems that all the premises here are open to severe doubt. Perhaps instead we could back off slightly and recast his intention as the weaker claim that people might be less likely to invest their time in such endeavours if they have no guarantee of their lives other than what their strength provides. Does this help? It appears better than before, but we could find counter-examples again: didn't Wittgenstein write philosophy while huddled in the trenches of the Great War, when he could've been killed at any moment? Don't marooned people build rafts to escape from their island without any guarantee of their venture's success? How can we help Hobbes now?

    In this passage, Hobbes offers an instance that he intends as justifying his point by example. Doesn't the fact that we lock our doors, or arm ourselves (in Hobbes' time, or in some countries today), or travel in groups rather than alone, imply that we are suspicious of our fellows? Suppose further that it does: can we then say that no-one will be inclined to call an action wrong until a law has been made to declare it?

    We could argue against Hobbes here by counter-example: many communities still exist that do not lock their doors—especially in his time and particularly in the countryside. Alternatively, we could say that we lock our doors because there is a minority of people who will steal our belongings if given the chance—not because we distrust everyone. In this case, as in others, general behaviour could be the result of specific problems, not widespread or universal fear and suspicion.

    How can we help Hobbes now?

    Here Hobbes offers a criticism of his own and then answers it: he notes that it could be objected that the situation he envisages has never in fact existed; in response, he says that although he is willing to concede that it isn't the case everywhere, there are still many examples to refer to—like the "savage" peoples of America. This is an empirical argument: is his characterisation accurate? If not, his claim is unsupported. To advocate for Hobbes here, then, we would need to check; the result would give us the information to argue for or against him.

    A second empirical point is made by Hobbes: we can see what would happen to people living under no laws by looking at the history of what happened to those who lived through civil wars. Hobbes is suggesting that an investigation would show that society degenerated without the influence of government, but if we found the contrary then we would be able to play devil's advocate and insist that the claim be restated in a more plausible way.

    Now Hobbes mentions that independent kingdoms and principalities (or other arrangements nowadays) may maintain the lives of their citizens, but they nevertheless are perpetually at risk of war. Is this a fair reading? Apparently so, since he actually employs a great deal more rhetoric. Can we oppose it? We could ask why this has to be the case at all: why may not agreements between nations be made? Why should the potential for war be the default, when it could just as easily be that peace reigns unless something happens to end it? This is a general point to be made against Hobbes throughout: could he not have argued instead that without the security of government people are inclined to live together in peace, with the ambitions of governments being largely responsible for the conflict he saw everywhere? Perhaps this idea is flawed, but in order to make his case stronger Hobbes could have been a better devil's advocate himself and offered it as best he could, thereby demonstrating its flaws and why his own understanding is to be preferred. This, then, would be a way for us to tackle the issue and employ our methods still further.

    Hobbes' work was very influential in his time and after, but already in this short extract we can see points of attack in his arguments that were seized upon by others. It shows us that time spent on criticising and hence adapting our own ideas is perhaps as important as working on our initial justifications.

    In summary

    The point of these exercises is not to argue for or against any particular opinion, but only to demonstrate how we can approach philosophical pieces and some methods for getting the most out of them. Any of the remarks made or criticisms raised could be the starting place for a more detailed discussion or attempt to refute an idea, but that is for another article at another time.
    Publish Date: 06/07/2005
    Thomas Lessl is Associate Professor in the Department of Speech Communication at the University of Georgia. His work involves the rhetoric of science, looking in particular at the meeting of science with the public sphere. I was fortunate enough to be able to ask him some general questions about rhetoric as well as focusing on its role in scientific debate.

    - Interviewed by Paul Newall (2005)

    PN: How would you define rhetoric and why should we study it?

    TL: Most simply I would define rhetoric as the art of public communication. Anyone who engages in public communication is practicing the art of rhetoric. Art can also mean a body of principles pertaining to its practices, and this is true of rhetoric as well.

    Its most active practitioners are our social architects, most typically those political actors who craft the policies, ideologies, and shared identities that create polities. Scholars who study the rhetorical art, like critics and theorists of other art forms, are typically interested in instances of expression that have some particular significance. That significance may arise from a message's place in history, its creativity, or simply from the fact that it represents the features of a particular milieu.

    Rhetoric is a subject of importance because its study enables us to better understand the processes of communication that underpin decision making in free societies. Judgments on matters of public policy take their cues from rhetoric, and so an understanding of any society's rhetoric will tell us a lot about its ideas, beliefs, laws, customs and assumptions - especially how and why such social features came into being. We don't typically think of it this way, but every law that is on our record books began as an act of rhetorical undertaking by some public or private citizen trying to fix a problem. Statutes and policies are the ends; rhetoric is the means. If law is the architecture of public life, rhetoric is the art that brings it into being.

    PN: How is rhetoric used in communication? Does its influence depend on the subject of discussion?

    TL: I'm not sure I would say that rhetoric is "used in communication" because that phrasing would suggest that it can be separated from communication - that there are some forms or instances of public communication that are rhetoric and others that are not. This is what American politicians and journalists often imply when they describe a particular message as rhetoric. For politicians to call an opponent's messages "rhetoric" is to accuse him or her of some duplicity. This is an unfortunate misunderstanding that pervades our culture. Rhetoric is not a category or strategy of communication. It might be better to think of it as a particular property of speech - its persuasive property. To use a simple analogy, physicists tell us that "heat" is one property of matter - which in quantitative terms is its degree of molecular motion. Some objects have very little heat and others have a lot, but they all have it. Absolute zero does not occur in nature, or in the lab. Speech is like that too. All acts of speech have some rhetorical potential, which is the potential to bring about change - some in small ways and others in large ways. But all speech can affect human judgment. So wherever there is speech there will be rhetoric.

    How influential rhetoric will be does depend upon how this persuasive property plays out at any given moment of history. Lincoln's Gettysburg address was influential because the American experiment with democracy was in crisis in 1863, and there was great uncertainty about what to do to fix it. That speech proposed a compelling solution. Persuasion plays a greater role when there is great uncertainty and great potential for change. And so subjects that introduce high levels of doubt in volatile times are going to be treated by messages that are "hot", that are rhetorical in a pronounced way. We're less dependent on rhetoric when there is a higher degree of certainty. People don't talk much about what is certain. What's the point? We talk about issues that are in doubt.

    Rhetoric has never been understood in my field or in its history going back to classical antiquity as something optional, something that public actors can turn on or off. It is a term that denotes what human beings do whenever they enter into public communication. Anyone who engages in public communication is engaging in rhetoric in the same sense that anyone who paints portraits is a portrait artist. There are good and bad painters, but all are artists. And similarly, while it is possible to judge rhetoric as honest or dishonest, effective or ineffective, it is not possible to engage in public communication without also practising this art.

    Rhetoric is used typically to persuade, because the public contexts for which it is created are ones marked by disagreement and competition. Political speech is persuasive because politicians are trying to win elections or to get legislation passed. But disagreement and competition characterize other public situations that are not ordinarily associated with rhetoric. The television news is persuasive because its producers want us to watch it at six o'clock rather than reruns of the Twilight Zone, or even worse the news on a competitor's channel. And, of course, I would strongly insist that scientific communication has this persuasive aspect as well.

    Discourse in the public arena has certain characteristic properties, and many of them are undesirable. This is because the public arena is a place of competition and conflict, and so those who enter into it are often tempted to speak in unsavoury ways. This fact has always given rhetoric a bad name. But rhetoric encompasses the whole of public communication; it includes the high oratory of Martin Luther King Jr. as well as the demagoguery of Huey Long.

    PN: How would you characterise the role of rhetoric in science?

    TL: There is a popular and widespread misconception in the world that scientific communication is distinctly different from other forms of public communication, but this is not really so. Its persistence is explained by an old adage in my field, which I think comes from Roderick Hart at the University of Texas, which says that rhetoric is most effective which disguises itself as something else. And I would have to say that science is the master of disguises. This is a pattern that began to manifest very early on in scientific history, I would say in the rhetoric of Francis Bacon in the seventeenth century. Bacon idealized scientific thinkers as ones with "minds washed clean from opinions", as if to suggest that scientific method is an alternative to debate. Here's a longer example of how Bacon contrasted science against rhetoric.

    Bacon, of course, was a rhetorical genius. He was trying to establish a place for science in English society and across Europe more broadly. What better way to do this than by creating the impression that science, by dealing in certainties rather than probabilities and demonstrations rather than arguments, might provide an alternative to humanity's endless squabbling?

    In saying this I am not trying to suggest that science is not a profoundly powerful form of inquiry, that its truth claims are without substance or that many scientific questions cannot be answered with a definitive yes or no. But scientific communication has all the same kind of properties that we typically find in other arenas of communication. A chief reason for this is the fact that scientists are forever at the frontiers of knowledge. They're not concerned with what has been established but with what is still in doubt and still contested. Contrary to Bacon's spin, this means that science is all about arguments and opinions - the very stuff of rhetoric.

    Many people confuse the rhetorical perspective on science with the radical subjectivism of post-modernists, but generally speaking that is not what we're saying. The position of rhetorical scholars who specialize in the study of scientific communication is just that science is mostly similar to other forms of public communication. Science, in other words, is argument and debate.

    PN: How have studies of rhetoric in recent times impacted understanding of science?

    TL: Rhetorical study of science is part of a much broader and growing academic interest in this area. The fields of sociology and philosophy have really been the pioneers here. In those two fields inquiry was initially driven by questions about the astonishing success that science has enjoyed since the sixteenth century. In earlier times the project of the philosophy of science and to some extent the sociology of science had been to figure out what had made science so singularly successful. The holy grail of the philosophy of science was to pin down the precise epistemological conditions that made science different from other kinds of inquiry - its boundary conditions so to speak. No such defining philosophy of knowledge has ever been identified - something that prompted Paul Feyerabend to create the impression that science is "anarchy" in his book Against Method. This of course was a rather reactionary stance, one, I suspect, that Feyerabend came to regret taking. But it illustrated a real problem. Every effort to define what science is has managed to exclude from consideration certain arenas of inquiry that most of us would regard as scientific. The most familiar example of this was Karl Popper's exclusion of Darwinian evolution as a "metaphysical program", because it did not satisfy his defining criterion of falsifiability. But in the late nineteenth century we have the case of positivism excluding atomic theory because of its inferential character.

    There seems to be no singular path to truth about nature. The history of science has shown that different approaches work for different problems. Although people often think of science as something governed by certain methodological rigors, one can always find success stories in its history that don't fit that mould. The collapse of the whole demarcationist project late in the last century has given a tremendous boost to the sociology of science and of course to the rhetoric of science as well. The sociological approach goes back much farther than this, to the 1930s when Robert K. Merton began to suggest that part of the puzzle of science's success was to be found in social factors, in an institutionally enforced ethic or ethos that made science distinct from other forms of inquiry. It has now been succeeded by the more radical "strong program" out of the UK which is more sympathetic to the anarchy interpretation of Feyerabend.

    A pivotal turn in the understanding of science seems to have come with the translation of the philosopher Marcello Pera's Scienza e Rhetorica into English in 1994. The University of Chicago Press unfortunately dropped the word "rhetoric" from Pera's Italian title in translation, probably in deference to readers who are put off by the term, but this is precisely the volume's approach. Pera presents the scientific efforts of Galileo on behalf of the Copernican theory as argument rather than demonstration. This is to say that Galileo tried to establish the Copernican position by appealing to whatever he thought would persuade interested readers. Galileo appealed to experimental evidence and to other specialized rigors of mathematical representation, but that is only a part of his case, and to Pera's mind not necessarily its most crucial part.

    Like many who followed, Galileo tried to make it seem that his case was based on "proof", that it was merely an assemblage of facts. This was destined to become a characteristic rhetorical move for his successors. Of course Copernicanism was consistent with a multiplicity of facts, but that didn't prove it. This position was won by argument, much in the same way that other disagreements are resolved. Successful arguments create consensus, not proofs.

    PN: How has your research agreed or disagreed with others looking at the rhetorical dimensions of science?

    TL: The main thrust of work in this area deals with rhetoric as a model for looking at the professional and technical discourses of scientists. It brings a rhetorical perspective to scientific work. So, for instance, rather than presuming that scientific discourse belongs to a category of communication all by itself, rhetoricians of science have treated it as a discourse that follows the same conventions as other forms of public communication. Scholars like Gross, Fahnestock and Bazerman have done this with classic presentations of scientific work. Others, such as Taylor in his work on demarcation and Ceccarelli in hers on the creation of new scientific disciplines, look at how rhetoric comes into play in more specialized cases - but ones still having to do with the execution of scientific work.

    My own focus has had a more public character. I'm interested in how science has established and maintained the bases of its patronage by speaking to its various publics. This kind of rhetoric has direct bearing on scientific work, since science is utterly dependent upon patronage.

    PN: You have written that the public discourse of scientists often employs a "priestly voice", unwilling to accept interference from the public and "scientising" them rather than popularising science. Is this a resistance to "dumbing down" or something else?

    TL: What I call science's "priestly voice" is the outcome of several hundred years of experimentation with different ways of relating itself to its patrons. Patronage is a perennial problem for science, one of huge proportions. Science is at once an exceedingly costly undertaking and also one that does not necessarily offer any immediate return on investments. We all know that science has produced applications of immeasurable benefit, but in history when scientific patronage has been dependent upon the promise of such payoffs, scientific work has suffered. This is because most of what we call basic science is exploratory and can't promise applications. It produces knowledge that winds up in science journals but not in pharmaceutical patents or medical applications. The characteristic expectation of Americans that science is valuable because it pays off has traditionally deterred scientific growth. This was why the U.S. remained a backwater province of theoretical science until after WWII - when the public began to realize that theory might pay off in things like atom bombs. But more generally, scientific culture has responded to the pressures of patronage by trying to construct a priestly ethos - by suggesting that it is the singular mediator of knowledge, or at least of whatever knowledge has real value, and should therefore enjoy a commensurate authority. If it could get the public to believe this, its power would vastly increase.

    There's an old adage, Chinese I think, that says that if you give a man a fish you feed him for a day, but if you teach him how to fish you feed him for a lifetime. The priestly character of scientific rhetoric reflects a similar logic. Selling the public on the worth of science on the basis of its practical payoffs makes it a patron only on particular issues - which feeds science for a day. But if the scientific culture can convince us that deep down we are all scientists, or at least that we should all aspire to this elite realm of knowing, then science might enjoy patronage for life. Priestly rhetoric, in other words, tries to recreate society in science's image.

    Priestly rhetoric is not so much about a disdain for "dumbing science down". Scientists have reservations about "popularization" for good reasons. The priestly character of scientific rhetoric has to do with the need to identify science with the most essential human values by making it a world view - by creating a public culture based in scientism. The best known example of this approach to scientific communication in recent memory would be that taken by Carl Sagan. More successfully than any other popular writer of the last century, with the possible exception of H. G. Wells, Sagan was able to create the sense that history has a scientific destiny.

    PN: In your essay Heresy, orthodoxy and the politics of science, you argued that the public rhetoric of many scientists is aimed at maintaining advantages like epistemic privilege or material benefits such as funding and grants. How did you arrive at this conclusion?

    TL: That essay was based mainly on the internal dialogue that was going on among scientists during the creationist controversies of the early 1980s, and at the time the scientists who were most vocal about the threat of creationism were also likely to express these concerns. In some sense it would be reasonable for them to have these fears. This goes back to my previous comments about the precarious nature of scientific patronage. Evolution has always been a fairly unpopular subject with the American public, and so if creationism were able to gain some official sanction as science, as it threatened to do in the Arkansas and Louisiana cases from that decade, evolutionary biologists might very well have found themselves competing with creationist research societies for funding.

    This is not a way of saying that evolutionary scientists act in bad faith in opposing creationism. I assume that they speak their personal scientific convictions in doing so. But this doesn't change the fact that science is driven by other motives as well - those of the pocketbook and the ego as well as those of the intellect. This is why a rhetorical perspective on science is helpful, since it is a perspective that traditionally tries to be holistic, to take into consideration every aspect of an argument.

    PN: How would you describe the importance of rhetoric when considering the demarcation problem in science?

    TL: The rhetorician Charles Alan Taylor has used the term "ecology" to describe what I just called the holistic character of the rhetorical perspective. Like any complex web of living organisms sharing a common environment, activities of scientific inquiry occur within a larger rhetorical ecosphere. This greatly complicates the problem of scientific demarcation. It may be meaningful, for instance, to demarcate science from religion, but not in any absolute sense. If science is embedded in a social environment that has certain religious characteristics, science is likely to reflect them - though this is not the same thing as saying that it is determined by them. The scientific work of classical cultures tended to be rationalistic because its religious culture was rationalistic. The same kind of religious culture that gave us a philosopher like Plato was also likely to give us scientific thinkers like Euclid and Pythagoras.

    This means, among other things, that the issue of scientific demarcation is both an intellectual and a social problem. It doesn't denigrate science to acknowledge that its public communication may reflect social or institutional concerns. By definition science would have to be concerned with these matters because inquiry cannot be undertaken except where there is an institutional framework capable of sustaining it.

    This rhetorical perspective explains the origins and endurance of the popular but clearly false belief that science and religion exist in a perpetual state of war. One would expect there to be disagreements between science and religion. There always have been, but nobody ever called this a "war" until late in the nineteenth century. Even the famous and singular case of Galileo and the Catholic Church was as much an internal scientific feud as it was a science-religion feud. Although the Church believed that Copernicanism was a threat to the faith at that time, it also thought it was coming down on the side of good science in deciding to oppose Galileo. Urban VIII acted precisely as scientists wish for current Popes to act on the issue of evolution. They want the church to side with the scientific majority that stands on Darwinian evolution against a small minority of scientists who favor a design model of origins. Siding with the scientific majority was precisely what the Church did in the seventeenth century. So why do people believe that this incident demonstrates that science and religion are natural enemies?

    The pervasiveness of the warfare metaphor, I think, reflects the pressures for demarcation. The metaphor first became prevalent, as the historian John Moore has shown, in the late nineteenth century. This was a time when science was in the midst of an institutional crisis. In the Victorian era science was making a move on the academy in Europe and the U.S. It was trying to greatly enlarge science's place in an academic culture that had been created by Christianity. The scientific culture needed to gain a stronger foothold in the universities in order to continue its growth, and what better way to do this than by creating the idea that religion was science's evil stepmother?

    History can't account for this belief but demarcation can, provided that we recognize that this is as much an institutional problem as it is an intellectual one.

    PN: With regard to the creationism debate, it has been claimed that the scientific community had "retreated into orthodoxy" in response to the creationists, invoking "threadbare epistemic chestnuts" to define creationism as pseudoscientific. Why was this approach taken, rather than an alternative, and what were its consequences? What course do you think should have been followed instead?

    TL: The retreat into orthodoxy is a logical response of institutions whose authority is tied up with any particular belief system. My early work on the scientific response to creationism drew its inspiration from research on the sociology of deviance (especially that of Kai Erickson and Lester Kurtz), which seemed to suggest that institutions have a certain attraction to deviant insiders or heretics. This is because heretics provide institutions with counterpoints against which they can articulate their official positions. While it is often difficult for institutions to say what they believe in any definitive sense (they may not really know, or there may be disagreement among elites), they can create consensus around what they reject - heresy. This is one of the reasons groups gain solidarity in having a common enemy. But having heretical enemies is particularly advantageous. This comes from the fact that heretics (as opposed to pure infidels) are more similar to their orthodox counterparts and thus capable of providing this useful contrastive benchmark for their right-thinking foes.

    Deviance studies suggest that heresy hunts are likely to occur at moments of institutional insecurity. You might not get this impression from listening to anti-creationist rhetoric, except to the extent that it focuses not so much on the scientific case for evolution as on secondary issues of method, metaphysics and motive. It is more often concerned with showing why creationism is not science than with showing why Darwinism is. This draws attention away from difficulties that may plague evolutionary theory.

    The difficulties that make creationism an attractive enemy for science are not necessarily intellectual ones - though they could be. To use Taylor's metaphor again, I'm of the opinion that public discourses are best regarded as belonging to some larger "ecology" of meaning. Science, when it goes public, may be concerned about advancing scientific truth, but it is also going to be concerned with a larger set of issues relating to patronage, authority, its place in the academy, etc. Were science merely a technical arena of inquiry, creationism wouldn't be a threat. The fact that a majority of Americans remain sceptical about evolution and the fact that some of these folks claim that science supports the religious doctrine of creation doesn't directly interfere with scientists' ability to pursue the naturalistic program they prefer. But creationism does threaten to disrupt the more fragile linkages between science and public culture that make patronage possible. Creationism is an important threat, but it is an indirect one. Scientists understand that public attitudes about science matter, because they understand that the flow of patronage that keeps research going is likely to be affected by public dispositions toward their work. Obviously if all Americans embraced the evolutionary paradigm with the same enthusiasm that Darwinists have for it, it would enjoy the kind of funding that supports research on cancer and birth defects.

    PN: How does the response to the advocacy of Intelligent Design differ, if at all?

    TL: One consistent pattern in the scientific mainstream's response to ID has been to try to identify it with scientific creationism, to paint it with the same brush so to speak. Such allegations are still frequently made - that ID is merely "creationism dressed up in a cheap tuxedo". This is what movement scholars call a strategy of "evasion", an institutional effort to slow the momentum of a movement by pretending that it doesn't exist - or in this case by pretending that it is made up of merely radical fundamentalists of no account. This strategy is still being plied in the mass media, for public audiences that remain largely ignorant about the differences between these two movements. But in many of the more academic settings where ID is being debated this stopped working long ago. On the inside there has been a more direct and sustained response to intelligent design. Scientific creationism was largely ignored by scientists - except when it tried to legislate for equal time in various states. But ID is not being ignored. As movements evolve the strategies of evasion initially plied by the institutions they challenge typically give way to strategies of confrontation and coercion. We see a confrontation approach in the whole cottage industry that has grown up within the scientific culture among writers like Kenneth Miller and Robert Pennock for whom the refutation of ID has become a full time job. Incidents of coercion are more localized but pervasive nonetheless.

    PN: How can the study of the rhetorical aspects of these debates improve our conduct within them, and in similar discussions of pseudoscience?

    TL: In a lecture way back in 1967 Stanley Jaki noted that science lacked an academic sub-discipline devoted to the criticism of science. Other disciplines, such as literature, history, and even biblical scholarship, have a critical voice, but not science. A few reflective voices have emerged in the scientific community, such as those of Michael Polanyi and Thomas Kuhn, but vulgar positivism still persists.

    The rhetoric of science has a distinct role to play in the emergence of such a critical perspective. Some scholars view rhetoric as a kind of philosophy of public life. In his 1991 translation of Aristotle's Rhetoric the renowned classicist George Kennedy used the subtitle A Theory of Civic Discourse. This, I presume, was both an interpretation of how Aristotle saw rhetoric and a summary of how most rhetoricians see it now. To understand practices of public communication is also to understand how they can best serve the public good. It is likewise in the public's interest to understand scientific rhetoric, since science is now a major player in public life. The field of rhetoric brings the accumulated wisdom of 2500 years of study to this subject.

    PN: What are the wider implications of the increasing number of papers and books considering the role of rhetoric?

    TL: Broadly it means that our culture is beginning to recover from an unfortunate side effect of the Enlightenment. Modernism was in large part a reaction against traditional sources of power in both the religious and political realms, and because rhetoric was tied up with these by various accidents of history it fell into disfavour. Although the Enlightenment championed the notion of a civilization based in political and personal liberty, by abandoning the traditions of rhetoric it also tended to undermine the only philosophy of communication that could sustain such changes. Rhetoric never died out entirely in democratic countries, which never wholly embraced the Enlightenment project, but it was virtually driven into extinction in the Soviet Union. When Americans started to go into the former communist empire shortly after its collapse they were astonished to discover how helpless its citizens were in their efforts to establish democracy. Democracy takes more than a constitutional plan. It also requires a critical mass of citizens capable of doing the work of democracy - which is the work of public deliberation and debate. It takes much education to cultivate such skills, and the Soviets had abandoned that part of the West's intellectual tradition.

    Rhetorical scholarship is also growing in our universities because lots of bright students are discovering how intellectually rich this curriculum is and also how eminently practical it is when they bring it to the world outside.

    PN: What have been the main influences on your thinking?

    TL: I tend to have one foot planted in the work of intellectual and cultural historians such as John Greene, Frank Turner, Adrian Desmond and Frank Manuel who are especially concerned with the larger societal implications of science. The work of intellectual and cultural historians is especially helpful to rhetoricians because it is history that has been forced to take a rhetorical perspective. To understand the history of ideas is to understand public debates and the artistry that makes one side or another victorious.

    The other foot is planted amidst an eclectic assortment of writers whose work has most shaped my own rhetorical perspective - Northrop Frye, Kenneth Burke, Hayden White and Clifford Geertz. The common thread that draws these scholars together is their exploration of the idea that narrative, a category of speech usually associated with fiction, is just as much a category of public communication. Like many others, I am inclined to regard this as one of the most significant developments in the humanities during the last century.

    PN: What are you currently working on?

    TL: I've recently completed a book manuscript under the working title Rhetorical Darwinism. This title reflects my efforts to situate the emergence of the evolutionary world view within its broader discursive context - in particular that part of this communication environment that has to do with science's institutional development. The volume's thesis is that for rhetorical reasons evolution of necessity develops both a scientific perspective and a scientistic ideology when it enters into the realm of public debate. This isn't to say that evolutionary biology is not a legitimate scientific pursuit. That's a judgment I'm not capable of making. As a rhetorician I've been educated to diagnose the features of public communication, and in its public presentations evolution has always been a blend of science and scientism. It may be grounded in evolutionary science but other added features of language always transform it into a kind of exercise in the architecture of ideology. My book tries to explain the historical and rhetorical reasons for this.

    The most familiar example of an evolutionary ideology is what was once called "social Darwinism", but that is not my precise subject. Rhetorical Darwinism is a phrase I use to characterize those public discourses used to instantiate the scientific identity - in the broadest sense. I argue that the highly professionalized identity that science developed in the nineteenth century found its most ideal expression in evolutionary symbols. These didn't originally come from Darwin. They came from the Enlightenment, but they have subsequently become tied up with evolutionary science because these scientific ideas do the most to give them a priestly status. Evolution is the naturalization of history, and it is from history that western societies have always drawn their notions of social authority. If you can define history you can define everything.

    PN: What is your involvement with the project and what do you hope to achieve?

    TL: My involvement with this project is minimal, limited to one book chapter that is forthcoming in Randy Harris' Rhetoric and Incommensurability volume. My contribution examines the role played by Thomas Huxley in the emergence of the Darwinian paradigm in the nineteenth century. I enlarge upon a point made about Huxley and Darwin by the historian John Greene, namely that the scientific culture of that period was committed to evolutionism long before any scientific theory of development appeared. I contend that the emerging positivism of the Victorian period, which precluded design, was both a philosophy of science and an institutional ideology. Evolution and design became incommensurate for ideological reasons not intellectual ones.

    PN: How do you see the involvement of rhetorical studies in discussions of science developing in future?

    TL: I couldn't begin to predict what will happen in academic circles. What I hope to see more broadly is a growing rhetorical literacy in our culture that will make people more intelligent consumers of scientific information and argument.
    Publish Date: 06/07/2005
    By Chen-Roy Simpson (2005)

    Abstract: Wag the Dog is a film about media manipulation. In the first section of the paper, some relatively unknown "Wag the Dog" cases are explored. In the second section, the complicity and lack of vigilance of the Media with regard to manipulation is discussed. In the final section, the paper assesses the impact of both the manipulation of the media by government and the media's own complicity in such manipulation.

    Far from Fiction: Wag the Dog and the News Media in Wartime

    Released in 1997, Wag the Dog was quite prescient in its central scandal (a president charged with sexual misconduct) and effectively chronicles the many ways in which the news media is manipulated. Two weeks before the election, the president in Wag the Dog has allegedly had sex with a minor. In order to divert attention from this scandal and increase support for the president, Conrad Bream, a sort of public relations professional (played brilliantly by Robert De Niro), invents a war with the country of Albania. Bream hires the services of Hollywood producer Stanley Motss (Dustin Hoffman), and the movie is a sustained study of their clever manipulation of the media. Among the many manipulations the two devise are the in-studio creation of a war - digitally fabricating a poor Albanian village ransacked by fighting; staged ceremonial events congratulating the president on his efforts in Albania; anonymous leaks to the press; and finally a public relations campaign used to engender sympathy for a 'lost soldier' when the fictionalized war is abruptly put to an end by the CIA. Initially, some of Wag the Dog's scenarios seem ludicrous, but on closer reflection many of the tactics used in the film are quite real.

    "Wag the Dog" Cases

    While the relationship between the media and the military has never been amicable, after Vietnam a newly adversarial relationship developed. Convinced that the media portrayal of the Vietnam War significantly contributed to its negative perception by the public, military officials sought new ways to suppress negative press coverage. The new set of rules was first implemented in the 1983 U.S. invasion of Grenada. Ironically, the Grenada invasion is the first war cited by Conrad Bream as an example of media distraction. When asked how the appearance of a war will distract attention, Bream says:

    Bream, of course, may be taken to imply that the Grenada invasion was specifically intended to distract media attention by "changing the story, changing the lead." This, in fact, was the claim of many writing at the time of the Grenada war. While it is doubtful that the Grenada invasion was undertaken specifically to distract attention from the Beirut killings (the decision to invade was made three days before the bombing), it is hard to imagine that its political benefits did not play a part. The Grenada invasion took place (as Bream says) just 24 hours after more than 200 marines were killed by a suicide bombing in Beirut. In his book On Bended Knee, Mark Hertsgaard states that the Marines' deaths had the potential to be a political disaster for President Reagan because of widespread fear in the public that Reagan could take the country to war. In fact, the war became a political triumph. Public opinion polls rose sharply after the war, spurred on by Reagan's own explanation of it as an example of the restoration of American power (Hertsgaard, 1988).

    It is here that we see many parallels with Wag the Dog. In the days leading up to the invasion, information about it was leaked to the press. When asked about the possibility of invasion, officials declared that the idea was "preposterous". The decision to lie was not an isolated incident by war planners but a directive from the highest offices to mislead the press. The invasion was unknown even to the press offices of the White House and the Pentagon until one hour after the attack had begun. One reporter managed to make it to the island but was detained on a U.S. Navy vessel (Hertsgaard, 1988).

    The government further subverted the press by barring reporters from going to Grenada to report on the invasion. As a result, most of the pictures and video of the war came from the government. The video footage was carefully selected: the government videos consisted of paratroopers dropping on the island and shots of students kissing the ground as they returned to America. But this distorted the reality of the invasion. Only a few students kissed the ground on their return; in fact, the students as a group were divided about just how much danger they had been in. Other government-supplied videos showed warehouses stockpiled with weapons, purportedly the weapons with which Cuba was going to take over Grenada. One of the reasons for invading Grenada, according to the Reagan administration, was that it was being turned into a Cuban-Soviet military base whose purpose was to disrupt the Caribbean and Central American region. When reporters were finally allowed access to the island, the claim that Grenadian warehouses were full of Soviet missiles and weapons - tenaciously reported in all major news outlets and 'supported' by the government-supplied videos of said warehouses - was shown to be false, refuting one of the rationalizations for the war. A second rationale for the war was to "rescue American students", but Hertsgaard asks the interesting question: rescued from what? The clutches of Cuban-trained Marxists, or the combat ignited by the U.S. invaders? Substantial evidence exists that the Americans could have returned safely without military rescue. The weekend before the invasion, for instance, Cuba and Grenada both made arrangements for Americans to depart the country if they wished (Hertsgaard, 1988). Thus another war rationale was refuted.
    Nevertheless, the Grenada invasion was a political victory because the administration lied to the press, barred reporters from entering Grenada, and provided news outlets with its decidedly sanitized and favorable images of the war.

    In the book Toxic Sludge is Good For You, authors John Stauber and Sheldon Rampton state that after being shut out of the Grenada War, journalists raised enough controversy to lead to the creation of a bipartisan committee that tried to balance the interests of the military and the press in wartime. The idea of a media pool was developed and faced its first real challenge in the 1989 invasion of Panama to oust General Manuel Noriega. Ostensibly, the pool was to provide journalists with quick and easy access to the military and the war, but it quickly turned out to be yet another way to subvert the role of reporters. Media pool members arrived late, after being delayed two hours by the Pentagon. When the reporters arrived they were detained on a U.S. military base for another five hours, and therefore missed all the major combat actions, which took place during this time. Moreover, the media pool was fed outdated information by the U.S. embassy instead of being taken into combat (military personnel refused to take journalists into the combat zone). After technical difficulties with a fax machine in the Pentagon were overcome, the first pictures of the war surfaced four days later, most of which were taken by the government. The pictures and videos were of parachuting U.S. troops, and the reports consisted mostly of U.S. casualty figures, with nothing from the battlefield.

    However, media manipulation was not limited to these short wars. In the 1980's, the Reagan administration secretly tried to overthrow the Sandinista government of Nicaragua, which in 1979 had ousted the American-friendly dictatorship of Anastasio Somoza. In order to win public support for U.S. actions against Nicaragua (trade and economic sanctions), the Reagan administration in January 1983 directed CIA director William Casey to set up an office of "public diplomacy", described as "a set of domestic political operations comparable to what the CIA conducts against hostile forces abroad; only this time, they were turned against the three key institutions of American democracy: Congress, the press and an informed electorate ... the administration built an unprecedented bureaucracy in the [National Security Council] and the State department designed to keep the news media in line and to restrict conflicting information from reaching the American public." (Stauber & Rampton, 1995).

    Following the advice of the leading public relations professionals of the day, the White House created a "communications function"; the primary purpose of its Office of Public Diplomacy (OPD) was to discredit the Sandinista government in the eyes of the American people. Soon a mythical crisis was developed which greatly resembles Bream's first actions in Wag the Dog. Bream, after finding out that the president has been charged with sexual misconduct with a minor, hatches his first plan to subvert the story: he tells his aides to leak a story about a B-3 bomber so that his press office can deny to the press that there is a B-3 bomber. Since no such bomber exists, the denial is technically not a lie.

    Bream cleverly concocts a diversion. Since the president is in China for trade relations, it is imperative that he stay there for a few more days in order to avoid having to answer to the allegations made against him. However, for it not to seem like the president is extending his stay in China because he does not want to face the allegations, he must give the press something to think about – a crisis involving a B-3 bomber. Bream instructs his aides to "let it slip" to a Washington reporter "I hope this [the president being in China] won’t screw up the B-3 program." Of course, the reporter will ask "what B-3 program and why should it screw it up?" to which his aide will reply "to avert the crisis." At this point in the film Bream does not know what the "crisis" is but leaking the story buys enough time to allow him to create it.

    When told by one of his aides that the "story won’t prove out", Bream responds "It doesn’t have to prove out. We just have to distract them." Predictably, the reporters in the film, instead of skeptically addressing the bomber story, spend most of the time asking if the president's stay in China has anything to do with the B-3 bomber and rumors of an "Albanian Ops center." This allows the press office to deny knowledge of any B-3 bomber which only furthers speculation. One reporter asks:

    To which Bream, watching the press conference on television, happily responds:

    In other words, all that needs to be done is to leak a false story and enable the curious journalists to expand on it, ignoring the allegations against the president in the process. After all, "Muslim fundamentalists" and "anti-American uprisings" present threats to national security and "The American Way of life" that are much graver than the sexual misconduct of the president. While the obvious comparison is the President Clinton/Monica Lewinsky scandal*, the Nicaraguan example is more apt since no actual action was taken, as in the film. Following directives to find "exploitable themes and trends", the Office of Public Diplomacy during the Reagan administration leaked uncorroborated stories purporting to show the military threat Nicaragua posed to the U.S. One such story was the 1984 "MIGS crisis." The White House leaked information to the press claiming that Nicaragua was on the verge of receiving Soviet fighter planes. Later research showed that the story did not "prove out", but it served its purpose. Television news frequently played the story of the MIGS crisis, to the extent that regular news programs were interrupted to give "special bulletins" about it. Moreover, the story diverted attention from the Nicaraguan election, which was held that week and in which the Sandinista government - the one Reagan was trying to overthrow - won by a large margin. In fact, the election was the first "free" Nicaraguan election, though it was soon dismissed by Reagan as a "sham." (Hertsgaard, 1988)

    The most striking example of the similarity between the events in Wag the Dog and the news media in wartime may be the 1991 Gulf War. In the film, faked news footage of a young "Albanian girl" (an actress) fleeing a ransacked, digitally created "Albanian village" is the central means by which Hollywood producer Motss hopes to convince Americans that there is actually a war going on and, most importantly, to compel enough sympathy and a sense of urgency to distract attention from the allegations made against the president. Motss refers to the girl as his "young girl in the rubble", meant to "mobilize" public opinion in favor of the war. While the Gulf War was real, in order to justify and encourage support the Bush administration needed its own "young girl in the rubble." That girl was 15-year-old Nayirah, who testified to Iraqi atrocities before the Congressional Human Rights Caucus in 1990.

    In Wag the Dog the challenge was to convince Americans that there was an actual war; the Gulf, however, presented the challenge of making the case for war - a task which involved doing two things: making Hussein the unique embodiment of evil and sanitizing the image of Kuwait in the eyes of Americans. In the book Second Front, John MacArthur explains many of the details of the PR campaign of the Gulf War. Up until a week before the invasion of Kuwait, Saddam Hussein was an ally of the United States. Throughout the 1980's Hussein's ruthlessness was ignored by the United States since the two countries shared a mutual enemy in Iran. But the invasion changed all that. Iraq's invasion of Kuwait would cause a political upheaval in the region as well as impinge on the U.S.'s ability to control resources**. Sanitizing Kuwait meant portraying it as a young, burgeoning democracy and thus worth the blood of U.S. soldiers. In the cynical words of Army PR man Hal Steward:

    Further complicating matters was the fact that Kuwait was not a young, burgeoning democracy. In 1991, Kuwait was ruled by a family oligarchy, the al-Sabah, who had disbanded the Kuwaiti national assembly in 1986, giving all executive power to an Emir chosen by and from the family. Even before the disbanding of the national assembly, women were excluded from the political process and only 65,000 males out of a nation of two million were allowed to vote. Kuwait had also crushed its small but growing democratic movement, banned political rallies, and earned a poor reputation for the near-enslavement of its largely foreign workforce (MacArthur, 1992).

    In order to turn public opinion in Kuwait's favor, the Kuwaiti government financed one of the largest public relations campaigns ever. The first step was the creation of "Citizens for a Free Kuwait", a group represented by the public relations firm Hill & Knowlton. For all its efforts in the Gulf War, Hill & Knowlton received nearly 11 million U.S. dollars in fees from the Kuwaiti government. The reach of Hill & Knowlton's campaign was wide: national days for Kuwait were scheduled; t-shirts were made; and all forms of news media were bombarded with pamphlets, clips and videos. On October 10, 1990, three months before the war, the congressional Human Rights Caucus held a hearing on Capitol Hill, officially the first formal opportunity to present evidence of Iraq's human rights violations. However, the Human Rights Caucus was not a committee of Congress. This is the context in which Hill & Knowlton presented Nayirah, a 15-year-old who, in tearful testimony, claimed to have seen Iraqi soldiers loot Kuwaiti incubators, leaving the children to die on the cold floor. Nayirah, like other witnesses that day, did not reveal her last name, citing fear of Iraqi reprisals against her family. In fact, Nayirah was the daughter of Kuwait's ambassador to the United States, Saud Nasir al-Sabah, and thus hardly a disinterested witness. Nevertheless, the story gained national coverage. MacArthur says of the story:

    The story was repeated by the president and all major media outlets (TV, radio, newspapers), and congressmen later cited it as one of the primary examples of Iraqi barbarity and thus a reason for war. Eventually the incubator story made its way to Amnesty International, which had known nothing of it before the day of Nayirah's testimony. But the story was false. Kuwait's own investigators could not confirm it, nor could Amnesty International, which retracted its endorsement after further investigation. Nayirah herself was never made available for further questioning (MacArthur, 1992).

    Perhaps the most insidious form of media manipulation stems from the use of pre-packaged news, or video news releases (VNRs). It is still not known, for example, what videos Hill & Knowlton produced for airplay on TV during the Gulf War, nor is the firm willing to reveal this. In the article "Under Bush, a New Age of Prepackaged TV News", David Barstow and Robin Stein detail the expansion of pre-packaged news over the last decade. Pre-packaged news consists of segments specifically produced to be indistinguishable from regular network TV news, complete with scripts, interviews, and suggested lead-ins. While pre-packaged news existed at the time of the Grenada and Panama wars, in the form of favorable or "sanitized" video footage, the use of video news releases has since grown exponentially and become much more sophisticated. These videos feature "reporters" who report on an issue just as if it were regular news. Public relations professionals are careful not to push a message overtly, though the segments never feature criticism of their positions. The segments are distributed to various media outlets and subsequently played to millions of viewers. Networks regularly edit these video news releases by, for example, cutting out the paid government employee and having their own reporters read the government- or PR-written script (Barstow & Stein, 2005).

    Another carefully orchestrated plan of media manipulation in Wag the Dog remarkably resembles the use of video news releases to engender certain sentiments in the public. Upon the president's return from China, Brean stages a ceremonial event: a young Albanian girl and her grandmother thank the president for his help in their country, and the segment, broadcast live in the movie, serves to rationalize the fictional war to the American public by showing the good fortune it is bringing about. This deliberate act of manipulation is virtually identical to the first case the New York Times article cites, in which a jubilant Iraqi-American says "Thank you, Bush. Thank you, U.S.A." to a camera crew in Kansas City for a segment about reaction to the fall of Baghdad. The segment was produced by the U.S. State Department.

    The war on terrorism began, however, before the war in Iraq - in Afghanistan. The administration used video news releases to justify and support the war in Afghanistan: a total of 59 segments (according to the Times article) were produced, "reporting" how successful the U.S. war in Afghanistan had been. The video news releases explained that as a result of U.S. action Afghan women had been "liberated" and were now free to go to school and participate in their country's politics. One such video featured reporter Tish Clark, who only later learned that the segment was government-produced. Clark, following standard industry practice, had edited the tape and read the script, giving the segment the appearance of "real news." The segment was broadcast to millions of viewers who were not made aware that it was a product of the government. This effective blurring of the lines between "real news" and government- or PR-produced news, through censorship and media complicity, has led to a marriage between the media and the government - one in which the government is the greater beneficiary.

    The Complicity of the Media

    In any marriage, one expects to find some storming and shouting. But how much storming and shouting did the press do? While each of the wars from Grenada to the Gulf elicited enough anger from various journalists that new committees were formed with the express purpose of better accommodating the demands of journalists in wartime, journalists consistently complied with prima facie cases of military censorship rather than vigorously challenging them.

    The Grenada invasion is a case in point. A day before the invasion, two ABC reporters captured footage of U.S. Navy jets and Marine helicopters on the neighboring island of Barbados. The airplanes landed, transferring soldiers and equipment to the helicopters and men in business suits to the jets. Sharon Sacks, one of the reporters, called the U.S. Embassy to find out if the event signaled an invasion but was told it was an evacuation of students (later shown to be false). When the story was filed to ABC News, editor Robert Fyre declined to run it, citing Washington statements that a prospective invasion of Grenada was "preposterous". Further comments from Washington suggested that a U.S. carrier in the region was there to ferry stranded foreigners, not to prepare for invasion. All of these were lies, yet the press did not run, lead with, or feature this curious case of government censorship and deliberate misinformation in its newscasts (Hertsgaard, 1988).

    John MacArthur, in Second Front, details many of the cases in which top newspaper and TV editors simply conceded to the military, showing a marked lack of vigilance in the pursuit of news and complicity with government censorship. After the failure of the National Media Pool system in the Grenada and Panama invasions, another committee was created to address the problems of the press and military in wartime. The resulting compromise was to be tried in the Gulf War, but it failed miserably. Journalists were confined to hotels and could not report independently; a military escort was required for each report; journalists in pools could not choose stories and were instead assigned "slots" by the Pentagon; and journalists' stories had to undergo a "security review" - essentially censorship - before they could be filed. As a result of the rules journalists agreed to going into the Gulf War, they were effectively prevented from reporting anything other than what the Pentagon wanted. Popular stories of the war praised the accuracy of U.S. bombs and exaggerated the might and size of the Iraqi army.

    If pre-packaged news represents the most insidious form of media manipulation, then the conventional acceptance and use of video news releases in the journalism industry is probably its most unsettling. In a country that receives over 80 percent of its news from television, what is one to make of the fact that much of what seems like "news" produced by the stations is in fact VNRs produced by government PR professionals? Are we to naively assume that the seamless blending of sponsored news and "real news" has negligible impact? The specific purpose of VNRs is the promotion of a product or ideology. The news media's ostensive purpose is to report on different products, whether they be material goods or ideologies; as such, a primary feature of actual journalism is criticism. In order to fulfil its function as an informer, the journalist must be wary of promoting different ideologies, whether by giving an idea short shrift or by lacking vigilance in reporting. The producers of VNRs have a decidedly different objective: the promotion of a product, which precludes criticism. Thus there exists an essential tension between VNRs and the practice of journalism.

    But if VNRs are common, there must be a reason for their use. The primary argument for VNRs is their financial benefit to stations: news networks have radically downsized while expanding coverage, and VNRs are a cost-saving measure that allows news organizations to obtain footage that would either be too expensive to produce or that they could not afford at all. Unfortunately, while this argument explains their widespread and conventional use, it says nothing about the ethical questions. Media codes of ethics articulate no standards for the use of VNRs. While some codes state that work not produced by news organizations should be clearly labelled, this can hardly be sufficient: VNRs still expound products or ideologies rather than report on or critique them. As a result, the public is still being fed a particular ideology by a news organization. Should the government-produced footage of happy Afghan women be accompanied by anything? Is merely noting that it is a product of the government sufficient to curb the ideology it promotes? Is the later correction of misleading statements or images sufficient? It is impossible to determine precisely how images and videos affect audience perception, but the fact that they do should give us pause before assuming that merely labelling uncritical "reporting" is sufficient to prevent propagandizing.

    The Impact of "Wag the Dog" Cases

    In Wag the Dog, the result of media manipulation is decidedly bleak: the Hollywood producer mysteriously ends up dead and the public remains permanently deceived about a non-existent war. In real life the consequences are not so different, though the casualty is the public's ability to discern the truth. In the film, some characters worry that the public will know or find out about the deception, but Brean, who refuses to assert the truth or falsity of any statement in the film, gives a devastating response to such naive sentiments:

    Asked if the above is true, Brean responds:

    This is the crippling skepticism that results when the lines between "real news" and government-sponsored or public relations material are blurred. The "point" is that the Gulf War images fed to journalists could easily have been faked by the government. Brean outlines a possible scenario in which the bombing could have been fictional, asserts it as "truth", but resorts to agnosticism when asked if what he asserts as true is indeed so. That is: how can you tell the difference? We simply don't know.

    Moreover, as Brean argues in the film, it is the visceral impact of the images that matters, not their truth or falsity. While the news media prides itself on its ability to issue public statements correcting mistakes, by the time the media has corrected misleading visual images the point has already been made. Says Brean:

    Brean's language is important. First, the claim that Americans "bought" that war, together with the comparison of war to show business, suggests that the images function to sell Americans a rationale for war: the images are arguments justifying, and also celebrating, military action. Since Albania is little known to Americans, images are needed to explain the terrorist threat and the danger it poses. Why are the terrorists dangerous, and why go to war? This is explained by the video of a young girl running through the streets of Albania screaming that she has been raped. Why, again, is the president justified in going to war with Albania? By the compassion and gratitude shown to him by an Albanian girl and her grandmother upon his arrival. The staged event explains the success of a vague war on an unknown country. Visual images, whether they communicate truth or not, have lasting impact.

    But perhaps this conclusion isn't too distressing. Perhaps the lesson for the public is to develop a "healthy skepticism" toward the media itself. This is the conclusion of Robert Charles in the informative (but slightly dated) article "Video News Releases: News or Advertising". Charles argues that the charge of "fake news" levied at VNRs is misguided, since VNRs often have general news value. For example, in 1993 Pepsi produced VNRs disproving reports that there were syringes in Pepsi bottles, which were broadcast on news networks. However, Charles concedes that VNRs are problematic because they can be used (and are in fact used) to communicate uncritical 'reports' on products or ideologies. While Charles's even-handed approach is appreciated, the conclusion that this should encourage a "healthy skepticism" misses a crucial point: journalists are supposed to be the skeptics. Can the public be expected to have the time to do as much research as is assumed of journalists? Can the public, for instance, be expected to investigate reports from overseas? It may be replied that a "healthy skepticism" entails only a suspension of judgment, requiring the public to do no more than withhold assent in the face of news. However, this is insufficient. Journalists report information that is supposed to inform the public, information the public can then use to act; journalists do not present their work in a vacuum in which it has no effect on public attention. Thus the concept of a "healthy skepticism", while valuable, ignores the demand it places on a public now expected to research critically the reports of those who are supposed to be its researchers. Though it is unacceptable for the public to take the media simply at face value, it is equally unacceptable that journalists cannot be trusted to give fair accounts in the absence of individual research.


    Wag the Dog is often referred to as a satire because of the 'absurdity' of its central plot: the creation of a fictional war to distract the public from sexual allegations made against the president. On closer inspection, however, the tactics used to manipulate the media in the film are everyday practices in the world of news reporting - exemplified by some little-known "Wag the Dog" cases in the Grenada, Panama and Gulf wars. Staged ceremonial events, anonymous (false) leaks to the press, and the creation of sympathetic characters used to explain and justify war are just some of the tactics used in the film and replicated in the world of news reporting - all of which makes Wag the Dog far from fiction.



    * I say the obvious comparison because of speculation, after the 1998 U.S. embassy bombings, that President Clinton's retaliation - launched on the very day he was to face questioning about Lewinsky - was used to distract attention from his trial.
    ** For more information on the political aspects of the 1991 Gulf War, a good resource is John Pilger whose website is available here.


    Barstow, David and Robin Stein. "Under Bush, a New Age of Prepackaged TV News." New York Times, 13 Mar. 2005 (a copy of this article can be found here).
    Charles, Robert. "Video News Releases: News or Advertising", Sep. 1994.
    Hertsgaard, Mark. On Bended Knee: The Press and the Reagan Presidency. New York: Farrar, Straus, Giroux, 1988.
    MacArthur, John. Second Front: censorship and propaganda during the Gulf War. New York: Hill & Wang, 1992.
    Stauber, John and Sheldon Rampton. Toxic Sludge Is Good for You: Lies, Damn Lies, and the Public Relations Industry. Monroe: Common Courage Press, 1995. (An excerpt from the book, which goes into a little more detail about the Gulf War, is available here.)
    Teaser Paragraph: Publish Date: 06/05/2005 Article Image:
    Jolly Mathen is an independent philosophical researcher residing in San Francisco. His paper On the Inherent Incompleteness of Scientific Theories is a fascinating look at the consequences of some important concepts in the philosophy of science and mathematics, about which I was recently able to question him.

    - Interviewed by Paul Newall (2005)

    PN: Can you explain the basic aim of your paper?

    JM: Primarily to provide an argument for the incompleteness of scientific theories. Secondarily to show connections between scientific incompleteness, belief, arbitrariness, self-reference, and some ideas in the philosophy of science, such as the Quine-Duhem thesis, the underdetermination of theory and the observational/theoretical distinction failure, as well as some ideas surrounding the concept of mathematical incompleteness, such as complexity, infinity and computational irreducibility.

    PN: What do you mean by an "incomplete" scientific theory? In particular, you claim in your paper that "every field of scientific inquiry stands incomplete." Why?

    JM: By an incomplete scientific theory, I mean that there will always be non-trivial questions left unanswered by the theory. In regard to my claim that "every field of scientific inquiry stands incomplete", it is just an observation of the present situation. Some scientists may believe they are close to a complete or final theory, but I don't think any of them would claim that their theories are presently complete; they are obviously still working on them. Can anyone today identify a complete scientific theory? Of course, in my paper I am arguing that, even in the future, scientific theories will remain incomplete.

    PN: You point out that a problem for scientific theories is the "experimental dilemma"; that is, whether novel observations will ever cease and allow us to call a theory complete. Do you think this requirement can ever be met?

    JM: Any scientific measurement is always accompanied by an error. That is, every measurement is taken with a certain amount of precision, which can always be improved. Because this is characteristic of every scientific measurement ever performed, I think we can take it to be a fundamental principle. This is just an inherent feature of scientific measurements and, more broadly, of our everyday natural experience. We can always zoom in and get a more detailed picture of any natural phenomenon. Some contend that this may come to a stop someday, for example at the ultra-subatomic level of the Planck scale, but I don't think so. In this sense, I don't think that novel observations will ever cease. Besides precision, I identify in my paper three other aspects of a measurement that can be tuned to give us a new look at the phenomenon under consideration: perspective, range and interaction.

    The more important question, though, is whether continued improvements in the precision of any measurement will reveal any surprises. If not, then the novel experience issue becomes a moot point. Presently no scientific theory, or meta-scientific theory, can guarantee that more precise measurements will continue supporting a theory. Unless such a guarantee is given, it will remain uncertain whether improved measurement precision will result in surprises. Because of the arguments and results in my paper, specifically the theorem of undefinability of valid observations, I take it that no such guarantee can be given, and therefore novel experiences are a point of concern and not moot. Strictly speaking, however, the empirical evidence alone does not require such an interpretation. I believe it is worthwhile to examine the issue further and see if there is a clearer connection between novel experiences, the theorem of undefinability of valid observations and theoretical incompleteness.

    PN: In your paper you compare the philosophical debate on the existence of God with the scientific debate on whether a Theory of Everything (TOE) is possible. Why are the two related?

    JM: On a surface level these questions are related because the demonstration of a miracle (which would prove the existence of God) would depend on our having a complete TOE (or a complete theory of some specific domain). For example, when Moses parted the Red Sea, how do we know that some alien race whose knowledge of physical laws is far superior to ours was not orchestrating some complex technological drama for its curiosity and amusement? The only way we can know that this is not the case is if we understand those laws completely, for then we can say whether the parting of the sea under the given circumstances is a physically possible event or really a miracle. (For those of you who are Star Trek fans, there was a Star Trek: The Next Generation episode in which a female "goddess" was subjecting the people of a planet to all sorts of catastrophic "miracles" until the Enterprise discovered that she had an invisible ship orbiting the planet.)

    On a deeper level these questions are related because belief (religious or otherwise) and understanding are related. For instance, one might ask, "Can't I still believe in God in spite of having achieved a complete physical theory? What do my beliefs have to do with scientific theories?" The argument here is that our cognitive capacity to believe prevents our cognitive capacity of understanding from being complete. Theories fail to be complete because of our capacity to doubt them and believe in God (or some other theory); or, vice versa, belief is only possible because of the inherent incompleteness of our ideas about the world.

    PN: As a result of this comparison, you conclude that the existence of God is "empirically undecidable". Can you explain what this means, how you argue for it and how this position differs from the traditional atheist/theist/agnostic spectrum?

    JM: By "empirically undecidable" I mean that no matter what we observe or experience, as long as our scientific theories about those experiences remain incomplete, we cannot use those experiences to argue for or against God's existence. In other words, proving whether some observed phenomenon is a miracle or not is an impossible task in the light of scientific incompleteness. From the answer to the previous question we can see how proving whether some observed phenomenon is a miracle is impossible as long as our understanding of physical laws remains incomplete. By similar reasoning, we can also see that as long as this understanding remains incomplete it is not possible to rule out the role that God plays: observations that the incomplete theory is thus far unable to explain may ultimately be due to God; moreover, the incomplete theory may then turn out to be actually incorrect, thus possibly ceding the explanation of all our observations ultimately to God.

    Since I argue in my paper that scientific theories are incomplete for cognitive reasons, then because of the above implications I am also arguing that the question of God's existence is undecidable for cognitive reasons. Therefore, on reflection, I guess the argument supports an agnostic position.

    PN: In your paper you relate the incompleteness of scientific theories to Gödel's work in mathematics. How closely are the two related?

    JM: They are very much related, but not in the manner of the traditional argument stating that any math-based physics is incomplete owing to Gödel's proof that arithmetic is incomplete. Scientific theories are incomplete regardless of whether their mathematical models exhibit undecidability or whether they even have mathematical models. Let me elaborate. Since Gödel's time, mathematicians have discovered many other systems that exhibit incompleteness and undecidability, for example computers and cellular automata. By studying these systems, they have been able to identify two fundamental features necessary for incompleteness: infinity and complexity. Using arithmetic as an illustration, if we limit the natural numbers to some maximum, no matter how large - say, one billion - then arithmetic can be given a complete description. The same holds true if we lessen the complexity of arithmetic by removing either the addition or the multiplication operation (even if we allowed an infinite amount of numbers).

    I suggest in my paper that the interaction between nature and our sensory-cognitive system gives rise to processes that are also complex and infinite in character, thereby preventing our experiences and the theories of our experiences from ever being complete. We can easily see how infinity is involved. As pointed out above, we can always have novel experiences of natural phenomena, by either improving the precision of a measurement or by tuning its range, perspective or interaction. Thus we can continually count up new experiences as we can count up new numbers. How complexity is involved is more difficult to assess at this point. Besides the complex processes going on within our sensory-cognitive system, I would also guess that the languages used to describe our natural experience must have a minimal complexity. The complexity question is certainly an area for future research.

    Besides infinity and complexity, there are a couple of other features that we can associate with incomplete systems: self-reference and arbitrariness. Self-reference is a central feature in both Gödel's mathematical proof and in my demonstration of the undefinability of valid scientific observations. In both instances, it is employed in a manner similar to the construction of the liar paradox, "This statement is false". Incomplete systems are also characterized by the fact that they lead to multiple arbitrary formulations, without singling out any one true formulation. For example, there are multiple formulations of arithmetic, set theory and geometry. In science, arbitrariness was recognized early in the 20th century by philosophers as the underdetermination of theory, which states that it is always possible to have multiple theories on a given domain of phenomena. The relationship between incompleteness and arbitrariness is this: because our language cum theories cannot completely capture our experiences (or mathematical ideas), we must allow for flexibility and mutability in their usage. In summary, we see that mathematical and scientific incompleteness share many features in common: infinity, complexity, self-reference and arbitrariness.

    PN: What is the relationship between incompleteness, theory-ladenness and underdetermination, in your view?

    JM: The underdetermination of theory is a philosophical position that states that an observation (or set of observations) does not determine a unique theory, but allows for multiple competing theories. Deservedly so, it has been a point of much contention and confusion. How is it possible that two or more mutually inconsistent theories can describe the same observational data? I argue that it is only so because the observational data is incomplete; if the observational data on some domain could be complete, then only one unique (class of) theory could be supported. Let me make an analogy. If you were given a low-resolution image of a photograph and asked to guess what it represents, you might entertain several possibilities. But as the resolution of the image improves, the possibilities that you're willing to entertain become fewer and fewer, until finally the resolution matches that of the human eye and you can see exactly what the image represents. I suggest that in science we are always looking at a low-resolution "image" of our experiences, whose resolution we are continually trying to improve (for example, by making our experimental measurements more and more precise), but never reaching perfection. Due to the cyclical nature of scientific progress, there are times when the observational data on some domain appears almost complete and times when it is found wanting. During the former, scientists will settle on one theory (assuming, of course, that any technical hurdles in candidate theories are resolved) and any claims of underdetermination fall on deaf ears, whereas during the latter, scientists are willing to entertain multiple theories.

    Incompleteness also makes clearer the much-discussed connection between underdetermination and theory-ladenness, or holistic models of science. If an observation is incomplete, then the description of the observation is also incomplete. The observation and its description, like the low-resolution photograph, are both fuzzy around the edges. This, as a result, undermines our ability to assign a unique observational term to the observation. At the same time, it allows us some play in the description: we can ply and mold it along its fuzzy edges to fit different theories. On a higher level, the descriptive incompleteness of multiple observations gets translated into an integrated theoretical incompleteness. All the language terms in the theory, being incomplete, are now laden with one another - observational and theoretical terms are inter-laden with the likes of both - resulting in a holistic web of inter-laden terms, in which there is a massive and multidimensional pliability along the fuzzy edges of the new theoretical superstructure. A pliable holistic model as required by underdetermination is therefore granted by the inherent incompleteness of the observational and theoretical terms occurring in the language of any theory.

    As a note, I would like to add that theory-ladenness is more fundamental than incompleteness and underdetermination, and is the cause of the latter two. The theory-ladenness of observational terms, or the observational/theoretical distinction failure, is the scientific manifestation of a cognitive symptom: the inseparability of sensory and thought processes, an issue that, like the observational/theoretical distinction failure, is much debated. I think it is a worthwhile program of future research to determine whether this inseparability in fact exists and why, and how it leads to incompleteness.

    PN: Can you explain briefly what you call "the problem of the undefinability of valid observations" and its relationship with other critiques of completeness?

    JM: Our scientific theories are supported or refuted by our observations about the world. Therefore we need to have a clear idea of when a certain observation has taken place. At first one may ask: what is the big deal, isn't it obvious? Some philosophers and cognitive scientists have argued that it isn't; our theories and background knowledge affect what we see with our naked eyes and how we interpret our scientific instruments - the issue of theory-ladenness. Whether this is so, and to what extent, is much debated. Second, even if we can distinguish observations independently of our theories, can we clearly distinguish among different observations? For example, is some large bush perhaps really a tree? Can we offer distinguishing criteria, perhaps based on sub-definitions of the tree's components, that will be sufficient? Finally, can we give definitions of observations thorough enough that we can't be fooled by the best technological imitations or even some virtual reality simulation? (After all, we can't have some imitation observation dupe us into thinking that some scientific theory is true or false.) The above, taken as a whole, is the problem of determining valid observations.

    Many of the issues we're talking about here, such as completeness, under-determination, theory-ladenness, and the identification of miracles, come to a head on the ability to determine valid empirical observations. For example, if we could determine valid observations, then we could assign definite, theory-free observational terms to them. In my paper, I produce a self-reference argument to show that there can exist no scientific procedure to determine valid empirical observations. The novel experience problem, or observational incompleteness, also lends some credit to this conclusion: because observations are always incomplete, or fuzzy around the edges, we can never make a clear determination of their occurrence.

    PN: You give several critiques of the notion of completeness. Which do you consider the strongest - and why? Is your argument cumulative or does it follow from any of the objections to completeness?

    JM: Presently, the two strongest reasons are the theorem of the undefinability of valid observations, which is based on a self-reference argument (as just mentioned), and the novel experience problem, which is based on the observation that the precision of all scientific measurements can always be improved. The argument does not have to be cumulative, but a cumulative argument, as given in my paper, can serve to flesh out the connections between the many different critiques.

    PN: Your paper explores the philosophical pedigree of your thinking, concentrating in particular on the Duhem-Quine thesis. Can you explain this thesis and why it was important to your argument?

    JM: The Quine-Duhem thesis is a generalization of the under-determination of theory and, like it, presupposes that all the observational and theoretical terms, concepts and laws of a scientific theory are interconnected in a holistic web, and that by suitably modifying aspects of this web, any theory can be accommodated to any observation. This curiously, and alarmingly, makes science a somewhat arbitrary affair, and it has therefore become a hotly debated issue. The Quine-Duhem thesis is important to my paper in two respects. As mentioned above, scientific incompleteness, like mathematical incompleteness, requires theories to have a certain amount of arbitrariness to them; the Quine-Duhem thesis fulfills this requirement. In fact, the pliability implied by the Quine-Duhem thesis, as in the case of the underdetermination of theory, is due to the incompleteness of the observational and theoretical terms occurring in a theory.

    The Quine-Duhem thesis also brings to the surface the role played by belief in science. As I mentioned earlier, scientific incompleteness and belief are joined at the hip. This is not restricted to religious belief but extends to any kind of belief, even a scientific one. In fact, the two components of belief, faith and doubt, find exact parallels within the Quine-Duhem thesis, which itself has two components. The first is the underdetermination of theory, which again states that an observation (or set of observations) does not determine a unique theory but allows for multiple competing theories. The second is the underdetermination of observation, which states that a theory can accommodate multiple incompatible observations. As you may guess by now, the underdetermination of theory plays the role of doubt: we can doubt some theory in favor of some other theory. The underdetermination of observation plays the role of faith: we can always hold onto some given theory no matter what the observational evidence.

    In light of this parallel, we can draw another parallel between the debate over God's existence and the Quine-Duhem thesis. The inability to prove that God does exist can be likened to the inability to prove whether any given theory on a given domain of phenomena is the correct one. This failure is due to doubt, or the underdetermination of theory. Second, the inability to prove that God doesn't exist can be likened to the inability to prove whether any given theory on a given domain of phenomena isn't the correct one. This failure is due to faith, or the underdetermination of observation.

    PN: What consequences do you see for your paper and the critique of completeness?

    JM: Some are: (1) The study of meta-science can tell us something about our ability to believe. (2) That no matter what we may observe, "miracle" or "scientific fact", we can never prove that God exists or doesn't exist. (3) That we will never achieve a theory of everything in physics nor a complete theory of any domain of phenomena, i.e., of chemical interactions, genetics, cancer, star formation, evolution, etc. (4) That our understanding (theories) of all natural phenomena will continue to evolve. (5) That we can't rule out the development of seemingly impossible technological advances, such as faster than light travel and anti-gravity devices.

    But more importantly, what do the above consequences tell us about sentient entities like ourselves and how they sense and understand the world around them? About living organisms and how they interact with their environment? It seems to me that there is something peculiar and deep going on here. I think there are bigger questions ahead.

    PN: What are your next projects? Will you be continuing to work on the ideas in this paper?

    JM: This question dovetails with the end of my last response, so let me expand on that. For starters, how is it that our experiences can never be complete, that is, that our scientific measurements can never be 100 percent precise? Is this purely an extraneous feature, or something that arises out of our interaction with the external world, and, if the latter, how does it arise? Another question is whether cognitive processes are necessarily wrapped up with sensory processes and, if so, why? I think that cognitive science would be an excellent avenue of pursuit for these questions.

    Also, the study of language and formal systems may be able to tell us something about the complexity requirements, if any, of scientific (and natural) languages. (As I mentioned earlier in the interview, a minimal level of complexity is a requirement for incompleteness.)
    Last, I think that quantum physics may be able to shed light on scientific incompleteness. Quantum physics has taken science to the point where the role of the observer has become an integral part of the theory. It doesn't merely tell us about the world, but about our knowledge of the world. Further research may then tell us whether this knowledge can be complete or not. (In this spirit, the uncertainty principle and the novel experience problem may appear related but, pending further investigation, I can only see a superficial connection between the two.)

    PN: Why are you interested in the philosophy of science? What prompted you to consider this issue at all?

    JM: The philosophy of science, like epistemology and cognitive science, is interesting to me because it addresses the nature of knowledge and understanding itself, one of the great mysteries. I didn't pursue this interest seriously in the past because I thought philosophy too speculative and, at the same time, I was already taken by the mysteries of physics. Then, about four years ago, I decided finally to find out what all the fuss was about concerning Gödel's incompleteness theorem in mathematics, a result I had only heard about here and there but never really understood. What Gödel had discovered stunned me. How could such a formal and logically tight system as mathematics be eternally incomplete? To me, this said something about the nature of knowledge. It harked back to some ideas I had during my philosophy and history of science courses as an undergraduate. The new-found mystery and the rekindling of old ideas prompted me to investigate whether incompleteness was a more widespread phenomenon, and what its causes were. My paper represents a momentary culmination in this on-going investigation.
    Publish Date: 06/05/2005
    By Paul Newall (2005)

    M. Night Shyamalan's The Village has aroused vociferous responses from viewers and commentators alike but there have been few detailed studies of the themes and ideas explored in considerable depth in the movie. In this essay we look at the celebration and criticism of utopian communities alongside the love story that forms the core of the film.

    It is worth noting in passing, however, that Shyamalan is not thought the modern Hitchcock for nothing. In particular, the use of editing in The Village to maintain the deception until the last possible moment (especially when Ivy Walker is set to leave) is masterful. The role of colour, too, and the manner in which it adds to the tension and sense of separation, is quite brilliant. Some viewers reacted badly to the plot twists, perhaps because his other works, in which the rug was similarly pulled out from under them, had led them to expect such a device; but here such protests miss the point of the film and ignore what has been achieved over the course of the story. By examining it more closely we can see that nothing has actually changed in the community between the opening and closing scenes, save our feeling for what was important about the village after all.

    Before shooting began, Shyamalan put all his actors through a form of "boot camp" in which they were introduced to the skills required to live off the land self-sufficiently. From their own comments, it seems this period helped them appreciate how they would need to rely on one another, as well as understand how much pleasure could be derived from such a life. Joaquin Phoenix even carved Bryce Dallas Howard a guide stick for her blind character Ivy Walker, a gesture that, she has said, made it easy work to pretend to be in love with him. It is against this backdrop, however, that Shyamalan examines the utopian ideal and the many questions associated with it.

    Covington Woods is a community isolated from the outside world by the presence of "those we don't speak of", hostile creatures who live in the woods in something of a truce with the inhabitants of the village: they stay away so long as the people maintain their border unbreached. A line of markers and a watchtower mark the detente. These circumstances are apparently coincidental, however: the elders moved to Covington originally to get away from the nearby towns – "wicked places where wicked people live". Having lost loved ones to crime, the founders of the community have journeyed away from the decadence they saw in the world to try again and provide a better life for their children.

    Shyamalan uses his village to plumb the depths of this life for philosophical insights into the nature of the utopian venture. The first and perhaps most general issue is whether we can secede from the world to avoid societal problems, or whether these are inevitably part of life. Can we create a community without them? Many people, it seems, believe we cannot, holding wars and criminality to be unavoidable and a part – if an unfortunate one – of the human condition. This is the opinion of August Nicholson, who wakes with a start mumbling "… like a dog can smell you." When Lucius Hunt asks him what was said, he expands:

    Later, near the close of the movie, he repeats the lesson he has learned from recent events, including the death of his son, saying "we cannot run from heartache… heartache is a part of life. We know that now." Somehow this can strike us as an easy answer, though: certainly the community of Covington Woods does not appear to have any of the concerns faced by the distant towns, although we might say that Shyamalan has created a fiction only. There is a distinction to be made between unavoidable sorrows due to death and accidents, on the one hand, and evils like rape and murder on the other. Can people come to terms with the former while learning to avoid the latter? Is it unduly pessimistic, or instead realistic, to answer in the negative? The Village shows us that this demarcation is a sound one, but its accuracy depends on the extent to which we take the film to reflect genuine possibilities in our world. We can also look to actual utopian communities to help us decide this issue.

    Even if they are not perfect, of course, we can still wonder whether Covington Woods and other utopian societies are better than the outside world, and what "better" can mean in this context. The villagers seem genuinely happy, for example, but again we can object that this is based on Shyamalan's imagination and may not be representative. Nevertheless, those of us with experience of life in smaller communities can attest to the value of closer integration with the people around us while, conversely, the alienation due to modern life in large cities has been the subject of much study by sociologists and psychologists, among others. When we listen to Jake (actually a cameo by Shyamalan himself) holding forth on how best to work for the Wildlife Preserve, all the stories in his newspaper concern murders or combat deaths.

    What factors limit the success of separatism? The most important one for the story is the medical constraint. Stabbed by Noah Percy, Lucius lies dying from an infection that can be cured by medicines available in the dreaded towns and this knowledge weighs heavily on the mind of Edward Walker. The additional irony is that Lucius had himself requested permission to travel to the towns to return with potential new medicines, his intention being to improve the quality of life in the village and perhaps help Noah. This, of course, is a common objection to utopian ideals (and also to primitivism): we may rue the evils of modern life, but are we prepared to do without the many advances in medicine if we give up on it and try to start anew? The implication is that either we have to make just such a bargain or we cannot consistently reject the outside world.

    When it comes to the threat faced by Lucius, indeed, Walker does not insist on the separation of Covington Woods that he has fought so hard to protect. His wife, on the other hand, is the voice of the critic of utopia:

    This idea that "no good can come without sacrifice" is one some people accept, such as those who refuse blood transfusions or invasive surgery that could potentially save them in order to maintain a principle that holds these are morally (or otherwise) wrong. For Walker, however, matters are not so straightforward. "It is a crime what has happened to Lucius", he insists, and although this event occurred within the community, there is a sense in which the intentions behind it have been violated by the attempted murder, a justification Walker uses to appeal to outside help. Is this acceptable? More importantly, is it not somehow an absurd question to be asking?

    This narrow view of utopias relies on a strict separation of them from the outside world, but on the face of it there is no reason why we should judge them according to a criterion of self-sufficiency. The Amish, for instance, prefer to live apart but occasionally trade with others. Can the utopian community, by its example, show that a different life is possible without necessarily giving up on everything? Shyamalan hints at this interpretation with the apparently incidental character of Kevin, the patrolman who guards the border of the Walker Wildlife Preserve. Confronted with the desperate figure of Ivy Walker, he is won over by her appeal and obtains the medicines she needs. More than this, however, we notice the mixture of quiet awe and fascination with which he sits in his van when she has gone, deep in contemplation. This is a particular case of the appeal of utopian ideas, of course, in which we wonder if the lives we lead are not really taking place in the best of all possible worlds in spite of the technological advances we claim to have made. Perhaps it is for this reason that Walker has arranged not to keep people in his community, a function carried out by "those we don't speak of", but to keep them out – achieved, as we learn, by paying government officials to prevent plane routes from passing over the woods as well as by the setting up of the preserve.

    In addition to the matter of how the village should interact with the towns, there is the converse: how should wider society treat utopian communities? Do people have the right to secede if they wish? Here again we come to the medical critique: if a group proposes to settle apart from others, should they then be denied access to public institutions and services? The issue here is that in most states we are expected to contribute to the maintenance of order and other provisions, such that withdrawing into a separate community would deprive others of funding. Why, then, should such people remain entitled to healthcare? This tension is what Shyamalan exploits with the dilemma faced by the elders when Lucius' life hangs in the balance.

    It remains the case that Walker breaks his own oath, if not himself then through his daughter, by allowing contact with the towns. With this exception, he and the other elders maintain the pretence of the creatures inhabiting the woods to sustain the separation of their community. Are "those we don't speak of" an example of a noble lie, a shared falsehood used ostensibly to bring about positive consequences? We see, eventually, that the creatures are only "farce", but they sustain the integrity of the village and its ideals. Here we find another interesting question explored by Shyamalan: are such noble lies required by utopias, in one form or another? Plainly other societies cope without the threat of beasts clad in red, but is there a necessity for an inward-looking mentality of one form or another, or at least a feeling that there is no need for the trappings of the outside world? For one thing, the suggestion seems to be that it is difficult to leave behind everything, as Lucius observes when he tells his mother that "there are secrets in every corner of this village. Do you not feel it? Do you not see it?" Her justification is that she does not want to be ruled by her memories but at the same time does not want to forget them and the reasons why she decided to become part of Covington Woods in the first place.

    The noble lie, in any case, does not scare Lucius, and he is conscious of the medical problem. Warned of the possible danger, he insists that the creatures will sense his motives: "they will see that I am pure of intention and not afraid." Where the death of Daniel Nicholson plays on his father's mind, convincing him (as we have seen) that sorrow is unavoidable, for Lucius it outweighs the threat of potential harm – in short, it does not suffice to keep him content to stay in the village. This restlessness, caused in Lucius by a desire to help others, is effectively countered in his fellows by the reminders of "those we don't speak of" (particularly the warnings in the form of the skinned animals). That some, like Lucius, are not satisfied suggests that to preserve an order based on a lie, those who seem likely to disregard it must be brought into the deception. This is the subtext when Walker asks "who do you think will continue this place, this life? Do you plan to live forever? It is in them that our future lies. It is in Ivy and Lucius that this way of life will continue."

    This, though, is one of the most difficult aspects to the story and Shyamalan is careful with it, not providing a clear – if any – answer. The optimistic interpretation is perhaps that Ivy and Lucius will appreciate the value of the community they are a part of and agree that its survival is more important than the truth, such that a relatively harmless lie is a price worth paying. Walker tells his daughter that "there is no one in this village who has not lost someone irreplaceable, who has not felt loss so deeply that they question the very merit of living at all", but he is speaking only for the elders. When he insists that "it is a darkness I wished you would never know", there is no reason to doubt his sincerity and yet it is the morality of the means that disappoints Ivy. "I am sad for you, Papa", she replies, and this is the indictment of the endeavour as a whole and the point on which the story turns (with one exception, discussed below). Is the life in Covington Woods justification enough for the lie, or does it show us that utopias are predicated on a discontinuity with the rest of the world that in reality does not exist and hence can only be sustained by lies? "What was the purpose of our leaving?", Walker asks. "Let us not forget – it was out of hope of something good and right." Challenged that he has put the community at threat, he ultimately retreats to a moral argument:

    Here, it seems, is our answer: it is better always to do what is right, and the perspective of his beloved daughter has won the day.

    Although this is an impassioned speech and it convinces the objectors, Shyamalan gives us a different kind of innocence from that which Walker appears to have had in mind. All the exploration of utopian communities and ideals serves as a backdrop to what The Village ultimately is – a love story. For all the critical commentary on the twists in his plots, in this movie Shyamalan's direction takes second place to his writing, with some beautiful dialogue underscoring the depth of the relationship between Ivy and Lucius. The part of Lucius, according to Shyamalan himself, was written specifically for Phoenix, whose soft, breathless delivery perfectly complements his nervous yet quietly confident role as Ivy's guardian angel. "How is it that you are unafraid while the rest of us quake in our boots?", she asks him, and he responds in a way that helps us understand Shyamalan's verdict on Covington Woods and experiments like it:

    For Lucius the threat of the depravity to be found in the towns or the dangers lurking in the trees has no effect because of the attitude he takes towards life. His utterly unconditional love for Ivy is but one facet of a character that seeks only to achieve what is necessary and no more. In many respects he may strike us as something of a simpleton, but this speaks to our own prejudices that Shyamalan is challenging. Lucius senses that the separation of the community from the world outside is one side of a false dilemma and wishes to travel to the towns not for his own benefit but for that of others. He is simultaneously trapped by his indecisiveness in personal matters, which Ivy summarises by telling him that "sometimes we don't do things we want to do so that others won't know that we want to do them." The tension arising in him as a result of these two aspects, in which his quietude and willingness to selflessly help others restricts his ability to tell Ivy how he really feels, is something of a microcosm for the strange way in which utopian communities exist outside the modern world even as their very closure limits what they can achieve.

    For her part, Ivy is the heroine of the tale as a whole and ultimately saves Lucius. She is the leader-in-waiting of the community and is able to quiet Noah as no one else can. The interesting thing about her is that she is blind. For Shyamalan, likely intentionally, this has two consequences: firstly, she cannot see the outside world when she ventures into it – nor that the creature that attacks her in the woods is really Noah. When her father tells the Percys that "your son has made our lies real", his daughter’s lack of sight is an equal factor and gives them a chance to maintain the pretence if they wish to. The moment at which the elders all stand in agreement is significant, as we will come to appreciate below.

    Secondly, she is truly self-reliant in a way that the other villagers (excepting Lucius) lack the courage to be – and even Walker, when faced with the conflict between his principles and his concern for Lucius, chooses the former. We could say, of course, that this actually is the brave decision to make, but there is a strange disconnect between his words and his actions. He tells Ivy that the burden of travelling to the towns to find medicines is "yours and yours alone", but he sends two escorts to accompany her and tries to convince the other elders that this is the right thing to do. Moreover, that Lucius might have died was due to his living in Covington Woods, where not only were barriers erected in the form of "those we don't speak of" to prevent villagers going to the towns to improve medical conditions, but the preserve is also strictly controlled to ensure that no influences from the towns reach it. When Noah stabs Lucius, then, it is difficult to accept that responsibility for remedying the consequences falls solely to Ivy when the elders have brought about the circumstances that would lead to death without recourse to the towns. Indeed, we could argue that the movie demonstrates that the effects of our choices are far wider ranging than we might suppose, in this case for all intents and purposes condemning a man to death for the sake of a principle. Walker is consistent initially but changes his mind when his daughter tells him that she will die with Lucius.

    To understand how the problem is answered by Shyamalan, we have to look to the relationship between Lucius and Ivy and how it is used to conclude The Village. There is no better way to stress the importance of these characters than the beautiful porch scene in which Lucius quiets Ivy. "Why can you not say what is in your head?", she asks him, after taunting him with talk of their wedding and making plain that she knows how he feels about her. "Why can you not stop saying what is in yours?", he replies, and then it begins, leaving the viewer spellbound:

    Lucius loves Ivy completely, in a fashion that takes no account of where they are and for what reasons. Ivy, likewise, is utterly certain of their love and tests her faith in him when the creature first visits the village. As touching as this may be, though, what relevance does it have to any of the foregoing or to interpreting The Village? The answer lies in the closing moments, when the elders have made their pact to perpetuate their stories and the community the way it is. As they stand in unison, Ivy returns and ignores them all, rushing to the bedside. As she grips Lucius’ hand, the questions of where she is or what principles should guide our lives fade into nothing and the words of her father ring in our ears. "She is led by love. The world moves for love. It kneels before it in awe." Covington Woods is many things – an experiment or a critique of society and the utopian communities that try to improve on it – but it is the place where yet again the world moves and buckles under the weight of love. What, in the final analysis, is most important? Ivy Walker answers in two words and all else comes to nought: "I'm back."
    Publish Date: 06/03/2005
    By Paul Newall (2005)

    A familiar sight in the philosophy of science is reference to the underdetermination of theories by the available evidence. In this short paper we will explore some examples of this phenomenon and the reasons why it is posited as a problem.

    In 1543, Nicholas Copernicus published his De revolutionibus orbium coelestium and, in the years that followed, some philosophers and astronomers took up the idea of a Sun-centred universe with a moving Earth. However, when Cardinal Bellarmine had occasion to write to Foscarini about whether or not Galileo and others had been able to demonstrate the truth of heliocentrism, he suggested that

    He was discussing the notion that the Copernican system was able to save the appearances; that is, that it was possible to explain what was observed in the heavens on the basis of Copernicus’ theory. Although Bellarmine allowed that this theory could be considered better than the "eccentrics and epicycles" of the Ptolemaic/Aristotelian system, in fact it employed approximately the same number of these devices. Although Galileo was able to point to his telescopic work, this was unable to provide the demonstration he sought. The judgement of the day, then, was that it was impossible to choose between the Copernican and the Ptolemaic/Aristotelian systems on an empirical basis; and this is an assessment that philosophers and historians of science agree on today.

    We call this situation underdetermination: the available data do not permit us to make a decision between two (or more) rival theories. Although some thinkers have suggested that this is only a minor problem, since it occurs only rarely, this is not the case. In particular, if we consider gravitational theories or the situation in contemporary physics since the advent of quantum theory, this position is untenable. Nevertheless, it is important to clarify the difficulty: underdetermination is found when we compare two large-scale theories, not isolated ones. This is because when we talk about a "theory", we do not mean (and cannot mean) a singleton, considered on its own. Following an argument from Quine, our theories are always interconnected, mutually supporting one another. In particular, any theory needs a host of auxiliary hypotheses in order for us to use it, which forms a criticism of methodological falsificationism.

    Given that theories are sometimes underdetermined, then, how can we decide between them? An obvious answer, of course, is not to decide at all. If we cannot find a way to make a demarcation then we could simply take an agnostic position and admit we do not know which is "better". In that case, we could divide our efforts between the two (or more) and see if a difference subsequently comes to light as they are developed further. This is sometimes called methodological pluralism or the proliferation of theories.

    A second response is to realise that empiricism does not hold the status once ascribed to it: we do not accept or reject theories based solely on the evidence for them but also on account of many non-empirical criteria, such as parsimony; internal consistency; beauty (for example, Copernicus’ certainty that a Sun-centred system was more aesthetically appealing); explanation; the ability to make novel predictions; and so on. This does not answer underdetermination so much as accept it as a limitation on empiricism, which can thus only take us so far in the matter of theory evaluation and choice.

    Strong Underdetermination

    The recognition that evidence is not the only heuristic we employ in deciding between theories allows us to distinguish between two forms of underdetermination: strong and weak. The first of these tells us that there is no way to distinguish between theories with the same observable consequences – called empirical equivalence – and points to the existence of an infinity of possible theories consistent with any finite data set. For example, the theories "general relativity" and "general relativity plus 'New Zealand will win the next Rugby World Cup'" are equally supported, but their comparison seems absurd.
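    The point about an infinity of theories consistent with any finite data set can be made concrete with a small numerical sketch; the "data" and rival "theories" below are invented purely for illustration and are not from the original text:

```python
# Underdetermination by finite data: two rival "theories" that agree
# on every available observation but diverge on unobserved cases.
data = [(0, 0), (1, 1), (2, 2), (3, 3)]  # finite "observations" of some quantity

def theory_1(x):
    """The simple theory: y = x."""
    return x

def theory_2(x):
    """A rival: agrees with theory_1 at every observed point, because the
    correction term vanishes there, but diverges everywhere else."""
    correction = 1
    for (xi, _) in data:
        correction *= (x - xi)
    return x + correction

# Both theories "save" all the available evidence...
assert all(theory_1(x) == y and theory_2(x) == y for (x, y) in data)

# ...yet they disagree about the next, unobserved case.
print(theory_1(4), theory_2(4))  # 4 versus 4 + 4*3*2*1 = 28
```

Adding any multiple of the correction term yields yet another rival that saves all the evidence while predicting whatever we like elsewhere, so the finite data underdetermine the choice among infinitely many theories.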

    Indeed, strong underdetermination is typically rejected because it fails to note that we do not claim to be able to choose between empirically equivalent theories on the basis of empirical criteria, which is impossible by definition. Moreover, it relies on an implicit separation of theory and observation: when we say that the evidence underdetermines the theory choice, we run up against theory-ladenness. Since we cannot distinguish between theory and observation in a straightforward fashion, we cannot appeal to or rely on theory-neutral observations and say that these disallow the possibility of making a choice. After all, the observations that give us this problem of underdetermination in the first place are themselves theory-laden. In brief, then, we cannot say that underdetermination makes theory choice impossible because we already use theory in obtaining the evidence that leads to underdetermined theories to begin with.

    The combination of this limitation and the untenable theory/observation distinction makes strong underdetermination too bold a claim.

    Weak Underdetermination

    The second form of underdetermination acknowledges these difficulties but makes a weaker claim; that is, that it is always possible to construct alternative theories which are empirically equivalent and also share many of the characteristics we desire in scientific theories. For example, suppose that a theory T1 represents the entirety of science at a given time and that P stands for the set of all observable phenomena – observable whether "naturally" or by extension using instruments. Assume then that T2 is a rival theory that has the same consequences in P. It follows that T1 and T2 are underdetermined and – more importantly – that no amount of advance in instrumentation will change the situation, since we can always construct a similar argument.

    Another instance of underdetermination to concern ourselves with is that provided by Goodman’s New Riddle of Induction, discussed when looking at confirmation. By applying predicates like grue we can find theories that agree empirically to date but make differing predictions at some point in the future. In general, weak underdetermination is the recognition of the limits of evidentialism, the notion that we hold to our ideas insofar as they are supported by evidence.
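Goodman's construction can be sketched in the same spirit (again my illustration, with a hypothetical cut-off time T): two predicates that agree on every observation made before T, yet make conflicting predictions from T onward.

```python
# Illustrative sketch (not from the article): Goodman-style predicates that
# agree on all past evidence but diverge in the future.

T = 100  # hypothetical cut-off time for the "grue" predicate


def green(t):
    # The ordinary predicate: green at all times (t is ignored).
    return "green"


def grue(t):
    # "Grue": green if examined before time T, blue thereafter.
    return "green" if t < T else "blue"


# Every observation to date confirms both hypotheses equally...
assert all(green(t) == grue(t) for t in range(T))
# ...yet they disagree about observations made at or after T.
assert green(T) != grue(T)
```

The evidence gathered before T therefore underdetermines the choice between "all emeralds are green" and "all emeralds are grue".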

    To summarise, underdetermination is almost an acceptance that we are creative in our explanations and can typically find more than one for a given puzzle. It speaks against a naive form of empiricism and is only a problem for those who suppose that there is nothing more to science and scientific theories than an appeal to data.
    Publish Date: 06/03/2005
    Gonzalo Munévar is a professor in the School of Humanities, Social Sciences and Communication at Lawrence Technological University in Southfield, Michigan, and a former student of the philosopher of science Paul K. Feyerabend. He is the author of several books, which are linked to in the body of the interview, and also a keen writer of fiction that his teacher enjoyed (see his The Master of Fate, for example). I was fortunate enough to be able to ask him about Feyerabend - the man, his approach and his ideas - as well as some of Professor Munévar's own thinking.

    - Interviewed by Paul Newall (2005)

    PN: How did you first become interested in Feyerabend's work?

    GM: I was writing a Master's thesis in the philosophy of mind when I ran across his papers on what later came to be called "eliminative materialism", the view of which the Churchlands are the main exponents today. Those papers made me realize that to do philosophy of mind properly one had to place it in the context of philosophy of science. When I went to Berkeley for my doctorate and became his student, it was only natural to read his work in the philosophy of science. He was so mesmerizing, one wanted to read his papers. That was four years before the publication of Against Method.

    PN: What did you learn from your time as Feyerabend's student?

    GM: You have to understand that at the graduate level Feyerabend never taught his own work, at least not directly. Everyone in his seminar was supposed to give a presentation, on a topic of the student's choosing, and he criticized every presentation in a very forceful way. If his views came in at all, they did so in the discussion, in the argumentation going back and forth. It was not in the style of most graduate courses and seminars, but rather in the style of Socrates, except that it did not have the endearing but self-serving statements Socrates used to make. Feyerabend was practically without ego, at least in his graduate seminar. After I presented what I thought was a very original idea in that seminar, I tried to impress another professor with it. The professor told me Feyerabend had already published it some years previously. It was in one of the few papers of Feyerabend I had not read. But sure enough, the idea was in it. I asked him why he had not said anything to me. He said he had forgotten all about it, so when he heard me explain it, he thought the idea was mine and new. I wish I had learned his modesty, but I am afraid I've never managed to be that good of a person.

    In his undergraduate courses he did lecture, and in his lectures he did more formally expose students to his views, but he never pushed those views even then. What I did learn from Feyerabend was to be true to my own philosophical inclinations. I guess that is why I gravitated towards him so easily: he let me be myself. I think by the time I finished my first seminar with him, it was understood that he would be directing my dissertation, even though we never explicitly discussed his involvement.

    I also learned from him that it was possible to hold the intuitions (or prejudices) that I had about science and philosophy without being a fool. I went into philosophy because I thought it was a mess that needed straightening out. I had all the impulses of an analytic philosopher but felt that analytic philosophy was a dead end. So it was wonderful to meet such an extraordinarily gifted man who thought along the same lines (well, roughly) and encouraged me.

    PN: Discussing Feyerabend's "Anything Goes" argument, you have written that "it should be an embarrassment to the profession that many reviews were completely unable to see the structure of this simple reductio". Why do you think philosophers of science then and since have been so quick to misunderstand Feyerabend?

    GM: Not just philosophers of science. Epistemologists are even worse. The reason, I suspect, is that analytic philosophy is a very narrow way of thinking, and it is hard to manoeuvre mentally under such constraints. But there is really no excuse. Feyerabend's arguments were very simple and straightforward reductio ad absurdum arguments: you use reasoning to tie together what the philosophers' standards tell them is evidence, and then you point out obvious conclusions that completely destroy their empiricist views. But in offering a reductio, you need not accept the evidence or the reasoning yourself. Otherwise, atheists who offer reductio arguments against the existence of God would have to (sincerely?) accept that God exists, since they advance such existence as a premise for the sake of their argument. It's amazing that philosophers who constantly make this point, to their students for example, could not see that they themselves were committing the mistake.

    PN: Why do you think Against Method and Science in a Free Society in particular were so misunderstood?

    GM: Against Method attacked practically every major intuition about scientific method that philosophers had had for three hundred years. He just had to be wrong. People wanted desperately for him to be wrong. And he used the history of science to make his points. This presented two additional problems. The first is that the epistemology of science was supposed to be about how science ought to be, not about how it is. The second was that philosophy of science was also presumed to be about the "logic" and "grammar" of scientific concepts. Feyerabend showed that the latter approach was simply irrelevant to understanding science. As for the first, he showed not merely that scientists violated the methods thought up by philosophers (and by scientists like Newton in their philosophical moods) but that they actually had to violate such methods if progress (or what we now consider progress) was to result. So even the philosophers that thought history irrelevant were now looking at the history of science and building alliances with "solid" historians, for they thought that Feyerabend's account of history just had to be wrong. What eventually happened, of course, is that they came up with "devastating" objections that turned out to be little more than paraphrases of Feyerabend's own work.

    By the time Feyerabend came out with Science in a Free Society, he was already a pariah and many philosophers refused to take the book seriously. The curious thing is that Feyerabend later took back just about every new thesis that he advanced in that book. That was the one work of his he came close to disowning. In particular, he took back the extreme relativism expressed in it (particularly on p. 70, for which it seems that I am to blame, if you pay attention to his footnote) and the thesis that all traditions should be equal before society, and he scaled back the most important corollary of that thesis, namely the separation of science and state.

    PN: Describing Feyerabend, you have written that he "[was] probably the most interesting person I have ever met [...] but his devastating criticism [wa]s the sort one would wish on one's worst enemy, or oneself when taking seriously the notion that criticism is at the heart of progress and improvement of ideas. The man question[ed] everything; even obvious claims c[a]me up for challenge and sometimes ridicule." Did you ever have any reservations about his approach? Do you think it contributed to the hostile reception he received in some quarters?

    GM: Some people never forgave his exposing their weak arguments for what they were. And the fact that he did it with a great sense of humor made them feel also ridiculed - they took it personally. I know that his mode of criticism contributed to the hostile reception he received in some quarters. He once showed me a letter from a famous philosopher whose book he had just criticized in print. The letter was one ad hominem insult after another. It also contained an announcement to the effect that Feyerabend was now in the writer's "enemies' list." On the other hand, he liked to bring down people he thought were pompous. Some friends of his told me that he had gone too far a few times. I wasn't there those times. When I was there, I really relished his manner. He openly made fun of me often. But I made fun of him just as often.

    There was an additional motivation on his part. Just as an idea discredited for two thousand years - the idea that the earth moves - can revolutionize science, the ideas from other cultures also have the potential to contribute to the progress of science. This implies that we should treat with respect cultures that differ from the Western culture, not only in spite of the admiration we feel for the Western advances made possible by science, but precisely because that respect will help maintain the climate of pluralism that is vital for the progress of our celebrated science (I explain this point below).

    Therefore, the lack of respect towards the traditions of ordinary people - the "vulgar," as philosophers used to say - and especially the lack of respect prompted by an empiricist conception of science can lead to a very damaging intellectual arrogance.

    Consider for a moment that until rather recently a person could end up in prison for practicing acupuncture (medical fraud); that in the name of "development" millions of women in the Third World were advised to stop breast-feeding their children and use powdered milk instead (which of course they mixed with contaminated water on more than one occasion); and that in the presumably most advanced country in the world a high percentage of people are so obese they can hardly walk, thanks to a "scientific" diet - a diet officially sanctioned by the state - that forbids eggs (to which the human body is adapted) and emphasized, and still emphasizes, refined carbohydrates (to which the human body is not adapted, and which cause all sorts of physiological problems, obesity amongst them).

    Feyerabend detected that sort of arrogance in the contempt that many intellectuals feel towards ordinary people, their beliefs and their traditional customs. That is why he made fun of intellectuals, shattered their "reason", and called them "fanatics" and "criminals" for creating suffering and misery in the world by imposing their abstract "truths" on everyone else. His reaction may seem exaggerated, but we must understand it in the proper context. In the first place, if a tradition has served a society well and has allowed its members to adapt well to their environment, we have no right to impose our truth on them, no matter how scientific and confirmed it may appear to be. In the second place, many of the intellectuals' abstractions, even if named "truth" or "justice", are the result of bad reasoning (which he demonstrated with many examples), while the valuable ones are so only within a limited practical context. His last (and posthumous) book, Conquest of Abundance, deals with this issue of abstraction in great detail.

    PN: You have written (of Feyerabend) that "philosophy of science can well afford bold thinkers who are prepared to defend implausible ideas against all comers". What do you think motivated Feyerabend to "provide a service" in this fashion and why is it beneficial to the philosophy of science?

    GM: He didn't do it to provide a service. He did it because he was too intelligent not to see the flaws, too honest not to point them out, and too imaginative not to conjure up alternative approaches.

    My response to the Mill question below is relevant here too.

    PN: How would you describe the relevance of Feyerabend's thinking today and his legacy for the future?

    GM: The big philosophical problem about science was that the scientific method worked but we could not prove so: classical skepticism, Popper's efforts notwithstanding. Feyerabend came in and cleaned house: the so-called "scientific method" did not work; it actually got in the way of scientific progress (as defined by the empiricists themselves). I think this is a finding of the greatest importance, although not his only contribution. Philosophy cannot - should not - be the same after that, even though professional philosophers will keep on doing pretty much the same things for as long as they can get away with it. I am reminded of Romero's film "Dawn of the Dead", in which the zombies go to the shopping mall to walk around and window-shop as they used to do when they were alive. Analytic philosophy no longer makes sense, in great part thanks to Feyerabend, but there you have it: a philosophy for zombies. But the zombies are still in charge, so who knows how Feyerabend's legacy will play out in the years to come.

    Philosophy of science also suffered, before Kuhn and Feyerabend, because it was a pseudo-mathematical and irrelevant game. With a few exceptions (e.g., Popper) it was practically unreadable. That was the way the philosophers of science liked it: it made them feel superior. They did "science" too, not just some mushy humanities. Philosophy of science should have had something to say to scientists, but the scientists could not make any sense of it. And if you did go through the effort, the rewards were far too small. In that respect things have improved quite a bit. It is now possible to find whole articles in philosophy of science written almost completely in English, or Spanish, or some other honest-to-goodness language.

    In any event, his main legacy is a more humane and exciting understanding of science that ties philosophy to the practice of science, as I will explain in my response to the next question. It is also a legacy of respect for other people and other times.

    PN: What would you consider Feyerabend's most important contribution and where do you think he erred?

    GM: Somebody wrote in Nature that Feyerabend was the worst enemy of science. But, on the contrary, Feyerabend showed how complex and humane science is and ought to be. Of his many contributions, perhaps the most important is that there is no method or rule that can capture science completely. The most excellent idea about the nature of science has to allow exceptions. When we look at the history of science, we discover not only that the great scientists violated the methods proposed by the empiricists, but that they had to violate them, otherwise they would not have secured the great successes through which we know them today.

    Until the publication of the work by Feyerabend and Kuhn, it had been generally supposed that scientific rationality consisted in behaving in accordance with certain methodological rules. And science was the shining example of human rationality. Those methodological rules were inductive, as envisioned by Newton. The philosophical problem was that even though we "knew" that such scientific method produced knowledge, we could not prove it. Karl Popper argued that the problem came from thinking erroneously that induction was the method of science. We just needed to realize that science was based instead on the method of trial and error. But Feyerabend's analysis of the history of science demonstrated that adherence to all proposed methods, from Francis Bacon's to Popper's, would impede the progress of science. To progress, then, science needs to act against method from time to time.

    The reason is very simple. All varieties of empiricism assume that experience determines the worth of our scientific ideas. This assumption is presumably justified because through experience scientists learn directly what is written on the book of nature. For example, if all observers see a stone fall vertically, the vertical motion of the stone is an immediate or direct truth given by observation - an immediate truth with which our most profound hypotheses about the world must agree. If a hypothesis implies that the stone does not fall vertically, our observations, our experience will then refute it. Unfortunately for empiricism, as Feyerabend reminds us, the Copernican hypothesis claims that the earth rotates on its axis to give us the day-night cycle, and this claim is refuted by the vertical fall of the stone.

    This was one of the main objections against Copernicus that Galileo confronted in 1632. If we let go of a stone from a tall tower, we see it fall vertically, close to the tower, and touch ground next to the base of the tower. Let us suppose now that the earth rotates. If so, when the stone begins to fall, the tower continues moving as the earth rotates, and therefore (if we choose the direction conveniently) the tower is going to move a considerable distance before the stone hits the ground. The only way the stone can hit the ground next to the base of the tower is by moving in a parabola; but we all see it fall straight down instead. It is clear, then, that the earth cannot rotate.

    What did Galileo say when confronted with such a clear refutation of Copernicus? Galileo refused to accept the verdict of experience. If the earth does not move, he said, the stone will surely fall straight down. But if the earth does rotate, the stone would have to fall in a parabola. The reason we see it fall vertically is that its motion has two components: one in common with the earth, the tower, and the observer; the other towards the center of the earth. But the observer does not notice the motions it shares (today, for example, we don't see the other passengers in our jet plane fly at 900 kilometers per hour). This is why it seems to the observer that the stone falls vertically.

    What motion one accepts depends on the theory one favors. Insisting that the stone falls vertically presupposes that the earth does not move. That is, Copernicus' opponents assume the truth of what is in question - does the earth move or not? - when they declare that their experience is veridical (the stone does fall vertically). Their empiricist argument is no more than an instance of the fallacy of petitio principii [begging the question].

    Feyerabend points out that the observer sees a phenomenon (the motion of the stone) and interprets it in a way that seems natural to him: the stone falls vertically. It is that natural interpretation of the phenomenon, but not the phenomenon itself, that contradicts the Copernican theory. Galileo dissolves the contradiction by offering a different way to interpret the phenomenon. Galileo gives us, then, a new empirical basis constituted by a theory of interpretation congenial to Copernicus' ideas.

    These considerations do not imply that scientific hypotheses or theories always defeat the verdict of experience, but they do imply that such victories by theory are possible. This result implies in turn that all empiricist methodological rules must have exceptions. The reason is that such rules assign a higher priority to experience (over theory). We have seen, however, that the great scientific revolution would not have happened if Galileo had not violated such rules. Similar results can be expected in the majority of critical episodes in the history of science, as Feyerabend argues in his work. It bears emphasizing that it was not just a couple of hunches that allowed Galileo to take a short cut that led to the same findings that the patient use of method would have provided in the long run. Not at all. If method gives priority to experience, method would have forever closed the path to a point of view that could not be established without first defeating previously accepted experience. If, by developing a theory already refuted by experience, Galileo committed a sin against science and philosophy, we must then love not only the sinner but the sin.

    Feyerabend rescued Galileo from the preposterous role of being the first and greatest hero of empiricism. By doing so, he allowed us to understand science very differently. In this contribution he did not err. He erred in his proposal that all traditions or ideologies should have equal standing. But eventually he realized that, as Marguerite von Brentano had argued, the Nazis and the Quakers would then have equal access to pursue their goals, even though one of the Nazis' main goals was to exterminate other cultures. He also acknowledged, though reluctantly, my criticism to the effect that a society has the obligation to teach its young the skills and the views they need to survive, and that in a world that depends on science that is what students will have to learn, not astrology or voodoo. He thus came to see that there were drastic limitations to his notion of the separation of science and society. So where he erred he changed his mind anyway. I think he also erred in turning away from relativism. Of course, the relativism he attacked in his later work was the caricature provided by analytic philosophers, namely that the truth, or the good, of an idea or action is relative to a culture or point of view. Relativism can be far more sophisticated than that. Nevertheless, given the range of problems he examined, it is remarkable how insightful he was.

    PN: Feyerabend wrote often about Mill's famous essay On Liberty and how he had extended the arguments found in it. What was the extent of his debt to Mill in your opinion?

    GM: Feyerabend points out that we are often unable to even discover important evidence against our favorite theories unless we consider seriously alternative theories that can propose and make sense of counter-evidence such as the compound motion of bodies in the case of the Copernican theory. Our science has, then, greater opportunities to progress if we accept a theoretical pluralism. This is the second important historical contribution made by Feyerabend, a contribution closely allied with his first. No matter how certain we may be of a theory, a scientist who fails to accept it and develops instead a different theory is doing science a favor. For as Feyerabend says, "We need a dream-world in order to discover the features of the real world we think we inhabit (and which may actually just be another dream-world)."

    This second philosophical contribution of Feyerabend acts not only against Newton but also against the important tradition of Plato and Descartes, whose obsession it was to discover the correct path to unique truth. Century after century, generation after generation of skeptics sowed doubts about the path to truth suggested by this or that great philosopher. But Mill was the first important philosopher who rebelled against the goal itself. In his essay On Liberty, Mill argued that it does not favor society to force its members to accept the official point of view - no matter how certain it seems to be. By allowing the development of different points of view society profits, for if the official point of view is false, we gain the opportunity to replace it with another that might be at least partially true. And if the official point of view turns out to be true anyway, comparing it with alternative points of view allows us to understand it better. Feyerabend's accomplishment in this area comes from extending Mill's philosophy to science. Science also profits by allowing the development of points of view different from the one that "agrees with the facts." And we find one of the best examples of how science profits precisely in the case of Galileo and his defense of the Copernican revolution.

    Feyerabend's ironic sense of humor led him to proclaim anarchy in the philosophy of science and to suggest that "anything goes." But he never offered anarchy as a sort of anti-method method. Anarchy is the description that a traditional rationalist would give to the way science should be done according to Feyerabend, and particularly the description that rationalist would give of pluralism. It is that rationalist who finds it obvious that rationality consists in behaving in accordance with the rules of the method of empiricism. And it is that rationalist who recoils in horror at the "anything goes" attitude in science a la Feyerabend.

    PN: How did Feyerabend influence your own work?

    GM: He set the stage for the future development of philosophy. For example, if we could no longer say that science was rational in the traditional sense, could we still talk about science as a rational activity? My answer is "yes". A good deal of my early work developed a biological conception of science in which we could see that science was indeed a rational activity in a very straightforward ends-to-means conception of rationality. He first accepted my evolutionary relativism, but in his later work he argued against relativism on some very interesting grounds. I think he is wrong, but definitely worth replying to. He also pointed out to me a strong connection between Bohr's ideas and my biological approach to philosophy. This connection is becoming more and more important in my work. Another important influence is the notion that concepts and meanings are flexible and may change. We can see that in the history of the physical sciences and should expect it in our growing understanding of the mind as neuroscience advances (that is the main point of eliminative materialism). It also shows why analytic philosophy was doomed, since analytic philosophy relies on stable concepts so logic and argument may shine in all their glory. He was influenced by Wittgenstein in this last issue, but I think he goes well beyond Wittgenstein.

    Incidentally, this is where the linguistic version of the infamous problem of incommensurability arises. Philosophers of science thought of explanation as logical derivation. And new theories explained their predecessors, which became special cases of the new theories. Science thus evolved by accretion. But, if we are strict about meanings, it seems that the meanings of scientific terms change when theories change. In that case, new theories cannot explain the old, for in the presumed derivation the meanings of some terms would vary from the premises to the conclusion. Philosophy of science then committed science to perennial equivocation. This was a big problem for philosophers, who tried to fix it by changing their theories of meaning. But Feyerabend pointed out that scientists should not lose any sleep over this issue, since they were very flexible and pragmatic about the meaning of their terms. I have argued that incommensurability is indeed a serious problem for empiricism, for it makes us realize that there is no common measure or standard by which to judge the worth of competing theories, as Galileo demonstrated. At any rate, "facts" do not provide such a standard, and this result is a dagger in the heart of empiricism. I have also argued that this problem is independent of theories of meaning.

    Feyerabend's main influence on my work, beyond setting the stage for many of the problems that have concerned me, is his example of being honest and daring.

    PN: What do you think is the relevance of the philosophy of science today? What are the main issues you are interested in?

    GM: The field has become more relevant today, in spite of all the remarks made to the contrary by Feyerabend, and in great part because of his influence. One of the reasons for the increased relevance is that Feyerabend and Kuhn showed us how important the practice of science is for philosophy. So many, particularly younger philosophers, even some who learned from their mentors that Feyerabend was a kook, now bring their philosophical energy to interesting controversies in the practice of science. Some of them have interesting points to make, and sometimes the scientists pay attention. This brings me to a second and related reason: we have done away with the silly formalist approach, at least in great part. So scientists can now read what philosophers say with some chance of understanding it. That allows them to respond to the philosophers. This in turn gives the philosophers the chance to say more pertinent things in the future.

    I am not talking about a world-shattering movement here. But it is an improvement, and Feyerabend deserves a good part of the credit, in my opinion.

    The journal Science recently compiled a list of 25 hard scientific questions, and I realized that about two thirds of them are questions that enter my own work in some way or another. Many of them are addressed at least briefly in my next book, The Dimming of Starlight: What is the universe made of? Can the laws of physics be unified? How does Earth's interior work? How and where did life on Earth arise? How far can we push chemical self-assembly? How hot will the greenhouse world be? And even: What can replace cheap oil and when? Some of the other questions I deal with in class and my take on them will be making its way into print slowly in the next few years: What is the biological basis of consciousness? Why do humans have so few genes? What determines species diversity? What genetic changes made us uniquely human? How are memories stored and retrieved? How did cooperative behavior evolve? What are the limits of conventional computing? Do deeper principles underlie quantum uncertainty and non-locality?

    The reason these questions enter my work is that some of them have interesting philosophical consequences while in others there are philosophical comments worth making about the methodologies used by scientists to tackle them.

    But of course you wanted to know about the main philosophical issues I am interested in. I want to explain how the brain of social animals, a biological organ, can make sense of the world. Using this approach I believe one can solve the main problems of the philosophy of science: the problem of the rationality of science and the problem of reality (does science give us the truth about the universe?). Darwin published The Origin of Species in 1859, but the biological revolution has not yet taken place for philosophy. Analytic philosophy, for example, has treated biology as an interloper in its (philosophy's) attempt to preserve what it takes to be the autonomy of philosophy (e.g., biological discussions of ethics commit the naturalistic fallacy, epistemology is prescriptive while biology is descriptive, etc.). So my job is to show that the so-called fallacies are not fallacious, that the only mistakes in reasoning are committed by the philosophers who discover them or who use them as the intellectual equivalent of slander. I see myself as clearing the philosophical rubble so that we may again have a worthwhile natural philosophy. I began this task in my Radical Knowledge in 1981, continued it in my Evolution and the Naked Truth in 1998, and will bring it all together in what I hope will be my best book on the subject, A Theory of Wonder.

    I am also interested in showing very precisely why analytic philosophy is a dead end in every field: philosophy of science, epistemology, ethics, and especially philosophy of language, presumably its crowning glory. If I live long enough I should write it all up in a book titled Against Analysis.

    PN: What projects are you currently working on?

    GM: I am doing the final rewrite of The Dimming of Starlight: The Philosophy of Space Exploration. I am also doing a book in Spanish titled Variaciones sobre temas de Feyerabend ("Variations on themes by Feyerabend"). Right after these two I will finish rewriting A Theory of Wonder, which is my final commentary on the philosophy of science, and in which I give particular prominence to Feyerabend's work, all within the context provided by a biological approach to philosophy.

    PN: In his How to Defend Society Against Science, Feyerabend notoriously gave "three cheers to the creationists". What do you think of the current debate surrounding so-called Intelligent Design and/or Creationism, and how do you see Feyerabend's writings on "the tyranny of truth" and the separation of science and society applying to this controversy?

    GM: Scientists and other reasonable people are quite right in pointing out that there is no worthwhile science in creationism or in intelligent design. So in that sense they are also quite right in keeping those subjects out of the science classroom. Giving equal time to all points of view in the classroom is one of the aspects of Feyerabend's Science in a Free Society that I criticized most strongly (see my paper in Beyond Reason). Nevertheless, I think that, if it were done right, it would be a terrific idea to pit intelligent design against evolutionary biology. It would be quite interesting for the students too: here is the accusation that creationists, whether of the old school or in "intelligent design" garb, make against the theory of evolution; here is the reply. Done right, it would be a rout in favor of evolution. And we would have American students actually understand biology for the first time in the history of the country. Unfortunately, most Americans, even scientists outside of biology, have little understanding of evolution. The fundamentalists should be careful about what they pray for, since if it is done properly it would give them fits. And they would have only themselves to blame. They often have no idea what the theory actually says. All they can think of is that we don't come from monkeys and that God already wrote down for us in the Old Testament when the world began. The rest is a bunch of very confused notions about evolution and science.

    I am afraid that it would not be done right, though. I have this vision of high school teachers parroting Popperian inanities. Still, they could clear up a lot of misconceptions about the fossil record, the evolution of complex organs, and so on.

    My final take is that it would be an excellent way to teach evolution, but that I would feel more at ease if they put me in charge of training the biology teachers.

    PN: What advice would you give to laymen interested in Feyerabend's thought but put off by the hostile reaction it got?

    GM: If they are that easily led by the nose, I don't think they would be too interested in Feyerabend's work. As for those who are curious, I think that reading about him in less formal environments (this one, for example) can be helpful. Several good books on Feyerabend have been published and will continue to be published, but they tend to be written for specialists, and thus laymen may not take to them. Perhaps the most accessible is The Worst Enemy of Science? Essays in Memory of Paul Feyerabend, an anthology that I edited for Oxford with John Preston and David Lamb. I think that Feyerabend's reply to critics in Beyond Reason is very enjoyable, as are many of his shorter essays. His main works are quite challenging because of the extraordinary level of erudition and his uncompromising irony, even though he was a very good writer. People can read his autobiography, Killing Time, though. That is a very readable book. I am writing my A Theory of Wonder both for the layman and the specialist. I hope I will succeed.

    In any event, people should bear in mind that Feyerabend is one of the most exciting philosophers in a long time. He was very irreverent, but he was also very insightful. If you want to experience a true challenge to the philosophical tradition, Feyerabend is your man.