

    Aviezer Tucker was a research associate at the Department of Philosophy and Law of the Australian National University in Canberra (at the time of interview) but had previously worked in America, Europe and Asia, and was later a lecturer in the School of Politics, International Studies and Philosophy at Queen's University in Belfast. He resigned from Queen's after being subjected to blackmail and refusing to inform on colleagues; he recounted his experiences in The Prague Post in his articles Informing as a state of mind and Race to the bottom. He is currently Assistant Director of the Energy Institute at the University of Texas at Austin and an Associate of the Davis Centre at Harvard University. He aims to obtain appointments in Africa and Antarctica so that he can brag that he has worked on all the continents. He is the author of Our Knowledge of the Past (also available from Cambridge University Press). I was able to ask him some questions about this book and its implications.

    - Interviewed by Paul Newall (2006)

    PN: How do you understand the difference between History, Historiography and the Philosophy of History?

    AT: History is composed of past events and processes. Historiography attempts to describe, explain and theorize parts of history. Most of history is unknown and unknowable because it did not generate information-preserving effects that survived to the present. Historians who write historiography infer propositions about the past from present evidence.

    The philosophy of history attempts to gain knowledge of history that does not depend on empirical evidence; it attempts to bypass historiography. Philosophies of history typically rely on idealistic epistemology, on intuitions of the essence of history. These intuitions may appear in the guise of "historical self-consciousness." Philosophers of history like Vico or Hegel believed they could gain knowledge of the historical process that resembles self-knowledge. They believed that just as we have intuitive self-consciousness of ourselves, philosophers of history could gain knowledge of the past by becoming the self-consciousness of history. Since we can understand a living process only at its end, philosophers of history from Hegel through Marx to Fukuyama have believed themselves to be living at the end of history. However, all existing philosophies of history have reflected the conflicting historical consciousness of their age, how their age perceived the historical process. Yet the historical process always goes on, towards ends we cannot possibly fathom.
    The end of history will always happen tomorrow and be accompanied by yet another philosophy of history....

    PN: The subtitle of your book is "A Philosophy of Historiography". How do you conceive of marrying the two?

    AT: The philosophy of historiography, unlike the philosophy of history, is a branch of epistemology that examines our knowledge of the past. Just as the philosophy of science asks how scientists gain knowledge of nature by examining the relations between scientific theories and the evidence, the philosophy of historiography examines our knowledge of the past by examining the relations between historiography and historiographic evidence. Just as the philosophy of science begins by examining the successful paradigms of Galileo, Newton, Einstein etc., and why and how they replaced previous paradigms, the philosophy of historiography examines the history of historiography to discover how and why new paradigms in textual criticism, comparative historical linguistics, historiography and evolutionary biology were established.

    PN: In your book you distinguish between two alternative approaches to our knowledge of history that you call "historical skepticism" and "historical esotericism". What do you mean by these?

    AT: Skepticism argues that there is no knowledge of the past. Whatever consistency we find between historiographies is the result of factors external to historiography, such as political or economic interests. Esotericism claims that there is knowledge of the past, but that it is impossible to know how successful historians obtain it. Historiography then is a practice like cooking that cannot be reduced to a set of instructions. I argue that skepticism and esotericism are implausible because they cannot explain the history and sociology of historiography as well as the kind of internal account I develop. Skepticism cannot explain the uniquely heterogeneous broad consensus on parts of historiography. The totalitarian reduction of all opinions to political or economic interests cannot explain why historians of widely different, indeed conflicting, interests and identities agree on so many historiographic propositions. I develop an alternative account that better explains this consensus by appeal to shared theories and methodologies concerning the transmission of information in time, theories that may seem trivial yet are incredibly fruitful. I do not like the undemocratic and elitist implications of esotericism. If knowledge of history is the preserve of an obscure elite that benefits from long apprenticeships with older masters, it cannot be criticized by the unwashed masses, who may not have been touched by expensive genius but can examine the evidence just as well. I think that it can be shown that esotericism is false by presenting explicitly the actual theories and methods that historians use and teach, as I do.

    PN: In recent years there has been a vigorous debate on the nature of historiography. Why did you consider it necessary to provide a scientific approach to historiography?

    AT: I argue that a part of historiography is indeed scientific in the sense that it offers highly probable propositions of all kinds (descriptive, explanatory, theoretical) about the past, and indeed has been so for about two centuries, since the breakthroughs and the founding of paradigms in textual criticism, comparative linguistics and historiography. I also argue that other parts of historiography, marked by sociological fragmentation into schools, are less than scientific.

    It seems to me that much of the debate has been marked by over-simplifications and inattention to either historiographic practices or contemporary epistemology and philosophy of science. Both sides, in my opinion, overemphasized the significance of historiographic texts in comparison with historiographic research and practice. Both sides imported into the philosophy of historiography theories and methods of interpretation that were originally developed for, and fitted, other disciplines at other times, and that I do not believe quite fit historiography. I do not think that either side would finish reading Our Knowledge of the Past feeling smug or vindicated. I think they need to go back to the drawing board and produce versions of their original positions that are more sophisticated, complex and, above all, sensitive to the practice of historiography and its history. I do hope the debate will go on. It is an important debate and it can potentially flesh out many interesting things about historiography.

    PN: In the section on the theory of scientific historiography, you rely on Bayesian probability theory to develop a method of comparing explanations. Can you explain briefly what it involves?

    AT: The three most common lies are: The check is in the mail, if you sleep with me I'll divorce my spouse and, among academics, "I'll be brief." I'll try to be brief, then:

    I use an interpretation of Bayesian theory that is closest to the one Elliott Sober developed to account for the inference of common cause, though it differs from Sober in several crucial points. Let me try and explain it informally:

    AT: The task of historiography is to offer the best explanation of the evidence. The evidence consists typically of two or more units of evidence, like texts, languages, testimonies, species, genomes, or cultural practices, that are similar in certain respects. To simplify the argument and connect it to everyday experience, I will use as an example two very similar exams that are handed in by students to their teacher and that deserve a perfect score. I believe that the inference of historiography from evidence proceeds in three stages:
    First, we ask whether the similarity between the units of evidence is more likely given some common cause, or given separate causes. For example, if two or more students submit very similar exam essays, it is far more likely that there was some common cause than that there were separate causes. It is highly unlikely that the students came up independently with exactly the same words in the same order. The same is true of texts in biblical criticism and classical philology or of testimonies in historiography. However, if the exam is in logic or mathematics, two identical perfect-score exams are more likely given separate causes, that is, given that the students answered the exams independently of each other, because there was a single correct or best answer to each question. Likewise, similar biological traits that convey adaptive advantages, such as wings or fins, or human behaviors like agriculture, language and fishing, may develop independently of each other, from separate causes, because they are the best solutions to shared types of problems.
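
    To make this first stage concrete, here is a minimal Python sketch of the likelihood comparison for the exam example. The probability values are invented purely for illustration and are not Tucker's; only the structure of the comparison follows the passage above.

    # Stage one: is the similarity of two exam essays more likely given a common
    # cause (e.g. copying from a shared source) or given separate causes
    # (independent work)? All probabilities below are invented for illustration.

    p_similar_given_common_cause = 0.9      # copying readily produces near-identical wording
    p_similar_given_separate_causes = 1e-6  # independently hitting the same words in the same order is very unlikely

    likelihood_ratio = p_similar_given_common_cause / p_similar_given_separate_causes
    print(f"Essay exam: common cause favoured by a factor of about {likelihood_ratio:.0e}")

    # For a logic or mathematics exam with a single best answer, the comparison can
    # come out quite differently: identical perfect answers are expected even from
    # independent work, so similarity carries little evidential weight.
    p_identical_given_separate_causes = 0.5
    p_identical_given_common_cause = 0.6
    print(f"Logic exam: ratio is only {p_identical_given_common_cause / p_identical_given_separate_causes:.1f}")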

    Second, if we are able to establish that the similarity between the units of evidence is more likely given some common cause than given separate causes, we need to distinguish five possible causal nets, and find which is the best explanation of the evidence:
    1. A single common cause caused the similarity between the units of the evidence; for example, both students copied the same textbook.
    2. Several common causes affected the evidence; for example, the students copied a textbook, an encyclopedia entry and a website.
    3. One or more of the units of evidence affected the others; for example, one student wrote the exam and the others copied his text into their notebooks.
    4. All the units of the evidence affected each other; for example, all the students co-authored the exam together.
    5. Combinations of 1 or 2 with 3 or 4.

    Distinguishing between these five possibilities, and finding which one is most probable, requires further analysis of the evidence and/or the discovery of new evidence. For example, the teacher who is certain there was plagiarism but wishes to discover the culprit(s) may consider whether the language of the similar essays is too grammatically and syntactically perfect, sophisticated or mature for students, which would decrease the probability of 3 and 4 above, and vice versa. The teacher may consider whether the students could communicate with each other during the exam and whether they are friends. If the style of the exam is typical of one of the students and he has been a significantly better student than the others, the third option is most likely, etc.
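
    A toy Python sketch of this second stage, again with invented numbers: once a common cause is established, the five causal nets are compared by how likely each makes the further clues (here, prose judged too polished for any of the students).

    # Stage two: compare the five causal-net hypotheses by the likelihood they
    # confer on a further clue, e.g. "the shared text is too grammatically perfect
    # and mature to have been written by a student". Values are invented.

    likelihoods = {
        "1. single common source (one textbook)":               0.70,
        "2. several common sources (textbook + encyclopedia)":  0.60,
        "3. one student wrote it, the others copied":           0.05,
        "4. the students co-authored it together":              0.05,
        "5. combination of 1 or 2 with 3 or 4":                  0.30,
    }

    best_hypothesis = max(likelihoods, key=likelihoods.get)
    print("Best supported causal net on this clue:", best_hypothesis)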

    Third, once the historian draws a likely causal map, a genealogical tree of information transmission, it is possible to try to infer the actual properties of the various units on the map. In our example, if stylistic or conceptual discontinuities in the text point to multiple common causes, the teacher may attempt to infer which parts of the exam resemble the style of an encyclopedia entry and what this entry may be, and which more closely resemble paragraphs in a textbook and which textbook it may be. If the teacher has access to a good library, she may try to match books and encyclopedias with parts of the text of the exam.

    If we look at the history of textual criticism, comparative linguistics and evolutionary biology, we will see that they follow these three successive stages of development, though some do not have sufficient evidence to advance to the next stage. For example, comparative linguistics is capable of proving that the similarities between the Indo-European languages are too numerous and detailed to be the result of separate causes. The Indo-European languages are connected in a causal net. However, there is insufficient evidence to favor either the hypothesis that there was a single proto-Indo-European language from which all other Indo-European languages descended, or that there never was such a language, only a group of unrelated languages, spoken by peoples who lived in close proximity to each other and therefore progressively influenced each other's languages until they became quite similar, as in the "wave theory of language." In historiography, however, typically stage one is quite easy, and often stage three is achieved as well.

    I consider this part of Our Knowledge of the Past to be the most significant, in proving that the philosophically significant disciplinary distinction is not according to the subject matters of the disciplines, between human and natural sciences, or according to their way of describing the world, between nomothetic and idiographic sciences, but between the sciences that infer common cause tokens and the sciences that confirm hypotheses about types of causes. The sciences that infer common cause tokens are the historical sciences: historiography, both of humans and of nature, comparative linguistics, textual criticism, evolutionary biology, and probably parts of geology and archaeology as well.

    PN: One of the main criticisms of Bayesian approaches is the difficulty in assigning prior and conditional probabilities. What problems do these pose for your thinking and for historiography in general?

    AT: Though historians, unlike biologists who infer phylogenies, do not use explicitly Bayesian formulae, I think that the Bayesian model I developed above is the best explanation of the history and sociology of historiography, of the actual practices of historians.

    Though historians do not plug in precise quantitative values to the Bayesian variables, they do make comparative more/less estimates of the values of the Bayesian variables: In the present stage of historiography, when there is a broad network of well-corroborated beliefs about the past, it is fairly easy to estimate the priors of many hypotheses, according to whether they fit everything else we already know about the past. Historians typically estimate the likelihoods of the evidence given competing hypotheses by examining the information chains that should connect past events with present evidence. I argue that this examination of information chains is the main professional activity of historians. The theories that historians typically develop and use are about the voluntary and involuntary mutation and decay of information in time.

    Since I argue that accepted historiography must offer a better explanation of a broad scope of evidence than existing alternatives, it is not necessary to prove any absolute quantitative probabilities, merely that one historiographic hypothesis is considerably more probable than an alternative one.
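
    One way to picture this comparative point is the odds form of Bayes' theorem. The short Python sketch below uses invented numbers to show that only the ratio between hypotheses matters, not any absolute probability; the framing in odds is mine, not a formula quoted from the book.

    # Posterior odds = prior odds * likelihood ratio (Bayes' theorem in odds form).
    # Rough, ordinal "more/less" judgements are translated here into illustrative numbers.

    prior_odds = 1.0          # suppose H1 and H2 fit background knowledge about equally well
    likelihood_ratio = 50.0   # but H1 makes the surviving evidence far less surprising than H2 does

    posterior_odds = prior_odds * likelihood_ratio
    print(f"H1 is roughly {posterior_odds:.0f} times more probable than H2 on this evidence")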

    PN: Throughout your book you are keen to emphasise the similarity between your scientific historiography and the methodology used in evolutionary biology. Can you explain the parallels?

    AT: Natural and human historiographies attempt to infer descriptions of events and processes from similarities between units of information-preserving evidence. Units of evidence may be species in evolutionary biology or documents, testimonies and material artifacts in human historiography. The inference of common causes in historiography and evolutionary biology follows the same three stages outlined in my answer to question 5. The difference is that in evolutionary biology the historical evolution of the system is identical with the transmission of information over time. When evolutionary biologists make a phylogenetic inference, they trace the evolution of species from present similarities among genome sequences through fossil evidence to an ancient ancestor. Historians of society also infer sections of the evolution of society from information-preserving evidence in the present. But the information-preserving causal links they study are not identical with the evolution of society; historians are interested in the causal-information chains that generated documents or material objects, not the present state of society.

    Historically, whether or not Darwin knew about and was influenced by comparative historical linguistics before introducing the theory of evolution (Darwin's cousin and brother-in-law was a philologist), the British educated public had known and accepted Darwin's method of inferring natural history before they read his books. The educated British public had already accepted scientific genealogical trees of languages; the imperial encounter with the languages of the Indian subcontinent had induced interest in the Indo-European hypothesis. Those who accepted that inference had already accepted Darwin's method before he even introduced it. Darwin used comparative linguistics as a heuristic analogy several times in his writings to explain his new ideas to an audience that was already familiar with the new achievements of comparative linguistics.

    PN: In the conclusion you remark that historiography attempts to provide an analysis of the past via the best explanation of the available evidence. You add that "[t]he most that historiography can aspire for is increasing plausibility, never absolute truth." How does this understanding of historiography differ from what we typically think of history?

    AT: Parts of historiography have such a high level of probability that ordinary people frequently consider them as facts, rather than well-confirmed hypotheses or theories. For pragmatic purposes of orientation in the world this works just as well. However epistemically, George Washington is an extremely well confirmed and useful hypothesis that explains a wide scope of evidence, not a fact. The Renaissance is a very useful theoretical concept that was introduced in the late nineteenth century by Burckhardt.

    PN: You are critical of both traditional historiography and the arguments of skeptics. In what ways are the two inadequate?

    AT: The skeptics cannot explain, cannot make sense of, the uncoerced, large, and uniquely heterogeneous consensus of historians on the Rankean paradigm, its methods and results. The skeptics are better positioned to explain what I call the traditionalist part of historiography, where historiographic schools interpret vague, large-scope theories inconsistently and ad hoc to explain a narrow range of evidence. Still, though traditionalist historiography, associated sociologically with historiographic schools, cannot claim a scientific status, it is neither reducible to political or economic interests nor indeterminate. Though the evidence is insufficient in some cases for discriminating between several competing traditionalist historiographic interpretations of school theories, the evidence is sufficient for rejecting quite a lot of hypotheses that do not fit it. Consequently, I argue that traditionalist historiography is neither determined nor indeterminate, but underdetermined.

    PN: Why do you claim that in some respects "the philosophy of historiography is a philosophy of liberation from the tyranny of the present"?

    AT: Though the philosophy of historiography is a sub-field of epistemology and is not political, it has some political implications. I only hinted at them at the end of the book. Many social and political conflicts have been perpetuated by non-scientific historiographic interpretations. Typically, these historiographies tell a story of group victimization by another group, imply the responsibility of the other group for whatever misery has befallen the favored group since the victimization and the obligation of the current generation to right the historical wrongs and vindicate their ancestors. These are the kind of historiographic stories that both sides to conflicts in places like the former Yugoslavia, the Middle East, Northern Ireland, etc, tell their children. These stories perpetuate these conflicts by teaching young people to hate and repeat the mistakes of their ancestors. If one holds, as the skeptics do, that all historiography is fiction, there is nothing that can be said to prove such stories false. At most one can preach to both sides to listen and understand each other's narratives and be tolerant and respectful of them, hardly an effective means for ending such conflicts. But if we recognize that parts of historiography offer probable knowledge of scientific quality, it is possible to say that some historiographic narratives are plain false or quite unlikely in comparison with their scientific alternatives. It is possible then to tell young people that the historiographic narratives their elders taught them are false or improbable and offer no basis for them to kill or be killed.

    For example, Israelis and Palestinians fight over the "graves of the patriarchs" in the town of Hebron. The evidence for the presence of these graves in Hebron is a verse in the book of Genesis and a tradition that originated in the Roman period. If we examine the evidence from Genesis in its linguistic and textual contexts, we may conclude that the best explanation of this verse is a dispute in the fourth century BCE over the exact borders of the Jewish province in the Persian Empire. To establish a claim to the then disputed town of Hebron, the editors of the then young Bible probably added this verse. Further consideration of the architecture of the graves in Hebron suggests that the best explanation of the oldest part of that structure is that "the graves of the patriarchs" are the graves of Edomite sheikhs from the first few centuries BCE. Scientific historiography can then tell both sides that they are not fighting over the graves of their patriarchs, but over old Edomite bones.... Further, if Abraham and Jacob existed, it is highly unlikely that they were buried together, since the Abraham stories all take place in the territory of what would be the kingdom of Judea, while the Jacob stories all take place in the territories that would become the kingdom of Israel. Abraham's God is Jehovah; Jacob's God is Elohim. The best explanation of these differences is that Abraham and Jacob were originally the mythical ancestor-fathers of different tribes. After the kingdom of Israel was destroyed in the eighth century BCE and the Judean kingdom assumed the claim to the Israelite territory and heritage, the two narratives were combined to unite the mythical ancestors of both ethnic groups as the patriarchs of a united nation. If Jacob existed, he was probably buried somewhere well to the north of Hebron.

    Just as false are some narratives of irresponsibility. Scientific historiography can tell
    neo-Nazi Holocaust deniers that their narrative is not just ugly and vicious, but also plain false. The weight of the evidence is that the best explanation of the material remains in East Europe, the Nazi documents, and the testimonies of survivors is that there was indeed a Holocaust. The Nazis were responsible for the Holocaust.

    I am now reading a book on Czech historical memory (Françoise Mayer, Les Tchèques et leur communisme (Paris: Éditions de l'École des hautes études en sciences sociales, 2004)). According to the author, Czechs tell themselves a story of national victimization and irresponsibility: Czechoslovakia was destroyed following the British and French betrayal in Munich in 1938. The Communist takeover in 1948 was caused by the Soviets with the cooperation of Czech and Slovak young Communists who blamed the West for 1938. The native liberalization of the Prague Spring in 1968 was again crushed by external forces following the Soviet invasion. This story exempts the Czechs from self-examination. Contemporary and former dissident Czech historians argue that it is true that Britain and France betrayed democratic Czechoslovakia in 1938. But the mechanized Czechoslovak military could have attempted to match the German military in case of war, unlike the Polish cavalry that fought the German tanks in 1939. Czechoslovakia fell apart before it was occupied, following the discriminatory policies of the Czech majority that alienated the Slovak and German minorities. The Communist coup of 1948 was indeed carried out by a well-organized Communist minority. But it did not receive substantive Soviet support, and the democratic majority could have resisted it had it been sufficiently united and determined. True, the invasion of August 1968 and the following repression were the fault of the Soviet Union, but from late 1969 to late 1989 all the political repression was carried out by Czechs and Slovaks against Czechs and Slovaks. Consequently, scientific historiography can tell Czechs that their elders and ancestors bear partial responsibility for their sad half-century between 1938 and 1989, and that they should examine their political culture critically rather than just blame outsiders for all their misfortunes.

    In a philosophical context, contemporary historiography can say that the story John Locke told about first appropriation in the Second Treatise of Government is a fairy tale. Genetic evidence proves that by 10,000 years ago all the continents had already been settled by hunter-gatherers. The last piece of global real estate to be first appropriated was New Zealand, about 1500 years ago. Prior to the last 200 or so years, almost all transfers of real estate involved involuntary appropriations. Therefore, first-appropriation historical theories of property rights can justify just about nothing. An article I wrote on this topic, entitled "The New Politics of Property Rights", will be published in the last 2004 issue of Critical Review.

    Scientific historiography can then liberate us from living the lies that others tell us and even from the lies that we tell ourselves. The liberating power of scientific historiography can be effective on a personal level just as much as on a national one. To take a personal example: one of my great-grandfathers, Nisan-Zvi Gwurtzman, was murdered in the Holocaust, probably, given the established fate of other Jews who lived in his last place of domicile, in the Majdanek extermination camp in eastern Poland.

    As I was growing up, I was told stories of his religious piousness, moral uprightness, intellectual vigor and business acumen, all models for me to emulate. My family also inherited his library. It included the kind of religious books that a pious man would have been expected to own. But it also included books about professions and businesses: one book had instructions on how to make fizzy light drinks. Another explained how to produce perfumes. The dates of publication of the books were a few years apart. The latest book, from the late twenties, was about how to perform circumcisions (how to be a Mohel, in Yiddish).... In the early thirties, after an obviously short career in the penis-chopping business, that great-grandfather of mine emigrated together with my grandfather and his wife to what was British Palestine; so much for the astute businessman. The next obvious question was: if he got out of Europe in the early thirties, how and why did he die in the Holocaust back in Europe? One unit of obvious evidence is the grave of my great-grandmother, his wife, who died in Tel Aviv in 1936. So, obviously, in 1936 they still lived in Palestine, when my great-grandfather became a widower. Then I needed more evidence, which required pressuring some older members of the family to testify. The historiography that emerged out of their reluctant testimonies was that the lonely widower was offered a match back in Europe, to marry the widowed mother-in-law of one of his older sons. In the late thirties (sic!) he took a boat from Palestine back to Europe to remarry, leaving his library behind for me to examine four decades later; so much for his sharp intellect. In the process, he abandoned my young grandfather in Palestine, sickly, penniless, and without an education or a profession; so much for the moral uprightness. My abandoned grandfather then found a dominant older woman to care for him, who became my grandmother.

    These circumstances also explained a few things about the character of my late mother, who grew up in a household with a dominant older mother and a weak, sickly, indeed helpless and dependent, father who died in his mid-fifties. When I pieced this family historiography together a couple of decades ago, I found it to be a liberating experience, not because I invented an alternative story to the one I was told during my childhood, but because I had good reason, evidence, to doubt the story I was told and to support an alternative historiography that offers a better explanation of the evidence. This revised narrative liberated me from the myths of my immediate ancestors and allowed me to adopt a more critical approach to people who were more human than saintly.

    PN: Why do you think practicing historians should be concerned with what philosophers of history have to say?

    AT: I do not think that the actual practices of historians are often affected by what they think or learn of them. Still, I think that historians are interested, like the rest of us, in becoming self-conscious of their practices. Most significantly, historians distinguish, often without reflection, legitimate historiographies from illegitimate ones; for example, Nazi revisionism, Bolshevik fabrications and Nationalist forgeries. I think it is valuable to set clearly the conditions that distinguish historiographies that reflect the strong emotions of those who invent them, from evidence based scientific historiography.

    PN: How do you answer the objection that philosophers do not understand what goes on in the work of historians?

    AT: Some philosophers indeed did not and do not; their work indeed should be ignored. In relation to historians, the old New York joke about two psychiatrists who meet on the street and say to each other "You are fine, how am I?" is appropriate. Practitioners, whether scientists, historians or lawyers, rarely gain through practice the kind of abstract self-consciousness that can lead them to self-knowledge. Outsiders must observe them from without to understand what they actually do and what it means. That is why we need philosophies of science, historiography or law. It is helpful, of course, if the philosopher-outsider has some practical experience of what practitioners actually do and is familiar with relevant analytical skills. Bad philosophies of historiography, positivist or post-modernist, applied to historiography models that had been developed elsewhere and did not quite fit it. That led historians to perceive philosophy as irrelevant. But the problem was not with the field, but with the particular approaches that have dominated it. The solution I attempted to develop in Our Knowledge of the Past is to start with a study of the history of historiography and then construct a philosophy of historiography that attempts to offer the best explanation of that history.

    I believe that the history of historiography is the empirical-evidential ground that should unite and decide between competing philosophies of historiography. I would expect any philosophy of historiography that is better than the one presented in Our Knowledge of the Past to offer a better explanation of the history of historiography.

    PN: How can your work be developed further?

    AT: In Our Knowledge of the Past I discussed only textual criticism, comparative linguistics, historiography and Darwinian biology. I left out the history and philosophy of geology, archaeology and genetics. I suspect that the models I developed will be applicable to these disciplines as well. There is also much more to be written about the history and philosophy of biology and linguistics.

    Another implication of the elucidation of the scientific historiographic method is the criticism of philosophies that violate it. There are several philosophical debates that originated prior to the introduction or spread of scientific historiography and have somehow managed to maintain their "prehistoric" character to this day. I mentioned earlier my criticism of historical theories of property rights that originated with the prehistoric fairy tales of John Locke about original appropriation. I also applied scientific historiography to criticize the contours of the current debate about Hume's Of Miracles as prehistoric in my article "Miracles, Historical Testimonies, and Probabilities", forthcoming in the October 2005 issue of History and Theory. Earlier, I subjected Saul Kripke's theory of proper names to the same criticism in "Kripke and Fixing the Reference of 'God'", International Studies in Philosophy, 34-4, 2002, 155-160. The concept of tradition, as often used by hermeneutic philosophers and social scientists, is very problematic as well. I made a few notes about that at the end of chapter four.
    Conversely, historiographies that exceed the limits of evidence or violate the rules of inference of common cause from similar evidence must be criticized as well.

    PN: What are you working on currently? What will your next projects be?

    AT: I have been working recently on a couple of projects in political philosophy and theory, one on democratic theory and practice with my ANU colleagues John Dryzek and Robert Goodin, and one, on my own, on a theory of post-totalitarianism. I expect to return to the issues I raised in Our Knowledge of the Past next year, once the current projects are complete. I would like to write a philosophical historiography of the historical sciences in several volumes. I plan on writing it "backward", starting with the volume on contemporary genetics and ending with a volume on Biblical criticism. I'll be forty in the summer, so I probably have enough time to write this grand history of the historical sciences.
    By Paul Newall (2005)

    Suppose we have an idea about the world and put it to the test. Our discussion of falsificationism looked at what we can conclude from a failure, but what if an experiment shows us what we expected to find? We usually say that the test has confirmed the theory, but what does this mean? There have been several approaches in the philosophy of science to understanding what confirmation involves, some with more success than others. We will examine the main candidates here.

    Basic Confirmation

    The easiest way to tell a story about testing scientific theories is to say that a successful trial proves that the theory was true. If we set this out in syllogistic form, we have:


    If theory T is true, we would expect to note observations C;
    We observe C;
    Therefore, T is true.
    Unfortunately, this reasoning is an example of affirming the consequent. Even if we drop the difficult issue of truth and try to say that observing C merely confirms T, we still run up against the same underlying problem: that a theory "works" is no guarantee of its accuracy. After all, it could be that something else is causing the effects we notice. Consider, for example, the case of Brownian motion covered when we looked at Ockham’s Razor. The phenomenological theory of gases explained the behaviour of gases and was highly confirmed by experiment but nevertheless gave way to the kinetic theory; that is, another explanation was found.

    Paradoxes of Confirmation

    This should not come as any surprise, however. Expecting a single successful experiment to confirm a theory so decisively is perhaps aiming too high, but another difficulty with confirmation was identified by Hempel (1937). Suppose we consider the proposition "all swans are white" (1). This is logically equivalent to the proposition "all non-white things are non-swans" (2); or, to put it another way, "if it isn’t white then it can’t be a swan". Now imagine that we notice a black raven, a creature beloved of philosophical arguments. Although it may seem that this has nothing to do with (1), it actually confirms (2): the black raven isn’t white and isn’t a swan, so (2) holds. Since (1) and (2) are logically equivalent, though, the black raven turns out to confirm that all swans are white.

    Notice the way that this example was constructed: we could have chosen any number of ridiculous instances for the confirmation of (2) to arrive at the same result. It seems that (1) is thus confirmed by observations that have nothing at all to do with whiteness or being a swan. This result is paradoxical because we tend to think that a proposition like (1) is confirmed by sighting white swans, and further that the more white swans we observe the more likely (1) is to be true; but if a black raven can confirm (1) then this account seems to make little sense.
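
    The logical point can be checked mechanically. The short Python sketch below uses a toy domain of my own, not anything from the original discussion, to verify that (1) and (2) agree on any collection of objects and that a black raven satisfies (2):

    # (1) "all swans are white" and (2) "all non-white things are non-swans" are
    # contrapositives, so they are true of exactly the same domains.

    objects = [
        {"kind": "swan", "colour": "white"},
        {"kind": "raven", "colour": "black"},
        {"kind": "rose", "colour": "red"},
    ]

    def all_swans_are_white(domain):
        return all(o["colour"] == "white" for o in domain if o["kind"] == "swan")

    def all_nonwhite_are_nonswans(domain):
        return all(o["kind"] != "swan" for o in domain if o["colour"] != "white")

    assert all_swans_are_white(objects) == all_nonwhite_are_nonswans(objects)

    # The black raven is a positive instance of (2): it is non-white and a non-swan.
    raven = objects[1]
    print(raven["colour"] != "white" and raven["kind"] != "swan")  # True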

    The Problem of Induction

    The issue at the heart of understanding confirmation is of course the famous problem of induction, due to Hume: how can we justify an inductive inference – in the form of a general (scientific) theory – from a finite number of particular instances? A number of solutions have been proposed, including Popper’s falsificationism (claiming that scientific inference is actually deductive) and Mill’s System of Logic (1843 – actually much the same as Galileo’s and that of Aristotle and the Jesuits before him), but induction is interesting because it seems that any description of what confirmation is must rely on it. After all, if we want to say that a test has confirmed a theory in some way, we are making an inductive inference.

    A more recent version of the problem is Nelson Goodman’s (1983) New Riddle of Induction. Suppose we take two propositions: "all emeralds are green" (3) and "all emeralds are grue" (4), where “grue” means green until time T and blue thereafter. Now consider what we can say about each observation we make of an emerald before time T. (3) says that we should find that each emerald is green, so a green emerald confirms it; but (4) says the same and hence seems to be confirmed as well. This is an example of underdetermination but is also another paradox of confirmation. The obvious response is to say that no one has seen any emeralds change colour in the past, nor have we heard of any reason why this could happen, but this is begging the question: if we assume that a causal link exists in the first place and hence that all emeralds are green come what may, then it is trivial to say that an instance of a green emerald confirms what is already certain.

    Goodman’s own solution was linguistic, saying that the predicate green is entrenched in our language and our interaction with the world (especially when buying or talking about emeralds); but this is no solution at all, since all it does is acknowledge that we are inclined to think in a certain way without explaining whether or not we are justified in so doing. Another – more promising – possibility is to make a distinction between weak and strong confirmation, with observation and experiment never providing more than the weak, fallible form. Theoretical reduction, which proposes and explains causal mechanisms at work in predicates like greenness, gives us stronger reasons for believing that (3) is meaningful while (4) is not. This is to say that we have no idea what a grue form of science would be like – that is, how could we have a science if we had no way of knowing when emeralds would change colour or why – or how it could make any sense, and is thus a realist argument. It has the unfortunate consequence, however, of making non-scientific inferences unjustified; that is, unless we know of a theory that explains why swans are white, we have no reason at all to suppose that the next one we see will be.

    Bayesian Probability Theory

    Given the difficulties with these understandings of confirmation, an alternative is to appeal to probabilities instead. This is perhaps a more intuitive approach, since it aims only to say that a successful test of a theory makes it more likely. For example, suppose that someone claimed to be friends with a certain film director and able to predict what she would be working on next. If he was correct with his first attempt, we might say it was just a lucky guess; but if he was right again on numerous occasions, we would probably think there is something to the claim after all. Indeed, the more times his guesses proved accurate, the more likely we would judge it that he really is friends with her – or so it seems. Can we justify this kind of thinking, though?

    Bayes’ Theorem is a way of evaluating the probability of an hypothesis based on the evidence we have for it. It takes several forms but the simplest is to consider evidence e for an hypothesis h. We say that


    P( h | e ) = [P( e | h ) * P(h)] / P(e) (5)

    This means that the probability of the hypothesis h, given the evidence e that we have for it, is equal to the probability of the evidence given the hypothesis multiplied by the prior probability of the hypothesis, all divided by the probability of the evidence itself. Sometimes this is expressed as

    P( h | e * b ) = [P( e | h * b ) * P( h | b )] / P( e | b ) (6)
    where the extra term b stands for the background conditions (thus P( e | h * b ) means the probability of the evidence given the hypothesis and the background conditions, and so on).

    Bayesian theory is useful because it helps us appreciate that the likelihood of an hypothesis depends on the evidence for it. The problems arise when we look at the terms on the right-hand side of (5) or (6): P(e | h) expresses the conditional probability of the evidence given the hypothesis; that is, how likely are we to find e if we suppose that h is true? Similarly, P(h) is the prior probability that the hypothesis is true, but this is precisely what we do not know and are using the evidence to evaluate. It is the assigning of these probabilities that poses the most significant challenge to Bayesian ideas.

    For example, suppose we are pulling numbers out of a hat, written on slips of paper, and that the first eight have all read "10". How do we then decide what the probability of the hypothesis "all the numbers read 10" is, given that these eight were? Similarly, and even before we took any pieces from the hat, how could we determine the prior probability that all would read 10? Bayesians respond that although instances like this are troublesome, typically in science we have a good idea of which probabilities to use. In the grue case, say, we would imagine that the likelihood of an emerald turning blue at some point in the future is very small indeed, so we can use Bayes’ theorem. Critics object that actually the Bayesian approach only addresses part of the grue problem – i.e., the hypothesis before T and not after.
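
    The hat example can be worked through with formula (5). In the Python sketch below the rival hypothesis is modelled as "each slip carries an unrelated number that matches 10 with chance 1/10"; both that figure and the priors are assumptions of mine, chosen only to show how much the answer depends on the prior we pick.

    # Posterior for "all the numbers read 10" after eight matching draws,
    # computed with Bayes' theorem (form (5)). The modelling assumptions are invented.

    def posterior_all_tens(prior, draws=8, p_match_otherwise=0.1):
        p_evidence_given_h = 1.0                        # if every slip reads 10, eight 10s are certain
        p_evidence_given_not_h = p_match_otherwise ** draws
        p_evidence = p_evidence_given_h * prior + p_evidence_given_not_h * (1 - prior)
        return p_evidence_given_h * prior / p_evidence

    for prior in (0.5, 1e-6, 1e-10):
        print(prior, round(posterior_all_tens(prior), 4))

    # The eight draws swamp any moderate prior, but an extremely small prior keeps
    # the posterior low: everything turns on how the prior was fixed in the first
    # place, which is precisely the difficulty pressed by the critics.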

    Inference to the Best Explanation

    An alternative method proposed by C.S. Peirce (see his Collected Papers, 1931-1958) and others is inference to the best explanation, sometimes known as abduction. A nice way to understand it is via two different metaphors: rather than science being like wandering around a beach at night, picking up "observations" to confirm our theories, we instead try to come up with the best explanation of the facts we have and then use this theory like a candle or flashlight, illuminating larger areas of the beach to see what else we can learn about it. This intuitively makes a good deal of sense: when we have the best explanation of a set of evidence, we say that the evidence confirms the theory.

    There are several difficulties associated with abduction. In the first place, what do we mean by the best explanation? We could say that it is the most probable, but then we are back with Bayesianism or something similar. Also, what makes an explanation good enough? That it is able to explain the evidence is admirable, but – once again – so are other explanations, since the theory is always underdetermined by the evidence. Moreover, sometimes scientists infer several explanations where there are competing possibilities (colour-perception models in neurophysiopsychology, for example) and sometimes they refuse to make any inference at all (the most notable instance being Bohr in his early years, when he struggled with the implications of quantum theory and steadfastly refused to take the easy road to an instrumental interpretation).

    More importantly, perhaps, we recognise that we use other (non-empirical) criteria to judge how good our theories are. For example, we tend to prefer them to be parsimonious; not ad hoc; predicting novel facts about the world; and so on. Including these in a description of the "best" theory, however, is not easy; after all, there seems to be no reason why the universe should be fundamentally simple rather than complex, so which of two theories fitting these characterisations is the better one, other things being equal? It seems, then, that to be more accurate we need to replace "best explanation" with "best of the available explanations, where this option is good enough for our purposes", with the latter being open for discussion.

    In summary, there are many aspects to confirmation and much debate as to which formulation is most satisfactory. Note, though, that there is no question that we do employ inductive inferences and that we regard our ideas as confirmed in some way; the question is how we can justify this inevitable practice.


    ---

    Selected References:


    Goodman, N., Fact, Fiction and Forecast (Cambridge, MA: Harvard University Press, 1983).
    Hartshorne, C., Weiss, P. and Burks, A. (eds.), Collected Papers of Charles Sanders Peirce, 8 vols. (Cambridge, MA: Harvard University Press, 1931-1958).
    Hempel, C.G., "Le problème de la vérité", Theoria, 3, pp. 206-246, 1937.
    Mill, J.S., A System of Logic (Honolulu: University Press of the Pacific, 2002).
    Stephen David Snobelen is Assistant Professor in the History of Science and Technology at University of King's College, Halifax, Nova Scotia. He is a founder member of the Newton project and author of many fascinating papers on Newton's alchemy and religious thinking. I was privileged to be able to ask him some questions about his work on Newton.

    - Interviewed by Paul Newall (2005)

    PN: How and why did you first become interested in Newton?

    SS: Like most people, I first learned about Newton as a child. Unlike most, I also learned relatively early on from a historical book on radical dissent in the early modern period that Newton was privately a passionate lay theologian and prophetic exegete. Although this book didn’t offer much detail, the information was striking enough that it stayed with me. I read this book several years before I began my undergraduate training. However, it was not until I was well into my undergraduate History degree that I began to study Newton’s religious beliefs for myself.
    There were several starting points. First, I wrote an undergraduate honours thesis on Socinianism (an early modern antitrinitarian movement that emerged in Poland) and included a short section on Newton. Second, I wrote two upper-year papers on early modern millenarianism. Again, Newton featured in these studies. Third, I wrote two other undergraduate papers entirely on Newton and the recovery of his religious faith and how this faith related to his natural philosophy (science).

    Also very important was my encounter, during my undergraduate years, with the work of James Force and the recently-deceased Richard H. Popkin. These two scholars (Force being Popkin’s quondam PhD student) wrote a series of wonderful papers on Newton’s theology proper and the relationship between his science and his religion. Popkin and Force (the latter is still busy working on Newton and Newtonianism) showed me just how exciting and intriguing this field could be. I was smitten! Their papers provided a model early on for my own work on Newton. It was a great pleasure to meet both of these fine scholars in 1998 and work with them on publishing projects related to Newton and early modern millenarianism.

    Coincidentally, in 1991, while I was partway through my undergraduate degree, Chadwyck-Healey released the majority of Newton’s unpublished papers on microfilm. This microfilm collection included most of Newton’s theological and prophetic papers. This, along with the work of Popkin and Force, helped initiate a flurry of activity in Newton studies — particularly with respect to Newton’s theology. Having just begun to take an academic interest in Newton, I was poised to take advantage of this treasure trove. Timing is everything!

    When I went on to study for my MA in History at the University of Victoria, I chose to work on a subject that related to the popularisation of Newtonianism in early eighteenth-century Britain. It was at this time that I formally added the history of science to my interests in history. I owe this in part to my supervisor Paul Wood. Thus when I applied to Cambridge in 1996, I chose the Department of History and Philosophy of Science. It was while at Cambridge (1996-2001), where I completed an MPhil and PhD in HPS and worked as a research fellow in the same field, and during which time I had rich archives at my disposal, including many of Newton’s original manuscripts, that I began to do serious work on Newton’s theology, prophecy and the relationship between his science and religion. It was a matter of making hay while the sun was shining. Although I’m always conducting new research, I still to a certain extent rely on research I did during those years.

    While at Cambridge I completed an MPhil thesis on Newton’s heresy and a PhD thesis on William Whiston, one of Newton’s natural philosophical and theological disciples. I worked under Simon Schaffer, who in 1980 completed a PhD thesis on Newton at the same institution. I also began to network with Rob Iliffe (Imperial College, but a former student of Schaffer’s at Cambridge) and Scott Mandelbrote (then of Oxford, now of Cambridge), both of whom had caught the Newton bug a few years before I did and were already producing excellent work on Newton’s theology. In 1997, the year I began my PhD, and with a gentle nudge from Rob Iliffe, I started work on transcribing some of Newton’s theological manuscripts held at King’s College, Cambridge.

    PN: Why was Newton a "heretic"?

    SS: During his own lifetime, Newton was a heretic from the perspective of orthodoxy. His study of the Bible and church history had convinced him that the orthodox version of Christianity that emerged in the fourth century A.D., represented in Anglicanism and Calvinism and especially (from his point of view) Roman Catholicism in his own time and context, had strayed from the original purity of first-century Christianity and had become hopelessly corrupt. The chief heresy of the orthodox church was the doctrine of the Trinity (for Newton, it was the orthodox who were heretical, while he saw his own views as orthodox in the sense of original Christianity). To put this in perspective, the doctrine of the Trinity is widely recognised as the central tenet of orthodox Christianity. So Newton wasn’t merely chipping away at the edifice of traditional Christendom; he was destroying its chief cornerstone. Not that Newton’s (mostly private) efforts should be seen primarily as destructive. Newton saw his own biblical and historical researches as part of a recovery of the purity of the primitive faith of Christianity.

    Why did Newton believe the Trinity was unbiblical? There are several reasons. First, he did not find the doctrine in the Bible. Not only was the term “Trinity” invented years after the closing of the New Testament canon, but Newton could find nothing approaching a formal doctrinal declaration of the Trinity in the Bible — something many modern scholars will affirm. Instead, he believed passages such as 1 Corinthians 8:4-6 and 1 Timothy 2:5 taught that only the Father is God in the absolute sense, while Christ is the Son of God, but not “very God of very God”, to use the language of the Nicene Creed. 1 Corinthians 8:6 reads: “But to us there is but one God, the Father, of whom are all things, and we in him; and one Lord Jesus Christ, by whom are all things, and we by him”. Like other non-Trinitarians of his age and today, Newton took this verse to teach that the One God worshipped by the ancient Israelites is the Father alone, not Father, Son and Holy Spirit as in the Athanasian formulation.

    Why did Newton believe the Trinity was a corruption? Concluding that the Trinity could not be found in the Bible (a conclusion he came to in the early 1670s, around the time he turned thirty), Newton also looked in the annals of ecclesiastical history to find out when it was introduced. His research confirmed that Hellenising churchmen introduced Platonic language and “substance talk” to Christianity in the third and fourth centuries. This “substance talk” led to the conception that the Father, Son and Holy Spirit are united according to “essence” or “substance”. Newton had astutely observed that the Bible speaks about the Father’s unity with the Son as a unity of will, not substance (see John 10:30 and 17:11, 21-22).

    In part, Newton was rediscovering a Hebraic conception of God, in which God is described according to his activities and his relationships with the world and His people. This discomfort with the belief that we can know the essence or substance of things resonates with his science, as does what could be called his phenomenalistic understanding of God and Christ. It was Newton’s firm belief that Christians should avoid speculative extrapolations from biblical doctrine and the introduction of foreign ideas to it, both of which can lead to error, and stick with the descriptive accounts of God and Christ found in the Bible.

    But it would be a mistake to characterise Newton’s heresy only in terms of his denial of the Trinity. Newton also rejected the immortality of the soul — another litmus test of orthodoxy — which he similarly found to be unbiblical. Instead of natural immortality, eternal life for Newton was obtained through bodily resurrection. On this point we see another example of Newton rejecting a Hellenised Christian doctrine in favour of a thoroughly Hebraic idea (for the doctrine of natural immortality owes much to the post-biblical superaddition of the conception of the Platonic soul to biblical language). To support his “mortalist” conceptions of the human, Newton turned to passages such as Psalm 6:5, Psalm 115:17 and Ecclesiastes 9:5,10, all of which speak about death as unconscious oblivion.

    On top of these “heresies”, Newton came to believe that the Bible does not teach a literal personal devil or literal personal demons (he had no trouble accepting literal angels). His view on Satan is very similar to the teaching of Judaism that Satan is not a personal being, but rather a personification of the evil inclination (yetzer ha-ra) within the human heart. The demons of the Synoptic Gospels, Newton concluded, were not meant to be taken literally; instead, the language of the Bible here is accommodating itself to the sensibilities of the common people, just as Newton believed it does when it describes the apparent motion of the sun. For Newton, the demon-possessed people whom Jesus healed were simply mentally or physically ill. It would be a mistake to see Newton’s rejection of a personal devil and personal demons as an example of incipient rationalism, however. Instead, these conclusions were the result of his biblicism and, likely, his strong monotheism that rendered belief in supernatural evil beings a threat to the unchallenged sovereignty of the One God Whom he worshipped.

    PN: What were the difficulties associated with being a Socinian - or antitrinitarian in general - in Newton's day? Can you explain what the two involved?

    SS: What Newton was doing was dangerous. Denial of the Trinity was illegal in Britain until 1813. The last burning at the stake in England for denial of the Trinity took place in 1612, only three decades before Newton’s birth. Antitrinitarians were seen as arch-heretics by the Anglican establishment. It was partly for this reason that Newton largely kept his antitrinitarianism to himself. Similarly, denial of the immortality of the soul and of a personal devil were viewed as extremely radical doctrinal moves in Newton’s day. For many, denial of the Trinity, the immortality of the soul and evil spirits was, ironically, tantamount to atheism — even though these denials are also associated with positive teachings (the Oneness of God, the resurrection and strict monotheism). Because of the civil and ecclesiastical laws against such forms of heresy, Newton would have lost his position as Lucasian Chair of Mathematics at Cambridge if he had publicly revealed his heresies while serving in this capacity. We can be absolutely sure of this because Newton’s successor in the Lucasian Chair, William Whiston, was ousted from Cambridge in 1710 precisely for denying the orthodox doctrine of the Trinity. So the stakes were very high indeed.

    Newton’s form of antitrinitarianism, while aligned with an array of supporting biblical texts, can be loosely described as Arian. Arianism is a fourth-century Christology in which Christ pre-exists his birth in Bethlehem and is perhaps of “similar” substance to the Father. In my view, Newton gradually moved away from the more overt implications of Arianism in coming to reject any sort of substance talk (at one place in one of his manuscripts he chastises both Athanasian Trinitarianism and Arianism for introducing metaphysics into Christianity) to focus exclusively on a unity of will between God and Christ. Arians, it is true, also talked a lot about this unity of will, but it seems possible that Newton’s apparent interest in seventeenth-century Socinian Christology softened his Arianism in the later decades of his life. The Socinians (some of whose books he owned, and one of whom Newton actually met personally in 1726) believed that Christ began his existence at his birth by Mary and rejected any sort of substantial relationship between Christ and God. A sign that Newton was at least contemplating the Socinians’ slightly more radical (although, to the Socinians, more biblical) Christology comes in his later theological manuscripts, where he implies that Christians who believe in Christ’s pre-existence should find fellowship with those who don’t.

    PN: Why do you think there has been an emphasis on Newton's so-called scientific papers to the neglect of his considerable writings on religion and alchemy?

    SS: It’s partly Newton’s fault. Newton kept his theological and alchemical papers secret from all but a few of his closest friends. When he died, his relatives, realising the explosive nature of the manuscripts, kept them from public view, despite the fact (and probably partly because of the fact) that antitrinitarians like William Whiston were clamouring for their publication. Add to this the fact that throughout the eighteenth century a series of secular apologists created an image of Newton and his science as iconic of rationalism, and it’s easy to see why the world for years had no idea that Newton was a heretic, a prophetic exegete and a practising alchemist. Until they were sold at Sotheby’s in London in 1936, Newton’s collateral descendants kept the theological and alchemical papers under lock and key, only occasionally allowing historical researchers to examine their contents.

    After the 1936 sale, which was in many respects disastrous for Newton scholarship (at least temporarily), most of the manuscripts circulated for years in private hands before the majority of them eventually settled in academic libraries. Nevertheless, until the 1991 release of most of Newton’s scientific, theological, alchemical and administrative papers on microfilm, accessing these manuscripts was difficult and in some cases impossible. The recent accessibility of the manuscripts is the main reason why the study of Newton’s theology and alchemy is only now beginning to flourish. Added excitement is created every so often when some of the few scattered sheets in Newton’s hand remaining in private collections come up for auction. Small though they are, these documents continue to add to our knowledge of Newton’s theology and the relationship between his theology and his science.

    PN: What is the relationship between Newton's work in these areas and his scientific studies? What was the extent of their interdependence or can the two be separated in his thinking?

    SS: Newton’s theological views related to his natural philosophical work at several levels. In my view, Newton’s theology and his natural philosophy can be distinguished in certain ways, but were never completely separate. First, Newton was stimulated by his religious beliefs to study nature. Like his contemporary the alchemist/chemist Robert Boyle, Newton likely saw himself as a sort of high priest of nature. This religious stimulus to work in natural philosophy, which can be termed an example of a weak relationship between science and religion, did not directly shape the specifics of the content of his natural philosophy. But there are many examples of what can be called a strong relationship between Newton’s science and his religion, namely examples where Newton’s religion helps shape the cognitive content of his natural philosophy.

    Newton was an advocate of natural theology and thus saw the study of nature as revealing the creative hand of God. This commitment to natural theology can be found briefly in the first edition of the Principia (1687) and more extensively in the later editions of the Principia and the Opticks. Several specific examples of interaction are worthy of mention. Newton was keen to avoid what he saw as the major pitfall of the Cartesian mechanical philosophy (which he believed was prone to atheistic extrapolations) and in particular the lack of a role for spirit (in the Cartesian system, spirit is non-extended and thus cannot be the subject of natural philosophy). Newton, in a certain sense, went in the opposite direction, attempting to construct a natural philosophy that led inductively to God and conceiving a view of the universe in which God’s spirit is infinitely extended. God’s omnipresence (associated with God’s spirit) for Newton helps to explain the universality of gravity. Newton only hinted at this in his General Scholium to the Principia, while in private he was much more sanguine. Similarly, Newton’s concept of absolute space and time relate to his notions of God’s infinite extension in space and his infinite extension in time. Interestingly, in conceiving of God filling time on the analogy of God filling space, Newton seems to have moved towards a conception of time as a dimension.

    There are other examples. There appears to be some sort of relationship between a series of rules of prophetic interpretation Newton penned in the 1670s and the Rules of Reasoning that took shape in the three editions of the Principia. It is certainly the case that Newton often advocated the use of similar empirical methodologies in his interpretation of Scripture and his interpretation of nature. An inductive heuristic method is one way that Newton’s study of the Bible is linked to his study of nature. It is also the case that Newton made distinctions between the absolute and relative in both his theology and his natural philosophy. In the case of the former, he distinguished between absolute and relative meanings of the word “God”, contending that Christ was God only in a relative sense, taking the title God (as he does on a handful of occasions in the New Testament) in an honorific rather than ontological sense. The Father, on the other hand, Newton believed was God in an ontological and absolute sense. In his natural philosophy or science Newton distinguished between absolute and relative time, space, place and motion.

    These examples and others are explored in some of the recent scholarship on Newton, including papers that I have recently published and am about to publish. The profound relationships Newton saw between his theology and his natural philosophy can be explained in part by his commitment to the concept of the Two Books, the idea that God wrote two books, the Book of Nature and the Book of Scripture. Newton saw both these books as “written” by God. Believing this, he could conceive of no disunity between them.

    PN: What were Newton's eschatological and prophetic views? How widely known were these in his day?

    SS: Newton was a premillenarian in his eschatology. Simply put, this means that he believed that Christ would return to establish a thousand-year reign with the saints on earth. The “pre” in premillenarian refers to the time of Christ’s second coming (parousia) with respect to the establishment of the Millennium or Kingdom of God on earth. Like other premillenarians, including Joseph Mede of Cambridge (who had almost single-handedly revived premillenarianism in the English-speaking world in the early part of the seventeenth century), Newton found support for his eschatology through a literal but sophisticated interpretation of such prophetic books as Isaiah, Ezekiel, Daniel and Revelation. Newton believed that the Battle of Armageddon (Revelation 16:16), the return of the Jews to Israel, the rebuilding of the Jewish Temple in Jerusalem and the fall of apocalyptic Babylon (the Trinitarian Church, manifested most fully in the Roman Catholic Church), would occur around the time of the second coming of Christ. He also believed that the true Gospel would only begin to be preached successfully shortly before this period. This was one of the reasons why Newton did not actively preach in his own time. He believed the time of the end was still at least two centuries away.

    The restoration of primitive Christianity in the place of corrupt, institutional Christianity is but one of several positive visions Newton had for the future. He also believed that Christ would establish a worldwide Kingdom of God on earth and that this Kingdom would bring about peace and prosperity for the world’s inhabitants. Based on his reading of Old Testament and New Testament prophecies, he believed peace would prevail between nations, within the animal kingdom and between wild animals and humans. The Jews would be restored to their land Israel after centuries of captivity. Jerusalem, now a city of contention for the world and the three monotheistic world religions, would become the capital city, as it were, of this worldwide Kingdom. For Newton, Christ is to be King of this Kingdom and the saints (whom Newton would have identified as the righteous) are to reign over this Kingdom with Christ (Revelation 20:4-6). An important prophetic passage that spoke to Newton of this coming Kingdom of peace is Isaiah 2:2-4, where the prophet Isaiah says of the nations that “they shall beat their swords into plowshares, and their spears into pruninghooks: nation shall not lift up sword against nation, neither shall they learn war any more” (Isaiah 2:4). This evocative vision of international peace evidently also appealed to the former Soviet Union, which donated to the United Nations in New York a statue of a man beating a sword into a plowshare and bearing this quotation from Isaiah.

    Much fanfare has surrounded the recent public revelation that Newton predicted the end of the world in 2060. Although this international news story helped bring Newton the prophetic exegete to the attention of the world in a dramatic way, Newton did not predict that the world would end in 2060. The date 2060 represented for him one of the possible dates when the events of the second coming would begin to take place. The date was scribbled on a scrap of paper and was never meant for public eyes. Ironically, although Newton himself speculated privately about the timing of the end, he was passionately opposed to the public setting of dates, principally because he believed it brought ridicule on the Bible when human interpreters failed in their predictions. Newton believed the genuine fulfilment of prophecy would serve as a powerful argument for the existence of God and the inspiration of the Bible. I have elsewhere written in more detail about how Newton came to the 2060 date and how this date fits into his prophetic scheme.

    Only Newton’s closest followers knew about his prophetic and millenarian views during his lifetime. Some idea of his interest in prophecy was made available to the public when a small portion of his prophetic writings was released in the Observations upon the prophecies of Daniel, and the Apocalypse of St. John in 1733. But the writings published in this work are mostly bland and insipid when compared with some of his early prophetic writings, some of which are laced with his heretical theology. Nevertheless, within some Protestant circles some of Newton’s prophetic ideas were known through this book. It is still respected and quoted by certain Protestant prophetic exegetes today. Most of these, however, are either unaware of Newton’s theological heresy or carefully choose to ignore it. Notwithstanding this, Newton through this book played a minor role in the rise of literal prophetic interpretation in Protestant fundamentalism — an interesting irony considering the use to which Newton was put by Enlightenment thinkers.

    PN: Why did Newton conclude his Principia Mathematica with the General Scholium? Why was he concerned to attack the doctrine of the Trinity at the close of a work that we tend to think of as starting or playing a vital role in the so-called Scientific Revolution?

    SS: The first edition of the Principia published in 1687 contained only one reference to God (in a statement of natural theology) and one mention of the Bible. Newton, for various reasons (some of which are obvious and others that haven’t yet been fully worked out), wanted a much more robust statement of natural theology in his second edition of 1713. This came in the General Scholium, a sort of appendix that he added to the conclusion of the Principia. With his editor Roger Cotes’s new preface, which contained vigorous appeals to the natural theological utility of the Principia, Newton’s greatest work was now positioned as a work that supported natural theology. This isn’t particularly surprising, given the popularity of natural theology in Newton’s lifetime and his evident devotion to it.

    What is more surprising to those unfamiliar with Newton’s thought, and even to some who are, is that Newton would want to “encode”, as it were, a polemic against the Trinity in the General Scholium. But this he did. It was crucial that he present this polemic indirectly, since open denials of the Trinity were illegal and because he didn’t believe in casting his pearls before swine, to use the biblical expression. Thus, the General Scholium is constructed a bit like a Russian doll, with outer layers that are accessible to all (these include the natural theology and the biblical descriptions of God), and inner layers that can be penetrated only by the elite amongst his enemies and supporters (this includes the implicit attack on Trinitarian scriptural hermeneutics).

    Newton attacks the Trinity in the General Scholium in several ways. First, he argues that the term “God” is a relative word like “Lord”. By this he means that “God” as a term does not automatically denote divine essence or substance, but that the term primarily derives its meaning from its relations, as in “God of Israel”. For Newton, the term denotes dominion and power, not essence and might. The unmentioned but implied heretical corollary to this is that when Christ is called God, as he occasionally is in the New Testament, this does not mean that Christ is God in substance, only that he takes on the title of God as God’s representative. To support this implied conclusion, Newton cites John 10:35 and Psalm 82:6, in which Israelite magistrates are called “God” or “gods” because they represented God on earth. Elsewhere in the General Scholium, Newton deftly suggests that God is unipersonal (a heretical conclusion, since in the Trinity God is tripersonal). He also boldly states that we don’t have any idea of the substance of God. Not only does this resonate with Locke’s phenomenalism, but it is a swipe against Trinitarians, who claim they know enough about the substance of God to conclude that the Father, Son and Holy Spirit are consubstantial beings. In Newton’s rejection of substance talk when applied to God, we see a biblical phenomenalism that parallels the phenomenalism of his physics.

    Why attack the Trinity in a work of natural philosophy? Apart from the likelihood that Newton derived a sense of satisfaction in countering the Trinity in a public text and getting away with it, it is likely that Newton wanted to lend his support to other antitrinitarians, who had published much more explicit arguments against the Trinity. Chief among these was Samuel Clarke, who published his antitrinitarian Scripture-doctrine of the Trinity only a year before the General Scholium was released. It also seems likely that Newton equated the feigning of hypotheses in natural philosophy (such as Descartes’ fluid vortices, attacked in the very first line of the General Scholium) with the feigning of hypotheses in theology (such as the doctrine of three consubstantial persons). In fact, it appears that Newton saw himself as working to effect two reformations: one in natural philosophy and one in theology. The two reformations come together in the General Scholium. How can we be certain that Newton wrote a coded attack on the Trinity in the General Scholium? We can be sure because when the language of the General Scholium is compared with identical language in his private manuscripts, the more explicit private manuscripts can be used to interpret the meaning of the public text. And so it is that what many consider to be the single most important book in the history of science ends with a theological attack on the central doctrine of orthodox Christianity. Who would have thought?

    PN: Newton is often considered one of the instigators of the Age of Reason. Why is this view mistaken?

    SS: It’s not entirely mistaken, but it is very misleading. Newton’s mathematics, physics and optics, along with his scientific method, were used by the apologists of the Enlightenment (particularly those in France) to help found the so-called Age of Reason. But to use Newton in this way there was much they had to leave out of the picture. In part, this wasn’t their fault, since so little was known about Newton’s theology and how this theology underpinned his science. Nevertheless, it is undeniable that the French philosophes created Newton in their own image as an icon of rationality. Newton’s firm belief that his natural philosophy would ultimately lead to a knowledge of the God of Israel would have been scandalous to these thinkers. In some cases where these image-makers knew about Newton’s theology, they contended that Newton turned to theology only with old age and after a mental breakdown, thus preserving the “sanctity” of the Principia. But this contention doesn’t wash. Newton’s manuscripts (most of which we can date fairly accurately) demonstrate that Newton was up to his eyeballs in theology and alchemy for a decade or more before he began to work on his revolutionary Principia. What’s more, it is now clear that some of Newton’s preexisting theological and alchemical ideas actually helped inform some aspects of his natural philosophy or science. I’m sure that if some of the more atheistic philosophes had known what we know today, there would have been more than a little weeping and gnashing of teeth.

    PN: In your writings on Newton you have made a distinction between the exoteric and esoteric meanings in his work. Why did he make such a demarcation and how did he achieve it?

    SS: In his theology, Newton liked to distinguish between open and closed levels of knowledge. When discussing biblical doctrine, he spoke in terms of a distinction between “milk for babes” (simple aspects of doctrine understandable by all) and “meat for elders” (the weightier matters of doctrine, accessible only to the theologically astute). He had similar ideas about mathematics and natural philosophy, commenting at one point in his life that he wrote his Principia only for able mathematicians rather than “little smatterers in mathematics”. One can call this a form of intellectual elitism and in part it is. But it is also akin to, and perhaps related to, the forms of epistemological dualism found in the Greek philosopher Pythagoras, the Jewish philosopher Maimonides and in alchemy. As with some other advocates of epistemological dualism, there was for Newton a social corollary in that he believed that the esoteric layers of knowledge could only be penetrated by the adepts. In his religion and in his natural philosophy, Newton saw himself as a member of a small remnant class who possessed the truth.

    PN: What is the Newton Project and what is your involvement with it?

    SS: The Newton Project was formed in England in 1998 as an effort to transcribe and make available the literary output of the “other” Newton, Newton the theologian and alchemist. Harvey Shoolman, Rob Iliffe and Scott Mandelbrote laid the early groundwork. I was also involved from the very beginning since I was working on Newton’s manuscripts and already networking with these gentlemen. I remain active in the Project and serve on the Editorial Board. Many of my transcriptions can be found on the Project website. The Project obtained funding in 1999 and after that the work began in earnest. Rob Iliffe is the driving force of the Project, which is based at his institution, Imperial College, London. Our original goal was to put the theological manuscripts online to make them available freely to scholars and the rest of the world. Since the original inception of the Project, the scope has broadened to include the scientific work of Newton as well. In 2003 I founded the much humbler Newton Project Canada to serve as a focus for Newton transcription work in Canada. The NPC is a sister organisation to the NP-UK, and most of the transcriptions carried out under the aegis of the NPC will ultimately end up on the NP-UK website.

    PN: What other projects are you working on?

    SS: Despite my enthusiasm for Newton and my passion for unravelling his complex thought world, I am engaged in other research projects. I work on early modern religious heresy, including antitrinitarian doctrine and biblical scholarship. Early modern millenarianism and prophetic interpretation is another interest of mine. I have carried out research on the popularisation of science in the early modern period. I do some work on science and religion in the nineteenth century and, partly because I teach two courses on science and religion at King’s College and Dalhousie University, I have begun to explore the current relationship between science and religion. I am interested in issues relating to the presentation of science in the contemporary media (and teach a course on this subject). Although it doesn’t directly relate as much to my current academic work, I also maintain an interest in biblical hermeneutics.

    The Newton Project Canada under my direction is producing electronic editions of Newton’s Chronology (1728), Newton’s Observations (1733) and the Leibniz-Clarke Correspondence (1717). As for my current writing projects, I am continuing to research and publish papers on aspects of Newton’s theology, as well as the interaction between his theology and his natural philosophy. The largest project is a book I’m writing for Icon Books that is tentatively called Isaac Newton, heretic: alchemy, the Apocalypse and the making of modern science. I am also involved in the organisation of a major Newton conference slated for Israel in late 2006. This conference will bring together scholars who work on Newton’s theology, alchemy and natural philosophy for the first time.

    PN: What can studying Newton teach us about contemporary issues?

    SS: I’m a historian; I never think about the present! But seriously, there is much contemporary value in the recent scholarship on Newton’s theology. To begin with, for the first time ever the wider world is beginning to gain an appreciation for the full spectrum of Newton’s career. This is important partly because it helps correct a problem that has plagued the history of science for decades. Until recently, historians of science have tried to recover modern scientists in historical figures such as Aristotle, Roger Bacon, Copernicus, Galileo and Newton. But these men weren’t scientists and would not have been called scientists by their contemporaries. As for Newton, he was a natural philosopher and natural philosophy included in many cases the discovery of God in nature as well as elements we today would associate with the Arts. In other words, the precursor of science was broader in scope than modern science.

    Religious believers and religious scientists today might take comfort in the knowledge that one of the greatest figures in the history of science was not only a believer, but also a believer in the interconnectedness of theology and natural philosophy. But there is something here for non-religious people as well. Newton was convinced that natural philosophy should have a moral dimension. Many observers of science, as well as many scientists themselves, have argued that science has for too long lacked a moral compass. In this regard, the results of the work of the Manhattan Project come to mind. On another note, Newton’s passionate belief that natural philosophers and theologians must humbly submit to the empirical method is something from which those who study Nature and Scripture can learn a great deal.

    At its most fundamental level the new research on Newton is about presenting a more accurate and a more holistic picture of the man. He wasn’t simply a rational “scientist” coldly working through his calculations. He was a man engaged in a range of activities, embracing both of C.P. Snow’s “Two Cultures”, the Sciences and the Arts. Today science and the humanities often seem far removed from each other. When we examine the career of Isaac Newton, we examine the career of a thinker who represents a time before these two fields of human endeavour drifted apart. Since Newton is a central figure in the emergence of modern science, understanding Newton better helps us understand the roots of modern science better.

    PN: What has investigating Newton's thought taught you about history and its methods?

    SS: The study of Newton and his historical context has taught me a lot about the importance of empiricism in historical research. One must not go to the historical sources with a preconceived idea of finding, for example, a modern scientist in the early modern period. Such preconceptions distort the historian’s data collection and his or her conclusions. It is impossible to avoid beginning with a present-centred perspective, because we all live in the present. But as much as possible we must avoid historical anachronism and attempt to extract the past from the sources in an inductive manner. When we do this with Newton it becomes immediately apparent that earlier depictions of Newton as a sort of positivistic scientist are wildly off the mark, even to the point of silliness. We must draw our lessons from history, not impose them on it. At the same time, we historians have to be aware of a host of other types of bias and be willing to submit the conclusions of our research to the critical review of our scholarly peers. Finally, we must never lose sight of the possibility — or likelihood — that in the decades to come our own research may appear incomplete. This awareness should inject a degree of humility into our work.

    PN: Imre Lakatos wrote that "Philosophy of science without history of science is empty; history of science without philosophy of science is blind." What is your view of the relationship between the two?

    SS: One must not study the philosophy of science in a vacuum, just as one should not study philosophy or the history of ideas without a firm sense of the historical context. Similarly, the sophisticated methods available in philosophy of science and philosophy provide aids to the study of the history of science and other aspects of history. Thus, a scientist might want to study Newton’s scientific method without regard to the historical context that helped shape this method, but philosophers of science and historians of science are keen to examine such things as Newton’s scientific method in the various contexts in which it was situated. My own research has pointed to the importance, inter alia, of the theological context, even for the methodological and cognitive aspects of Newton’s natural philosophy or science.
    By Paul Newall (2005)

    This short essay discusses the various forms of falsificationism, particularly insofar as it functions as a proposed answer to the demarcation problem; that is, the search for a means to distinguish between science and non-science.

    Dogmatic Falsificationism

    The dogmatic (sometimes called naturalistic) version of falsificationism is at once the easiest to understand and (apparently) the most straightforward. The way to demarcate between theories is to call scientific those for which we can specify (beforehand) one or more potential falsifiers; that is, an experiment with a particular result that would cause us to give up our theory. The most common example of this approach is the proposition "all swans are white": this can never be proven, since that would require checking each and every swan anywhere; but it can be disproven by finding a single instance of a non-white swan.

    A theory is scientific, then, if we can say what would possibly cause us to reject it. This seems a reasonable approach to take because if there were no circumstances that could ever lead us to reject the theory, it would be uninteresting; after all, why bother investigating a theory that cannot be wrong and is therefore already true? We could just get on with more important things, like rugby.

    For the dogmatic falsificationist, this understanding helps to make sense of what goes on in science. Although a theory is never proven, if we can falsify it then we force ourselves to look again and come up with a better one. This is also unproven but an improvement on the last; and so it goes. Lakatos referred to the illustrative progression from Descartes’ theory of gravity, through Newton’s, to Einstein’s. As the first was refuted, the second came along and was able to explain the observed phenomena without falling victim to the same difficulties. Eventually it was also falsified but Einstein was able to do likewise again, explaining what went before but without the flaws. Falsification thus demarcates between scientific and non-scientific theories and helps account for the development of scientific theories.

    Sadly, it does no such thing and it was not long before the flaws were demonstrated. There were three main concerns. Firstly, there was a reliance on a separation between observational and theoretical propositions. The latter would be a particular theory of gravity, say, while the former would be the observations that are supposed to potentially falsify it. Unfortunately this distinction is untenable. To take an example, consider the famous Tower Argument used by geokineticists and geostaticists alike (that is, those who, in Galileo’s time, held that the Earth was or was not in motion, respectively). It was supposed that by dropping a stone from a tower it could be shown whether or not the Earth was moving, as some claimed: if the Earth was in motion, the stone should fall some distance away from the tower; if not, it should land at the base. The theory was thus to be tested by observation, but the problem came when interpreting what had occurred. When the stone did fall at or near the base of the tower (allowing for experimenter error), the geostaticists remarked that this was just what their theory predicted. In like fashion, the geokineticists also expected the stone to fall at the base because they held that everything on the Earth was moving with it, including the stone and the air through which it fell. Hence we see that there were no observational statements without theories to interpret them. This is an instance of the more general theory-ladenness of observational terms; subsequent study has shown that there can be no (theory-)neutral observational terms because we do not just passively experience the world but actively encounter it and can choose different ways to do so.

    Secondly, there was a logical concern: no proposition can ever be proven by experiment. This basic result has apparently caused much confusion but it is the very difficulty that falsification was proposed to address; namely, that no proposition could ever be proven, hence the effort to disprove them instead. More generally, this is an instance of the problem of induction, or the recognition that logical relations such as proof hold between propositions, not between facts and propositions. Although falsification was supposed to avoid this difficulty by proceeding deductively instead of inductively, in order to call a theory disproven we have to rely on an experiment proving another theory – the negation of the theory under consideration – which is precisely what we agreed could not be done.

    The third and last difficulty was even more severe. When we test a theory by experiment, we do not do so in isolation. Instead, what is actually tested is the conjunction of the theory with a ceteris paribus clause (a Latin term meaning "all other things being equal"). Even if we allow that the first two problems are surmountable, then, we can always dodge a falsification by saying that the ceteris paribus clause was refuted and changing it for another, thereby leaving the theory intact. This is exactly what was done with the Tower Argument, for example: the experiment designed to disprove the motion of the Earth was actually testing the conjoined theory "a stone dropped from a tower on a moving Earth will fall some distance from the base, assuming everything else on the Earth is not moving with it". When the stone fell at the base, the geokineticists could simply reply that the ceteris paribus clause had been falsified, not the motion of the Earth. Lakatos gave another example of an astronomical theory that predicts certain behaviour in the heavens, which is actually not observed. Rather than consider his theory falsified, the theorist then says that there must be another body invisible to the naked eye causing the anomalous effects seen. Even when a new telescope is invented and this is no longer tenable, the theorist appeals to the influence of a magnetic field nearby; and so it goes, each new ceteris paribus clause saving the theory from falsification. These auxiliary hypotheses can always prevent the conclusion that the general theory has been falsified, so dogmatic falsificationism collapses.

    Methodological Falsificationism

    If no theory can thus be conclusively disproven, then all scientific theories are fallible and we are no closer to solving the demarcation problem or characterising what makes a proposition scientific. This unpalatable conclusion brings us to the second form of falsificationism: methodological. The falsificationist now makes the same basic assumptions as his or her dogmatic colleague but calls them tentative – "piles driven into a swamp", as Popper put it. Relying on a set of supposedly unproblematic propositions, which he or she accepts tentatively, the methodological falsificationist proceeds as before to try to falsify theories. He or she is thus a conventionalist in that certain propositions are taken as basic and used as a foundation upon which to build scientific theories. Methodological falsificationism suggests taking some things as given and seeing what happens when we test other theories thereafter; in a word, it advocates risky decisions.

    We can see this at once when we ask what we are to do when a theory is ostensibly falsified. It could be that the theory is false, or that the ceteris paribus clause is, or even that one or more of the "basic" propositions assumed by convention are. Although the choice we make could be wrong, the methodological falsificationist sees this as a matter of the lesser of two evils. Dogmatic falsificationism was a dead-end and hence some bold choices need to be made. The chance of rejecting a true theory as falsified is one to be taken in order to allow the possibility of progress; that is, a choice is made between a brand of falsificationism that may not work and giving up completely in favour of irrationalism and an inability to give any justification for theories. As Lakatos put it, it is "a game in which one has little hopes of winning" but he or she believes "it is still better to play than give up."

    It is difficult to critique methodological falsificationism for the simple reason that it is unfalsifiable. What should concern us most is that the history of science gives little indication of having followed anything like a methodological falsificationist approach. Indeed, and as many studies have shown, scientists of the past (and still today) tended to be reluctant to give up theories that we would have to call falsified in the methodological sense; and very often it turned out that they were correct to do so (when seen from our later perspective). This tenacity in the face of apparent adversity – such as when Einstein dismissed "verification through little effect" when his special theory of relativity was falsified by Kaufmann’s results – is reinforced by the commitment to the themata that Holton has shown characterise scientists' unwillingness to give up their fundamental conceptions of how the universe is.

    The study of the history of science leaves us with a stark choice: either we have to give up the attempt to provide a rational account of how science worked and works (looking for alternatives as Kuhn did), or we must try to reduce in some way the reliance on conventionalist "basic" propositions in methodological falsificationism and try again.

    Sophisticated Falsificationism

    Popper attempted to do this by conceiving a sophisticated version of falsificationism that held a theory T1 to be falsified only if the following three conditions were satisfied:


    There exists a theory T2 that has excess empirical content; that is, it predicts novel facts – new ones not predicted by T1;
    T2 explains everything that was previously explained by T1; and
    Some of these new predictions have been confirmed by experiment.

    It is thus not enough to find a falsifier to reject T1. Sophisticated falsificationism takes us away from making decisions about theories in isolation and towards considering them in company with others. A theory is not to be rejected as falsified until a better one comes along. Although we might find that a number of experiments conflict with a particular theory, we know from our previous considerations that this is never enough to dismiss it. Instead, we wait until a new theory is found which tells us the same things as the old one but without the difficulties (some or all). This gives us a notion of growth or development of theories in place of the dogmatic falsificationism that either accepts or rejects them in single instances. It also means that the so-called "crucial experiment" of dogmatic falsificationism – one that decides the issue at a stroke – is superseded by the realisation that no experiment can be crucial, unless interpreted as such after the event in light of a new theory for which it offers corroboration. Finally, it shows that the idea of proliferating theories (trying lots of alternatives) is important to sophisticated falsificationism in a way it was not at all for the dogmatic version.

    To go back to an earlier example, then, what made Einstein’s theory of gravity "better" than Newton’s was not that one was falsified while the other was not, but instead that Einstein’s explained everything that the earlier theory did while at the same time offering new predictions, some of which were confirmed (such as Eddington’s expedition to observe the eclipse of the sun in 1919).

    The conflict in science is thus not between theories and experiments but always between rival theories. The problem with sophisticated falsification, however, arises from the fact that it is always a series of theories that are consequently referred to as scientific or non-scientific and never a single theory on its own. Where we have two incompatible theories, we may try to replace one with the other, and vice versa, in order to see which (if either) provides the greatest increase in empirical content; but we must fall back on the conventionalist aspects of methodological falsificationism or the untenable assumptions of dogmatic falsificationism in order to ultimately make a choice. After all, calling novel facts corroborated presupposes a clear demarcation between observational and theoretical terms and also that we have a straightforward situation in which no anomalies are involved – both decisions of convention as to what constitutes "basic" or "background" knowledge when undertaking the process. We have the additional difficulty of not knowing whether a potential falsifier refers to the theory being tested – the explanatory theory – or the underlying one(s) used to make sense of it – the interpretive theory. If we can satisfy the requirements of sophisticated falsificationism, which should we reject? Propositions are also no more proven by experiment for sophisticated falsificationism than they were for the dogmatic version, while we can make the same mistakes in rejecting true theories when we assume that excess empirical content has been demonstrated – not least because a different ceteris paribus clause may have new consequences that can be tested. Finally, we are still no closer to explicating the tenacity of theories, even when the conditions of sophisticated falsificationism would have us conclude them falsified, which we again find in the history of science.

    In summary, then, falsificationism in its various forms is an interesting idea but insufficient either to characterise science or solve the demarcation problem. It suffers from a series of logical and philosophical difficulties that should perhaps give us pause if hoping to find a single answer to what makes good science and what does not.

    Selected References:

    Feyerabend, P.K., Against Method (London: Verso, 1975).
    Kuhn, T.S., The Structure of Scientific Revolutions (Chicago: University of Chicago Press, 1962).
    Lakatos, I., The methodology of scientific research programmes (Cambridge: Cambridge University Press, 1978).
    Popper, K.R., The Logic of Scientific Discovery (New York: Basic Books, 1959).
    By Paul Newall (2005)

    According to one understanding of the so-called Galileo Affair, the old system of geocentrism was challenged by the new observations made possible by Galileo’s telescope. In general, it was believed that theories are tested against observations, so that we have a clear demarcation between theoretical and observational statements; the former confirmed or otherwise by the latter.

    This picture breaks down quite quickly when we consider it in more depth. To begin with, Galileo's observations were not made via his unaided senses but through a telescope. Some of his contemporaries were extremely skeptical of this instrument; indeed, what Galileo lacked was a theory of optics to explain why those looking through his telescope could trust what they were "seeing" rather than suspecting the apparent celestial phenomena to be tricks of the lenses. Thus what occurred was not the clash of Aristotelian/Ptolemaic theory with observations but rather its clash with Galileo's observations as interpreted in light of his own optical theories (or lack thereof). We say that the observations were theory-laden.

    This line of argument was originally developed in the 1950s and 60s in opposition to the positivist demarcation of observational and theoretical statements, mainly by N.R. Hanson (1958) and later Thomas S. Kuhn (1962) and Paul Feyerabend (in 1981i and 1981ii). It also found a role in critiques of falsificationism. In response it was agreed that some forms of this demarcation failed but that nevertheless there was a natural boundary between statements derived via theoretical considerations and those resulting from the unbiased experience of the world available to us through the senses. In spite of the existence of hallucinations and other sensory phenomena classified as errors (or variously as unscientific, unhealthy or delusional), it was believed that pure impressions could be received by the passive mind and hence constitute direct knowledge of the world. That is, it is possible to identify a normal cognitive process that, although vulnerable to mistakes due to mental illness and other factors, could be relied upon.

    A debate took place during the 1980s between Churchland and Fodor (see their 1988 and 1986 respectively for representative instances) concerning the extent to which theory-ladenness plays a part in perception. The latter argued that a distinction should be made between perception and inference, wherein the difficulties discussed above would apply to deriving statements from our observations but not to the actual observations themselves. Churchland and others responded (see his paper A Deeper Unity in Munevar (ed.), 1991, for example) by noting that observation is not simply a matter of perception; instead, it is a cognitive achievement that involves perceiving that something is or is not the case. A number of papers in neuroscience and related areas by Churchland and others have since expanded on this insight, explaining that if our brains held no hypotheses about the world when encountering it (or, alternatively, if these hypotheses were fixed) then we would not be able to learn from new information. This is to say that observations and experiences have to be interpreted to be meaningful and it is this unavoidable involvement of a theoretical dimension – even at the level of brain functioning – that constitutes theory-ladenness. It goes all the way down, as Feyerabend would say.

    To return to examples, then, even a straightforward statement such as "this lump of coal weighs one kilogram" is riddled with theory. Whether we include inference from prior experience (i.e. that the heaviness from lifting pieces of coal is conserved over time); the apparatus required to derive weights; the physical theories upon which the instruments and concepts like weight and mass are based; other theories that determine the effect (if any) on weight at different locations; and so on; we are very far indeed from a "basic" proposition.

    Not surprisingly, theory-ladenness has been considered a problem because of the importance attached to experiment as an arbiter in testing and/or choosing between physical theories in science. Much discussion has taken place, particularly with regard to the so-called "experimenters’ regress" identified by Collins: the “correct” result of an experiment is one obtained via correctly functioning apparatus; but the latter is nothing more than that which gives a correct result. Similarly, how do we judge the competence of an experimenter other than by whether he or she obtains the "correct" result? For Thomson, the point was that when we talk about repeating an experiment we mean that we "repeat all the features of an experiment which a theory determines are relevant", so we find ourselves "repeat[ing] the experiment as an example of the theory." The apparent lack of any formalised rules (or demarcation criteria, yet again) to decide these issues has led to increased awareness of the importance of other factors, especially sociological ones.

    If Churchland is correct about the inevitable role of theory-ladenness, however (insofar as it is what makes learning possible in the first place and our experience of the world genuinely cognitive), then it is not so much a circumstance to be lamented but a realisation that judging what there is or is not in the world by reference to passively observing it was too simplistic a hope to begin with. Lubbock's advice that "what we see depends mostly on what we look for" thus becomes not a cynical refrain but an encouragement to look again and again in different ways as part of a truly reflexive practice.


    ---

    Selected References:


    Churchland, P.M., A Deeper Unity: Some Feyerabendian Themes in Neurocomputational Form
    in Munevar, G. (ed.), Beyond Reason: Essays on Paul Feyerabend (Dordrecht: Kluwer, 1991).
    Churchland, P.M., Matter and Consciousness: A Contemporary Introduction to the Philosophy of Mind (Cambridge, MA: The MIT Press, 1988).
    Feyerabend, P.K., Realism, Rationalism, and Scientific Method: Philosophical Papers, Volume 1 (Cambridge: Cambridge University Press, 1981i).
    Feyerabend, P.K., Problems of Empiricism: Philosophical Papers, Volume 2 (Cambridge: Cambridge University Press, 1981ii).
    Fodor, J., Psychosemantics: The Problem of Meaning in the Philosophy of Mind (Cambridge, MA: The MIT Press, 1986).
    Hanson, N.R., Patterns of Discovery (Cambridge: Cambridge University Press, 1958).
    Kuhn, T.S., The Structure of Scientific Revolutions (Chicago: University of Chicago Press, 1962).

    By Paul Newall (2004)

    In this article we'll look at Aesthetics, starting with what it is and why it should concern us before moving on to some historical ideas put forward and the notions that are discussed today. The subject is a very broad one and so we won't be able to cover everything, but hopefully we'll get an indication of why it still fascinates people (and not just philosophers).

    What is Aesthetics?

    Aesthetics is often understood as either the theory of beauty or the philosophy of art, or more generally as both. For a long time the latter was concerned with definition, asking and trying to answer the question "what is art?" The former addresses similar problems, wondering what we mean by beauty (and other similar or contrasting terms that we use, like sublime, ugly and so on), as well as how we would justify an assertion like "this painting is simply magnificent" or "Hugo's writing is pretty awful".

    It seems more obvious in the case of aesthetics why we would bother with a philosophical investigation than many other subjects. Consider the frequent discussions most people find themselves in from time to time when an aesthetic concept is being used or argued: "the Dutch play the most beautiful soccer", "Beethoven's music is more important than a one-hit wonder", "modern architecture is boring", "Juliette Binoche is the best actress alive", and so on.

    An example that comes up often these days is the art exhibition with, say, a fifteen-foot canvas painted in some shade of red before which two patrons of the gallery are arguing. The first might say "'tis truly a marvellous work, dear fellow. The deepness of the colour symbolises both man's passions and his madness, co-joined for the most fleeting of moments with his struggle to grasp a rational temperament and bring order to the chaos that seizes him at every turn. The brush strokes, alternately violent and subdued, remind us that the vicissitudes of childhood and the agony of death are tempered by the calm recognition of our fate that we may attend to in our middle years. How wonderful!" In reply, the second might remark, "um... it's just a plain red canvas. It symbolises only two things: firstly, that you shouldn't sit down or you'll have to stop talking out of your ass; and secondly, that I'm even more of a chimp for paying to see it."

    The point, of course, is that both judgements involve, implicitly or explicitly, a definition of art or a theory of beauty. Even the familiar refrain "beauty is in the eye of the beholder" is just more of the same; after all, why should it be? It isn't obvious that we shouldn't be able to come up with some kind of conception that allows us to say "this is beautiful but that isn't", but neither is it immediate that the task will be an easy one because we see disagreements like that above every day.

    Standing in the Louvre it can appear that there's something very different going on to much of so-called modern art, but plenty of critics wax lyrical about works that the guys at the pub assert could be bettered by their three-year-olds in a finger-painting class, with some on the carpet for good measure. Listening to Bach it can feel as though losing this music would be that much more of a tragedy than the premature death of yet another dance-floor classic. Gazing up at a Dutch townhouse can strike us as a world away from a drab and dreary tower block, while the concretisation of open spaces can seem like a crime—even without the environmental and ethical considerations. However, what reasons do we have for preferring the one to the other, or deciding what goes in the gallery?

    We see, then, that we can't get away from aesthetic judgements of one kind or another. Many thinkers in the past tried to understand art, beauty and what they consist in; some studies say that aesthetics proper started in 1712 with Addison's series of essays in the early issues of the Spectator. Long before then, however, the Egyptians and Greeks were investigating such things. Let's look at some of the opinions offered and the history of the subject.

    What is art?

    Early history—the separation of art

    The term "art" comes from the Latin ars, this itself deriving from the Greek techne, meaning skill. The idea was that all pursuits or endeavours requiring a degree of skill—whether it be leading an army, sculpting a statue, building a house or using rhetoric to win an argument—was an art. Any such skill involved an understanding of rules, so there could be no art without an appreciation of those underlying the discipline. Neither the ancient Greeks nor the later Scholastics considered poetry an art, for instance, because it relied on inspiration (to the former, usually from the Muses) and hence was the very opposite of art.

    Thus it was that both the employment of skill and the mastery of the associated rules gave a much wider definition of art than we use today. A distinction was made between those arts that involved only mental effort (the liberal arts) and those needing physical effort too (the vulgar—meaning "common"—arts). In the Middle Ages, the former were further divided to include grammar, rhetoric and logic, known as the trivium ("three roads"), and arithmetic, geometry, astronomy and music, called the quadrivium ("four roads"). Reading this article, for example, would make it an attempt at a vulgar art form because effort is required both to make sense of the writing and to stay awake while so doing.

    In this list, only music is what we would today consider an art. In those days, though, music was the study of harmony, not how we might think of it. Poetry remained separate from the arts until 1549, when the first Italian translation of Aristotle's Poetics was completed by Segni and people became persuaded by the arguments therein and the explanations of the rules of tragedies. During the Renaissance another development took place: what we now know as the fine arts (painting, sculpture, architecture) became distinguished due in large part to social circumstances. Beauty—whatever it is we mean by that term—had become valued more highly and the proponents of these skills thought of themselves as higher than mere craftsmen. Painting first, and later sculpture, became valuable commodities and at least as good an investment as other ventures, raising the profiles of their creators.

    Dividing the fine arts from science as it came to be was a more tricky matter. Those involved in the former chose to consider themselves scholars rather than artisans because of the social consequences and used the old ideas of what constituted art to study laws and rules. It was then difficult to say that they weren't engaged in science as it was understood then; in painting, for example, men like Piero della Francesca, Pacioli, Leonardo (and also Poussin and Dürer, along with others) were either writing on perspective and proportion or using their work to study them. Only late in the Renaissance did a distinction arise, thanks to the assertion that whatever art could achieve, it couldn't do the same as science. This point was made particularly forcefully by the master Galileo, again combining philosophy with science.

    Defining art

    If we look at dictionary definitions we typically find that the concept of art is either undefined or uses the older versions that have been given up today because they were so unsatisfactory. However, it's interesting to look at the evolution of meaning over the years and note that, once again, philosophical analysis has played a part in the progression of our understanding.

    Back in Greek times there was already a distinction made between original and imitative art but it was lost over the succeeding centuries. For some twenty-one hundred years until the fifteen hundreds, the notion we considered before was in general use; that is, producing something by means of accepted rules. The next two and a half centuries saw the evolution of new ideas as the separation of science and crafts from art was mooted and then asserted, until 1746, when Charles Batteux coined the term "fine arts" and listed them as sculpture, painting, music, dance and poetry. With the addition of a further two—architecture and eloquence—this list of seven became dominant, as did Batteux's conception of these fine arts having in common the imitation of nature. From approximately this time, art was understood as the production of beauty; however, this definition was problematic, so we'll look briefly at why it was given up.

    What is beauty?

    It may seem perfectly obvious what we mean by the term beauty, but even in everyday conversation it's employed in many different senses. Compare, for example, the following statements:


    That was a beautiful sunrise.
    That was a beautiful poem.
    He/She is a beautiful man/woman.
    That was a beautiful goal.
    That was a beautiful moment.
    It's a beautiful day when Hugo doesn't post.
    Although there's a similarity between them, we don't mean quite the same thing in each use of the word. Can we clarify it in such a way as to have each make sense? Perhaps: according to popular definitions, beauty is that which is pleasing to the senses; that would certainly explain each of the options above. Unfortunately, though, it isn't much help to our efforts to define art; art then becomes "the production of things pleasing to the senses", but that puts us back where we started: Beethoven is pleasing to some, but some people take pleasure in the karaoke warbling of rugby players, so how can we call his work art and not the post-match celebrations also?

    Another aspect to this question concerns the division of the crafts from the arts. On the face of it, it seems that a finely balanced sword could easily be considered a masterpiece, or a well-prepared meal, or some exquisite brickwork, or a patchwork quilt put together over generations, and so on. What about carpentry resulting in a simple rocking chair that sends the weary incumbent to sleep as surely as Holblingian prose? Why shouldn't a thatcher call his efforts a work of art?

    A more sophisticated notion of beauty is Cardano's suggestion that it's harmony, a kind of elegance, simplicity and equilibrium of whatever medium is being employed. However, this restrictive sense doesn't seem to capture Baroque or Gothic art, or later on—say—Picasso. As a result, it was rejected too and beauty remained a troublesome aspect of the philosophy of art.

    The failure of definitions

    Both the older classification of art as the production of beauty and the separation of the fine arts from the crafts and sciences began to come unstuck with the advancing years. Photography and later cinematography, along with other areas like landscape gardening, fell outside the scope of the older fine arts and often did not produce beauty in the same sense; can a photograph create beauty, or is it already there and the photographer merely records it? However, it seemed ridiculous to exclude these from any definition of art and hence the understanding of the term had to evolve again. The possibilities put forward included the following:


    Art imitates or reproduces nature: Leonardo considered that the best painting was that which "is in greatest accordance with what it represents", but as we alluded to above, imitation falls victim to much the same difficulties as beauty. How do we judge the success of an imitation, in any case? What of music, or painting in more abstract forms?
    Formalism: According to this idea, art is the creation of form; that is, shaping or constructing things. It goes back (as do most ideas) to Aristotle but the nineteenth and early twentieth centuries saw a lot of work from theorists in rigorously applying it, Zamoyski declaring that "art is all that which has arisen out of a need for shape". The problem here is that the definition is—as the term suggests—too formal, and intentionally so: the formalists wanted to disallow certain areas of art that they considered should not be. Also, we can quite easily say that the sciences and crafts are creative and there are many people who consider their products beautiful, especially in the former case.
    Art is expression: This is Croce's definition and is one of the conceptions that tries to take account of the intentions of the artist; the painter, for example, has a purpose behind his work that goes beyond merely filling a space on a wall, whereas the armourer only plans to make a sword. Even if the latter is fair and accurate, though, we can still find a sword beautiful or call it a work of art. By expression, then, we refer to the artist's intention to show us some feeling, thought or idea through a medium. This, unfortunately, doesn't account for constructivism in art (that is, combining art, science and technology in an experimental way), or the way a photograph or painting, for instance, can evoke different reactions in different people according to their own ideas and not just (or at all) those of the artist.
    Art is that which produces a certain effect: Instead of looking at what the artist had in mind, we could look at the result for those experiencing a piece. What effect, though? We could say that art produces a kind of exhilaration, or shock, or prompts reflection, but—again—the same opera, say, can prompt myriad responses, from tears to blank faces to remarks like "I hope the fat lady sings pretty damned soon". What is the "correct" effect? It's hard to see how the question can be answered in a satisfactory way.
    We see, then, that the situation with defining art is somewhat analogous to that of defining science or scientific method as we saw in the last article. A number of suggestions have been made and each of them seems to have something to it, but they all have their difficulties too and don't quite capture what actually goes on. This led some thinkers to take the view that the task was impossible in the first place, with Weitz asserting in no uncertain terms that "it is impossible to propose any necessary and sufficient condition of art; therefore, any theory of art is a logical impossibility, and not merely something difficult to achieve in practice".

    Defining art again

    It's tempting to fall victim to this idea but it smacks of something that the philosopher Ilkka Niiniluoto calls the "all-or-nothing" fallacy. Perhaps we're just setting the bar too high, asking for something that will provide us with absolute certainty when we really need more complex schemes to deal with art, science and life in general? To go back again to our analogy with science, perhaps we can instead use a list? That is, something like "art is expression, or something that makes you feel awed, or sad, or inspired, or it's giving form to an idea you have, or trying to capture nature..." and so on. This seems a little rambling but that's because we have to take into account so many different factors to get at what we mean by art, and because concentrating on only one of them to the exclusion of others has failed. The Polish thinker Tatarkiewicz did just this when he gave this definition:

    "A work of art is either a reproduction of things, or a construction of forms, or an expression of experiences, such that it is capable of evoking delight, or emotion, or shock."

    Here we see again the evolution and increasing sophistication of philosophical thought and still today efforts continue on this important question. Tatarkiewicz was subject to critique, too, with the Institutional Theory of Dickie coming to the fore (art is defined more by the conferral of status as a piece of art by the artworld, or the community making it up), while still others have suggested (referring to some ideas of Wittgenstein) that the concept of art, like so many others, is defined by its use; to try to do otherwise is to miss the point and misunderstand the way in which all definitions are continuously subject to challenge by the artists themselves.

    Questions in Aesthetics

    Apart from the difficult problem of defining art, there are other areas of aesthetics that we'll say a few words about here. The subject is so large today that it would require a separate series to cover everything that's been written on or thought about art over the years, as we'd expect given the importance most people attach to it in one form or another.

    Experiencing art

    One question studied by aesthetics is the experience of art: is there a correct way to watch a movie, look at a painting, or listen to a piece of music? Do we get more out of the Mona Lisa if we know the history of it, for example, or how it was painted? Is the feeling it invokes in an artist deeper or more meaningful than that in a tourist, or just different? Do we need to know the intentions of an author, say, or what the poem was supposed to mean to appreciate them to the same degree as an English graduate might? Alternatively, does too much education on these things actually dull the senses?

    Judging art

    Everyone is capable of making judgements about art, whether graciously, with curses, or with the simple teeth-clenched retort of "philistine". However, are some opinions more important than others? We are not considering the differing experiences that two pictures may bring about, but instead the classification "that one is best", or something similar. Does expertise make a judgement more worthy of our consideration, as before? On the other hand, can we just say that it's all "in the eye of the beholder", as the saying goes, and hence confidently skirt around remarks like "face it, Holbling: your writing sucks as much as your thinking"?

    Art and Society

    What is the relationship between art and society? In particular, do artists have social responsibilities in their work or can they almost do as they choose?

    In recent years such questions have had an impact on philosophy, especially with discussion of Heidegger and the influence on his thinking — if any — of his Nazism. What of art? Does a photographer in a war zone have a moral obligation of some kind to bring the horrible truth to the public, or does distorting it a little have minimal impact on his or her art? Does a painter who sketches dead bodies with the assistance of a mortuary worker need to worry about what the deceased's family might think? Should an artist seek to have a positive impact on society, or is it "art for art's sake"? There are many ways in which art and society interact and much work in aesthetics is concerned with them.

    We see, then, that aesthetics is a diverse and important subject with roots stretching back thousands of years but which is far from finished and with the same relevance to us today as when the first caveman said "that doesn't look like a mammoth to me."

    Dialogue the Fifth

    The Scene: Still in The Drunken Bishop, Anna and Jennifer have gone to the bar and are talking to a friend of the former, having done so to "let the boys have a few minutes to themselves". Steven is trying to look at Jennifer without making it too obvious while Anna is pointing out Trystyn to her friend.

    Steven: (Eyes slightly glazed...) Wow. Where did you find her?

    Trystyn: At my uncle's house.

    Steven: (Not really listening...) Your cousin is hot.

    Trystyn: (Deadpan...) So am I—they should open a window.

    Steven: (Blinking...) Um, I mean she's nice.

    Trystyn: She's a good person, for sure.

    Steven: What? (He punches Trystyn lightly.) You suck. She's beautiful, I mean.

    Trystyn: What do you mean by "beautiful"?

    Steven: (Exasperated...) Here's an idea: close your eyes; remember that I'm not paying you to be an idiot, so you're working for free tonight; then open them and take another look.

    Trystyn: She's my cousin...

    Steven: So?

    Trystyn: It's not something I've thought about.

    Steven: You're a saint, to be sure.

    Trystyn: No doubt. (He pauses.) Anyway... I've been studying aesthetics recently and it's quite interesting. What do you mean by "beautiful"?

    Steven: It figures that a philosopher would study putting people to sleep.

    Trystyn: Aesthetics, you dolt. Philosophy of art; theory of beauty; that kind of thing.

    Steven: There's a theory? Don't you guys ever leave the library?

    Trystyn: (Conspiratorially...) We get girls there too...

    Steven: (He turns sharply to look at Trystyn.) Really? (Pause. He looks back at Jennifer as before.) "I knows what I likes and I likes what I sees."

    Trystyn: Okay: what is it that makes her beautiful?

    Steven: (Misty-eyed...) Her mind. (Pause.) She has the beating of me, for sure. (Another pause.) Her eyes, and how they narrow when she's about to speak. The way she's holding that glass.

    Trystyn: (Serious.) Wow. I think I'm going to cry. (Pause.) You've only just met her.

    Steven: Weird, huh?

    Trystyn: Hmm. Do you think other guys mean the same things when they say a girl is beautiful?

    Steven: Probably not. Most guys only have one thing on their mind: does she have a father or brother big enough to hurt me?

    Trystyn: (Laughing...) Fair enough. Is it the same as when you say a painting is beautiful, though, or a sculpture?

    Steven: How do you mean?

    Trystyn: Well, are you saying "this painting is beautiful" because you think it's got something about it that makes you feel as though it's special in some way, or worth something; or are you commenting on some technical aspect of it, like the method? Are you referring to the way the painter has achieved whatever aim he or she had in mind? And so on. (Pause.) These aren't the same.

    Steven: I see. (He looks at Jennifer more intently.) When I look at your cousin, I don't really care what anyone else thinks about her, and I suppose it's that way with paintings sometimes too. Then again, I can see what you're getting at: if I watch a movie and I can appreciate how difficult a scene was to make or how hard it was to get across a certain emotion with just a glance, I might think of the director or actress and say "that was beautiful".

    Trystyn: Right—and you'd expect there to be something about it that others could notice too.

    Steven: I suppose so. It should be there for all to see. Then again, not everyone does.

    Trystyn: Right. Then we can ask another question: is that a failing on their part?

    Steven: A failing?

    Trystyn: Is there a "correct" way to watch the movie or is any interpretation as good as another? Also, suppose a scene makes you feel sad but someone else views it differently—satirical, say. Is that a failing on either part, or is there no "correct" way to feel about a scene?

    Steven: I guess I've never thought about it this way. You just assume that what you feel or think about a movie is the right way and hardly give it another thought.

    Trystyn: Sure, but suppose it could be shown that there is a correct way to understand a movie and yours is different; you could watch it again with this in mind and maybe get something else out of it. On the other hand, if it were shown that it just can't be done, then it might say something about critics.

    Steven: That they're useless parasites, you mean?

    Trystyn: (Laughing...) Sort of.

    Steven: I see your point. They should just say "here's my take" and not "this movie is lame and if you disagree you're evidently some form of troglodyte."

    Trystyn: That's it, I suppose. Even if we don't give aesthetics the time of day, we're still using it.

    Steven: The important point is: can I use it on your cousin?

    Trystyn: Better ask her—here she comes now. (The girls are returning.)

    Steven: (Worried...) What should I say? Pretend we were talking about something interesting.

    Trystyn: That's a tall order for me, as you know. (He winks.) Just be yourself; otherwise she'll see through you, beautiful or not. (Thoughtfully, looking at Anna...) Strange creatures—mesmerising, really.

    (Steven is not listening.)

    Curtain. Fin.
    Teaser Paragraph: Publish Date: 06/06/2004 Article Image:

    By Paul Newall (2004)

    In the last article we looked at the sources, scope and—in general—the theory of knowledge. Given that much of the information we have about our world today has come from science in one way or another, it makes sense to look next at the philosophy of science. As usual, we'll investigate the subject by looking at some of its history initially before moving on to some of the interesting topics being discussed today. First, though, we need to understand what the philosophy of science is concerned with and why it should bother us at all.

    Why study the Philosophy of Science?

    It's possible to give technical justifications for our studies, but let's instead start from the very beginning. Suppose, like Galileo, we stand near the top of the leaning tower of Pisa and simultaneously drop balls of differing weights but roughly the same size. What, before we let go, is the point of this experiment? It's intuitively obvious that the heavier will hit the ground first, so why do it in the first place? Indeed, it may be at least partly because it was (and usually still is) so obvious that few people actually checked. Even so, what does this first theory ("the heavier will land first") mean?


    The heavier piece will land before the lighter piece of the same size if both are dropped at the same time from the leaning tower of Pisa.
    The heavier piece will always land before the lighter piece of the same size if both are dropped at the same time from the leaning tower of Pisa.
    The heavier piece will always land before the lighter piece of the same size if both are dropped at the same time from anywhere.
    The heavier piece will always land before the lighter piece of the same size if both are dropped at the same time and under the same conditions from anywhere.
    The heavier piece will always land before the lighter piece of the same size if both are dropped at the same time and under the same conditions from anywhere and at any time.
    Already we can see that the meaning of our theory is not immediately clear and that even these few alternatives are very different. Also, they tell us what we expect to happen if we actually tried the test (that is, they predict), but not why (that is, they don't explain). Here we have another question to ask of science before we go any further: what are we aiming at? That is, what goal do we have in mind, excluding the remark "just throw things at Hugo"?


    A theory that tells us what to expect and hence allows us to predict the consequences of our actions.
    A theory that tells us why one thing should happen instead of another.
    A theory that describes what happened but says nothing about what might happen in the future, or why.
    Some combination of the above, or something else.
    Again, these aren't the same at all. The first reminds us of the practical person who says "I don't care how it works; I just want to know how to use it." The second seems to be looking deeper, but it of course depends on the context—after all, what do we want the theory for in the first place? The third manages to capture what happened in a description but tells us nothing further. It seems, at this stage, that a little of all would be a better prospect.

    Galileo—to get back to the story—had different ideas. He proposed a different theory, according to which both would land at the same time. In fact, the Aristotelian thinking he was opposing was very complex indeed and to check his theory he decided to try the test that was supposed to give an obvious result: he climbed the tower and started dropping things. He found, of course, that they did land at the same time, so we have two theories, a test and some results. What can we say now?


    Galileo's theory is correct.
    The first theory is wrong.
    Both of the above.
    Galileo's theory is correct under certain conditions but may still be wrong under others.
    The first theory is wrong under certain conditions but may still be correct (or useful) under others.
    Galileo's theory is more likely to be correct than the other.
    ... and so on again. The conclusion we're entitled to make is not so obvious; perhaps Galileo cheated to prove his idea, meaning we'd be wrong to reject the first idea? Alternatively, perhaps he was right after all but still cheated in his experiment? What could we say then? It could also be that the test was flawed in some way, such that although Galileo was honest in his approach he in fact didn't show anything. Moreover, perhaps the theory is a good one for Pisa, but are we justified in claiming that it'll work anywhere? Here we are up against the problem of induction again.
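
    To make the idealised prediction concrete, here is a minimal sketch, assuming no air resistance and a constant acceleration due to gravity; the function name and the figures (a drop of roughly 55 metres, balls of 0.1 and 10 kilograms) are merely illustrative. The point is simply that the mass of the ball does not appear in the calculation at all.

```python
import math

def fall_time(height_m, g=9.81):
    """Time in seconds to fall from rest through height_m metres,
    ignoring air resistance: h = (1/2) * g * t**2, so t = sqrt(2h / g)."""
    return math.sqrt(2 * height_m / g)

# Two balls of very different mass dropped from roughly the height of the tower:
# mass never enters the formula, so the computed times are identical.
for mass_kg in (0.1, 10.0):
    print(f"{mass_kg:5.1f} kg ball falls in {fall_time(55):.2f} s")
```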

    Some people are aware that what Galileo actually found was a good deal more complicated. On some occasions the heavier object fell slightly quicker, striking the ground just before the lighter. At other times the lighter fell quicker, a result also obtained by Borro using lead and wood. Galileo was not inclined, however, to reject his theory because he thought there may be ways to account for the puzzling results that weren't quite as he expected. Indeed, a recent paper by Settle has managed to solve this mystery: it's actually impossible to release two objects from the hands at exactly the same time; instead, and without meaning to, an experimenter will invariably let the heavier one go first. Thus we see that the experiment when taken literally seems to be confusing without some notion of how to interpret it; we have to be very careful when asking what it all means. Galileo used experiment to test his theory, but when it didn't quite work out he nevertheless kept his theory because of some still more theoretical reasons. It's about time, then, that we looked at how science is supposed to proceed—the scientific method—and what philosophy has to say about it.

    The Scientific Method

    Why do we need to worry about what we mean by scientific method? It's true that your humble narrator is inclined to talk to himself on the matter, but what difference does that make? Well, suppose we look at the history of science, particularly those episodes that—with the benefit of hindsight—we consider to have contained good ideas or decisions, such as supposing that theories should be tested by experiment or that the earth isn't flat. Are there any features in common that could account for the success? If so, we could perhaps say something like "if you want to find a good theory, you should do x", or at least "... you shouldn't do y". This way, both good and bad moves made in the past can inform us today. On the face of it, this seems like a good idea, so let's see what suggestions for methods were offered historically.

    Induction

    It's often held that early scientists didn't approach their work with the same sophistication as we do today, but we've already seen that Galileo was both doing experiments and considering what the implications were for knowledge. In the early seventeenth century, on the other hand, Bacon was advocating for science an inductive method: the idea was to gather as much data as possible about the world and infer general theories therefrom, all the while taking care not to allow any assumptions or theories to influence the finding of information in the first place. We already know about some problems with the former; the latter we'll come to in more detail soon, but for the time being we can at least note that stopping ourselves from having any prior thoughts on what we expect to find is a tall order. Lakatos also pointed out the logical impossibility of deriving a general law from facts.

    Hypothetico-deductive

    Although he didn't call it so, this method was conceived by Newton late in the seventeenth century. The principle is as follows: first, we have an idea or suggested theory (the hypothesis part) that we come up with for some reason or other; then, we try to figure out what the consequences of it would be (the deduction part). The final stage is to test for these expectations and, by so doing, verify whether the theory is a good one or not. In this method it doesn't matter where the theory comes from, but only how well it's confirmed by experiment.

    Unfortunately there's a significant problem here that becomes clear when we set the method out in logical form, as we saw in the earlier article. We want to say:


    P1: If theory T is true, then we would expect to see a set of facts or results F;
    P2: We see F;
    C: Therefore, T is true.
    This is a logical fallacy called affirming the consequent; the flaw is that although T may be true, F might instead be due to something else entirely. Look at this argument, for example:


    P1: If rain dances are effective, we would expect to see rain after a dance;
    P2: Rain is found to follow rain dances;
    C: Therefore, rain dances are effective.
    In fact, it could be that the rain is caused by something other than the dancing (we would say that it is) and the dance leader has a fair idea of what signs to look for, only starting a dance at such times. If that's so, no amount of wiggling is likely to open the floodgates. Hence, the conclusion doesn't follow from the premises. The flaw in this argument is a difficulty for the hypothetico-deductive method.
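
    To see why the inference fails, it helps to enumerate the possibilities: there is a case in which both premises hold and the conclusion does not. The sketch below is only an illustration of the logical point, with the propositions reduced to plain booleans (T for "the theory is true", F for "the facts are observed").

```python
from itertools import product

# P1: if T then F  (material implication: not T, or F)
# P2: F
# C:  T
for T, F in product([True, False], repeat=2):
    p1 = (not T) or F
    p2 = F
    if p1 and p2 and not T:
        print(f"Counterexample: T={T}, F={F} -- premises true, conclusion false")
```

    The case it finds (T false, F true) is exactly the rain-dance scenario: the rain duly arrives, but not because of the dancing.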

    Abduction

    The generally overlooked philosopher C.S. Peirce wrote a good deal on this method that dates back to Aristotle. It's often called inference to the best explanation and reasons thus:


    P1: Facts of the form B have been observed;
    P2: The statement, "If A, then B" can explain B;
    C: Therefore, A.
    This is much the same as the previous method but the important distinction for Peirce was that A is the best explanation for B and therefore is the probable explanation. In our example of the rain dancing, then, it would seem that this isn't the best explanation of the rain, unless your dancing is quite something.

    One problem with this theory is what we mean by the "best" explanation. Another is how it can cope with Hume's problem (it can't). A third is that making a statement like "A is the most probable explanation" has proved very difficult indeed and prompted a great deal of (highly technical) work in inductive justification.
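
    As a toy way of seeing how much work the word "best" is doing, imagine ranking candidate explanations by a score. Everything in the sketch below (the candidates, the criteria and the weights) is invented for illustration; deciding how the scoring should work is precisely the philosophical problem left unsolved.

```python
# Toy illustration of "inference to the best explanation": pick the candidate
# with the highest score. The candidates, criteria and weights are made up;
# the inference only gets going once we have already decided how to score.
WEIGHTS = {"fits_evidence": 0.7, "simplicity": 0.3}

candidates = {
    "a weather front moved in": {"fits_evidence": 0.9, "simplicity": 0.8},
    "the rain dance caused the rain": {"fits_evidence": 0.9, "simplicity": 0.1},
}

def score(criteria):
    return sum(WEIGHTS[name] * value for name, value in criteria.items())

best = max(candidates, key=lambda c: score(candidates[c]))
print("Inferred 'best' explanation:", best)  # the verdict depends entirely on the chosen weights
```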

    According to the philosopher Hilary Putnam, however, it would be a miracle if a false hypothesis were nevertheless as successful as some of our scientific theories are, and many people consider this a decisive reply to such worries. Of course, it isn't; one of several objections is that this scheme uses inference to the best explanation to justify inference to the best explanation—a decidedly unsatisfactory situation.

    Falsification

    Before he became the butt of philosophical jokes, Karl Popper claimed to have conceived the method of falsification that in fact—again—dates back to Aristotle. It took several forms (naïve, methodological and sophisticated) as it proved very difficult indeed to stick up for and was battered by a succession of brutal critiques. In its basic form it was an attempt to avoid the problem of induction by suggesting that science could instead proceed in a deductive fashion: scientists would propose theories and then try to falsify them (i.e. show them to be wrong). A theory that has stood the test of many such attempts is a good one but may still be wrong; a theory that is falsified is discarded. On the other hand, a theory that cannot be falsified at all is thereby not scientific.

    An uncharitable way to look at Popper is to ask if—in common with many philosophers of science—he neglected to check how scientists were actually working, but in fact he was suggesting a new way in which science was to be understood. Unfortunately his ideas were taken to task because very often theories are proposed that don't specify what would falsify them (perhaps they're at an early stage), or else are falsified but still clung to by scientists (Einstein is the paradigmatic example of both)—and why not? It may be that an experiment discovers an anomaly, not a falsification; also, what if the experiment was in error somewhere, or its consequences misunderstood? What if the theory was wrong but by clinging to it scientists found a way around the difficulty and thereby made it stronger? None of these possibilities, all of which occur throughout the history of science, are accounted for by Popper's ideas and hence falsification was eventually treated with some hostility.

    Who needs method?

    As a result of these difficulties, some philosophers began to wonder if the prospect of a unique scientific method was such a good one after all. (Meanwhile, other philosophers worried that such thinking would swiftly send the world to hell in a hand basket.) Research found that in fact the many sciences were not unified at all and employed different methodologies (for example, compare particle and condensed matter physics, or molecular and organismic biology), very often even within the same field (compare Einstein or Dirac to Ehrenhaft). Nowadays this disunity of the scientific enterprise is gaining greater recognition and scientists and philosophers alike are less keen to hold forth on the scientific method. Moreover, studies in the history of science have shown that no methodological account seems to be able to take in all the twists and turns made by individuals.

    The demarcation problem

    Perhaps none of this is such a big deal but many people want to distinguish between science and non-science (or pseudo-science), usually to disparage the latter. In that case, we may not be too concerned at the lack of a distinct method but it would help if we could say "this is science" and, similarly, point out what isn't; sometimes we see "scientific" used as a word meaning "you should accept this", so if it's wrongly applied then people could be deceived. This became especially important to debates on funding (who gets the little money available to try all the ideas out there?) and education (how do we decide what goes in the curriculum as science?), the latter particularly with regard to creationism. Thus the demarcation problem: what factors characterise science?

    It seemed that the ideal solution would state that science consists of x, y and z but creationism (or whatever) doesn't; therefore, creationism isn't science and shouldn't be on the curriculum. Some philosophers, though, warned either that this wasn't possible (Lakatos and Feyerabend in particular) or that it would backfire (Laudan). Due to the former, the latter is what happened: science was defined according to a few flawed criteria, leaving creationists the task of adapting their ideas to fulfil them and hence giving birth to creation science, so-called.

    There have been several attempts to propose criteria that would solve the demarcation problem but they were either subject to severe critique (usually by Lakatos) or proved to have no uncontroversial analysis. This led Laudan to declare "the demise of the demarcation problem" and indeed many thinkers have decided to try for something less ambitious.

    What can we say about science?

    A description of science today is likely in some quarters to consist in a non-prescriptive list. For example, a scientific theory is one that has some or all of the following factors:


    It makes testable predictions.
    It is falsifiable.
    It predicts new facts.
    It unifies already existing ideas.
    It is consistent with what we already know.
    And so on...
    However, the point of it being non-prescriptive is that even a theory that doesn't succeed in meeting one of the criteria may be a good or useful theory; we need only be a little cautious about those that fail to meet any or only a few.

    Imre Lakatos used this understanding to develop his methodology of scientific research programmes that made an effort to take into account both the philosophical difficulties we've seen so far and the history of what happened to various ideas and the thinking proposed by scientists and philosophers over the years. He wanted to appreciate just when it would be appropriate to finally discard a theory or, conversely, whether we should be reluctant to ever do so. This was sparked, at least in part, by some historical cases.

    For example, Atomism was proposed back in classical Greek times, in particular by Leucippus and Democritus. Since that time it was mooted, supported, refuted or rejected on several occasions until, some two thousand years later, it finally became a scientific theory, even though in the early part of the twentieth century it was still looked upon with some scorn. This being so, how can we be sure in eliminating a shaky theory that we won't be making a mistake in so doing? If the answer is that we can't, how can we instead minimise our chances of error while giving a similar idea every chance to impress us again?

    Mill gave a thorough and quite beautiful argument in his On Liberty in favour of methodological pluralism, the notion of giving even apparently crazy theories a chance and using them to aid our work with others. Feyerabend showed with many examples how such pluralism is indispensable and that very often only another method can illuminate flaws or strengths in one we may support. In combination with the tenacity in the face of difficulties that is the lesson of the history of ideas, Lakatos thought he could take these into account with two concepts: firstly, a negative heuristic, which directs criticism away from the hard core of a theory, those parts we are reluctant to give up (roughly what Kuhn looked at in his famous work The Structure of Scientific Revolutions); and secondly, a positive heuristic, which guides the construction of the additional or auxiliary ideas that defend the core against the anomalies and new information that may come up.

    He suggested, then, that the distinguishing characteristic of a progressive, scientific research programme is that it makes new predictions or discovers new facts; a degenerating, pseudo-scientific research programme does not. Nevertheless, the latter case is no reason to reject a theory and we may ask just how new facts are to be found unless we employ a methodological pluralism in the first place and devote time and energy to alternative hypotheses. Lakatos was criticised on such grounds but his terminology has become widely-used today in both science and philosophy.

    In the philosophy of science, then, we have seen progress; we've learned that a simplistic understanding of science won't suffice and that myriad factors need to be taken into account.

    Some concepts in the philosophy of science

    It may be useful in closing this article to look at some of the terms that come up often in discussion that are from or related to the philosophy of science. By doing so, we may begin to understand just what the hell your narrator is talking about in the majority of his blustering.

    Ockham's Razor

    With the exception of the argumentum ad hominem, parsimony is probably one of the least understood concepts around. Philosophers and scientists alike are very sceptical of its application and with good reason. The idea is usually given as "do not multiply entities unnecessarily", or that the theory with the least assumptions is to be preferred. Technical analyses of this suggestion can be made, but the general point is that we are very rarely, if ever, in a situation where two theories have exactly the same consequences and content, except for one having more assumptions. A point made with much force by Bohr is that these consequences of the additional assumptions that we're supposed to reject are never clear before the fact; they have to be investigated to see if they tell us anything extra, either in the area being looked at or outside. Once they've been studied in any depth the issue of which theory to choose usually ends up being decided by other reasons, but even when we think we have considered everything it may still be that at a later date something further comes up. Thus it makes little sense, especially given the many examples from the history of science and ideas we could adduce, to reject a theory on the basis of parsimony unless it meets the very unlikely conditions for use.

    The under-determination of theories

    In the last article we looked at the example of finding white sheep and asking how reasonable it would be to adopt the theory that the next sheep found will be purple. Given that the already available evidence supports equally this hypothesis and an alternative that the sheep would be white, we couldn't say that one was any more reasonable than the other. This is generally called the under-determination of theories: the evidence we have to hand fails to pick out one theory when all are equally supported, as in this example. One way around this difficulty is to note that we're rarely faced with an infinity of (or even just several) competing theories and when we are (as in this case) there are other reasons why we accept the one and not the other (for example, some information on the possible pigmentation of wool). Nevertheless, and in light of our comments on pluralism, perhaps we should view it as a failing if we don't have rival theories to choose between?

    The theory-ladenness of terms

    A much more difficult proposition is given by the idea that the appeal to evidence made by many people is all but empty. In its most extreme (and common) form, the conception is of theories that are tested against the facts that somehow sit in the world awaiting our comparison. Instead, these facts themselves depend on other theories in order to be understood, and they on further facts that are interpreted by other theories, and so on. Theories, therefore, go all the way down: there is no evidence free of any theory to appeal to. Another way of saying this is that there's no way to make an observation without relying on theory in some way.

    What are the consequences of this strange situation? Well, early (naïve) versions of empiricism were killed because the experience to be referred to is infected by theory. Also, the comment "I don't see any evidence" is to be more carefully considered; if our observations rely on theories then Lubbock was at least partly correct that "what we see depends mostly on what we look for". There are other more technical points that we won't consider here.

    Verisimilitude

    When Popper began to look at the possibility of comparing a theory to the truth, in the sense of "what there really is", he conceived the notion of verisimilitude: essentially, a measure of how close to or far from the actual truth a theory is. This would be especially useful if two (or more) theories have the same consequences or are both known to be incorrect because we may still care to know which is closer to the truth. Unfortunately this is a notoriously difficult idea to make satisfactory and, as is the sport, Popper came up against some very serious criticism from the likes of Lakatos and Oddie. In recent times Niiniluoto, Tuomela and others have offered more stringent versions but they require a good deal of mathematics to appreciate so we won't cover them here.

    The problem of realism

    The main concern in the philosophy of science today is the problem of realism, which deals with the interpretations of theories. Suppose, for example, that we have a theory that explains in a satisfactory way why an apple dropped outside Notre Dame in Paris falls to the ground, using some form of theory of gravity. Since we can't see or observe gravity with our own senses except by what we suppose to be its effects, should we say that gravity is real (i.e. that it really exists)? Later on the theory might become more successful, in which case we might be even more tempted to say that it is so because the gravity referred to really does exist, although we need to be wary of making the same logical flaw that we saw earlier of affirming the consequent. However, on many occasions in the past our theories have turned out to be wrong, replaced by others. Should we, then, not be a little more careful when declaring what exists and what doesn't?

    This debate has grown into many threads and even realism is no longer easily defined. Niiniluoto gave six different areas we could be realists about, along with the type of questions we could ask:


    Ontological: Which entities are real? Is there a mind-independent world?
    Semantical: Is truth an objective language-world relation?
    Epistemological: Is knowledge about the world possible?
    Axiological: Is truth one of the aims of inquiry?
    Methodological: What are the best methods for pursuing knowledge?
    Ethical: Do moral values exist in reality?
    Some of these areas we haven't yet covered, but we can see that the problem is wide-ranging and the questions important. If we answer "no" (or similar) to any, we call ourselves anti-realists with respect to them. Note that we could be realists on some issues but anti-realists on others: for example, we could believe that the world really does exist and can be known more or less, but also that there are no moral values other than those we create for ourselves. Presently the discussions are at something of an impasse on traditional fronts but new perspectives are being tried by many thinkers. Perhaps the most famous case of realist versus anti-realist interpretation is that of the Quantum Theory. At a later date much more will be said on this vibrant and impassioned area of study.

    There is one significant problem in the philosophy of science to be avoided: poor philosophical ideas may hold back the practice of science. Unfortunately, rather than this being a concern for philosophers (although sometimes it has been), often the guilty parties are scientists who employ uncritical philosophical assumptions in their work without appreciating their basis and their consequences. This has very much been the case with the Quantum Theory, where philosophical decisions made deliberately or unthinkingly have influenced the course of subsequent work, with some (including the scientists involved) saying the influence has been negative. Thus it is that whatever our feelings on the philosophy of science, it cannot help but remain relevant and important.

    [For more on the philosophy of science, follow the links given above or visit the History and Philosophy of Science section of the site.]

    Dialogue the Fourth

    The Scene: Still in the Drunken Bishop, Anna has learned that the mysterious girl is in fact Jennifer, Trystyn's cousin on his mother's side and also a philosophy student. As a result, Steven has a new-found desire to continue the discussion of the subject.

    Steven: So tell me, Jennifer: which area of philosophy are you interested in?

    Jennifer: (She looks at Trystyn, who nods.) Realism mostly; the problem of realism.

    Steven: There's a problem?

    Trystyn: It depends who you ask...

    Jennifer: Look at this table. (She knocks on it.) Is it real?

    Steven: Of course it is. (He knocks also.) Is this a trick question?

    Trystyn: It depends who you ask...

    Jennifer: It certainly seems real enough. (She knocks again.) Kinda solid, really. We also have a picture, though, that says the table is composed of particles in some way, mostly empty space. Are these particles real? If so, which picture is really real; the ordinary one or the technical one? Perhaps both?

    Anna: Maybe it doesn't matter? (Trystyn smiles.)

    Steven: Hold on—I don't think we should minimise the importance of philosophy here. (He knocks the table for good measure. Trystyn has rolled his eyes so far they do not appear to be coming back.)

    Jennifer: Of course it matters. The whole point of science, after all, is to find out what the world is really like, so if we have conflicting ideas about what's real or how sure we can be about any such claims, we ought to be worried about it.

    Anna: (Indicating Steven...) I thought you said science aimed only at explaining what had happened and predicting what might?

    Steven: (Quietly, through clenched teeth...) Did I? I don't recall. (To Jennifer...) Tell me some more about the problem.

    Jennifer: Well, we can see and feel the table here; other theories about sub-atomic particles and the like aren't so obvious and even experiments have different interpretations. Not so many people are inclined to doubt the existence of the table...

    Steven: (Nodding at Trystyn...) May I present exhibit A? This fellow will argue for or against anything.

    Jennifer: The scientific picture is slightly different, though. We had ideas in the past about what's real and what isn't that turned out to be wrong, so we need to be careful that they aren't again. Think about it: many times before we've come up with theories that explain something on the basis of the existence of something else - like the ether, phlogiston or the power of sympathy - but they turned out to be poor theories and now we say those things don't exist after all. Why should we agree, then, that the latest round of similar declarations should fare any better?

    Anna: What's the alternative?

    Jennifer: There are several. We could say that our theories only have instrumental value; that is, we use them as instruments to explain or predict but say that their successes prove nothing whatsoever about what exists or doesn't.

    Steven: Of course. (He nods.) Anything else is the business of head-in-the-clouds types like... (He trails off.)

    Trystyn: (Grinning...) Like...?

    Steven: (Ignoring him...) What else?

    Jennifer: We could say that the point of our theories is to enable us to model our world and that the truth or otherwise of them is beside the point; in fact, it might even be meaningless.

    Anna: That seems to me like an excess of skepticism.

    Trystyn: How so?

    Anna: Just because we can't be sure of our ideas, it doesn't mean talking about them being right or wrong is meaningless or that we should give up trying to find the model that most closely fits reality. (Jennifer smiles at Trystyn.)

    Steven: I think I can see the point here. Many scientists have a basic idea that they're trying to find out "the way it really is", while others don't think that makes any sense and just want successful models, along with the other positions you said there are. In either case, what we can find is limited or defined by the philosophical ideas we start with. Separating philosophy and science doesn't make much sense, I guess.

    Anna: Bravo. (She smiles.)

    Trystyn: My turn to buy, I think.

    Steven: (Deep in thought...) I'm going to ask around my colleagues and see what they make of this.

    Jennifer: You could be a Bohr instead of a Feynman.

    Trystyn: I've always found him an interesting fellow.

    Steven: (Motioning towards the bar with a tilt of his head...) I'm thirsty...

    Anna: (To Trystyn) Come on—I'll go with you. (They leave, conspiratorially.)

    Steven: Tell me how you got into philosophy, Jennifer. Start at the beginning.

    Curtain. Fin.
    Teaser Paragraph: Publish Date: 06/05/2004 Article Image:

    By Paul Newall (2004)

    In this section of our series we'll consider epistemology, starting with an outline of what we mean by the term and moving on to look at some of the ideas proposed to answer the questions that arise in this area of philosophy. There has been much recent work in the subject so we'll have a lot to cover.

    What is epistemology?

    The word itself derives from the Greek epistéme ("knowledge" or "science") and logos ("speech" or "discourse"). In English, then, we render it as the theory of knowledge, concerned with the nature, sources and scope of knowledge. Interestingly enough (or not, as the case may be), in both French and Italian épistémologie and epistemologia respectively actually mean the philosophy of science, but we'll concern ourselves here with the general subject of Erkenntnistheorie and move on to scientific epistemology later; not so far down the line, though, as to disappoint those readers looking to jump on your humble narrator if he should mention Feyerabend.

    Epistemology, then, is concerned with the following:


    The nature of knowledge: what is knowledge? What do we mean when we say that we know something?
    The sources of knowledge: where do we get knowledge from? How do we know if it's reliable? When are we justified in saying we know something?
    The scope of knowledge: what are the limits of knowledge? Are there any in the first place?

    We'll look at each of these in turn and try to get a grip on them via the efforts of philosophers of the past and present. First, though, we'll take a glance at some historical developments that led to these problems in the first place.

    The beginnings of epistemology

    The early Greeks were studying and trying to determine the properties of reality. They were initially, generally speaking, quite sure of themselves and pronounced in one way or another on its nature. Heraclitus, for example, thought that change was constant; Parmenides, on the other hand, held that reality could not change. His argument ran that if Being were to change, it could only become not-Being, which is absurd. Such speculations involved a certainty as to what could or couldn't be said about reality that led others to question whether we can be so sure of ourselves as to make such declarations, leading to the birth of skepticism with the Sophists. However, Socrates, Plato and Aristotle still thought that the mind could reach truth and certitude, but they considered instead the conditions under which we can say something is valid. Plato thought that knowledge was the attempt to get at universal ideas or forms, and, while Aristotle placed more emphasis on logical and empirical ways of so doing, he still accepted the basic idea that universal principles were to be sought.

    It was recognised early that our senses are sometimes unreliable: for example, most of us have experienced the strange phenomenon of seeing someone far away who looks just like a friend but upon closer inspection proves to be a stranger. Some thinkers nevertheless supposed that we could arrive at knowledge by using our intellect to go beyond the information given by our senses and arrive at general or universal notions and ideas. The question then was what we could say about these universals; do they exist, or are they just a product of our thinking? This led to a great deal of controversy during the Middle Ages between the realists, who thought that these universals existed independently of whether anyone was around to think about them, and the nominalists who held that they did not. (Like most philosophical disputes, the matter was actually not so clear-cut.) Another question brought up by the fallibility of our senses is to wonder why we nevertheless arrive at understandings that seem to work.

    Most people have heard of Descartes and his famous cogito, ergo sum. His method was to try to doubt everything and see what remained standing, believing this would lead to ideas that were certain. The empiricists thought that knowledge could only be gained from experience; the rationalists considered instead that reason could reveal it to us. These two give us the oft-heard phrases a posteriori, meaning "after the fact", and a priori, meaning "before the fact". Of course, it seems strange to insist that we choose one or the other.

    Partly in response to Hume (whom we'll consider later), Kant sought to provide a synthesis of these two and proposed the following categories for what we could find in the world: phenomena, or things as they appear to us; and noumena, or things as they are in themselves. He suggested that we could have knowledge of the former, but not the latter. Of course, it could be that the two are not so far apart—close enough for government work—and that this explains why our senses still give us useful information. With Kant we reach some of the modern problems that are still vexing epistemologists today, but before moving on to the specific areas we'll lastly consider the pragmatism of recent years.

    Pragmatism in this context may be explained as considering something knowledge if it is useful to some end. The attendant questions of whether an idea is true or gets at reality are sometimes considered meaningless or at least not important; what matters is that a given model helps us to solve problems. Does it matter where we get our knowledge from if it works? This trend has led to some interesting epistemological avenues of investigation that we'll consider later on.

    The nature of knowledge

    The first thing we need to consider is the question of what knowledge is. A popular classical account held that knowledge is justified true belief (JTB): we say we know x if, and only if,


    x is true (1);
    We believe that x (2);
    We are justified in believing that x (3).

    Let's take a specific example: suppose we want to say that we know it's snowing outside. According to this account of knowledge, to know that it's snowing we must have:


    It's true that it's snowing outside (1);
    We believe that it's snowing (2);
    We're justified in believing that it's snowing (3).

    In the first case, it must actually be snowing; in the second, we need only believe it; and in the third, we could be justified in so believing because we can see the snow falling and children are praying (religious or not) that school will be cancelled. This seems straightforward enough.

    Unfortunately a paper by Edmund Gettier pulled the rug out from under this conception. He suggested that we could infer a true circumstance from a false one and then find (2) and (3) satisfied. In that case, we would've made our way to knowledge from a false claim, making the road to this truth something of a fluke. This would be a strange kind of knowledge.

    Looking at our example, then, it could be that we say "the thermostat went up, so it must be snowing". In fact, it may have been that the temperature didn't change and we in fact imagined it; then we would've arrived at justified true belief that couldn't be considered knowledge. The so-called "Gettier Problem" frustrated attempts to counter it and many epistemologists gave up on this account of knowledge.
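
    As a rough way of making the structure explicit, the toy sketch below encodes the three conditions as booleans and sets up a Gettier-style case in which all three hold, yet we would hesitate to call the result knowledge. The class and field names are invented purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class BeliefState:
    """A belief about some proposition x, with the three JTB ingredients."""
    proposition: str
    is_true: bool    # condition (1): x is true
    believed: bool   # condition (2): we believe that x
    justified: bool  # condition (3): we are justified in believing that x

def counts_as_jtb(belief: BeliefState) -> bool:
    return belief.is_true and belief.believed and belief.justified

# Gettier-style case: the justification rests on a false premise (the imagined
# thermostat reading), yet it really is snowing, so all three conditions hold.
snow = BeliefState(
    proposition="it is snowing outside",
    is_true=True,    # it really is snowing
    believed=True,   # we believe it
    justified=True,  # we reasoned (badly) from the thermostat
)

print(counts_as_jtb(snow))  # True, yet intuitively this is not knowledge
```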

    Sources of knowledge

    Let us consider, then, some other ways of arriving at knowledge. We could use our senses, our memory, testimony or reasoning. However, any or all of these could be flawed, so there are several theories for how we use them to get at knowledge.

    Reliabilism

    One alternative to the JTB account is reliabilism, in which we say that we are justified in believing something if, and only if, we arrived at it via a reliable cognitive process. For example, in the past our senses have been very reliable (barring the few exceptions we touched on before), so we may be justified in saying "I know there's a computer in front of me". The difference between this account and JTB is that condition (3) is replaced by the requirement that the belief be produced by a reliable process. Unfortunately the Gettier problem is a difficulty here too and there is the issue of what makes a process reliable in the first place.

    Foundationalism

    One way of explaining where we get our knowledge from is to start with certain ideas or statements as our foundations and build on them, which is called foundationalism. This is much the same way as we can proceed in mathematics—starting with some axioms and building up a system based on what we can get from them. An obvious question to ask, of course, is where we get our foundations from in the first place: are they justified? Some thinkers propose that the choice of such foundational concepts doesn't need to be explained because they're so obvious; perhaps denying them makes no sense at all, for instance, like questioning if the universe really exists. In more recent times some philosophers have called such fundamental assumptions properly basic, meaning that they require no justification at all and can be held by any reasonable person without argument. The trouble then, though, is debating which ideas can be so called.

    Coherentism

    Some people don't think there are any foundational beliefs; instead, they think that our ideas hang together (or cohere, hence the name) and support each other, like bricks in a building. In that case, for example, we might reject the claim "Hugo is interesting" because it doesn't fit with all we already know about how dull he is; on the other hand, if someone said "Hugo is as mad as a clown's trousers" we might be inclined to accept it because we already have a long list of instances that have given an inkling of that theory.

    The difficulties with coherentism are several: firstly, how do we know which ideas a new one has to agree with? Secondly, how do we tell the difference between a true idea and a false one, given that the latter may still agree with a lot of what we already think we know? Lastly, it seems that some of our thinking is more certain than the rest and therefore has greater importance, like the foundations in the brick building; can we account for this feeling?

    Empiricism

    As we noted above, empiricism proposes that our knowledge comes from experience. One of the attractions of this idea is that we might be able to avoid false superstitions with no basis in the world, but there are problems all the same. Many people—not just philosophers—have wondered how logic, mathematics or even ethics could be based on experience: isn't twice two four whether our earth exists or not? If so, how can we account for these (and other) areas from experience alone? On the other hand, we do seem to take a lot of our ideas from our experiences in the first place, and when we come on to consider scientific epistemology later we'll see that empiricism has a large part to play, along with some further (theoretical) difficulties.

    Rationalism

    On the other side of the coin, we have the notion that much of our knowledge comes from reason, or the act of reasoning. In particular, we might be looking for knowledge that must hold, irrespective of circumstances, like mathematics or logic (again). The overuse of reasoning, though, can lead to being accused of piloting an armchair; no matter how well you can fly it, you won't leave the lounge and see what the world outside can tell you. It makes sense to think that empiricism and rationalism can tell us a good deal together.

    Naturalism

    In epistemology, naturalism refers to the idea that knowledge can be studied as a science and involves a relationship between sense inputs and cognitive (or mental) outputs. In that case, psychology, sociology and biology can tell us a lot about how we come to our beliefs and further investigation may show how our experiences influence what we end up thinking. We can also apply evolutionary ideas to our questions above.

    There are more possibilities to consider here, but they make sense best in a specifically scientific context. To that end, we'll save them for the later discussion.

    The scope of knowledge

    Perhaps the many and varied difficulties we have seen so far suggest that no knowledge at all is possible, or at the very least that gaining it is a tricky business. Can we know anything for sure?

    Fallibilism

    Some thinkers have suggested that no matter how hard we try or how successful our efforts, there always remains the possibility of error. Even when we feel absolutely sure of something, we could still be wrong. Fallibilism is the idea that all knowledge is provisional and may have to be revised at any moment. Essentially, then, it involves the notion that perfect certainty is impossible.

    Skepticism

    Being traced back to the Greeks (again), skepticism has a long history. We've seen that the conditions required for knowledge are strict and perhaps they may never be satisfied. Some skeptics take this "alas, it doesn't seem that it can be done" attitude, while others are quite sure that knowledge is impossible. (Another usage of the term refers to withholding judgement.) Generally speaking, fallibilism can lead to skepticism.

    An argument heard often against skepticism is given by the question "how do you know that nothing can be known?" The implication here is that the skeptic contradicts him- or herself, knowingly (pun intended) or otherwise. In fact, the skeptic can get around this using an interesting idea from Bertrand Russell, wherein the claim "nothing can be known" is rewritten as, for example, "there is no x such that x matches the description of knowledge". This means that the skeptical challenge is a powerful one but the impossibility of certain knowledge may only mean that we have to be satisfied with what we can get.

    As noted, we'll move on to study some of these notions in more depth when we consider scientific epistemology.

    Some common topics

    We'll finish this discussion by looking at a few subjects that come up frequently and investigate them a little.

    The tree in the woods

    Many of us have heard the question "if a tree falls in the woods and no-one is there, does it make a sound?" (Indeed, Bart Simpson employed it to his benefit while playing golf.) The conundrum here is based on the idea of verificationism: a claim is justified only if it can be verified in some way. In the case of our tree, then, we seem to know from past experience that falling trees make a noise; however, if there's no-one there to hear it, the claim can't be verified—hence the confusion. In fact, if we leave a tape recorder behind we can soon answer the question.

    The problem can be extended by disallowing any such monitoring device but this shows—quite clearly—that the epistemological ideas we adopt influence what we can say.

    Induction

    In his writings, Hume (amongst other things) expressed what is now called the problem of induction: we ask whether a finite number of particular cases can ever justify a general conclusion. For example, suppose that we visit a farming district and see very many sheep that are all white. Can we then assert from the hundreds of sheep we saw that all sheep are white? It turns out that we would be wrong to do so, or else there'd be no nursery rhyme. How many white sheep would we need to see before we can justify saying that all sheep are white, though? To look at this further, let's lay out the information we have in a logical form using what we learned in the last article:


    Premise 1: The first sheep was white.
    P2: The second sheep was white.
    ...
    P501: The five hundred and first sheep was white. (And so on.)
    Conclusion: All sheep are white.

    The problem is that none of the premises contain the conclusion and all of them are moreover the same in form, so we're relying on a kind of brute force of numbers. Suppose we saw another thousand sheep, all of which were also white. Are we justified then? Again, apparently we aren't, because in fact some sheep aren't white. Is there any way around this difficulty?

    One thing we can do is recast the problem: instead of asking when we're justified in saying that all sheep are white, we can wonder instead when it would be reasonable to assume it. In this light, the matter takes on a much different hue. If we see but one white sheep, it seems unreasonable to insist that all sheep are white. However, once we've seen several hundred of the walking sweaters we may be reasonable in supposing that they all are. The subsequent finding that some sheep are black doesn't change the fact that it was reasonable, on the evidence we had, to suppose them all to be white.

    In recent times, Goodman has posed what is called the "new riddle of induction". Rather than using his example, let's stay with the sheep and reconsider what we have. Our observations of hundreds of white sheep have led us to propose the theory that all sheep are white. Suppose now that we offer another theory: all sheep until some time T in the future will be white, but the next one will be purple. The evidence we have to date supports our theory, but it also fits this new (but silly-sounding) theory too. How do we decide which is the more reasonable, given that both are equally well grounded? Moreover, we could create plenty of other theories of the same form, involving sheep of all the colours of the rainbow and more besides. We can't say that we have to go with the white theory simply because we've only seen white sheep so far, since that assumes what is to be proven, namely that the sheep we have yet to see will also be white. This interesting and perplexing problem is notoriously difficult to escape. In scientific terms it's called the underdetermination of theories and we'll see it again when we come on to scientific epistemology and the philosophy of science; a small sketch of the idea follows below. Look for these topics next. Later in our series we'll return to epistemology again for a further discussion.
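
    To make the point concrete, here is a minimal Python sketch; the sample size, the colours and the cut-off are invented for illustration, and the only claim it makes is the structural one above, namely that two rival generalisations can fit the same finite evidence equally well.


    # A toy illustration of underdetermination: two rival hypotheses that
    # agree on every observation gathered so far. The data and the cut-off
    # are invented purely for illustration.
    observations = ["white"] * 500  # the sheep we have actually seen

    def all_white(index, colour):
        # Hypothesis 1: every sheep, whenever observed, is white.
        return colour == "white"

    def white_then_purple(index, colour, cutoff=1000):
        # Hypothesis 2: sheep are white up to the cutoff, purple afterwards.
        return colour == ("white" if index < cutoff else "purple")

    def consistent(hypothesis, data):
        # A hypothesis fits the evidence if it agrees with every observation so far.
        return all(hypothesis(i, colour) for i, colour in enumerate(data))

    print(consistent(all_white, observations))          # True
    print(consistent(white_then_purple, observations))  # True
    # Both hypotheses fit the same 500 observations, yet they disagree about
    # every sheep we have not yet seen; the evidence alone cannot decide.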

    Dialogue the Third

    The Scene: Our intrepid philosophical adventurers have moved on to a public house of ill repute called The Drunken Bishop, having acquired its moniker following an incident involving an atheist in fancy dress who, unable to hold his liquor, passed out while the owner was discussing a change of image for the establishment. A hastily painted picture of the unfortunate fellow adorns the sign outside. Our friends are, as if by coincidence, discussing Hume.

    Steven: So tell me about this so-called problem that we're all supposed to be worried about. I read about it in one of these "science is just another tool" polemics. (He is drinking a real ale.)

    Anna: What did it say?

    Steven: The gist of it, my dear, is that those who don't think we can justify a claim that's been found to hold time and time again are not so full of themselves as to test it by throwing themselves out of a window instead of using the lift.

    (Anna looks to Trystyn, who is himself counting the number of occasions that the phrase "you know?" comes up in the conversation of the couple on the next table.)

    Trystyn: If I had a pound for every time I heard this you'd still be buying the drinks but I'd have a warmer coat.

    Steven: If you want another you can tell me where I'm missing the point. Why are philosophers so poor anyway? Your talents are under-appreciated, no doubt.

    Anna: Not by her, apparently. (She nods in the direction of a girl who is trying to get Trystyn's attention.)

    Steven: (Smartening himself up...) Be quick, Socrates, before I make my move.

    Trystyn: Okay, Don Juan. The general point we want to make is that on previous experimenting we've found that throwing something out of a window has resulted in it falling to the ground. If it was a grand piano, we of course had comedy potential below, especially if we had an anvil to follow it. Even more generally, we posit the existence of a force—gravity—that explains why it should happen that way. In particular, we noted that unfortunate or deluded pseudo-supermen who opted to fly downstairs also failed to fight crime another day.

    (He pauses to drink some of the red wine he is sharing with Anna.)

    We want to conclude from this finite number of specific cases that throwing ourselves out of a window is not such a good idea, the more so if we have business to attend to at ground level.

    Steven: Where's the problem, then? (He is looking at the girl, who is still looking at Trystyn, who is looking at his wine, while Anna alternates between glaring at him and glaring at the girl.) The moral of the story is: don't throw yourself out the damned window.

    Anna: I think the point is that no number of cases is enough to justify the inference...

    Steven: So if I launch myself from my cousin's slide head-first into the garden each day for a year and twice on Sundays, you'll say I can't justify my cousin supposing I'm an idiot?

    Trystyn: (Smiling) You must know I'll disagree with you whatever you say, but at least this time we have an interesting point to draw out.

    Steven: ... that my four-year old cousin is smarter than all of us, you mean?

    Trystyn: You may be right. Okay—here we go:

    (He clears his throat in dramatic fashion. Anna is glaring at the girl now and the latter has caught on.) Try to imagine a situation in which we throw ourselves out of the window and in fact don't fall; perhaps we fly all the way to Krypton instead. Ask yourself if there's anything that smacks of the impossible about the idea of it. You can't really refer to gravity making the suggestion ridiculous because we could ask the same thing: what, before the fact, is so impossible about the idea that gravity might not apply at some particular time?

    The obvious response, of course, is to say "well, it always has; why should today be any different?" Even so, that hasn't answered the point at all: how are we to go from the fact that everyone to don the red cape so far has failed to the assertion that everyone must do so, given that the idea doesn't seem impossible to conceive?

    Now take the fellow not bothered by such matters who challenges the philosopher to jump. In the first place, the philosopher who takes the bet is probably doing the gene pool a favour, but suppose that he does indeed dent the pavement—what then? What has the other fellow shown? That the trainee Reeve has to fall? No, since we already noted that the divers of the past weren't enough for that; all he has is one more case to add to the stockpile that wasn't enough to begin with, regardless of how many unfortunate people litter the street below. So by demanding that the consistent philosopher jumps if he doubts the inductive arguments that Hume wrote of, he in fact shows only two things: one, that he didn't understand the problem at all; and two, that there are some dumb and flightless philosophers in our world.

    Steven: I see your point, Lex, but what kind of idiot really worries about this kind of thing? Aside from philosopher superheroes, I mean. (Anna is struggling to pay attention now.)

    Trystyn: That's a good point, and one that's been made before. Sure, we can't justify an assertion about always falling, but who's going to bet against it? It seems unreasonable to assume otherwise, so Hume's problem is not something to lose sleep over.

    Steven: Right. (He begins to focus his attention on the girl again.)

    Trystyn: Not so fast, Errol. Suppose I offer a theory that everyone will fall until tomorrow at noon, when the bastard son of Superman flies off into the sunset. You, on the contrary, say everyone will fall. Which is the more reasonable theory? Unfortunately for both of us, all the facts collected to date about plummeting people support both theories equally well. Yours may seem more reasonable but the evidence collected applies to both our ideas just the same. That, my friend, is a far more difficult problem, but not as important as pretty girls, I'll wager. (He looks at Anna but she doesn't notice.)

    Anna: If you'll excuse me for a moment, I must have a word with her... (She leaves in the direction of the girl.)

    Steven: I think she likes you, you know. (He winks.)

    Trystyn: I think I'm thirsty, Cilla.

    Curtain. Fin.

    By Paul Newall (2004)

    Logic comes up often in philosophical discourse and even more frequently in informal discussions, typically involving a claim that "logic says x". Logic, however, is the study of reasoning and so (as we’ll see) doesn’t really say anything in and of itself. In another sense, a logic (as opposed to logic in general) is a set of rules governing such reasoning. (This is why we have words like biology, for instance, and others with similar endings.) Both understandings have an historical importance to philosophy that continues today.

    Making an Argument

    Suppose we take an example of an argument:



    Of course Hugo can fly - after all, he has wings.

    Although this may read as an offhand remark, we can cut it into pieces to expose the underlying argument. Set out in stages, it would be:



    1. Hugo has wings.
    2. Therefore, Hugo can fly.

    There is something missing, though: another line making explicit how we get from 1 to 2. Thus:



    1. Hugo has wings.
    2. Critters with wings can fly.
    3. Therefore, Hugo can fly.

    This is the form of the argument. Notice that the conclusion in 3 is derived, as it were, from the information in 1 and 2. We call these the premises, being the statements from which we obtain the conclusion. We can therefore rewrite our argument in a standard form (one which we’ll use later in our series):



    P1: Hugo has wings;
    P2: Critters with wings can fly;
    C: Therefore, Hugo can fly.

    The important thing to notice about this argument is the obvious one: it is ridiculous. We know that not every critter with wings can fly and we would presumably be doubtful that Hugo has a working pair at all, unless he is on his way to a fancy dress party or some other occasion for flapping. It fails to convince us, but nevertheless there is something about its structure that seems harder to dismiss.

    To take account of this, we say that an argument is valid if the conclusion follows from the premises. An airborne Hugo does follow from P1 and P2 (regardless of whether we accept them or not), so our example is valid. To draw attention to the other aspect, however, we say that an argument is sound if it is valid and its premises are true. If not, we say it is unsound. In a subsequent entry in our series we’ll consider truth, but for now we can just say that P1 is false (Hugo does not have (functioning) wings) and so is P2 (not all winged critters can fly), so our argument is valid but unsound.

    Bad arguments

    Later in our series we’ll look in depth at those arguments known as fallacies. These are reasoning gone wrong somewhere (although not always, as we’ll see), where the premises and conclusion(s) may seem plausible but where one or more mistakes have crept in (deliberately or otherwise). These errors are common enough that we can study them systematically.

    Generalising

    If we look again at the argument above, we can note that it would stay almost the same if we spoke of someone else. Likewise, it would work for any pair like "wings = flight". We can generalise it completely by removing most of the content as follows:



    P1: A has property B;
    P2: Anything with property B has property C;
    C: Therefore, A has property C.

    Here "A" stands for "Hugo", "B" is "wings" and "C" is "flight". Going a step further, we could have:



    P1: B;
    P2: B implies C;
    C: Therefore C.

    This time "B" is shorthand for the subject of the argument (Hugo) having the property B. This increasing level of generalisation is perhaps something that scares people off logic, but instead we just need to teach ourselves to read in a different way (as discussed previously).
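
    For readers who like to see the machinery turn, here is a small Python sketch, offered purely as an illustration: it checks the generalised form by brute force, running through every assignment of true and false to B and C and confirming that no assignment makes both premises true while the conclusion is false, which is exactly what validity requires. Soundness is another matter; the check says nothing about whether Hugo really has wings.


    from itertools import product

    # Brute-force validity check for the generalised argument:
    #   P1: B;  P2: B implies C;  C: therefore C.
    # An argument is valid if no assignment of truth values makes every
    # premise true while leaving the conclusion false.

    def implies(p, q):
        return (not p) or q

    def valid(premises, conclusion, variables):
        for values in product([True, False], repeat=len(variables)):
            assignment = dict(zip(variables, values))
            if all(premise(assignment) for premise in premises) and not conclusion(assignment):
                return False  # a counterexample: premises true, conclusion false
        return True

    premises = [
        lambda a: a["B"],                   # P1: B
        lambda a: implies(a["B"], a["C"]),  # P2: B implies C
    ]
    conclusion = lambda a: a["C"]           # C: therefore C

    print(valid(premises, conclusion, ("B", "C")))  # True: the form is valid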

    There is some other terminology to learn for use in philosophy, which occurs in our argument and was alluded to above. Take P1:


    Hugo has wings.

    Here "Hugo" is the subject of this statement; that is, the statement tells us something about Hugo. The other information provides the detail about him: the remark "has wings" (or "is a winged creature"). This property of Hugo (being a winged creature) is called the predicate. We term the combination of the two a proposition. We can take P2 as another example:


    Critters with wings can fly.

    This is a proposition with "critters with wings" as its subject and "flight" as its predicate. Similar arguments can be split up in the same way.

    Quantifiers

    In modern times, the study of logic changed significantly with the invention by Gottlob Frege (whom we’ll cover later) of the logical quantifiers. There are two: the existential quantifier, ∃, meaning "there exists"; and the universal quantifier, ∀, meaning "for all". These are also employed extensively by mathematicians and, as before, are merely new terminology to simplify and, more importantly, clarify propositions.

    By way of an example, take a proposition like the following:


    All men are mortal.
    If we let M stand for the property of being mortal, we can write this as


    (∀x) Mx

    Translated into so-called "everyday speech", this says that for all x (the "∀x" part) we can say x is mortal (the "Mx" part). Here x is used in the same way as in algebra, to stand in for something (in this case, men). If we wanted to instead say that there exists at least one man who is mortal, we would write


    (∃x) Mx
    The combination of these two quantifiers can clear up uncertainty in how we are supposed to read a proposition. Suppose we now take the following example:


    Every wedded woman is married to a man.

    We understand this as saying that being married for a woman means having a man as her husband (or a piece of paper noting as much, some suggest), but are we thinking of a particular man for a particular woman, as we would expect, or one man for all the women? After all, the latter sense would work for our proposition: one specific man (luckily or unluckily, depending perhaps on his perspective) could be married to all the women we call wedded. In this event, the proposition would still work. Which is it?

    If we write the two possibilities using the quantifiers, the difference becomes plain. Let x be a woman, y a man, and M now the property of being married. In the first instance, we would have


    (∀x)(∃y) Mxy
    This says that for each individual woman there exists a man such that x is married to y. On the other hand:


    (∃y)(∀x) Mxy

    The order is different here, so now it says that there exists a man for all women such that x is married to y. By altering the ordering in our use of quantifiers we have changed the meaning, so we can employ either to explain which possibility we intend to speak about.

    Depending on how deeply we wanted to delve, we could pick out more shades of meaning and hence use additional letters and quantifiers to make the proposition unambiguous. Nevertheless, the principle remains the same: we use these logical symbols to clarify, not to confuse and, like any other language, they take time to understand.
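
    Over a small finite domain the difference in quantifier order can be spelled out mechanically. The following Python sketch is purely illustrative (the names and the list of marriages are made up): it translates the two formulas into nested all/any checks and shows that one can hold while the other fails.


    # Quantifier order over a small, invented domain. "married" records
    # which (woman, man) pairs count as married for the example.

    women = ["Alice", "Beth", "Cara"]
    men = ["Dai", "Emrys"]
    married = {("Alice", "Dai"), ("Beth", "Emrys"), ("Cara", "Dai")}

    def M(x, y):
        return (x, y) in married

    # (∀x)(∃y) Mxy: every woman is married to some man (possibly different men).
    forall_exists = all(any(M(x, y) for y in men) for x in women)

    # (∃y)(∀x) Mxy: there is a single man married to every woman.
    exists_forall = any(all(M(x, y) for x in women) for y in men)

    print(forall_exists)  # True: each woman has a husband
    print(exists_forall)  # False: no one man is married to all three women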

    Deduction and Induction

    Another point to make about our argument is that it is deductive; that is, from the information we are given (the premises) we deduce the conclusion - after the fashion of Sherlock Holmes, almost. The alternative to a deductive argument is an inductive one, leading to the famous problem of induction (which we’ll cover soon). An inductive argument is often called ampliative because it goes beyond (or amplifies) the information contained in the premises. Consider the following example:



    P1: Hugo has wings;
    P2: Every critter with wings we have seen can fly;
    C: Therefore, Hugo can fly.

    This replaces our old P2 with a new version, according to which every instance thus far of a creature with wings was also a creature that can fly. Notice, however, that the conclusion this time does not follow from the premises unless we also assume something else:



    P3: We have seen all the critters there are.

    This additional premise rescues the situation because P2 and P3 together are the equivalent of our old P2. Nevertheless, the argument in the form we had does not say that we have examined all the creatures there are and thus does not justify the conclusion as it stands. We have either to assume something extra (P3) or to make an inductive step from the particular facts we have about critters to a general statement (the conclusion). We'll return to the problems for reasoning that induction appears to lead to later in our series.

    Unpacking Arguments

    It’s rare to find an argument as explicit as the one we’ve been looking at. Typically we have to unpack the informal form we find it in, drawing out the premises and conclusion(s). In this section we’ll take an example and see how the process unfolds.

    Suppose someone said the following:



    It’s my business what I do in my own home, not the government’s.

    We can try to take this claim apart to see how we might argue for or against it. To begin with, the implication seems to be that the government has no right to create legislation which impacts on our private lives. Thus:



    P1: The public and private spheres of life are separate;
    P2: What goes on in our homes is private;
    C: Therefore, the government has no right to create legislation affecting what goes on in our homes in private.

    This gets at what was said but not in enough detail. The argument as it stands is invalid, because the conclusion does not follow from the premises; after all, it goes from a statement about the separation of public and private (which we could assume to be accurate for now) to an assertion regarding the lack of a right. We could add another premise to account for this:



    P1: The public and private spheres of life are separate;
    P2: The government may only legislate on public areas of life;
    P3: What goes on in our homes is private;
    C: Therefore, the government has no right to create legislation affecting what goes on in our homes in private.

    The argument now reads more plausibly but there remain some difficulties. We still go from a statement of what the government can or cannot do to the assertion that it lacks a right, so it seems our initial formulation was flawed here. We could alter the argument without it losing any force by changing the conclusion to "the government may not...", which leaves the sense unaltered but avoids the problematic issue of rights (something we consider later in our series). The resulting argument appears to be valid: P1 ensures that there is no overlap between public and private, which might have allowed the government some leeway; P2 tells us what the government may do; and P3 states that the home lies within the private sphere. It follows, then, that the government may not act upon the private sphere.

    Now that we have a valid argument, we need to ask if it is sound. As we learned above, that means asking if the premises are true or not. Here we run aground very quickly. To start with P1, that there is no overlap between public and private is not clear at all and it is easy to think of situations that would fall into both (for example, if we were to assault someone in the privacy of our own home it would still be assault, for which public legislation exists; or if we smoke at home, should we be permitted to harm others with the smoke?). For P2, where and when the government may legislate is a function of the type of government we have. In particular, the prohibition from law-making in a specific area would have come from a government in the first place, so it seems reasonable to suppose that this could be changed or lifted. Lastly, we might agree that some or most of what occurs in our homes is private without conceding that every possible circumstance would fall under that rubric.

    Our argument is thus valid but unsound. We would need to expand on it considerably to provide sub-arguments for our premises, which is beyond the scope of this introduction. Even so, we can see that this more formal approach has the effect of breaking a claim into smaller pieces, rendering it easier to investigate and evaluate. As before, the purpose of so doing is to clarify what has been said, not to confuse or complicate.

    Laws of Logic

    We rarely have to travel far in philosophical territory to hear talk of the "laws of logic", often accompanied by a suggestion that rejecting them is tantamount to insanity. They are not really so scary, however, nor inescapable. Traditionally there were three:


    The Law of Identity
    The Law of the Excluded Middle
    The Law of Non-contradiction

    The first says "A if and only if A"; or "Hugo has wings" if and only if "Hugo has wings". The second gives us "either A or not-A" (i.e. the negation of A), which means that either A holds or it does not, these being the only options. A proposition like "Hugo has wings" would therefore have to be either true or false, rather than an alternative (such as "undecided"). The last states "not (A and not-A)", which is just to say we cannot have A and not-A at the same time. In our example, that would mean "Hugo has wings" and "Hugo does not have wings" cannot both hold simultaneously.

    These so-called laws have been challenged in relatively recent times. Intuitionists rejected the Law of the Excluded Middle (the most famous being Luitzen Brouwer, a Dutch mathematician who rewrote a host of mathematical proofs to remove reliance on it). A simpler example is the three-valued logic of von Neumann which allows for another possibility in addition to "true" or "false" when discussing a proposition: undecided (or undecidable). This is helpful when considering a proposition like "there is no A in the universe", for some A. We could easily show this to be false if we find a single instance of A somewhere, but to prove it true would require checking the entire universe at the same time (after all, we could look in one place and move on, only for an A to appear). It would seem to make sense to call a proposition like this undecidable.
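
    To get a rough feel for how a third truth value changes things, here is a small Python sketch. The particular truth tables are an illustrative assumption (the commonly used "strong" three-valued tables) rather than a reconstruction of any one historical system; with a value U for "undecided", the formula "A or not-A" is no longer guaranteed to come out true.


    # A toy three-valued logic with values T (true), U (undecided) and
    # F (false), encoded as 1.0, 0.5 and 0.0. The truth tables below are
    # the common "strong" ones, chosen purely for illustration.

    T, U, F = 1.0, 0.5, 0.0

    def NOT(a):
        return 1.0 - a

    def OR(a, b):
        return max(a, b)

    def AND(a, b):
        return min(a, b)

    for a in (T, U, F):
        print(a, OR(a, NOT(a)), AND(a, NOT(a)))  # A, "A or not-A", "A and not-A"

    # With A = U, "A or not-A" comes out U rather than T, so the law of the
    # excluded middle is no longer guaranteed; similarly "A and not-A" is U
    # rather than F, so non-contradiction needs care as well.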

    The law of non-contradiction is discarded by dialethic logic (from di and alethic, meaning "two truths"), or dialetheism. The idea that we should challenge the convention that a proposition is either true or false (but not both) has a long history in philosophy. A more recent motivation came from sentences like "this sentence is false". We'll come back to this later in our series but we can examine it for a moment. If it is true that "this sentence is false", then it must be false; and, likewise, if it is false that "this sentence is false" then it must instead be true. This is the so-called liar paradox. Although we'll return to some alternative analyses of it, one way to avoid the paradox is to accept that some propositions can be both true and false, violating the law of non-contradiction.

    Lest it be thought that this example may indeed be a difficulty for philosophers but dialetheism does not really impact on the rest of us, consider a point on a doorframe. Is the point inside or outside the room adjacent? Since the door is the boundary between inside and out, we could meaningfully insist that the point is both. A proposition like "the point is outside the room" could then be both true and false, and this would apply similarly to a person stood in the doorway. Situations like this one involving boundaries are studied by dialetheists today.

    There are many other possibilities and logics, with some people arguing that all the laws above should be rejected or else others added. As a result of these and other factors, there is no longer much insistence within philosophy on the laws of logic.

    Logic and Philosophy

    The value of logic in philosophy (or philosophical logic, to be more accurate) is thus quite plain: we can use it to take arguments apart and study their structure in more detail than a cursory glance would otherwise allow. We can also take a more formal approach to reasoning, which will be of great benefit to us as we proceed.

    By Paul Newall (2004)

    In the first instalment of our series we described metaphysics as the study of reality. In this article, we'll expand on these remarks and consider some of the ideas offered by philosophers in the past and today. Before then, however, we'll look at the origin of the term and what it means in more detail.

    Metaphysics takes its name from the work of Aristotle, the famous Greek philosopher, and literally means "after the physics"; legend has it that the Alexandrian librarians christened the writings thus because they followed his physical treatises. Since then metaphysics has come to be split into two sub-fields:

    Ontology is the study of existence. It asks what there is, what it means to exist and what kind of things there are. These questions are important because we have all (probably) experienced occasions when we were sure something was there but found out later that it was just a dream, a trick of the eye or an over-active imagination. In short, things aren't always what they seem to be, so it makes sense to ask what reality really is.

    Cosmology is the study of the nature of the universe (or cosmos, as the name suggests). It asks questions about what is possible, such as time travel and parallel or alternate universes. These ideas are of interest because we may be able to find some reasons why a suggestion from science fiction will or won't work before we spend our time studying the practicalities of flux capacitors.

    Some historical perspectives

    Just as we saw in the first article in our series, there has been a good deal of controversy over the years with regard to metaphysics and its importance. When Aristotle talked of reality, he said that if there were nothing making up our universe beyond what we often call the natural, then natural science would be able to study it and hence be what he called the "first philosophy", or "first science". On the other hand, if there is something else, above and beyond nature, then its study must come before natural science.

    What kinds of things could we mean by this "something else"? An obvious answer is of course God, but other philosophers have proposed ideas other than theological ones. Dualists, for example, suspect that the universe may be composed of two substances: mind and matter (and hence dual). Descartes was a famous philosopher who made this claim. Monists, conversely, suggest that in fact there is only one substance making up things (and hence mono). Berkeley, for instance, insisted that there were only ideas, whereas materialists consider that everything is composed of matter in some way. In both cases, the two substances of the Dualists are reduced to one.

    Another issue to look at is the existence of things like numbers or symphonies. Is mathematics, say, invented or discovered? We sometimes want to say the latter because it seems that mathematics just had to be the way it is, but if that's the case then did it already exist as it is? We can also consider, as Popper did, the difference between the different things we might want to say exist: a symphony, for example, can only come about if it was written, stored in some form and played by an orchestra; nevertheless, it doesn't seem accurate to say that it is the score and the musicians. What can we say about the existence of such things and what can we remark about the way they are?

    Once we've thought about what exists we can move on to asking questions about it. Can we separate substance from property? In other words, can we distinguish between that something is and what we can say about it? If so, what properties does matter have? By posing this problem long before our modern scientific approach could help them, some of the ancient philosophers were able to come up with the early atomic theory; Lucretius suggested that atoms made up our universe and proceeded to wonder what this meant for further investigation and whether he could explain what was already seen using his idea. Berkeley, for his part, tried to explain how it was that some things appeared to be solidly real if they were ultimately only composed of ideas in some way.

    Other questions arose from Dualist ideas: if mind and matter are separate, how does the one influence the other? If there is really only one substance, how is it that our minds appear often to be distinct from our bodies? Perhaps there are different ways of thinking about our minds and our bodies that employ different concepts and hence lead to a confusion?

    Another concept analysed by metaphysics was causality, the idea that every event has a cause. Is it possible to justify this except to say that so far we know of no exception? Aristotle looked into the problem and it is still considered today.

    Some philosophers were hostile to metaphysics. Hume famously declared:


    If we take in our hand any volume; of divinity or school metaphysics, for instance; let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames: for it can contain nothing but sophistry and illusion.

    To be fair to him, though, we should note that he had his own ideas about how we might decide what was worth considering and what wasn't, and some of the writing around in his day was obtuse enough to make his remark quite accurate.

    The idea that only information gained from mathematics or the sciences was of any value was taken up by the group of philosophers centred in the Austrian capital and called the Vienna Circle. Some of their ideas would become known as positivism and suggested that metaphysical speculation would, along with theology and superstition, be replaced by positive scientific understanding. The rejection was especially strong in Comte and the early Carnap, who could find no use for it at all except to hold back the advance of science. Wittgenstein, too, had little time for metaphysics throughout the better part of his writings.

    Kant also argued that metaphysics could not help us, but for different reasons. He used the term to mean our attempts to study what lay beyond the natural or apparent world and suggested that even though this realm may exist, we can say nothing about what transcends our abilities.

    In recent times the rise of scientism - the idea that only science can tell us anything about the universe - has led, in some opinions, to a decline in metaphysics and to the use of the word in a negative sense; "more metaphysical clap-trap, Hugo" is a popular refrain in my locale, for example.

    Why study metaphysics?

    If metaphysics is out of fashion for many people, why should we waste our time looking into it? Well, some philosophers think that metaphysical choices or assumptions come before anything else we do. Suppose, for instance, we agree that the natural world is "all there is" and proceed to use science to find things out. If science is subsequently successful, is it because there really is only the natural world, or for some other reason? Can we justify our assumption in the first place or only later on by pointing to its success?

    Many of the interesting areas in modern philosophy that we'll come on to later in the series are very much concerned with metaphysics, like the problem of realism or the philosophy of mathematics and mind. Generally speaking metaphysics is involved in questions where many avenues of philosophy meet, such as talking about truth or cognitive abilities. Before we even ask these things, though, are we not presuming that other people exist to answer them? If so, we are using ideas about what there is and what it's like - metaphysics. Later in our series we'll return to metaphysics again for a further discussion.


    Dialogue the Second

    The Scene: Our three friends have adjourned to a local café to continue their musings. Trystyn has let the word "metaphysics" slip and Steven smells blood.

    Steven: So what's this metahooha anyway? (He sips a cappuccino.)

    Trystyn: (Indicating the coffee...) Savage.

    Anna: I thought it was to do with what there is in the world...

    Trystyn: Right. It was a part of philosophy that looked at what there is and what it might be like.

    Steven: (He splutters his drink in comical fashion...) Whoa! Hold your horses, professor—I think you just talked through your hat.

    Anna: Was?

    Trystyn: Yeah - in the old days they made classical divisions between areas of philosophy but these days there's a lot of crossing over going on and metaphysical ideas are in use or under study everywhere.

    Steven: (Wagging his finger...) If I may... I don't think we can overlook the fact that you just allowed your philosophicatoring to overstep its bounds, my dear fellow.

    Trystyn: (Bows ornately.)

    Anna: How do you mean?

    Steven: I already allowed that philosophy was different from science and had a few things going for it - not many, mind you (he winks) - but this metafoolishness is plain nonsense. Science studies what there is and what it's like, whether it be physics or anything else.

    Trystyn: I agree.

    Steven: Um... you do?

    Anna: I think he's going to say that it uses metaphysical assumptions, like anything else. I mean, don't you have to presume that the world exists, is orderly - and so on - before you can start experimenting and working with theories?

    Steven: Science works—that's all I need to know.

    Trystyn: (Nodding...) I'll have to point you out to my friends Joel and Simon. (He smiles.) Let's consider a problem that'll shed some light on this point: suppose we're trying to figure out why this sugar falls if I drop it. (He takes a spoonful of sugar and slowly pours it onto the table.) You say that a force of some kind is causing it to do so and after thinking about it some more we find that it seems like a plausible idea. We could even do some tests and find out that things happen in accordance with this.

    Now: can we say that the force we guessed at must really exist, or only that "sugar falls when you drop it" seems to work and is probably a good theory? In the first case we're saying something about what there is, but in the second we're just talking about what works so far.

    Some philosophers are trying to figure out if these two ways of thinking (and others) are really so different and what their consequences could be. No amount of science, though, can help us decide.

    Anna: So the results of science can be used to investigate metaphysical ideas?

    Trystyn: Yes. Different scientists have different views on what they assume before they start their studies.

    Steven: I must be missing something here because I don't see why I should care about all this. (He dips his finger in the sugar.)

    Trystyn: You don't have to, but I think it's an interesting question.

    Anna: If you don't care it's like you've given up trying to find out about the way things really are in favour of just looking for what will work. I can't understand why you'd do that. (She shrugs.)

    Steven: Maybe I think that what works and what's real are the same thing?

    Trystyn: Perhaps you're right, but how are you going to explain it without referring to what you assume there is and how it will influence what science can say about it?

    (There is a pause.)

    Think of it another way. Suppose I tell you to go and investigate the science behind time travel - what would you do?

    Steven: I don't believe in it, I'm afraid.

    Anna: Exactly.

    Trystyn: (He smiles.) Right. Before we even get to working out how we'll go back in time and telling ourselves to bet on a bodybuilder being elected to govern, we ask if the prospect itself makes sense. Why spend all that time (he winks) and effort on an idea if it's impossible in the first place? Instead, we think of the metaphysics: are there reasons why time travel can or can't be done, besides the practical ones? For example, we could think of all the difficult situations Marty McFly got into and ask what they would really result in before we need to go looking for a DeLorean. If it turns out that the thing just can't be done then metaphysics will have saved the scientist a lot of work.

    Steven: I guess I see your point, but don't we do that anyway?

    Trystyn: Sure, but no-one said you had to just do metaphysics. In fact, do you fancy another cappuccino?

    Steven: Hell yes.

    Trystyn and Anna: Savage.

    Curtain. Fin.