    Resources

    Our library of interviews, essays and reviews is entirely created and maintained by our members. These resources are aimed at all levels and are intended to support learning and to give an insight into the many areas our members and contributors are interested in. To offer content or suggest an interviewee, please contact us.

    A form of consequentialism that judges actions by their utility. Act utilitarianism attempted to measure the pleasure against the pain involved in an act (hence the name); in more recent times, the measure was the anticipated benefit to society or some similar concept. Rule utilitarianism considers instead whether the implementation of an action as a rule would be beneficial to society. Killing someone, for example, would be catastrophic for society if turned into a rule.

    Bachelard

    By The Heretic, in People

    French epistemologist and philosopher of science. There are two Gaston Bachelards: the severe philosopher of science who lays out the philosophy of concepts and "epistemological breaks," and the self-indulgent literary theorist who reverted to phenomenology (the very subject he already criticized in his work on science, no less).

    Gaston Bachelard I

    Bachelard's works on the philosophy of science ("Essai sur la Connaissance Approchée", "The New Scientific Spirit", "The Philosophy of No: A Philosophy of the New Scientific Mind", and "Rational Materialism") emerged from his studies of the relativistic and quantum revolutions, given his background in mathematics and physics.

    Bachelard's study of the rise of scientific objectivity emphasized discontinuity in science, anticipating some of the insights of Karl Popper and Thomas Kuhn. Science develops through a series of discontinuous changes, called epistemological breaks, which overcome epistemological obstacles: the methodological and conceptual features of common sense, or of outdated science, that block the path of inquiry. The result is a non-continuous history of science in which concepts emerge from earlier concepts through a process of correction and rectification. This sort of history traces the emergence of these concepts and reconstructs the breaks that made them possible. There is therefore no such thing as an "earlier version" of a modern concept, only a different conceptual framework that defines different objects of knowledge, which may be evaluated in the light of later developments. For example, the term 'electricity' was used within the physics of the eighteenth and nineteenth centuries, but the transforming configuration of knowledge meant that the concept of electricity also changed dramatically.

    However, despite these revolutionary discontinuities, Bachelard still believed in scientific progress. Even though each scientific framework rejects its predecessor as fundamentally flawed, the earlier framework may contain permanent achievements that are preserved as special cases within subsequent frameworks (e.g., Newton's laws of motion are limiting cases of relativity).

    Bachelard thought the majority of philosophy should be rejected, given that it depends on outmoded scientific doctrine and that modern science cannot, nor ought it to, be restricted to any single doctrine (whether idealism, realism or even positivism). The only scientific philosophy adequate to physics is, for Bachelard, a "philosophy of No", which denies allegiance to any doctrine and advocates an 'openness' that coheres with the 'open-ended' and 'unfinished' quality of scientific progress itself. The negation is necessary because the scientific attitude must be flexible and adaptable enough to revamp its entire framework of reality. There is no destruction, however, because the philosophy of no actually consolidates what it supersedes.

    Bachelard based his philosophy of science on a "non-Cartesian epistemology" (against the demand that knowledge be founded on the incorrigible intuitions of first principles), since all knowledge claims are subject to revision and open to new evidence. In particular, Bachelard rejected the naive realism that determines reality in terms of the givens of ordinary sense experience and ignores the ontological constructs of scientific concepts and instrumentation. Nor did Bachelard endorse idealism; instead, he proposed an "applied rationalism" that acknowledges the dynamic role of reason in constituting objects of knowledge, while agreeing that any constituting act of reason must be directed towards an already (antecedently) given object. Mathematics and the empirical world are complementary: math should not be seen as the mere language of physical laws, nor should it be taken as a frozen system of ideas, for it is 'committed'; and the empirical world should not be seen as merely a chaos of discrete quanta of data. The investigator does not passively discover scientific hypotheses and facts, for he creates them: his rational powers and the physical world together construct a holistic reality beyond the naively empirical one.

    Gaston Bachelard II

    Despite denying the objective reality of the perceptual or imaginative worlds, Bachelard took the time to analyze their subjective and poetic significance. Bachelard's reputation owes more to his studies of poetic language, daydream, phenomenology and their application to instances in the history of science than to his work on "anti-positivism" and "epistemological ruptures." The second Bachelard produced works ("L'Eau et les rêves" (Water and Dreams: An Essay on the Imagination of Matter), "La Terre et les rêveries du repos", "La Terre et les rêveries de la volonté") on archetypal dreams and daydreams associated with the themes of water as repose, air as movement, and earth as will power/work and rest. Bachelard proposed a "law of four elements" in which all images are related to earth, air, fire and water (Empedocles' fundamental forms of matter).

    Bachelard thought the projection of subjective values and interests onto the experience of the physical world was an impediment to knowledge. In his "Le Nouvel esprit scientifique", what Bachelard described as the "psychoanalysis of knowledge" explained how the rise of objective, quantified science required depersonalization, abstraction, emotional restraint, and taciturnity. This was not to discredit subjectivity, though: Bachelard thought highly of reverie and saw it as the source of great poetry, abject sentimentality, and imaginary physical theories. In the works on both reason and imagination, the creative role of the mind plays a crucial part. In art, "the subject projects his dream upon things," and in science, "above the subject, beyond the immediate object... is the project." He understood that the condition of scientific productivity was an affective engagement with things.

    To be precise, "psychoanalysis" in Bachelard's vocabulary does not invoke the Freudian analysis of sublimated drives but the disclosure of archetypes (Jung's studies on alchemy influenced his interpretation of early chemical theories and the practice of alchemy), and it inspired the study of reverie, or daydreaming. Although a daydream may seem beyond the dreamer's control, there is a flicker of consciousness in daydreaming that generates poetic images as the daydreamer discovers an ideal world. The poetic image yields a sense of wonder, discloses an imaginary world of delight as well as universal archetypes, and allows us to read or listen to a poem as if we were hearing words for the first time. This poetic image is an expression of the basic human characteristic of 'imagining.' According to Bachelard, daydreaming is the function of Jung's "anima" (the female principle of repose) that allows us to reach the sleeping waters that lie within us when we are deep in reverie.

    In "La Psychanalyse du feu", the study of eighteenth century experiments with fire, Bachelard showed how the phenomenology of fire as the painful/dangerous/soothing/purifying/destructive/symbol of life and passion determined scientific discourse. The other studies on air, water, earth also as the subject of scientific inquiry have been deconstituted, being dreamt by eighteenth century. The books on imagination and poetic imagery analyze the significance of archetypal images.
    By Awet Moges (2010)

    After 7 years, I was burned out by philosophy, yet I continued to haunt the philosophy section in search of anything radical and profound. Amidst the expected titles commonly found at any bookstore sat A Short History of Decay. I pulled it off the shelf in the faint hope of killing time until the cigar shop opened in 20 minutes. After a couple of hours disappeared savoring the salacious prose, I begrudgingly closed the book and hurried to the checkout counter, cackling in glee at the wonderful fortune of uncovering a new thinker who spoke blasphemous music to my eyes.

    Within a year, I had acquired the remaining books of Emil Cioran, and devoured them with extreme relish. In Cioran I had found not only a thinker after my own heart, but also a kindred spirit who experienced chronic insomnia for 7 years and poured the results of long white nights onto page after page. I myself experienced severe insomnia, where I could not tell the difference between being awake and asleep, and nothing ever felt real. When I went to sleep, my consciousness was at rest, and I began a new life the next day. But when I stayed awake all night, there was no interruption of consciousness. No new life. In the morning I was exactly the same as I had been the night before. Some of his writings cut cleanly through the flesh to the bone:



    A genius of apothegms who also doubles as a “monster of despair,” Cioran (1911 – 1995) remains the best-kept secret of intellectuals today. A self-exiled Romanian who wrote his best work in French, Cioran has carved a niche on the bookshelves as a “fanatic without convictions” with a wry wit and stylized prose that savages rationality with trenchant irony.

    First, I will highlight existentialist elements in Cioran's works to argue that he belongs in the canon of existentialism. Then I will expand on boredom and insomnia, the major concepts that pervade Cioran's books. Finally, I will juxtapose Cioran's thoughts against those of the other existentialists.



    For existentialism

    Many of the modern themes that recur in Cioran's work come from the garden of existentialism: despair, absurdity, alienation, the irrationality of existence, the need for self-awareness. His gnomic tone oscillates between Nietzsche and Schopenhauer – Cioran explodes with the lyricism of the former in his early works (Tears & Saints, On the Heights of Despair), and gravitates towards the depraved cynicism of the latter in his mature works (A Short History of Decay, Temptation to Exist, Fall in Time, Drawn & Quartered, The Trouble with Being Born). However, there remain deep differences: Cioran refrains from heroic postulations like Nietzsche's Ubermensch and Amor Fati, or Schopenhauer's metaphysical speculations and slightly hypocritical recommendation of resignation.

    The decline of system building in philosophy in the early 19th century opened the way for new forms of discourse: the ideologues and the reactionaries. Ideologues wrote anti-philosophical systems in the form of human sciences. Reactionaries, on the other hand, practiced a new kind of philosophizing that took autobiographical forms: personal, aphoristic, lyrical and anti-systematic. Cioran is the best example of this new way of writing in the 20th century.2

    I think it is legitimate to include Cioran with the other existentialists because he has carried out the premises of Existentialism Proper to their logical, if outrageous, conclusion. His early nihilistic work, On the Heights of Despair, deals with despair and lucid suffering in a way that evokes the bitter ravings of the Underground Man from Dostoevsky's Notes from the Underground. Written under the duress of suicidal insomnia, Despair embraces suffering, resignation, knowledge as sickness, and absolute subjective experience.

    In a nutshell, Cioran's early philosophy is an "absolute lyricism" in which his lucidity allows him to "discover and mercilessly expose the hollowness of all philosophical systems."3 In the opening essay of Despair, titled "On Being Lyrical", Cioran argues that one is being lyrical when "one's life beats to an essential rhythm and the experience is so intense that it synthesizes the entire meaning of one's personality. What is unique and specific in us is then realized in a form so expressive that the individual rises onto a universal plane."4 One of the earmarks of existentialism is its reduction of philosophy to biography, and lyricism is an effective prose vehicle for it.5

    Cioran's relentlessly self-conscious writing deliberately opposes civilized writing where organic fears cannot be canceled by abstract constructs. Similar to Nietzsche's distinction between the Dionysian and the Socratic person, Cioran privileges the organic, suffering thinker over the philosopher or the abstract man: “Out of the shadow of the abstract man, who thinks for the pleasure of thinking, emerges the organic man, who thinks because of a vital imbalance, and who is beyond science and art.”6 The organic thinker transforms his passions into obsessions. “I like thought which preserves a whiff of flesh and blood, and I prefer a thousand times an idea rising from sexual tension or nervous depression to an empty abstraction.”7

    As Cioran matured, his nascent skepticism ripened and, in a merciless demolition of his earlier idols, he criticized language itself. The main focus of Temptation to Exist was the complete severance between language and reality, a theme that shares much in common with Sartre's concept of nausea.8 The notion that concepts in language correspond to objects of reality is the foundation of western thought. However, Cioran instead saw language as a "sticky symbolic net," an infinitely self-referential circular regress that distanced people from reality.

    Cioran expanded this focus in Fall in Time, where language exacerbates our metaphors for experience and gets in the way of being truly alive in the moment. Split off from originality, we can no longer exploit what made us different from animals:



    Thus, language has become an ouroboros, a vicious circle that signifies nothing but itself, and this is the ultimate condition for man: "all speech hyperbole, all prose rhetorical, all poetry prosedemic, and all thought proleptic."10 It is then obvious that a stylist like Cioran is too elusive to be frozen and packaged in convenient categories, as well as too complex and precise, thanks to a decisive style that both emphasized and contradicted the ambiguity of his message. We can only highlight themes instead.



    Boredom / insomnia


    Throughout his work, Cioran never tires of boredom as a topic, which is, paradoxically, an inexhaustible source of creativity and a preliminary to his fatally incurable insomnia. Boredom is a baseline of "bare human existence" that demonstrates how we are all "embedded in time." Once we lose the comfortable illusions that shield us from the effects of experiencing the passage of time, boredom sets in and smears everything into undifferentiated blobs of drab grey. "Life is more and less than boredom, but it is in boredom and by boredom that we discern what life is worth."11 Boredom for existentialists is a fundamental mood that emphasizes the finitude of existence, and both Heidegger and Sartre claim boredom is the naked access to being.12

    If you're not in pain, or happily distracted by some goal you've given yourself, you're left alone with life at its bare minimum. This mundane existence consists of absolutely nothing interesting, and nothing to do. "Boredom will reveal two things to us: our body and the nothingness of the world."13 In order to escape this experience of nothingness, we forget that we are merely physical husks and hurry to busy ourselves in any activity. For Cioran, boredom is one extreme swing of the pendulum; once there, we are compelled in the opposite direction, desperate to find anything to paper over the emptiness. "Life is our solution to boredom. Melancholy, sadness, despair, terror, and ecstasy all grow out of boredom's thick trunk."14

    However, even if something interesting or satisfying is found, it will eventually inspire feelings of futility and meaninglessness, and boredom returns with a vengeance. “No matter what you do, the starting point is boredom, and the end is self destruction.”15

    The first aphorism of The Trouble with Being Born: "3 in the morning. I realize this second, then this one, then the next: I draw up a balance sheet for every minute. And why all this? Because I was born."16 Everything that follows is framed by that experience: we are born into time, and we only realize that, once we take a step back from our mundane activity, time is the baseline of all experience. "What should I do? Work for a social and political system, make a girl miserable? Hunt for weaknesses in philosophical systems, fight for moral and aesthetic ideals? It’s all too little."17

    There are experiences that amplify the dull echo boredom resonates through life, and insomnia is one of them. It is true that boredom is not identical to insomnia, but both are pure access to the bare flow of time. Although Cioran dealt with many other existentialist themes, such as ennui, solitude, infirmity, and suicide, I think insomnia is his muse, and the key concept of his oeuvre.

    Cioran is probably the exemplar of insomnia, its walking poster boy, having suffered from it throughout his life. He even claims not to have slept for 50 years! In an interview with Michael Jakob, Cioran claimed his insomnia was the “greatest experience” of his life, for it was his defining insignia and his intellectual crucifixion.18 The only solution Cioran found for his severe insomnia was exhausting himself with long bicycle rides through the French countryside.

    While his books are merely autobiographies masquerading as analyses of decay, they explore the very personal fact of insomnia as a “form of heroism... [that] transforms each new day into a combat lost in advance.”19 The early Cioran regarded insomnia as a noble affliction, a disease of hyper-consciousness. The later Cioran glorified it: “To save the word 'grandeur' from officialdom, we should use it only apropos of insomnia or heresy.”20

    Cioran found sleeplessness instructive, in that it helped undo all certainties. But insomnia is hardly ever pleasant. Anyone who is stricken with it tries their damnedest to find a cure. Had pure conscious existence been a good in itself, we would hardly be in a hurry to cure insomnia. If consciousness were truly pleasant, we would regard insomniacs as fortunate, or even sacred.



    In opposition to Aristotle, Cioran claims that we are the animal who cannot sleep: “Why call [man] a rational animal when other animals are equally reasonable? But there is not another animal in the entire creation that wants to sleep but yet cannot.”22 At other times, Cioran regarded insomnia as a demon:


    Other philosophers argued that boredom proved that existence is inherently miserable,24 and Cioran appropriated this argument for insomnia. Given this, insomnia can also be seen as the secret of tapping into the pure feeling of time.


    At the end of A Short History of Decay, Cioran suggests that insomnia is an induction to a secret society of thinkers:

    Not only did Cioran survive insomnia, he took advantage of his conquest by making something of it. “When you waken with a start and long to get back to sleep, you must dismiss every impulse of thought, any shadow of an idea. For it is the formulated idea, the distinct idea, that is sleep's worst enemy.”27 Instead of going back to sleep, he got up and poured his thoughts on paper, in essays and aphorisms.



    Against existentialists


    Despite the preceding claims, Cioran is not a conventional existentialist. Even though he employs existential themes, he often questions the validity of existence itself, and unleashes a bottomless pessimistic streak that would make even Schopenhauer blanch. Cioran said most of us, throughout our lives, attempt to "keep deep down inside a certitude superior to all the others: life has no meaning, it cannot have any such thing."28 Institutions like education or religion, systems of thought like philosophy or science, works of expression like art or music are all ways of masking this inescapable truth, for they all seek to divert our attention from its shattering impact. So, how then can we live with it?

    Contra Schopenhauer, Cioran says we cannot escape agonizing over the awareness of the malady, mortification or curse of our existence. Our inquiring and scrutinizing minds give us no peace; only those living in frivolity and fabrication can avoid the constant agony. Observe your happy and content friends, and you'll draw no other conclusion. Once skepticism or nihilism takes prominence, this agony is the inevitable disease that follows, and the existential feeling of the crisis of life is its symptom. On the other hand, once Nietzsche freed the passions and imagination from subservience to reason and restored art to its rightful place, the universe became tolerable, even romantic.

    Contra Albert Camus, Cioran says we all should indeed kill ourselves. That is the only consistent way to accept the absurdity of our lives. Yet we foolishly aggravate the absurdity through our cowardly refusal to commit mass suicide. Whoever attempts suicide has that flush of certainty that release is imminent, but it is not, because absurdity lasts until the very final moment, and if the absurdity isn't followed through to its conclusion, the ensuing shame of being a failed suicide is even worse.

    Contra Sartre, Cioran says "the intoxication of freedom is only a shudder within a fatality, the form of [our] fate being no less regulated than that of a sonnet or a star."29 Freedom of will is another self-deception, an artifice of modernity that seeks to invert the void within ourselves. For people born only to experience the crushing inevitabilities of disappointment, suffering and death, a freedom defiantly thrown against the void is no answer at all. We are stuck between two irreconcilables – life and idea – and this ambiguity becomes our second nature. Thus, we suppose ourselves free, above and beyond the laws of nature or the mind.

    From “spermatozoon to sepulcher” we are pawns of a taunting fate that selects some for good fortune and others for bad by chance.30 Each life is a useless hyphen between birth and death. As evolutionary biology and scientific cosmology show, Homo sapiens is one more organic species doomed like the rest to extinction, a mere fleeting flutter in the universe's surge toward heat death. Much as in Sartre's nausea, Cioran acknowledges a revolting disgust surging up from such realizations: "that negative superfluity which spares nothing... [and] shows us the inanity of life."31

    Cioran keeps lacerating the reader until she confesses her beliefs are tired myths, and in his remorseless destruction of such expired myths, he enriches and edifies the reader. When philosophy itself was young, Plato needed youth and beauty to advance the cause of philosophy, or he would have dismissed Socrates as just another sophist. He needed a martyr myth, because creation always involves destruction – whatever is introduced as new needs a martyr, especially when it promises change. Nowadays, philosophy has decayed and expired, and is in dire need of new myths and new martyrs to resurrect a new beauty, like a phoenix out of the ashes. Cioran, in his paradoxical way, has ignited the flames with his own intellectual crucifixion.



    Works Cited

    Cioran, Emil. Pe culmile disperarii. Bucharest: Fundatia Pentru Literatura si Arta “Regele Carol II,” 1934. Translated by Ilinca Zarifopol-Johnston as On the Heights of Despair (1996). Chicago: University of Chicago Press.
    __________. Lacrimi si sfinti. Bucharest: Humanitas, 1937. Translated by Ilinca Zarifopol-Johnston as Tears & Saints (1998). Chicago: University of Chicago Press.
    __________. Précis de décomposition. Paris: Gallimard, 1949. Translated by Richard Howard as A Short History of Decay (1998). New York: Arcade Publishing.
    __________. La Tentation d’exister. Paris: Gallimard, 1956. Translated by Richard Howard as The Temptation to Exist (1998). Chicago: University of Chicago Press.
    __________. La Chute dans le temps. Paris: Gallimard, 1964. Translated by Richard Howard as The Fall in Time (1970). Quadrangle Books.
    __________. Ecartèlement. Paris: Gallimard, 1979. Translated by Richard Howard as Drawn & Quartered (1998). New York: Arcade Publishing.
    __________. De l’inconvénient d’être né. Paris: Gallimard, 1973. Translated by Richard Howard as The Trouble with Being Born (1998). New York: Arcade Publishing.
    __________. Aveux et anathèmes. Paris: Gallimard, 1987. Translated by Richard Howard as Anathemas and Admirations (1998). New York: Arcade Publishing.

    1. On the Heights of Despair, Preface to the French translation
    2. Sontag, Susan. Introduction to Temptation to Exist, p. 11
    3. Zarifopol-Johnston, Ilinca. Introduction to On the Heights of Despair, p. xviii
    4. On the Heights of Despair, p. 4
    5. Nietzsche claims that all philosophy is the biography of the philosopher.
    6. On the Heights of Despair, p. 22
    7. On the Heights of Despair, p. 22
    8. Sartre's concept of the experience of absolute contingency, presented rather vividly in his seminal work, Nausea. For Sartre, nausea stresses the absurdity of contingency, where objects lose their labels or labels fail to attach themselves to objects. Words and objects are divided, and the object becomes strange, dense, and absurd. The experience of nausea leads to the realization that labels, words, are all human inventions that have very little to do with existence, other than practical purposes.
    9. Fall in Time, p. 133
    10. Newman, Charles. Introduction to Fall in Time, p. 13
    11. Drawn & Quartered, p. 139
    12. Heidegger says boredom “reveals what-is-in totality.” In other words, boredom removes the normal focus and cares about particular beings and diffuses one's awareness into a sense of Being-as-a-whole being revealed. For Sartre, profound boredom is a special type of nausea where it provides an access to the very being of things, and leads to the awareness of oneself as the source of meaning.
    13. Tears & Saints, p. 88
    14. Ibid, p. 89
    15. Ibid, p. 86
    16. The Trouble with Being Born, p. 3
    17. On the Heights of Despair, p. 43
    18. “What is that one crucifixion compared to the daily kind any insomniac endures?” Trouble with being Born, p. 14
    19. Cioran to Gabriel Liiceanu, Continents, p. 92
    20. The Trouble with Being Born, p. 81
    21. A Short History of Decay, p. 170
    22. On the Heights of Despair, p. 85
    23. Drawn and Quartered, p. 123
    24. Arthur Schopenhauer.
    25. On the Heights of Despair, p. 83
    26. A Short History of Decay, pp. 169-170
    27. Anathemas and Admirations, p. 199
    28. A Short History of Decay, p. 105
    29. A Short History of Decay, p. 69
    30. Ibid, p. 46
    31. A Short History of Decay, p. 12
    This page is a sitemap, which lists all the learning resources currently available at The Galilean Library. The content has all been written and submitted by our members so it always reflects the community's interests. You can help us expand this material by offering work of your own, from short articles or reviews to essays and lessons on any subject you choose. If you think a subject is missing then consider writing something to fill the gap and sharing it with others here and our readers all over the world.

    Interviews


    John Wilkins: Biology and Philosophy
    Per Ahlberg: Evolution and Palaeontology
    Bradley Monton: Debating the Philosophy of Science
    James Howard Kunstler: The Long Emergency
    Del Ratzsch: Science and Design
    John Dupré: The Disunity of Science
    Michael Ruse: Science and Religion
    Thomas Lessl: Science and Rhetoric
    Jolly Mathen: Incompleteness and Scientific Theories
    Gonzalo Munévar: Feyerabend and Beyond
    Keith Jenkins: Rethinking History
    Aviezer Tucker: Our knowledge of the past
    Stephen D. Snobelen: Newton Reconsidered

    Essays


    Introducing Philosophy series

    What is Philosophy?
    Doing Philosophy
    Metaphysics 1
    Logic
    Epistemology 1
    Philosophy of Science
    Aesthetics
    Reading Philosophy
    Political Philosophy
    Truth
    Ethics
    Postmodernism
    Free Will and Determinism
    Philosophy of Mind
    Philosophy of Religion, Part 1
    Philosophy of Religion, Part 2
    A guide to Logical Fallacies
    Analytic Philosophy
    Philosophy of History
    Metaphysics 2
    Epistemology 2
    Rhetoric

    Philosophy


    Sisyphus Shrugged
    A taxonomy of fundamental ontologies, Part 1
    A taxonomy of fundamental ontologies, Part 2
    A taxonomy of fundamental ontologies, Part 3
    Schopenhauer's Philosophy
    Is human free will compatible with divine omniscience?
    Theological Fatalism, Part 1: Reply to Robert P. Taylor
    Theological Fatalism, Part 2: Reply to Robert P. Taylor
    Theological Fatalism, Part 3: Reply to Robert P. Taylor
    The Omniscient Book

    History


    Hermeticism
    Minimalism and the rhetoric of misrepresentation
    Philosophy and the New Archaeology
    The complexity of Newton
    The Galileo Affair, Part 1: Introduction
    The Galileo Affair, Part 2: Non-intellectual contexts
    The Galileo Affair, Part 3: Intellectual contexts
    The Galileo Affair, Part 4: The trial and its development
    The Galileo Affair, Part 5: The aftermath
    Galilean Myths
    Galileo and the Bible

    Philosophy of Science


    The Mechanical Philosophy and God
    Falsificationism
    Ockham's Razor
    Theory-ladenness
    Confirmation
    Underdetermination
    Proliferation
    Lakatos and the Demarcation Problem
    "Anything Goes": Feyerabend and Method
    Thomas Kuhn: Assassin of Logical Positivism or its double agent?

    Literature and Film


    Borges in his parallel universes
    American Psycho Reinterpreted
    Kirilov's Dilemma
    Wag the dog
    Shyamalan's "The Village"
    Soderbergh's "Solaris"
    Tykwer's "Heaven"
    Kieślowski's Three Colours Trilogy

    Art


    The Roots of Modern Art, Part 1: Introduction
    The Roots of Modern Art, Part 2: Impressionism
    The Roots of Modern Art, Part 3: Post-Impressionism (I) - Van Gogh and Colour
    The Roots of Modern Art, Part 4: Post-Impressionism (II) - Cezanne and Form
    The Roots of Modern Art, Part 5: What is Art? (I)
    The Roots of Modern Art, Part 6: What is Art? (II)
    The Roots of Modern Art, Part 7: Picasso (I)
    The Roots of Modern Art, Part 8: Picasso (II) - Guernica
    The Roots of Modern Art, Part 9: Against Art


    Reviews


    Cantor's "Inventing the Middle Ages"
    Monton's "Seeking God in Science"

    Recommended Reading


    Philosophy
    History and Philosophy of Science

    Articles


    How to learn a language
    How to do research

    ---
    By Awet Moges (2010)

    At the end of the 1949 film Sands of Iwo Jima, after the US soldiers survive a battle, Marine Sergeant John Stryker (John Wayne) tells his fellow comrades in the trench that he's never felt so good in his life. He asks them if they want a cigarette, and then he is immediately killed by a sniper. Later, the others find a letter on his body that contains many things John Stryker planned to say, but never did. Absurd, I thought, when I first saw this movie. I had expected a happy ending because the protagonist always survives the climax. I couldn't help but be reminded of that scene when I read Albert Camus’ essay on the absurd, The Myth of Sisyphus. In this essay I will break down the concepts of the absurd, suicide and eluding, and make a few observations of my own.



    Absurd

    In 1942, Albert Camus published one of the masterpieces of 20th-century thought, The Myth of Sisyphus, in which he developed the concept of the absurd in order to grapple with the meaning of life. The absurd entails three things. First, the world is characterized as irrational;1 second, human beings yearn for clarity through reason or meaning; and third, the conflict between these two irreconcilable observations is known as the absurd. Fundamentally, the world is a product of a random combination of events and circumstances, and we desire it to be otherwise. To be precise: the world is not absurd in itself; instead, it is absurd that we seek rationality in an irrational world. Man tries to project sanity, order, or any form of rationality onto the world but always fails – and the absurd is the incontrovertible outcome.

    The feeling of the absurd can strike at any time.2 We live our lives with goals and purpose, and the conviction that we're doing the right things. For the most part, we are content with this presumption of rationality. But every now and then, we become overly self-aware and are horribly reminded of what creatures of habit we all are. Our predictable actions become ridiculous, and we start to doubt whether we are free agents. The most familiar person we know suddenly becomes a stranger, and the world becomes dense and strange.3

    Man is inclined to impose order, yet nothing about his projects has any justification, because the world does not provide support for what he does. The world is wholly indifferent to man's schemes and irrational, although man continues to try to make sense of it. Absurdity is the juxtaposition of these two incompatible things, for it is “born of this confrontation between human need and the unreasonable silence of the world.”4

    It seems that Camus has exaggerated the duality to the point of a paradox and called it the absurd. More importantly, the dichotomy between the world and man rests on three assumptions: that the universe needs to have a “human face”, that it must be divinely ordered, or that science is the final word on the world. Moreover, because science is a descriptive activity, the world must be value-free. If one can contest any of these assumptions, then the Absurd is probably not a fundamental feature of human existence.

    Camus says the absurd forbids all attempts to find the meaning of life. There is no possibility for a meaning of life to be discovered, but that is not necessarily a depressing view. Life as absurdity makes sense when it is seen as a claim about the lack of compatibility between people and the world they live in. What are the consequences of living with the absurd? What logically follows from the idea of the absurd? There are two obvious options: self-destruction or self-preservation.



    Suicide

    The Myth of Sisyphus opens with a clear mission: “There is but one truly serious philosophical problem5 and that is suicide. Judging whether life is or is not worth living amounts to answering the fundamental question of philosophy.”6 The absurd is Camus’ philosophical response to the problem of suicide. If life is truly meaningless, then how can anyone continue to live? What are the options for the person whose life has no meaning? He/she either commits the suicide of thought by inventing a world of meaning, of hope, of God, or commits physical suicide. Camus adds a third option: the absurd hero who accepts a world without meaning, without hope, and lives.7

    There are two aspects of suicide: one is the realization that life is absurd and the other is the destruction of the attachment to life. Camus notes that the body shrinks from annihilation. In order to destroy the attachment to life, there has to be a rationale powerful enough to blot out self-preservation, and such rationales range from humiliation to debilitating disease to despondency.

    Although Camus was interested in these obvious types of suicide,8 I found the metaphysical or virgin suicide far more fascinating, because “rarely is suicide committed ... through reflection.”9 The virgin suicide that lacks the aforementioned rationales is a “logically disposed” suicide because it is not motivated by some kind of emotional depression or even the fear of death. Camus devoted a chapter to Kirilov, from The Possessed, since its author, Dostoevsky, was also preoccupied with absurd reasoning and how it affected the lives of his characters.

    Kirilov became disenchanted with the immortality of the soul and was researching why people did not kill themselves. Kirilov said he wanted to take his life because that was his idea. Having an idea implies a motivation. Kirilov arrived at his idea with absurd reasoning by maintaining two contradictory beliefs: “I know God is necessary and must exist... I also know that he does not and cannot exist.”10

    Apparently, the paradoxical existence of God entails a logical suicide. For Kirilov, this realization was reason enough to kill himself, because he inferred that he was God: “If God does not exist, I am God.”11 However, Kirilov was not content merely to believe that he was God, for that was insufficient. To be God required Kirilov to kill himself. Absurd, indeed, but this is the crux: Kirilov realized divine freedom by bringing it down to earth. For several years he had sought the attribute of his divinity, and he found it at last: freedom. Drawing the final consequences of his divine freedom ended his slavery to immortality. He refused to maintain the universal delusion that everyone before him in history, all men and women, had invented God in order not to kill themselves.12 Kirilov thought that was the summary of the entire history up to the moment of his metaphysical suicide.

    In short, Kirilov wanted his suicide to be a demonstration, showing others the yellow brick road. Not only was it a metaphysical suicide, it was also pedagogical. Since Dostoevsky was a Christian whose beliefs forbid suicide as sinful, Kirilov’s act was intended as a lesson. In the end Dostoevsky backed away from the absurd consequences, due to his faith in God. Camus forbids suicide for different reasons and gives us a solution: maintain absurdity by neither denying it nor adopting metaphysical delusions.

    Suicide is legitimate only if the first premise of the Absurd is rejected. If the world is inescapably absurd, then to kill yourself is to act as if your suicide had meaning in a meaningless existence. Suicide confirms the absurd by agreeing to it.13 To live instead is to experience the absurd at all times, but never be reconciled with it. Camus insists that if we never reconcile with the absurd, we will never be free of it, and that rules out suicide as the genuine experience of living an absurd life. If the absurd is not conceded, it is meaningful. Life has no meaning; it is inescapably absurd. The only question is whether we can live with it or die with it.14



    Eluding15

    The instinct of self-preservation usually prevails, and although we live, we deny the absurd. Camus calls this avoidance the “act of eluding.”16 This eschewal manifests as hope: the hope for life after death or the hope that there is a meaning to life – but both only delay the absurd. We elude the Absurd in order to avoid being overwhelmed by it.

    When we experience the absurd, we continue to live – as long as we convince ourselves that there is a divine mysterious plan, or that this world is absurd but there is a rational world after this one, or that the world is absurd but only until science finds all the answers. Religions, science, and philosophies all try to provide reasons and purposes for the universe, and to explain away its irrational character.

    Scientists and rational philosophers favor the rational schemes of man in their nostalgia for certainties as they stand before the abyss. Religious thinkers instead point at the indifference of the world and they take a leap of faith. Camus credits existentialists like Leo Chestov, Soren Kierkegaard, and Karl Jaspers for recognizing the absurd, but in the end, they are too eager to flee towards transcendence in relief. For instance, when Chestov realized the “fundamental absurdity” of existence, he declared it to be God.17 Reason has failed, thus, we must trust God. Kierkegaard makes the same escape with his “leap of faith” from the starting blocks of the absurd. The intellect is sacrificed to an irrational God. Jaspers points at the numerous failures of reason, but only to conclude in a paradox, that this failure is transcendence itself. There is no meaning; therefore, there is ultimate meaning. These existential philosophers have raised the specter of the absurd, but they attempt to resolve this absurdity by taking a suicidal leap into transcendence.

    The aforementioned comfortable solutions reinstate the absurd because they prevent us from achieving true authenticity by misrepresenting what we truly are. These denials are all philosophical suicide. They are philosophical in that they are fictitious and illusory, hiding the fact that life is absurd. They are suicidal, like physical suicide, in that they deny or renounce life as absurdity. "Doctrines that explain everything to me also debilitate me at the same time. They relieve me of the weight of my own life yet I must carry it alone."18 Since the absurd is inherently a contradiction, every attempt to ignore or explain it away only tries to elude it. “[The] important thing... is not to be cured, but to live with one's ailments."19

    Camus' criticisms of the existential philosophers do not seem to be true refutations, for they amount to complaints that their solutions to the absurd fail to meet his criteria of authenticity. This hardly persuades anyone of the cogency of his philosophy of the absurd. It seems to me that Camus is not so much interested in whether their arguments are true as in whether one can live by them. He is more concerned with how one lives with one's fundamental relationship with the world, the absurd. Thus, Camus avoids the more challenging task of proving the cogency of his case over rival philosophers. But that is irrelevant, as long as the concept of the absurd is persuasive in itself and helps us decide how to live.



    Absurd hero

    Philosophical and physical suicide both flee from the absurd. Instead, Camus insists upon a third option: embrace absurdity by refusing to commit suicide and living, as an absurd person, without a future, hope, illusion, or resignation. In other words, we ought to vigilantly maintain the absurd in order not to be crushed by it, and become “absurd” ourselves. He illustrates several examples of the absurd man20 and concludes the book with a chapter on Sisyphus from Greek mythology as the absurd hero. When Sisyphus was alive, he often defied and tricked the gods, and cheated even death. So they punished him by having him roll a boulder up a mountain over and over for all eternity.21

    Each time Sisyphus finally reaches the top of the mountain, the boulder falls back down. Camus imagines him standing there. He is thoroughly conscious of the utter hopelessness of his situation. He has a choice: give in to despair? Let the gods win? Mope over his fate? Or thumb his nose at the meaningless task and refuse to see it as punishment? Camus says he is saved by his scorn. He overcomes his situation by standing in revolt against it. Given that he does not expect anything more from life than the absurd situation, Camus says he has found happiness.22



    Conclusion

    On the one hand, I am sympathetic to the dichotomy between the hopes of man and the indifferent universe, but at the same time, I could easily envision a far more horrifying existence. We as a species may not have arrived at a perfect understanding of our world, but what would be the consequences if that ideal did become a reality? If we knew everything, wouldn't life be utterly boring, and consequently, intolerable? There would be no more challenges or excitement in life. Everything would already be ordered down to the last detail, all future events already known in advance. A bottomless, infinite, and obscure world may cost us confusion, frustration, and the impossibility of ideal knowledge. But the cost of a perfectly known world might be infinitely greater. Since Camus is not interested in possible hypotheses that are even more absurd, but in whether we, in our current condition, can live with the absurd, this scenario is probably irrelevant.

    I am somewhat conflicted at this point. Despite the persuasiveness of Camus' arguments, I find the about-face from the Absurd as a paradox to the Absurd as a solution an unsatisfactory move. It seems far too similar to how the existential philosophers themselves escaped the absurd. Thus the assertion that we need to live with the absurd is an equally arbitrary move, no less another "leap of faith". The very rejection of suicide may be a compromise with the absurd, but at the same time, it seems a matter of choice, not a logical conclusion to a philosophical system. In the end, Camus stands tall as an existential thinker, despite his protests to the contrary.









    Footnotes

    1. Donald Crosby (1988) has characterized this as cosmic nihilism, where the universe lacks any sort of intelligibility or meaning. Camus is on the verge of making an existential nihilistic judgment that human existence is absurd, but his precise formulation of the concept of the absurd doesn't fall neatly into that category.
    2. Camus lists four types of feelings of which I mention one (mechanical and routine behavior). The other three are: the burden of time and the inevitable grave, the contingency of existence and the alien nature of things, and the fundamental isolation from other people.
    3. Camus, Albert (1983) p. 14
    4. Ibid, p. 28
    5. It's true that no argument in philosophy comes anywhere close in importance to finding a reason to live, or more precisely, to the condition a person finds him/herself in when he/she fails to find satisfactory reasons for living. This opening move takes Camus beyond Philosophy Proper, towards theology and morality, in a religious direction.
    6. Camus (1983), p. 3
    7. Ibid, p. 10
    8. Ibid, p. 50. Camus is less interested in philosophical suicide than he is in physical suicide.
    9. Ibid, p. 5
    10. Dostoevsky, Fyodor. The Devils, p. 690
    11. Camus (1983) p. 106
    12. Ibid, p. 108
    13. Ibid, p. 54 “Suicide, like the leap [is not the overcoming absurdity] is acceptance at its extreme…”
    14. Ibid, p. 50
    15. The French term, “l’esquive” is more forceful than the English translation, “eluding,” (for it also means ‘dodging,’ ‘ducking,’ as well as ‘evading,’ or ‘escaping’) but unfortunately, there is not a better word available.
    16. Camus, (1983) p. 8
    17. Ibid, p. 34
    18. Ibid, p. 55
    19. Ibid, p. 38
    20. Don Juan, the actor, the conqueror and the creator.
    21. Long before I ever read The Myth of Sisyphus, I encountered an absurd hero in a video game called Chakan: The Forever Man. The main character was Chakan, a great warrior who challenged Death when it was his time to die. Death bet that if Chakan beat him, he would gain immortality. They fought, and sure enough, Chakan prevailed. However, his victory came with a curse: he would not be able to sleep, because every night he would witness the pain he had inflicted on his victims. Chakan would gain eternal sleep only after he had eradicated all evil. That was the game's premise, and after I beat the game and got rid of all the evil in the world, I awaited the final prize. Chakan tried to kill himself, only to hear the mocking laughter of Death. Death pointed at the countless stars, each of whose planets was overrun with the same evil. Chakan was stuck on his world forever, alone. Yet despite that absurd fate, I can imagine Chakan happy in the same fashion.
    22. Camus, (1983) p. 123


    Bibliography

    Camus, Albert. (1983). The Myth of Sisyphus. New York: Vintage International
    Crosby, Donald. (1988) The Specter of the Absurd. New York: State University of New York Press
    Dostoevsky, Fyodor. Tr. Katz, Michael (2008) The Devils. New York: Oxford University Press
    By Awet Moges (2010)

    In the beginning, there was nothing but
    fuzzy logic, imaginary mathematics, and monolithic science.
    Then the philosophy gods said, “Let Kuhn be!” And all was light.

    Introduction
    There are only a handful of 20th-century books that changed the world, and Thomas Kuhn’s Structure of Scientific Revolutions1 (SSR hereafter) is one of them. The SSR has had a major impact on history, sociology and the philosophy of science, changing them more than any other book in the 20th century.2 This essay will break down the book’s initial reception and analyze its subsequent evolution. At first, readers declared the SSR to have pronounced the last rites over logical positivism,3 after Quine’s Two Dogmas of Empiricism supposedly dealt a crippling blow in 1951. However, reports of the demise of logical positivism may have been premature. Recently, careful readers like Michael Friedman and Reisch found enough affinities between Kuhn’s SSR and logical positivism to declare him a post-positivist who had far more in common with major logical positivists like Rudolf Carnap than is usually supposed. First, this essay will list the theses of logical positivism, then the counter-theses introduced in SSR, explain why recent scholars have argued that Kuhn’s ideas were less radical than they appeared, and point to parallels in Carnap’s philosophy.

    About SSR
    The SSR was the “application of a study of history to problems within the philosophy of science”4 in which Kuhn analyzed whether theory change in science has a rational account, i.e., how and why theories replace others. Prior to SSR, philosophers explained theory change in science in a progressive manner in which better theories replace existing ones (because they are more parsimonious, more truthlike, or more instrumentally successful). After SSR, philosophers divided themselves into two camps: the antagonists who charged Kuhn with relativism,5 and the proponents who interpreted Kuhn as a prophet of the new philosophy of science. Both parties relied on the myth that paints Kuhn as an assassin, the giant-killer of logical positivism.

    The so-called giant-killer reputation has glorified Thomas Kuhn for debunking several of the main theses of logical positivism:




    Reductionism – an idea or proposition can be replaced by another idea or proposition that’s simpler. For positivism, all knowledge is reducible to scientific truths.
    Verificationism – the claim that the meaning of a proposition is the set of experiences that determines its truth. Thus an empirical proposition has meaning as long it has been verified or could be verified in principle. If a statement is neither analytic nor empirically verifiable, it is meaningless.
    Atomism – the metaphysical claim that all reality is composed of basic and indivisible particles that are too small to be observed by the naked eye. Russell and Wittgenstein first developed this philosophy, which in turn influenced the logical positivists.
    Ahistoricism – the idea that something is free or disconnected from history, or historical development. Logical positivism held scientific theories to be universal laws and law-like generalizations that are independent of history. Thus, scientific knowledge progresses linearly and cumulatively.

    In the SSR, Kuhn proposed holism, the theory-ladenness of observation, and incommensurability, and emphasized the historical or social view of science. At first glance, it would appear that these are not compatible with the theses of logical positivism.

    Holism is the thesis that the whole has a philosophical and/or epistemic explanatory priority over the elements, members, or individuals that compose it. Therefore, a whole cannot be reduced to its bare essentials. Knowledge, contra positivism, cannot be reduced to scientific knowledge. For Kuhn, theory and observation are interdependent in a holistic way, which introduces the problem of incommensurability, for choosing between competing paradigms cannot be settled by appealing to a theory-neutral factual language.

    For logical positivists, scientific observation is taken to be epistemically primary, for observation provides the raw material that serves as an “epistemologically secure foundation”6 for scientific knowledge. Moreover, observation grants a shared base for theory choice. Observation as perceptual experience is neither judgmental, nor is it dependent on judgments of any kind. Because observation is independent of judgment, it is a neutral judge that can decide between rival theories.7

    Kuhn argues against observation as a secure base for scientific knowledge, for it cannot decide between competing theories. The reason observation cannot play this role is that it is already affected by the very paradigm the observer works within. This leads to the notion of theory-ladenness,8 which was first introduced by Norwood R. Hanson in 1951.9 Theory-ladenness is the idea that a concept, term or statement makes sense only in light of a particular theory. Observers do not make identical observations because what they see depends on what they know or believe.10 In other words, theory, tradition and expectations shape even experience. Every observational term already comes with theoretical baggage. If theory-ladenness is correct, then logical positivists cannot claim that a statement is a theoretically agnostic report of experience. Neither can they reduce a theory-laden term to the level of pure observation and produce a fact. If there is no theory-agnostic observational language, then how can any theory be evaluated without presupposing a paradigm? And if all theories come from different paradigms, then paradigms are incommensurable.

    Incommensurability is the thesis that theories from different paradigms are not mutually translatable, because paradigms consist of different vocabularies such that neither can be fully stated in the other, or cannot be translated without distortion. For logical positivists, the comparison of theories only needs the translation of their effects into a neutral observation language. According to the incommensurability thesis, there is no neutral observation language at all to mediate between paradigms.

    In the SSR, Kuhn included many examples from the history of science where proponents of different paradigms failed to understand each other, and he defined this as incommensurability. For example, in physics, the Newtonian paradigm is not commensurable with its predecessor, the Aristotelian paradigm. They lacked a common measure because their concepts and methods were different, and they focused on different problems. “...the scientist who embraces a new paradigm is like the man wearing inverted lenses.”11

    “We have already seen several reasons why proponents of competing paradigms must fail to make complete contact with each other’s viewpoints. Collectively these reasons have been described as the incommensurability of the pre and post-revolutionary normal science tradition.”12

    Kuhn argued that incommensurability was one reason why science does not progress cumulatively, in order to refute the notion of science as constantly moving towards a closer approximation to the truth. Science does not progress toward a perfect ideal, but only away from the anomalies that plague the current theory. Therefore, scientific progress is eliminative, rather than linear and instructive.13

    There is no transcendental method for rational scientific progress. Kuhn instead developed a cyclical picture of scientific progress, in which a mature science operates under a paradigm and goes through periods of normal science. A crisis occurs when the paradigm declines in usefulness and falls into serious doubt, and revolutionary science results when a new paradigm replaces the old one. Finally, the revolution “inaugurates a new period of normal science.”14 Given this picture, scientific knowledge cannot be cumulative.

    Normal science extends the “knowledge of those facts that the paradigm displays as particularly revealing, by increasing the extent of the match between those facts and the paradigm's predictions and by further articulation of the paradigm itself.”15 Normal science articulates the “phenomena and theories that the paradigm already supplies.”16 Kuhn characterized normal science as puzzle-solving, where results may not be spectacular but can still demonstrate a scientist's success. Normal science entails the existence of consensus among the community of scientists: they work on research based on a certain achievement they acknowledge as the foundation of their practice.17

    When this consensus breaks down during a crisis, it is rebuilt during the period of revolutionary science. A crisis takes place when anomalies multiply and scientists begin to doubt the existing core theory.18 Normal research no longer works, and some scientists realize their paradigm has ceased to function adequately and needs to be replaced. Revolutionary science is defined as a “non-cumulative developmental episode in which an older paradigm is replaced … by an incompatible new one.”19 Conservative defenders of the old paradigm take comfort in the past achievements of normal science and are reluctant to give it up; they hold out hope that it will eventually survive the crisis and resolve the anomalies. Radical supporters of the new theory, despite its lack of a track record, recognize its future promise. Scientists from two competing paradigms are unable to understand one another, since their theories are incommensurable. “…the reception of a new paradigm often necessitates a redefinition of corresponding science. Some old problems may be relegated to another science or declared entirely 'unscientific.' Others that were previously non-existent or trivial may, with a new paradigm, become the very archetypes of significant scientific achievement. The normal-scientific tradition that emerges from a scientific revolution is not only incompatible but often actually incommensurable with that which has gone before.”20 A new paradigm is accepted only if it is recognized as superior in problem solving to its competitors, and the shift to the new paradigm starts a scientific revolution.

    Paradigm is Kuhn's most notorious concept, for it is the least precisely defined of them all. Roughly, a paradigm provides the basis for normal science while at the same time limiting the field of investigation, restricting admissible questions and answers and thereby conditioning expectations. A paradigm can therefore affect observation and cause the scientist to overlook anomalies, or wilfully ignore them. Kuhn defined paradigm in at least two senses: one, a global, all-embracing “shared commitments of a scientific group,” and the other, a “particularly important sort of commitment… a subset of the first.”21 The first definition suggests conscious obedience to methodology and rules, whereas the second suggests an intuitive pattern recognition. Logical positivists would agree with the first definition, for they thought that science could be explained by the conscious obedience to methods and rules, but Kuhn's second definition denies this and proposes that exemplars serve as models through which new scientists develop their powers of pattern recognition.

    Kuhn called exemplars the “most novel and least understood aspect” of the SSR in the postscript to the second edition.22 He defines exemplars as a set of recurrent and quasi-standard illustrations of various theories in their conceptual, observational and instrumental applications. These are the community's paradigms, revealed in its textbooks, lectures and laboratory exercises.23 Kuhn pointed to great works like Copernicus' De Revolutionibus and Newton's Principia as the origins of scientific paradigms. They became paradigms because they attracted scientists and persuaded them away from competing theories, and because they were sufficiently open-ended to leave enough problems to be solved.24 Kuhn's paradigm concept helps explain the context of discovery somewhat: working with exemplars helps scientists to regard new problems as puzzles and allows them to potentially discover solutions to those puzzles.

    One last point about positivism and Kuhn's rebellion: the foundation of the “received view” was the distinction between the discovery and the justification of scientific theories.25 This distinction is essentially the distinction between psychology and epistemology, respectively. Discovery is about hunches or insights, psychological processes that are not beholden to conscious intention. These processes are subjective elements arising from non-rational, non-logical and unconscious activity. Philosophers generally do not deem the context of discovery a worthwhile field of analysis, for psychologists are better suited to the task. Unsurprisingly, philosophers are far more concerned with the epistemology of scientific theories, that is, with the reasons and arguments that support an idea. The context of justification is about rules that determine whether a hypothesis is acceptable. The problem is that there are no rules that show the way to formulating the right hypothesis in the first place. Logical empiricists dismissed discoveries as irrational, for they thought discoveries were based on imaginative leaps or lucky accidents; thus there cannot be any logic of scientific discovery. The positivist is only concerned with “legitimizing [the discovery] scientifically, prove it objectively, and construct it logically.”26

    Kuhn also rejected the distinction, and at the end of the introduction to the SSR he admitted to having violated the distinction between the “context of discovery” and the “context of justification.”27 Hoyningen-Huene says Kuhn rejected the distinction because of his treatment of theory choice: Kuhn considered the justification of theory choice to belong to the context of discovery, because theory choice depends on the commitments of the scientific community to a paradigm. The values or norms of a community are a sociological issue, so by erasing the distinction, Kuhn shifted the issue of justification from epistemology to sociology.28 While Kuhn’s paradigm theory did erase the distinction between the truth conditions of science and its historical period, this wasn’t foreign or contradictory to logical positivism.29

    Analysis of giant-killer reputation
    Was Kuhn truly a giant-killer? If so, did the practice of philosophy of science truly change appreciably after Kuhn? I.e., is verificationism now bunk? Not at all. We only did away with A. J. Ayer’s formulation, for it was incoherent and exceedingly simplistic.30 Verificationism lives on today under different names, such as confirmation. Another point to note is that the initial readers of the SSR exaggerated the break between Kuhn and his predecessors.31 He retained some empiricist commitments, which is why he broke away only from certain elements of logical positivism with concepts like incommensurability, progress, and paradigms. However, some say Kuhn was not radical enough. The philosopher Michael Friedman claimed the Kuhnian revolution was not complete, and he has tried to restore Kuhn as a positivist who forced only a partial transformation in logical positivism. Had Kuhn gone far enough, he would have pulled off a truly revolutionary break with the established philosophy of science of the times.

    Recent scholars have tried to rehabilitate the reputation of the logical positivists with careful attention to their work, dispelling many myths, particularly the one that Kuhn hammered the final nail into the coffin of positivism. Many scholars focused on the paragon of logical positivism, Rudolf Carnap, and found sufficient material to rehabilitate his reputation. George Reisch pointed out that Carnap's philosophy of science had much in common with Kuhn's normal science and paradigm concepts. Michael Friedman rescued Carnap's philosophy from the unfair reputation of naïve empiricism and foundationalism.32 John Earman saw many affinities between Carnap and Kuhn with respect to semantic incommensurability.

    On this reevaluation of Carnap’s body of work, the transition from logical empiricism to post-positivism looks natural rather than abrupt, which diminishes Kuhn’s giant-killer status. Reisch offered the letters between Carnap and Kuhn as evidence that there was no contention between them, thereby encouraging us to infer that there was no incompatibility between their philosophies.

    Similarities between Carnap and Kuhn are found in “Empiricism, Semantics, and Ontology” (hereafter ESO), where Carnap proposed the notion of a linguistic framework. Some scholars33 argued that the linguistic framework can be interpreted as compatible with Kuhn's notion of a paradigm, and that the pragmatic nature of external questions resembles Kuhn's account of the values guiding theory choice. Carnap's linguistic framework theory is also compatible with the Kuhnian theses of incommensurability, holism, and the theory-ladenness of observation. Therefore, they argue, Carnap's theory is close to Kuhn's theory of scientific revolution, normal science and paradigms.

    First of all, Carnap thought that all scientific theories were embedded within a linguistic framework. Carnap was chiefly concerned with existence problems in ESO, and in order to allow scientists to discuss abstract entities without embarrassment, he divided such problems into internal questions and external ones. For Carnap, a linguistic framework is a set of linguistic conventions that determine how we decide questions about existence. A simple example of a linguistic framework would be a mathematical system with axioms, where an existence question (“is there a prime number greater than a hundred?”) is answered by deduction from the axioms. Carnap called this kind of existence question an internal question. An external question, on the other hand, concerns the total system of entities34, which the linguistic framework presupposes in order to ask and answer internal questions. We can judge internal questions according to the logical rules within the individual linguistic framework, but we cannot judge external questions, for they do not presuppose any logical rules.35 For Carnap, internal questions are distinct, clear-cut and philosophically uninteresting, whereas external questions, often ontological ones, are meaningless. Thus external questions should never be asked; at most we should ask whether the linguistic framework is acceptable on pragmatic grounds.

    Where the logical rules of a linguistic framework establish validity within that framework, for Kuhn a paradigm that regulates normal science involves agreed-upon rules designating what counts as a valid solution to a puzzle. And where external questions about a given linguistic framework are answered not by logical rules but by pragmatic and conventional considerations, for Kuhn a paradigm is replaced when revolutionary science changes the generally accepted rules in play during normal science, a change that requires a conversion.

    Once one linguistic framework is substituted for another, a revolution occurs, for the framework is defined by its rules: changing them changes the scientific language and brings on a revolution. These parallels between Kuhn and Carnap inspire scholars to claim that the two philosophers shared similar views about science and about how scientific revolutions take place, whether as paradigm change or as lexical change.

    Objections
    While it is true that there are affinities the “received reading” has ignored or glossed over, those affinities do not amount to a clear compatibility. This “return” to a reconciliation is a revisionist reading, for several reasons raised by J. C. Pinto de Oliveira:


    It matters little that Kuhn had Carnap's support in their personal correspondence, for that hardly amounts to a clear endorsement of Kuhn's philosophy of science in the SSR.
    Carnap was completely silent about Kuhn in his later work, especially in his last book, Philosophical Foundations of Physics.
    Carnap continued to distinguish between discovery and justification in his attempt to push a “logic of science” in the article “Logical Foundations of the Unity of Science.”
    Carnap considered Kuhn's SSR a work in the history of science, not the philosophy of science, and he himself admitted that he was ignorant of the history of science.

    Though Carnap claimed that language change in his linguistic frameworks had much in common with scientific revolution, he did not go into detail about such revolutions, because he thought epistemology, or Wissenschaftslogik, had nothing to do with historical analysis. Carnap was concerned with formal problems, or how language applied to particular sciences. Whatever happened during periods of revolutionary science, he was interested only in the articulation of the logical structures of the two different languages.36 However, history is not mere embellishment of an a priori structure of scientific rationality. Kuhn saw a philosophical quality in the analysis of the history of science, and that is sufficient reason to refrain from lumping the two philosophers together in a quasi-philosophical category.

    Conclusion
    Kuhn himself was a monumental paradigm in the philosophy of science, no doubt, but revisionist scholars went too far in swinging the pendulum back against the “received reading.” Their ace in the hole, Carnap's linguistic frameworks, shows only superficial similarities between the later Carnap and Kuhn, and leaves the major tenets of logical positivism themselves untouched. The initial reading of Kuhn as the chief assassin of logical positivism was exceedingly simplistic, no question. But this does not excuse the equally dramatic swing to the antithetical position, which tried to force Kuhn into a straitjacket that made him more germane to the descendants of logical positivism. I propose a middle solution that recognizes Kuhn may not have truly broken free from his ancestors in the philosophy of science, but that his new vocabulary was sufficient to institute a massive evolution whose shock waves are still felt in the field today. At any rate, scientists, following Virgil's advice to Dante about the cranks, can only glance at the philosophers squabbling amongst themselves and continue to do whatever they like.


    Footnotes
    1. The Arts and Humanities Citation Index reported that the SSR was the most frequently cited twentieth-century book during the period 1976 to 1983, and the Times Literary Supplement included it in “The Hundred Most Influential Books Since the Second World War.”
    2. Alexander Bird claims the SSR was not a philosophical text, but a “theoretical history” because the book became a paradigm for the philosophy of science, which revolutionized the field with a “theoretical history of science.” (Bird, 2000, p. viii)
    3. Suppes was the first to do so. This persists even today, with the Stanford Encyclopedia’s entry on Thomas Kuhn.
    4. Newall, Paul. Kuhn. 2008
    5. Critics took issue with Kuhn for charging that science textbooks amount to dogma, for denying any possible objective criterion that could decide between competing paradigms, and for describing the shift to a new paradigm as a “conversion experience.” (Kuhn, 1996 p. 151)
    6. Bird, 2000, p. 97
    7. Bird, 2000, p. 98
    8. Theory-ladenness is the basis of confirmation holism, the idea that no single theory in science can be tested in isolation, for it depends on other theories.
    9. In Patterns of Discovery, Hanson pointed out that observation was not as simple as the logical empiricists thought.
    10. Bird, 2000 p. 99
    11. Kuhn, 1996, p. 122
    12. Kuhn, 1996, p. 148
    13. Oberheim, Eric and Hoyningen-Huene, Paul. “The Incommensurability of Scientific Theories.” 2009
    14. Bird, 2000 p. 25
    15. Kuhn, 1996, p. 24
    16. Ibid.
    17. Kuhn, 1996, p. 10
    18. Bird, 2000 p. 43
    19. Kuhn, 1996, p. 92
    20. Kuhn, 1996, p. 103
    21. Kuhn, The Essential Tension, p. 294
    22. Kuhn, 1996, p. 187
    23. Ibid., p. 43
    24. Kuhn, 1996, p. 10
    25. Hans Reichenbach introduced this distinction in 1938 in Experience and Prediction, where he noted that rational reconstruction is essentially about how thoughts are communicated, rather than how they are subjectively formed.
    26. Fleck, 1979, p. 22 Ludwik Fleck argued that the distinction between justification and discovery was exceedingly shallow, for the historical process of discovery mattered a great deal for epistemology. Fleck proposed a “thought-collective” and defined it as “a community of persons mutually exchanging ideas.” In order to discuss or exchange ideas, two people must possess the same vocabulary, and share many things in common – theories, facts, significance – i.e., beliefs and dispositions. Thus, the total knowledge of a community cannot be reduced to its individual members.
    27. Kuhn, 1996, p. 8
    28. Hoyningen-Huene, Paul. 2006 p. 127
    29. Otto Neurath compared science to a boat we are rebuilding while at sea. A caricature of logical positivism would instead use the edifice of a skyscraper to represent its conception of science.
    30. Alonzo Church and Carl Hempel also contributed to the decline of verificationism. Church heavily criticized the concept of verificationism in his review of Ayer's book Language, Truth and Logic in Journal of Symbolic Logic.
    31. It’s interesting to note that Kuhn, despite his giant-killer reputation, did not make many references to the logical positivists in the SSR. Alexander Bird points out that of the authors cited in its 150 footnotes, only 13 were philosophers; the rest were historians. (Bird, 2000 p. x)
    32. In Reconsidering Logical Positivism, Friedman argues that Carnap's Der logische Aufbau der Welt was not a program of naïve empiricism but instead a neo-Kantian project concerned with the conditions for possible knowledge.
    33. Reisch, 1994, and Earman, 1993
    34. For the system of mathematics, for example, the external question would concern the existence of numbers in general.
    35. Carnap writes that only philosophers raise external questions, especially questions about the reality of the world.
    36. Carnap, 1934, §72 “Philosophy replaced by Logic of Science”


    Bibliography

    Bird, Alexander. Thomas Kuhn. Princeton, NJ: Princeton University Press. 2000

    Carnap, Rudolf. Logische Syntax der Sprache. 1934 (English Translation) The Logical Syntax of Language. London: Routledge. 1937

    Carnap, Rudolf. “Logical Foundations of the Unity of Science.” in International Encyclopedia of Unified Science. Vol. 1, no. 1. Chicago. Chicago University Press. 1938

    Carnap, Rudolf. “Empiricism, Semantics, and Ontology.” in Meaning and Necessity: A Study in Semantics and Modal Logic. University of Chicago Press. 1956

    Church, Alonzo. “Review of Ayer's Language, Truth and Logic.” Journal of Symbolic Logic. Vol. 14. 1949. p. 52-53.

    Earman, John. “Carnap, Kuhn, and the Philosophy of Scientific Methodology.” in World Changes. Horwich, P. (ed.) MIT Press. Cambridge, Massachusetts. 1993 p. 9 – 36

    Fleck, Ludwik. Genesis and Development of a Scientific Fact. Trenn, T. J. and Merton, R. K. (eds), F. Bradley (trans.), foreword by T. S. Kuhn. Chicago: University of Chicago Press. 1979. [Translation of Fleck 1935]

    Friedman, Michael. “The reevaluation of Logical Positivism.” Journal of Philosophy, Vol. 88, 1991. pp. 505 – 523.

    Friedman, Michael, Reconsidering Logical Positivism. Cambridge, UK. Cambridge University Press. 1999.

    Hanson, N. R. Patterns of Discovery. Cambridge. Cambridge University. 1958.

    Hoyningen-Huene, Paul. “Context of Discovery versus Context of Justification and Thomas Kuhn.” in Revisiting Discovery and Justification. Schickore, Jutta and Steinle, Friedrich (eds). 2006.

    Irzik, Gurol and Grunberg, Teo. “Carnap and Kuhn: Arch Enemies or Close Allies?” The British Journal for the Philosophy of Science. Vol. 46, No. 3 September 1995. pp. 285 – 307

    Kuhn, Thomas. The Essential Tension: Selected Studies in Scientific Tradition and Change. Chicago: University of Chicago Press. 1977

    Kuhn, Thomas. The Structure of Scientific Revolutions. Chicago. University of Chicago Press. 1996

    Newall, Paul. “Kuhn.” 2008 The Galilean Library. <http://academy.galilean-library.org/glossary.php?do=item&id=20 >

    Oberheim, Eric and Hoyningen-Huene, Paul. “The Incommensurability of Scientific Theories.” 2009 Stanford Encyclopedia of Philosophy. Stanford University. 25 February 2009 <http://plato.stanford.edu/entries/incommensurability>

    Oliveira, J. C. Pinto de. “Carnap, Kuhn, and revisionism: on the publication of Structure in Encyclopedia.” (4th version) 2007 Springer Science + Business Media B. V. 6 June 2007 (online)

    Reisch, George. “Did Kuhn Kill Logical Empiricism?” Philosophy of Science. 58 (2). 1991 p. 264 – 277.

    Reisch, George. “Planning Science: Otto Neurath and the International Encyclopedia of Unified Science.” British Journal for History of Science. 27 1994. p. 153 - 75

    Reisch, George. How the Cold War Transformed Philosophy of Science : To the Icy Slopes of Logic. New York: Cambridge University Press, 2005.

    How to do research

    By Godot, in Articles,

    Teaser Paragraph: Publish Date: 04/12/2010 Article Image:
    By Steve Nakoneshny (2010)

    All research begins with an idea. Whether the idea arose from previous research, from a suggestion or direction provided by someone else, or from a eureka-like intuitive leap is ultimately irrelevant: the idea is the starting point for any endeavor. Since ideas rarely begin life sufficiently robust to commence research, the next logical step is to refine the idea further. In some cases, the initial idea may be too narrow and will require fleshing out. In others, it will be necessary to pare away some of the extraneous details to reveal the kernel hidden within.

    The first step towards undertaking your research or refining your question or idea begins with a search of the available literature. Whether your ultimate goal is publishing in an academic journal, writing a paper for school or even simply increasing your personal knowledge, you really should take the time to seek out the extant body of literature on your pet subject to find out what's already been done. After all, if somebody else had the same idea as you and has already gone to the trouble of writing up their findings, there may be very little need for you to do the same. When such a scenario arises, your task is far from finished. You can read that work and see whether your idea was explored to your satisfaction. If it was and you disagree with the conclusions drawn or consider the work done to be sloppy, you can refocus your idea as a response to that other work. Perhaps the results of that research suitably explored your idea but raise further questions that you feel need to be addressed. This new direction becomes the focus for your investigations. Once again, you would see what the extant literature has to say (if anything) on your refined topic, ad nauseam, until you have a very focused and attainable thesis. Yes, this process can be quite laborious and is frequently tedious, but I feel that due diligence at an early stage results in less strife later on and also reduces the likelihood of you looking like an idiot for not knowing the topic material sufficiently well.

    The next step is to consolidate the sources of information that you will use as evidence and support in your research. Some of these will have been identified in the earlier process of refining your topic, but chances are you will be looking further afield for more data. To be effective, it will help if you create a search strategy to keep you on track as well as to provide an audit trail of where you've gone. This way, not only can you be sure not to duplicate your previous steps, but you can methodically show others how you arrived at your end point (if need be). Keyword searches are the most obvious starting point. However, where you employ your searches will often be determined by the topic, target audience and quality of information you seek. University libraries have access to a great many print and electronic journals, not to mention a plethora of books geared for an academic audience. Public libraries also have excellent access to books and to some journals and magazines that aren't geared for an academic audience. Simple web searches can yield many results, the calibre of which is sometimes dubious. When using sources that cite their references, sometimes it can help to follow those up directly: not only will you get a better feel for what the original actually said, they can also point you down other search avenues.
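
    As a purely hypothetical illustration of what such an audit trail might look like (the source, terms, date and counts below are invented placeholders rather than a prescribed format), a single entry in a search log could record:

        Date searched:   12 April 2010
        Source searched: university library catalogue
        Search terms:    "keyword one" AND "keyword two"; limited to English, 2000-2010
        Results:         42 hits, 6 retained for follow-up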

    Working in an academic setting, I admit bias in my preferences for sources but that largely applies to work-related activities. If all you are hoping to generate is a working knowledge of a topic to discuss with your peers, there's nothing wrong with using Wikipedia, a magazine/newspaper article and a blog post or three. If you're hoping for a more exhaustive delve into a topic, you'll probably be best served by even a brief look at the academic literature.
    In contrast to how I've worked in the past, this year I have been introduced to using an evidence table as a tool to assist in the consolidation of all the material I've read for a given project. Rather than having to rely on memory to recall the pertinent details of a given source, the evidence table allows me to record publication details, keywords, main findings and my own comments in a spreadsheet which I can retrieve at my own convenience.
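
    As a hypothetical sketch (the column headings and placeholder entry are my own, not a prescribed template), a single row in such an evidence table might look like:

        Publication details            | Keywords           | Main findings                    | My comments
        Author (year), title, journal  | "topic", "method"  | one-line summary of the results  | how it bears on my question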

    So far, we started by identifying a topic of interest and then refined the topic through a series of progressions. Based on our topic, target audience and desired depth of discussion, we then identified areas where we should commence our literature search. Then, with the appropriate sources identified and obtained, we set to the task of reading our source material and consolidating our notes into an evidence table.

    So what should be the next obvious step? Writing? No.

    I strongly recommend a period of reflection to think about what you've read thus far and to attempt to assimilate and internalise the knowledge thus gained. Even then, you should take some time to consider the structure of the paper you intend to write. What do you intend to say? What tone should you use? Given all that you've read and the tentative conclusions you have reached thus far, what points do you need to make and in what order do you need to make them? Once the general structure of your paper has taken shape in your mind, only then should you move on to putting your thoughts to paper.
    How you choose to write is best determined through trial and error. Maybe you prefer writing free-form (much like how I've written this) only to have to go back later to edit and insert headings etc. Maybe you prefer starting out with a more rigid framework to assist you in hitting all the points you wish to make. Both are perfectly valid techniques and both can be used to good effect in the appropriate setting. All you can do is play around and find out what works best for you. Any need you may have for further revisions of your paper will be determined by the purpose of your writing.
    Teaser Paragraph: Publish Date: 03/02/2010 Article Image:
    By Paul Newall (2010)

    Bradley Monton's new book, Seeking God in Science: An Atheist Defends Intelligent Design, is an exercise in the principle of charity. Rather than join the chorus of critics dismissing Intelligent Design (henceforth ID) as vacuous, a religious conspiracy or pseudoscience, Monton – himself the atheist of the subtitle – attempts to develop it into the strongest form possible and see if perhaps there is anything to it after all. Although doing so may win him few admirers, he sets to the task with enthusiasm and the result is a superb work of philosophy, engaging for specialists and lay readers alike.

    The main complaint about this endeavour is plainly that ID is not a scientific theory of greater or lesser repute but actually an intellectually-upgraded creationism, part of a wider programme that seeks to denigrate science and advance religion. Monton states at the outset that one of the ideas motivating his work is that "for the purposes of evaluating the doctrine of intelligent design, the cultural agenda of intelligent design proponents doesn't matter" (p12). The general principle – "bad people are capable of giving good arguments" (p13) – is obviously sound: it is ad hominem to argue otherwise and the truth or falsity of intelligent design is not logically related to its use in cultural or political debate. The religious persuasions, if any, of its proponents are also irrelevant: it should stand or fall on its own merits.

    However, the principal objection to Monton's work and the complaint of ID opponents is that debates do not work in this way. Monton closes his book by saying that he cares only about "getting at the truth" (p156) but so, presumably, do both ID advocates and opponents. Monton does not wish to "change minds with bad argumentation" (p156) but perhaps what needs to be questioned is not his conviction – hopefully the truth and good arguments will ultimately trump poor arguments, employed in support of some short-term goal – but the assumption that matters are actually (or ever) decided by better arguments and approximation to truth?

    Ideally, the resources and funding available to research would be plentiful and easy to allocate, such that ID advocates and opponents alike would accept Monton's arguments and allow or get involved in some degree of work on ID, alongside evolutionary biology and other areas of science; in reality, programmes and departments already use whatever rhetoric they can muster to try to compete with one another and to convince governments and the public that their work deserves support over and above other possibilities. Such circumstances lead to exactly the situation we see: ID advocates seizing on Monton's work as aiding their own justifications for the validity of ID while ID opponents accuse him (wrongly but understandably) of supporting ID and having a detrimental impact on "real" science, even if both perspectives overstate the influence of a work of philosophy. In fairness to Monton, he explicitly disavows any interest in such issues and wants his book to be read on its own terms, and he is to be admired for doing so and for insisting that some people can and must be allowed to remain outside cultural debates and focus on philosophical arguments.

    Although Monton spends several pages looking at arguments against evolution, he quickly concludes that they are "among the weaker arguments that proponents of intelligent design give" (p28). Responding to the charge that he gives too much credit to ID in "counting non-evolution-based arguments for a designer as intelligent design arguments", he simply concedes that readers may think this if they wish; he bounds his investigation by saying that what he is really interested in are "the non-evolution-based arguments" (p29). He reviews some of the objections to ID on the grounds that it is simply a dressed-up version of creationism or else that the posited designer has to be the Christian God, showing (rather too easily, it should be said) that these do not follow from either what the ID proponents say or from the arguments ID opponents provide. For those skeptical of ID supporting anything other than God as the designer, Monton provides directed panspermia as an option, along with the possibility that we are living in a computer simulation (p41). As with many of his arguments involving possibilities, these do not need to be true; he simply requires alternatives that might be true in order to refute the insistence that some other circumstances must obtain. Ultimately, though, none of this matters: it should be possible to consider ID as an atheist, without the accompanying belief in God and without a desire to destroy evolution or science as a whole. Since Monton is an atheist, he can certainly try.

    The most interesting aspect of the book, for me, concerns the Dover court case. Monton has been severely critical of the decision of Judge John E. Jones III in his paper Is Intelligent Design Science? Dissecting the Dover Decision. Referencing Larry Laudan's earlier criticisms of attempts to oppose creationism via demarcation criteria, Monton addresses the specific criteria that Jones employed against ID. (Later in the book (p73), Monton agrees with Laudan that demarcation criteria do emotive work for us but little else, meaning that the correct response to "is ID science?" is to reject the demarcation problem altogether.)

    The first of Jones's claims is that the arguments against evolution made by ID proponents have been "refuted by the scientific community". Monton responds by noting that even if this is so, it has no impact on the "positive doctrines" (p50) of ID; in other words, the “non-evolution-based arguments” Monton is concerned with. More importantly, even if these arguments were also refuted, it simply does not follow that a refuted doctrine is no longer science (Monton gives the example of Newtonian physics, still taught in schools long after being refuted).

    Monton's reply to Jones's second assertion, that Michael Behe's argument for irreducible complexity is flawed, follows largely from the first. That a theory may have errors or have been refuted altogether is no argument that it is unscientific or that the same non-evolution-based arguments for ID are hopelessly flawed. Monton spends little time on these two points because the decision does not warrant anything further.

    Monton's third criticism relates to Jones's stipulation that science employs methodological naturalism. Monton provides a story – an implausible scenario, by his own admission, but a possible one nonetheless – according to which God is apparently using Morse code to contact scientists, telling them that if they perform their experiments in a specific way then He will cause a miracle. In this story, the scientists do as they are instructed and the miracles happen. The results do not prove that God exists but, according to Monton, they provide evidence for the hypothesis that He does. Since the hypothesis is clearly testable (the miracles could fail to materialise), it follows that the claim that "Supernaturalism is not allowed" in science (p52, due to Robert Pennock at the trial) is false. (As an aside, Monton says in the notes (n44, p162) that his arguments here are "mostly original with [him]". That this site was independently discussing similar things – for example, here – is therefore nice to know.)

    Monton goes on to object to the defense in the trial having not pushed Pennock to clarify his claim that Laudan endorses methodological naturalism (notwithstanding how clearly Laudan has stated his opposition to demarcation criteria like this). Monton says that the defense team "dropped the ball" (p56), an amusing analogy given that Michael Ruse has called Laudan a "Monday morning quarterback" (see here). Although Monton is correct about this, Del Ratzsch's argument about the inherent incompleteness of methodological naturalism might have been better employed here. Ratzsch has argued (see here) – and I have expanded (here) – that adopting naturalism methodologically either means our understanding of the world is inevitably skewed (if it turns out that there are non-natural elements to it, whatever they might be) or else reduces to philosophical naturalism if we assume (or declare) that science can only deal with the natural. Pennock seems to have had something similar in mind (pp64-65) – a "two truths" approach that I referred to in the linked discussion – in which we accept the skew. Monton registers his unease with this resolution of the difficulty and it is interesting that he does not interpret Pennock’s remarks as implying something akin to empirical adequacy or a form of instrumentalism, even though he references Bas van Fraassen on several occasions. Monton might also have used another argument: there simply is no "scientific method" employed in science, as opposed to methods, so invoking methodological naturalism when no one really knows what the methodology is supposed to be (or, for that matter, a coherent definition of what naturalism is) does not seem a very scientific approach.

    Another issue with Jones's proclamation is that it is not clear a priori what will come of adopting an hypothesis, even one involving the supernatural. Discussing Pennock's book Tower of Babel, Monton notes that if there is a supernatural cause or influence present, this does not imply stopping at "God did it". He also quotes Ken Miller in chapter 3 (p112), who argues that a theistic science would "cease to explore, because it already knows the answers". The problem is that this assumes that theists consider God a complete answer. Historically, scientists – or natural philosophers, more accurately – who believed in God, His creation and that God's existence could be inferred from the natural world did not cease investigating anything; instead, they sought to discover how God might have constructed the world, believing that in so doing they were granting greater glory to Him. Monton looks in particular at Newton because Pennock tries to suggest that Newton adopted methodological naturalism (p63); Monton provides a quote from Newton’s Opticks in support but really the matter is straightforward for Newton scholars (see my interview with Stephen Snobelen here): "it is now clear that some of Newton’s pre-existing theological and alchemical ideas actually helped inform some aspects of his natural philosophy or science". These kinds of motivations and influences are skimmed over in debate because the history of science fails to support anachronistic claims about the methodological ideas of early scientists and indeed actively undermines them – quite an irony given the criticism of ID advocates that they do not know enough about the subjects they get involved in.

    It is in the section entitled "some somewhat plausible intelligent design arguments" that Monton begins offering reasons why ID might be more credible than the evolution-based arguments suggest. The "somewhat plausible" category includes the fine-tuning argument, the kalam cosmological argument, an argument from the very existence of life at all and the simulation argument. Each of these is explained and worked through in detail, setting out what they appear to suggest regarding design but also noting possible shortfalls or areas of concern. Monton's objective here is not to provide a definitive argument for design but to show that these arguments are "somewhat plausible" or at least not easily dismissed.

    Perhaps the most interesting part of this treatment occurs with the third argument, looking at how it could be that life came about at all. Monton invokes his own discussion of the probability that life could come about in an infinite universe (see his Life is evidence for an infinite universe and his interview here) and concludes (p104) that "one shouldn't use the development of life from non-life to argue for the existence of a God-like designer". This is because his infinite universe argument suggests that any event with a non-zero probability is (almost) certain to come about, including the jump from non-life to life, however unlikely it might be. For Monton, the matter turns on an assessment of conditional probability: if we assume God exists, how likely is it that life would come about? Alternatively, given that God does not exist, how likely is life? Monton offers his own opinion that life is slightly more likely to happen given that God exists – hence a "somewhat plausible" argument – but that this likelihood is insufficient to stop him being an atheist.
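
    To make the structure of this assessment explicit (this is my own gloss on the reasoning, not Monton's notation): writing L for “life arises” and G for “a God-like designer exists”, the argument turns on comparing the conditional probabilities P(L | G) and P(L | not-G). The existence of life supports design only to the extent that the ratio P(L | G) / P(L | not-G) exceeds one; if, as Monton judges, P(L | G) is only slightly greater than P(L | not-G), the ratio stays close to one and the evidential boost is weak, which is why the argument counts as merely “somewhat plausible” and leaves his atheism intact.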

    The final chapter ("an extra free bonus", Monton calls it (p133)) looks at whether ID should be taught in schools. Monton considers the shortcomings of school education as it stands; after all, the presupposition behind most criticisms of teaching ID is that science education otherwise functions as it should. Monton quotes Carl Wieman at the University of Colorado, who has argued (along with plenty of other educational reformists) that improving science education "requires abandoning the longstanding and widespread assumption that understanding science means simply learning a requisite body of facts and problem-solving recipes". Monton advocates "non-proselytizing teaching", by which he means not promoting orthodoxy or an unorthodox alternative but instead seeking to develop critical reasoning via "presenting them with the issues that we as a society debate now" (p149).

    Monton sets out the specifics of his ideas by considering some objections. The first is that allowing ID to be taught would be teaching religion, to which Monton replies that there is nothing explicitly or inherently religious about ID, even if it turns out that the designer really is God and no matter how many ID advocates believe this. He also touches on the complaint that critical thinking – not religion – is what we should be teaching children, responding that if we want to achieve this then comparing and contrasting ID with orthodox theories would help facilitate it. Indeed, if we do not have a "bad" theory to teach in such classes, and if this "bad" theory is not expounded in sufficient detail to at least give students an idea of why it initially appears credible, then it is difficult to see how such critical thinking lessons could come about.

    The second complaint is that teaching ID misrepresents the status of ID in science, granting it false credibility. Monton says that some legitimate scientists – legitimate in the sense of their accreditation, at least – support ID, but no doubt ID opponents would retort that an endorsement of ID is enough to undermine their legitimacy. Ultimately, Monton does not need to tackle this complaint: he states that he would not wish to see ID taught if the lessons "pretend that it is widely scientifically respected" or else if it receives "equal time with mainstream scientific theories". The chief requirement remains the "intellectual development of the students" (p151).

    Skipping a step, the rejoinder that this would not be teaching genuine critical thinking is the fourth objection Monton looks at; that is, the claim that advocates of ID are opposed to critical thinking because it leads to the questioning of their religion, so they seek to restrict it to evolution and to creating a false impression that biologists actually believe there is something worth debating. Monton's reply here is a little weak: he wants all students – theists and atheists alike – to be challenged in school, even if it means their own beliefs are subjected to uncomfortable scrutiny, but it is unclear how this ideal translates into classroom practice. After all, ID opponents do not deny the value of critical thinking in the science classroom, but they do worry that ID-supporting teachers will skew their presentations or suggest a controversy or debate in biology where neither exists. That said, it is not at all obvious why ID-opposing teachers would not also be unsuitable and if we try to achieve control over the classroom to preclude inaccurate teaching then perhaps no one will want to teach any more?

    Returning to the third concern, Monton looks at the insistence that only the current consensus should be taught as science, to which he offers two responses. The first is simple: we already fail to teach the consensus because students learn about Newtonian physics, which was shown to be false and replaced by relativity. We might protest that Newtonian physics is still truthlike in most situations a student would encounter in a science class but the point is clear enough: deciding what is taught involves more than a straightforward invoking of consensus opinion. Monton also returns to Wieman's conception of education and wants to avoid "treating science as a monolithic body of facts" (p152): teaching the consensus view most of the time need not mean doing so always. ID opponents would doubtless reply that there is already insufficient time in the curriculum, which only goes to show that tinkering around the edges of education – and even discussing the inclusion of ID in these terms – is not really enough.

    The final objections are that the question of ID's validity is not really a controversy and that teaching it as such is asking too much of both students and teachers. Monton's counterarguments here are unconvincing: it may well be that some ID advocates are presenting science-based arguments for it and that more can be achieved in the classroom than we give credit for, but Monton wants students to know that "science is a dynamic enterprise" and holds that learning about scientific controversies "can give them a better understanding of how science actually happens" (p155). The problem is that ID needs more than a handful of scientists and "somewhat plausible" arguments for it if it is to replace some other potential controversy as a teaching tool. This, after all, is what ID opponents will argue: if we are going to teach how science "actually happens" and show students scientific controversies then there are more than enough options within mainstream science; we do not need to bring in ID. These controversies – such as progress within evolution or the debate over adaptation – exist precisely because science is not monolithic, even if its public rhetoric (usually to combat ID, sadly) may often suggest otherwise. ID would only be preferable (at the moment, anyway) if it has other merits that the other possibilities lack, such as it being a "socio-culturo-political" dispute (p154).

    Overall, the book is an easy read and a great success. Even if the reader objects to Monton’s claims, he gives a variety of engaging arguments – involving dartboards and the likes of Dr Evil – and his work is exactly the kind of applied philosophy that might help people appreciate why we study the subject in the first place. Moreover, it is hard not to admire his candour in envisaging an audience of interested observers (the "remnant" of Albert Jay Nock, perhaps?), rather than those "just looking for the latest salvo to defend their side in an ostensible culture war" (p157). It may not work out this way and some ID advocates may be critical of the false legitimacy Monton supposedly provides ID, but if philosophers of science are not both willing and able to engage in this kind of study then who will?
    Teaser Paragraph: Publish Date: 02/16/2010 Article Image:
    So you want to learn another language, but you aren't sure what method to use. We'll look at how to make a language learning notebook so that you can structure your time and attack grammar, vocabulary, and transcription. This is my simplified version of a Russian guide someone on the How to Learn Any Language forums translated into English.

    Even busy people can use this method. Do you have 30 minutes a day? If so, you can do this. And I guarantee everyone reading this has at least 90 minutes a day they can spare. If not, you should probably rethink your life a little bit. There are a lot of opportunities to study that we unfortunately do not take advantage of. Do you have a lunch break at work? Use some of that time. Waiting in line for 4 hours at the Department of Motor Vehicles? Use some of that time. Long bus ride? Dive into a grammar book.

    I'm going to mention this first, because I think you should do this first before making a language learning notebook:

    How best to learn another language is controversial, but generally once you get to an intermediate level, most people stop arguing about what method to use and just advise that you hear and see the language a lot, and attempt to speak and write it yourself. I think when you first start learning a language, you should gather a bunch of media, like movies, music and TV shows in your target language. Radio is still the best if you're learning a somewhat obscure language. If you're reading this, you have access to the internet, so you'll be able to find something (unless you want to learn a dead language such as Sanskrit). Just google "Swahili radio", or even find out what “Swahili radio” is in Swahili. Listen to all that stuff, maybe have music in your target language playing softly in the background so you can get used to hearing it and cast away that foreign feel.

    While you're gathering media, you should casually read about the grammar so you can see how it differs from English, or any other languages you know if you were lucky enough to grow up in a multilingual household. Be prepared for anything and everything. If you're learning some exotic language that diverged from English's ancestor 10,000+ years ago, it's going to be shockingly different, so you have to learn how to stop thinking in English. You will be inclined to call things you are learning silly or illogical. That's your English mind with all its biases and limitations kicking in. And of course, people learning English will be saying the same thing. That is their non-English mind getting in the way.

    School is terrible at teaching languages, because it attempts to impose a chronological order on the language acquisition process, i.e. Lesson 1, Lesson 2, etc. This is bad because there is no coherence between everything they're learning. Textbooks often seem to lie, or intentionally hold back information so you don't get hit with a bunch of information at the same time. I have a Tagalog book that committed heresy by referring to verbal tenses. Unfortunately, Tagalog doesn't have true verbal tenses; it has aspects and focuses. At a beginner level it might be okay to say that, but no doubt later you will get confused when you have to unlearn it in order to learn how to express subtle things in the language.

    Do not get a generic school textbook. I recommend getting at least 2 different grammar reference guides. For example, I have a grammar reference book for Japanese called "Japanese Sentence Patterns for Effective Communication". It has around 140 different grammatical patterns you can read about. The book is structured like "Section 3, Expressing Giving and Receiving", with around 9 different grammatical patterns in that section concerning giving and receiving; there are around 12 sections, each with the grammatical patterns concerning that topic. Each entry shows an example sentence where the pattern is used, briefly explains it, then gives more examples and a few sentences you can try to translate on your own (with the "answer"). The index is good because it organizes everything by the key words and phrases that are the crux of the grammar. If I encounter the kara particle and I'm not sure how or why it's being used in a sentence, I go to the index, which lists "kara" and all the pages where it is treated:

    kara 52, 83-84, 170, 190
        from 52
        because 83-84
        etc.

    (meaning that on pg 52, it explains how to express from with the particle kara). This is perfect, and you should look for a grammar reference that is similar to this. Make sure the index is thorough. You will be referring to it often.

    When you have your grammar reference guide, just casually go through the entire thing and see how the language is structured. Michel Thomas, who made his famous audio courses, said, "If you master the verbs, you master the language.” Pay special attention to the differences and similarities between your language and your target language. Even if the book tells you to your face to use this verbal construction to express the present progressive, is that really true? How far can we take that statement?

    Example:

    You just learned how to express the present progressive in Japanese. You inflect the verb by changing the plain form to the te-form, then adding iru to the end. You also learned that the present progressive can be used to express that the subject of the sentence remains in a certain state rather than actually performing an action, e.g. "the picture is hanging on the wall" as opposed to "the boy is running". It's fairly similar to English. However, there are "exceptions" to this rule, by which I mean a native Japanese speaker isn't always going to use the present progressive to express that a subject remains in a certain state, or that a subject is performing an action; sometimes they'll use the plain form for this. When you learn a new rule, you will over-generalize, and there's no way to combat this other than remembering that 98-99% of your assumptions and generalizations will be false.

    Take this real world scenario: Someone explains something to you, or they might ask you if you understood (notice the past tense). In Japanese you have many choices: wakaru (infinitive form, to _____), wakatta (past tense), wakatteiru (present progressive). Some contexts even allow wakatteita (past progressive). This is very different from English, because it would be strange if a conversation went:

    "Do you understand?"
    "To understand."
    "?????”

    Just remember that even though a grammar guide might be telling you X, don't assume that it told you everything, and don't even assume that it's "correct". And at the same time, when you learn the rule, you're more than likely going to be tempted to use it incorrectly even if you have example sentences right there.

    Ok so, just casually read about your target language, maybe watch a few movies or something, get your grammar books/grammar guides ready, get a thick bilingual dictionary. If you can't find a bilingual dictionary on the internet, there will be one in print. If you can't find one in print, you're probably learning a dead language that 500 people speak in West Africa, in which case you don't need anyone's help since you're probably writing your thesis in college or something lmao.

    It's important that you get at least 2 grammar guides; no single source will be comprehensive enough. Preferably you should have 3+ at hand. That doesn't mean you need to slowly read each one: choose a primary one, then keep the others at hand in case you need clarification or further discussion of difficult topics. Just look at the grammar for maybe a week or so, and try to understand even the really advanced stuff; you'll thank yourself later when you encounter it and think, "Oh! I remember reading about that..." You'll also be able to pick out grammar points in a sentence that won't be so obvious to someone who is learning in school. You're going to encounter stuff that straddles the border between vocabulary and grammar, so someone who doesn't take the time to continually look at the entire language, even as a beginner, will miss a lot of things. They will also start looking up "words" in the dictionary that may be grammatically complex or untranslatable, so the dictionary won't be enlightening.

    Example:

    You're trying to read a Japanese paragraph and you see this: "kare ha daigaku wo deteiku". This means "he will leave the university". The verb here is "deteiku". A beginner who didn't recognize the verb deteiku would be tempted to look it up in the dictionary. When they look it up, it won't be there.

    Let's see why. Deteiku is actually the verb deru (to leave, exit) inflected to the te-form (dete) with iku attached at the end. Iku usually means "to go", but it can also be used as what is called an auxiliary verb: you attach iku to the end of the te-form of a verb. When you do this, it means that the subject of the sentence is doing something that is moving away from the speaker either spatially or temporally.

    So if I said "kare ha daigaku wo deru", I am saying he will leave the university. But if I use deteiku instead of deru, I am emphasizing that he isn't just leaving the university, the subject of the sentence (he) is moving away from me either spatially or temporally. It eliminates the possibility that maybe he is leaving the university to come to my house, or it eliminates the possibility that I could be going with him, or whatever.

    There is no real equivalent in English, so I don't know if what I just said made much sense. I tried to explain this to someone who was actually learning Japanese, and I was unable to make him understand. But this only emphasizes my point that it's disadvantageous to learn one grammar point at a time and hammer it into your brain like they try to do in school. This doesn't work, because until you actually start using the language, you're going to forget everything several times. You're also going to develop tons of biases and over-generalize more than usual if you go too slowly.

    OK FINALLY, let's actually look at how to make your language learning notebook, lawl.

    language notebook

    Materials needed:

    - At least 2 grammar reference guides
    - A spiral notebook. You can do this with looseleaf paper; just make sure you don't lose the pages and keep them in order.
    - A bilingual dictionary (a language you are fluent in, and your target language)
    - Reading material in your target language

    What we're gonna do is take a paragraph in your target language, and break it down on a sheet of paper.



    In the top left, you should have a paragraph that is somewhat challenging (to you). Don't make it too challenging. If you just started learning, start with children's books or folklore or something. Later you can move to academic stuff and news or challenging literature. Make sure there are at least 7 or so unknown words (or just elusive words you aren't sure how to use) in the paragraph you chose. In advanced stages this is hard, since you will know a lot of vocabulary; at that point you wanna focus on elusive grammar. In beginning stages, anything will do.

    In the top right, you will put your attempted English translation (or whatever language you know extremely well) after analyzing the grammar and vocabulary in the paragraph.

    In the bottom left, put any thoughts you had, plus very brief notes on grammar you encountered that was either challenging or new to you.

    In the bottom right, we will divide the box into 2. We will fold along the red line. You'll need to cut or carefully tear the top part of the bottom right box so that you can fold along the red line. On the left side of the red line, you will put vocabulary you did not recognize. Write the words vertically and number them. Now fold the box over; you should not be able to see the vocabulary words on the inside. On the outside part, you will put the translation. Remember to write down only 1 or 2 words. Never put like 7 possible translations (even if the dictionary lists that many). Put the translation that is relevant to the context of the sentence, and perhaps 1 more possible word that doesn't seem to fit in the sentence, just to remind yourself the word may be very fluid or have many meanings. Use this to try to memorize the vocabulary.
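    If it helps to picture the page, here is a rough text sketch of the layout described above (my own reconstruction; the original post included an image I can't reproduce):

        +--------------------------+--------------------------+
        |  paragraph in your       |  your attempted          |
        |  target language         |  translation             |
        +--------------------------+------------+-------------+
        |  thoughts and brief      |  unknown   | 1-2 word    |
        |  grammar notes           |  vocab     | translations|
        |                          |  (numbered)| (on flap)   |
        +--------------------------+------------+-------------+

    The vertical line splitting the bottom-right box is the fold ("red") line: vocabulary goes on the left of it, and the short translations go on the flap that folds over the vocabulary, so you can quiz yourself by flipping the flap back and forth.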

    If you're unable to give a good translation, or you can't pin down some tough grammar in a sentence, that's okay. Don't spend any more time on it; move on to the next paragraph. As you complete pages, you'll pick up a little grammar here, a little grammar there, and eventually you'll be able to go back and fully understand what you could not understand before. Plus you can fix/touch up translations to make them "more correct".

    After completing a 100 page notebook, you will have mastered something like 600-1000 words (at 7 or so new words per page, 100 pages works out to roughly 700, plus whatever else sticks) and quite a bit of grammar. At first it will go extremely slowly when you're just starting your language. You might only get through 1 page every 4 days or so if you have a busy life (probably even slower), but it speeds up as you learn more.

    The learning curve is flat at first, then suddenly you learn a lot until you hit a brick wall at an intermediate level; then it speeds up again once you've mastered around 3,000 words, at which point you're capable of reading a decent amount of stuff w/o a dictionary.

    You will learn your language efficiently if you use this method. Forget classes and textbooks. Many polyglots/linguists have used this "classic" method. It works.

    I was going to post a page from my notebook, but unfortunately my scanner is being unfriendly, so maybe later. You'll know what to do when you try it yourself though. Get creative. Try whatever works for you.

    If you find this method cumbersome (it will be if you've never tried to learn another language before), it might be a good idea to get a book with some exercises and work through that first, so you already know some of the language.

    I recommend pairing this method with Arguelles' script learning method, at least if your target language uses something other than the Roman alphabet:



    http://www.foreignlanguageexpertise.com/foreign_language_study.html#sfl

    2) Scriptorium foreign languages (Arabic, Sanskrit, Chinese)



    Arguelles also reviewed a bunch of language learning book series, so I recommend watching those videos yourself (he says some interesting things), and he has quite the impressive collection.

    Click here to see the videos.

    Inspiration for a rainy day:


    Teaser Paragraph: Publish Date: 06/30/2009 Article Image:
    By Brian Morton (2009)

    3.1 Fundamental Historicism

    Another position one could take is that being in its most fundamental nature is different in different periods of history. Perhaps being looks very type-like in the early moments of being, but once substances evolve, substances dominate the rest of the history of being. Rather than thinking of any period as a hybrid in which types and substances are equi-fundamental, we might think that types are more fundamental than objects in one epoch, but that objects are more fundamental than types in another epoch.

    This is one plausible way of trying to interpret medieval Chinese Neo-Confucianists, but I'm not convinced it's the right one. Contemporary pictures of the Big Bang, in which the first second of the universe is divided into epochs, could be another good example of this kind of approach. For example, during the Grand Unification Epoch (say from around 10^-43 to 10^-35 seconds into history) it would make little sense to think of the world in terms of particles, and not much in terms of fields (there are still only 2 fields!), even if these are apt metaphors for later on, but gauge groups are already apt. But by the Quark Epoch (say 10^-12 to 10^-6 seconds) the four fundamental fields have all become distinguishable (although the temperature is still too high for stable hadrons), thus field ontologies are apt, but thing ontologies are still probably not very appropriate (certainly hadrons are vastly more thing-like than quarks). But by the 2nd second, even thing-ontologies begin to become apt ways to describe reality.

    Still, physicists don't usually seem to talk or think this way. Their goal is to examine "laws" of physics, thus those features which are invariant from epoch to epoch, so they don't like to think that being itself might alter fundamentally from age to age. That would bring into question a lot of the uniformitarian assumptions they need to make their observations in this age salient for making retrodictions about past ages. So you could interpret stories about the moments just after the Big Bang as examples of historicism at the level of fundamental ontology, but it is not at all clear that that is the best way to understand why physicists are telling these stories.

    Another example of fundamental historicism might be the thought of Hegel. Again he's hard to interpret, and smart people fight about exactly what he is trying to say. But one way to read him is as a consummate historicist, for whom the philosophical categories, even knowledge, logic, and right, are evolving through time, so that what ought to count as knowledge varies from world historical age to world historical age (and in somewhat parallel from stage to stage in the evolution of the phenomenology of spirit). This appears to be true for Hegel, even of being at its most fundamental.

    Hegel believes in a single existing ultimate Being, the Absolute, which can make him look like a Monist. But for Hegel the Absolute is in actuality only in the future; world history is the process of the Absolute attaining full concrete actuality. Hegel is a monist-ontologist, but only about the future. Prior to the absolute embodiment of the Absolute, Hegel might appear to be a more traditional substance-ontologist, but this is misleading. "The living Substance is being which is in truth Subject, or, what is the same, is in truth actual only in so far as it is the movement of positing itself . . . it is the process of its own becoming" (Phenomenology of Spirit, p. 10). So is he a process-ontologist? His notion of "principle" makes him look like a type-ontologist. He elsewhere makes it look like the process of phenomenology or of world history is the coming-to-be of knowledge or consciousness of the Absolute, rather than the coming-to-be actual of the Absolute itself. So perhaps being is fundamentally a Monist-ontology all along, but consciousness of being is a process of coming-to-be conscious of being, which progresses in stages.

    In the end, I'm not sure that Hegel is really a fundamental Historicist, any more than the physicists or Neo-Confucians are. But if not, this is a logical space in fundamental ontology that someone could move into, if no one else already has.

    3.2 Fact-Ontologies

    Like Heidegger's 1926 aborted attempt at fundamental ontology, three other important 20th century approaches were pioneered in the 1920s: early Wittgenstein's 1921 fact-ontology, A.N. Whitehead's 1927 process-ontology, and Bohr and Heisenberg's Copenhagen ontology of 1927.

    For the Wittgenstein of the Tractatus, objects are real only as they actually exist and that is as components of states of affairs. The world at its most basic level is a collection of facts or states of affairs.



    "2.011 It is essential to things that they be possible constituents of states of affairs."

    If substances are basically noun-like beings, then states of affairs and facts are sentence-like beings.



    "2.06 The existence and non-existence of states of affairs is reality (we also call the existence of a state of affairs a positive fact, and their non-existence a negative fact.)"

    States of affairs, or facts, can also be called situations or even pictures. Indeed, for Wittgenstein, a grammatical statement is a kind of picture of a state of affairs. A sentence or state of affairs can be broken down into smaller components, names, predicates, functions, objects, positions, etc. But these smaller components only have meaning, sense, reality, or even possibility in the context of the sentences or states of affairs of which they are components (3.3).

    Anything smaller than a state of affairs is only real in a fact-ontology in the context of a state of affairs. Properties, for example, are perfectly sensical forms of being, but they are dependent in their being on states of affairs:



    "2.0231 The substance of the world can only determine a form, and not any material properties. For it is only by means of propositions that material properties are represented - only by the configuration of objects that they are produced."

    What makes a rose red are the configurations of objects which the rose figures in, or indeed could possibly figure in. Fact-ontologies also explain the phenomenon that motivated type-ontologies, that the concept of circle or mammal can be predicated of other things, or have other things predicated of it. "All humans are mammals" and "all mammals are animals" both make sense, but "mammals" is not a subsisting thing playing both roles; it is a commonality of our language being used in both pictures, or statements asserting that possible states of affairs do in fact obtain. Nor is a fact-ontology Monist. It makes sense to talk about the one-great-fact, "all that is the case," but Wittgenstein is confident that "1.2 The world divides into facts." There are other facts besides the one great all-encompassing fact.

    What the later Wittgenstein thinks is a source of controversy, but it sure looks to me like he partially backs away from a fact-ontology. What is supposed to be special about facts or states of affairs is that they are the locus of meaningfulness for names, and of sensicalness. But later Wittgenstein seems to worry that even a proposition is not enough context for sense and meaningfulness; you need a language-game and indeed, a language-game needs to be embedded in a way of life.

    Wittgenstein takes the attitude that we should thus live our ways of life, and try not to get too hung up on the anomalies created by the ways we talk about our ways of life. One could instead have argued that since facts and properties and types and functions and names and such are all non-fundamental aspects of being, derivative on ways of life, it is ways of life which are the fundamental level of being. But I don't think he draws that conclusion, and I've never seen anyone assert a lifestyle-ontology (although come to think of it, Heidegger was drifting that way before he gave up).

    Another problem with fact-ontologies is how little they have to say about time and temporality. Wittgenstein is mostly interested in formal logic, where time does not really matter because, as he argues, the process is always identical to the result.

    Fact-ontologies did not die with Wittgenstein, though; later ontologists like Menzies (1989), Mellor (1995), etc., are all pretty close to the fact-ontologist picture. For Wittgenstein a fact is not a true proposition, but the aspect of the world that makes a true proposition true. Since the word gets used both ways in English, fact-ontologists usually need to make a terminological distinction to disambiguate. Mellor calls a true proposition a fact, and what makes it true a "facta." Menzies calls these abstract situations and real situations.

    3.3 Process-Ontologies

    It is traditional to attribute the beginning of process-ontology to Heraclitus. He does say "all things are in process ...", but he also says, "all things are One ..." and "everything taken together is whole but also not whole ..." and "to God all things are beautiful, good and just ..." and several other sayings about all things. On the question of Heraclitus' ontology I always recommend Richard Geldard's nuanced Remembering Heraclitus. Like Aristotle, Heraclitus is too nuanced to pigeon-hole comfortably.

    The ancient Chinese are far more plausible early process-ontologists, but they had little influence on the West in this regard until recently. Likewise, Leibniz makes some real stabs away from the substance and type-ontologies he is familiar with, but doesn't really wind up with process as his key notion. Henri Bergson, early Whitehead, and even Peirce and James have some foreshadowing of process-ontologies, but their formulae are often quite clunky. Bergson argues for duration as a form of qualitative multiplicity. Early Whitehead, coming from a math background, argues for a field-ontology in which objects are actually fields with both spatial and temporal extensions. I'm not going to pretend to understand Peirce's obsessively triadic story here. James, too, is clearly rebelling against substance-ontology in many places, but he doesn't really have anything coherent to replace it with.

    You can find other precursors. The Stoics insisted that all existents were either actors or acted upon, but then asserted that all and only physical things fit this criterion, and fell back to substance-ontology. The Chinese reflection is sometimes interpreted as putting processes of change at the center of reality rather than categories of static being like nouns or adjectives. I've already mentioned the Yin-Yang school, the 5 elemental processes, and the Yi Jing. The text called Hung Fan ("Great Norm") does have some of this, and it is expanded by Zou Yan. But by the time the Yi Jing is interpreted by the Neo-Confucians, we have two distinct but inter-related layers of reality, "the tao of every class of things" and "the tao of the transformation of all things." Change is one of the central metaphysical concerns for the Chinese thinkers, but so is type; it's hard to say if we have quite a process-ontology even here, although maybe we do. At least by Whitehead's 1929 masterpiece Process and Reality, there can be no doubt that we have a distinctive process-ontology approach. Later folk like Hartshorne, Weiss, Samuel Alexander, C. L. Morgan, and A. P. Ushenko are cited by Nicholas Rescher as process philosophers in his Stanford Encyclopedia of Philosophy article on "Process Philosophy" as well.

    The idea is that, rather than temporally enduring substances being the fundamental (along with ways for substances to be), the fundamental beings are processes and ways for processes to change. Change and temporality are constituent factors in some way of being, along with their neighbors, alternation, striving, novelty-emergence, and contingency. Field-Being thinker Lik Kuen Tong puts it well, "The world is not an assemblage of independent, substantial entities; nor is it reducible to a determinate totality of atomic facts. It is rather a Great Flow or Great Ocean of Becoming ..." or elsewhere "Field-Being philosophy is based on the fundamental intuition that Reality is Activity, not Entitivity."

    For process-ontologies, being is verb-like: it happens rather than is. Other ontological categories, like objects, properties, fields, functions, types, facts, etc., can all exist, but need to be re-worked as ways for processes to change. So, for example, my coffee cup on process-ontology is a stable pattern in the changes of the processes making it up. It is a collection of atoms moving, but moving in ways that constitute its temperature, crystalline structure, motion relative to my desk, etc. These atoms are themselves relatively stable patterns in the changing of the processes that make them up, the orbit of the electrons, the motion of the protons and neutrons, etc., until we get down to a level of description where the patterns aren't even regular enough to make object-like metaphors tempting. Likewise, even on large scales, we can be tempted to object-talk when processes are behaving nice and predictably, but the more novelty emerges, the more tempting it is to revert to the process talk that is fundamental. Living animals, and trains of thought, are especially process-like because they are unpredictable, or we might say creative or surprising. Properties will then be relatively stable patterns in the behavior of the more object-like processes. My Coke can is red, and by "red" a process-ontologist means "looks red to me," that is, it tends to make me alter my thought-processes in ways I have come to habitually label as red. But properties, too, depend on stability of the behavior of processes. If the processes start behaving especially novelly, I have to create new conceptual categories for properties. Suppose that, much to my surprise, my Coke can starts exhibiting the following behavior: it seems red to my right eye but green to my left eye, so that my brain starts flummoxing around with how to interpret the visual signals in terms of my categories. In this case, process-ontology says I need to revise my system of approximation of processes into property-like stable patterns, because the patterns aren't quite as stable as I had previously thought.

    Process-ontology is motivated partly by 20th century grappling with the weirdness of sub-atomic "particle" behavior (which isn't very particle-like at all), but it's got lots of other motivations too. It's a way to try to take evolution seriously and build evolution into the overall picture of being. After all, natural kinds like species or genera seemed like great exemplars of changeless ultimate being to advocates of Plato's theory of the forms, but the understanding that they change too is part of what undercuts type-ontology in modern days. It's been tempting to theologians trying to reconcile God as being with evolution and human freedom. It gives free-will a metaphysical basis that is hard to match in more deterministic ontologies.

    3.4 Trope-Ontologies

    In 1953 Donald Williams coined the term "trope" as a metaphysical category. Tropes have been described as "abstract particulars" (by Stout in 1923!) and "concrete universals." To use Michael Moore's example (Causal Relata, 2004), consider the claim "This dog is white." On a thing-ontology we are going to have two entities participating in the truth making of this claim, a particular, concrete thing, "this dog" and an abstract universal property "being white." A tropist asserts that there is another entity here, which is part of the truth-making of the claim, the particular whiteness of this particular dog. It is abstract in a sense (it is the whiteness of the dog, not the whole of the dog) and particular in a sense (it is the whiteness of THIS dog). It is concrete (not just any whiteness but a concrete whiteness) but still universal (it is inherently related to all other whiteness despite its concreteness). A weak tropist might think that the object, trope, and property are all truth-makers of the claim, basically just adding tropes into the traditional object/property ontology. But an "ardent tropist" thinks that tropes are the fundamental layer of being, and that objects and properties are derivative upon them. Objects become, on this account, collections of tropes, the dog is the sum of that dog's particular features. Properties become patterns of resemblances between tropes, whiteness is an abstraction of the similarities between all the different particular white tropes. Ardent tropists include D. Williams, K. Campbell (1990) and D. Ehrling (1997). Tropes are nice for trying to make sense of causality. As Campbell puts it "when you drop it, it is the weight of this particular brick, not bricks or weights in general, which break the bone in your particular left big toe." On the other hand, trope-ontologies have some trouble differentiating themselves from fact-ontologies. How is the particular whiteness of this particular dog, metaphysically or ontologically distinct from the fact that this particular dog is white?

    3.5 Coping-with-Quantum Ontologies

    The last great family of ontologies was also born in the 1920s: the many attempts to cope with the weirdness of quantum mechanical results in formal ways. Heisenberg and Bohr, collaborating in Copenhagen around 1927, became convinced that in order to make sense of quantum mechanics it was necessary to re-envision the fundamentals of ontology away from a pure object/predicate picture. Recent polls show that the Copenhagen interpretation is still the most popular interpretation among quantum physicists, but that it does have real competition.

    The main idea of the Copenhagen interpretation is that there is a "wave-function" of relative probabilities of any given physical system being in various alternate states of being. The wave function is a way of mathematically modeling several distinct ontological notions together: that of a system, a set of possible states of being for the system, and relative probabilities of being in those states of being. It assumes that descriptions of nature ought to be probabilistic all the way down, all the way to the most fundamental descriptions possible. However, the Copenhagen interpretation also assumes the fundamental reality of "wave-function collapse." It assumes that measuring devices are classical, and that they have and impose classical object/property metaphysics. When I measure the position of an electron in the system, its position goes from being a wave-function of various possible locations at various probability-levels to being a specific definite location, at probability 1, although at the same time its wave-function for vector of motion becomes immeasurable. So, in the Copenhagen interpretation, being normally resides in an undifferentiated, highly probabilistic state, lacking in objects or properties, but when it is measured it becomes object-like and takes on traditional properties. In this picture properties are momentary results of measurement activities, and the normal state of things is to have a range of property-probabilities instead. It is unclear exactly to what extent the Copenhagen interpretation was intended to be an ontological position, rather than, say, an epistemological one (although it was intended to be fundamental). Bohr, for example, claimed "It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about nature." But he also said, "Nothing exists until it is measured." Likewise, Bohr and Heisenberg were not completely on the same page, and there are real disputes in how to interpret Bohr. Just how much of a Positivist was he? Just how far down the subjectivist road was he willing to go with Heisenberg? Jan Faye's Stanford Encyclopedia of Philosophy article on Bohr argues for a roughly Kantian interpretation of Bohr's thought. Indeed, Bohr seems to change his position over time: early on he speaks of Heisenberg's "uncertainty relation" as if the probability waves represented epistemic limitations, but later (after the Einstein-Podolsky-Rosen objections are raised) he speaks of the "indeterminacy relation" as if the problem is ontological rather than just epistemic.
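    To make the collapse talk slightly more concrete, here is a minimal textbook-style sketch (my illustration, not Morton's, and using a two-outcome system rather than an electron's position). Before measurement the system's state is the superposition

    $$\lvert\psi\rangle = \alpha\,\lvert 0\rangle + \beta\,\lvert 1\rangle, \qquad \lvert\alpha\rvert^{2} + \lvert\beta\rvert^{2} = 1,$$

    which assigns only probabilities, |α|² and |β|², to the two possible outcomes. Measuring in the {|0⟩, |1⟩} basis yields outcome 0 with probability |α|², and afterwards the state is simply |0⟩: the range of property-probabilities has collapsed to a single definite, object-like property held at probability 1, which is the shift from probabilistic being to object-like being that the Copenhagen picture treats as fundamental.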

    So regardless of what exactly Bohr or Heisenberg thought, there are a variety of ontological positions in the rough vicinity of their thought which have all become populated since their time, by people trying to cope with the results of quantum mechanical experiments. You could treat the wave functions as genuine descriptions of the being of nature (rather than just our knowledge of nature), or be agnostic on this point. You could treat the collapse of the wave-functions as a genuine ontological change in the structure of being, or merely as an alteration in our knowledge of being. You could even interpret wave-function collapse as being caused by the presence of a conscious observer, often called the "consciousness causes collapse" interpretation.

    There have been many attempts to square the results of quantum physics with more traditional ontologies. If we give up the notion that the wave-functions are complete descriptions of the system, there might be a hidden variable which determines which of the outcomes occurs (including the Bohm-deBroglie version), and we can re-admit determinism into the picture and rest with a traditional substance-ontology. Indeed, the null interpretation and pure instrumentalist interpretation can probably be thought of as leaving all the ontological possibilities open, including traditional substance-ontology. If we take the ensemble or statistical interpretation, the probability claims of the wave-function only hold for large groups of systems, not for each case; we can give a Frequentist reading of the probability claims and again we can have a traditional substance-ontology. Other pictures leave us in mildly non-traditional substance-ontologies. The Many Worlds and Many Minds interpretations of quantum mechanics suspect that all the outcomes of the probability wave-functions occur, but that they do so in separate worlds or mental spaces. This obligates us to an awful lot of worlds (or mental locations), more than in many substance-ontologies, but it allows the ontological make-up of each world to be normal old objects and properties.

    But there are, so far, at least five other interpretations of the quantum mechanical results which leave us in fundamental ontological positions other than substance-ontology or Copenhagen ontology: consistent histories, Quantum Logic, Cramer's transactional interpretation, Van Fraassen's modal interpretation, and Rovelli's relational quantum mechanics. The consistent histories interpretation was advocated by Hartle and Gell-Mann in the late 20th century, and is often thought to clarify the Copenhagen interpretation without being distinct from it. Here systems have multiple possible past-states (histories), but not all possible histories are consistent (i.e., obey the laws of classical probability). Quantum mechanics then becomes a set of constraints on the possible consistent histories of a system. In this picture, the issue is not so much that measuring causes a change in the system, as that measuring changes which of the possible histories are consistent with our information. Consistent histories allows objects to exist and to have properties in the present, and in each specific past, but which past is "the" past of an object becomes underdetermined, which is a fairly major departure from substance-ontology. Whether it is distinct from Copenhagen ontology or not is a trickier question.

    Quantum logic approaches were pioneered by von Neumann and Birkhoff in 1936. In many ways it looks to me like a different formal approach to roughly the same picture as the consistent histories picture. We extend the Hamiltonian definition of an observable (a property) in light of the gauge group results since Hamilton's time and get a densely-defined self-adjoint operator A on the Hilbert space of the quantum state (what is often called a spectral measure, the equivalent of an eigenvector for an arbitrarily large square matrix). Measurement yields a real number in a range. So imagine we ask for the velocity of a particle: we get a real number answer. So far, we have basically properties and states, with a slightly different underlying algebraic basis. But if we set up an array of propositions asking yes-no questions about the quantum state, and then look for the orthocomplement, we get a weird result. For what solutions q are (p or q) = 1 and (p and q) = 0? For a classical proposition system, only the set-complement of p, not-p, fits these requirements. In a sense, claims have a unique negation. But for a lattice of projections in von Neumann's definition of "property" there are an infinity of distinct solutions, "negations" of p. We have an infinity of distinct ways to deny a proposition. Or, to put the point in consistent history terms, we have an infinity of distinct ways for possible histories of the system to be inconsistent. It is as if we have a property p, but there are many logically distinct ways to fail to consistently have property p. Properties, in this picture, are not primary features of substances, but instead are features of worlds or histories or states, and this turns out to make a subtle weirdness in their logical and ontological structure.
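    To see the non-uniqueness concretely, here is a minimal sketch in the smallest possible case (my own example, not from the original text). Work in the two-dimensional space R^2, with subspaces as "propositions," join as span, meet as intersection, 1 as the whole plane, and 0 as the zero subspace. Let p be the x-axis and consider the family of lines

    $$p = \operatorname{span}\{(1,0)\}, \qquad q_{\theta} = \operatorname{span}\{(\cos\theta,\ \sin\theta)\}, \quad 0 < \theta < \pi.$$

    For every such θ, the join p ∨ q_θ is all of R^2 and the meet p ∧ q_θ is {0}, so each of the infinitely many distinct lines q_θ satisfies the classical conditions for being "the" negation of p, whereas in a Boolean algebra of sets the complement is unique. This is the "infinity of distinct ways to deny a proposition" in miniature.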

    Cramer's Transactional interpretation of 1986 is a refinement of the Wheeler-Feynman 1945 position, and involves causality breaking down so that there are waves of information going forward in time and backward in time. In effect, before an event is about to occur (say a photon being detected by a detector) the photon makes an "offer," information going forward in time, and the detector makes a "confirmation," information going back in time, and the standing wave created by these two is the event. Objects and properties work fairly normally, but the holding of a property by an object (the collapse of the wave-function) is temporally non-local and occurs along the whole "transaction," the temporal range over which the offer and confirmation waves are interacting.

    Van Fraassen's "modal" interpretation from the early 70s is also quite distinct. Here the idea is to divide the notion of "state" into two distinct types, the "value state" and the "dynamic state." The value state determines the properties of the system, but it does not determine the possible future value states. The dynamic state determines which future value states are possible (and how likely they are) but does not determine the properties of the system if measured. The dynamic state wave-function never collapses; the value state wave-function never projects. The being of states is simply decoupled from the possibilities of becoming of the states. But when others turned to fitting this philosophical picture into the details of the current physics notation, they often strayed a bit. Kochen, for example, tried to tie his 1985 modal interpretation to the failure of the polar decomposition theorem, but winds up giving up on all intrinsic properties (value or dynamic), analyzing all properties in terms of relations (and thus presaging our next picture).

    In Rovelli's 1994 relational quantum mechanics the key idea is to do away with properties entirely and make do with relations instead, and let them be governed by Wheeler's quantum information theory. As Rovelli puts it, "Quantum mechanics is a theory about the physical description of physical systems relative to other systems, and this is a complete description of the world." Indeed, even the notion of "state" gets cashed out in purely relational terms. Unless I am mistaken, Rovelli is arguing for what I call a lattice-ontology, and claiming that this move allows us to side-step the apparent problems with quantum mechanics.

    How to adjust our ontologies to cope with quantum mechanics is definitely still an on-going project. There are open research questions in many of these pictures, both on the algebraic and experimental fronts.

    3.6 Event-Ontologies?

    Events are things that happen. They are the ontological equivalents of substantivized verbs, that is, verbal forms that have been turned into nouns. A wedding is a great example, both ontologically and linguistically. The activity of "wedding" has been transformed into a noun, "a wedding." As such, events are not quite on a par with objects, properties, types, facts, or processes. A thing like a stone or a chair exists; it might exist over a duration of time, but it doesn't "happen," whereas a wedding or a battle "happens" rather than existing over a stretch of time. If you say, no, actually the rock is really undergoing changes, chemical reactions, erosion and so on, over the time period, then you are in effect arguing for a process or event account of the rock, rather than a substance-ontology of it. Objects can move, events can't. Objects resist co-location in space (you can't have two different objects in the same spatial location), but events tolerate it (on most accounts). Likewise, events don't seem to behave quite like facts. Caesar's death in 44 BC in Rome was an event with temporal and spatial boundaries (perhaps fuzzy ones). But the fact that Caesar died in Rome in 44 BC is as true and existent today as it was then; it's atemporal in a funny sense. It's also vastly less determinate (it's far more abstract) than the actual event of which it is a picture. Indeed, especially after Wittgenstein, it is very natural to think of facts as linguistic pictures of events. Nonetheless it has been very tempting for Analytic philosophers to give accounts of events or of facts which amount to assimilating, or all but assimilating, the two. In a sense, one of the big problems with Wittgenstein's fact, state of affairs, and situation talk is its insensitivity to issues of temporality. So events become ways to try to compromise between quite atemporal fact-ontologies and even more temporal process-ontologies.

    There is lots of dispute on how to differentiate events, and these disputes can lead to radically different pictures of events. Michael Moore likes to divide accounts of events into 5 rough sub-varieties (Causal Relata 2004): extremely coarse-grained (D. Williams 1953, Quine 1985), coarse-grained (Anscombe 1963, Davidson 1980), moderately fine-grained (J. Thomson 1977, Thalberg 1977), fine-grained (Goldman 1970, Kim 1973) and extremely fine-grained (Dretske 1977). For Kim, for example, an event is just the exemplifying of a property by an object over a duration of time. Thus there are for Kim exactly as many distinct events in a particular region of space/time as there are properties exemplified. For this picture, the object/property distinction will be central to ontology, so we'll have more or less a thing-ontology, and events will be just a special class of things, the exemplifications of properties. Indeed, if you ask Kim how an event, i.e., the exemplification of property P by object O at time T and location L, is ontologically or metaphysically distinct from the fact that "object O exemplified property P, at time T and location L," he has no answer. The two are identical. Events map one to one to facts, in metaphysically indiscernible ways, for Kim. On the other end of the spectrum, Quine individuates events purely by their spatio-temporal boundaries. So the earth's spinning during duration D is exactly the same event as the earth's cooling during duration D. For Quine too an event is basically just a kind of object, a region of space/time, and it has properties like any other object. If you think events can re-occur (say, for example, the sun rising every morning), then it looks like events are just a kind of property of some sort, perhaps a property of moments and intervals of time (Montague, 1969) or of cross-world classes of individuals (Lewis, 1986). An ardent tropist can even reduce events to tropes without requiring re-occurrence of events: perhaps the sun rising this morning is simply a trope of the sun.

    In short, most 20th century Western pictures are going to want to have some role for events, but they often disagree wildly on what that role is, and it is very easy to reduce events to other kinds of entities. I'm not aware of anyone trying to make events fundamental to ontology, but I can't think of any reason to rule it out. T. Parsons' 1991 "Tropes and Supervenience" briefly sketches a way to build tropes out of events and states, but doesn't get as far as claiming that events are fundamental.

    I hope you have enjoyed my brief survey of professional philosophical reflection on fundamental ontology. It is easy for Westerners to get trapped into some variation on a very old, very standard fundamental ontology involving objects, properties and predicates, one that probably goes back at least to the pre-historic proto-Indo-Europeans. Indeed, a traditional short taxonomy of fundamental ontologies simply distinguishes substance-ontology from all other pictures. I think there is a lot of robust variety in the other pictures, both those actually advanced by folks over the centuries and those which are logically possible but where it is unclear if they have actually been advanced. And there are a lot of motivations for questioning or opposing the substance ontology: from theology, to multi-culturalism, to quantum physics, and beyond.